playlist | file_name | content |
|---|---|---|
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_3_Part_2_Circular_points_and_Absolute_conic.txt | One of the uses of the dual of the circular points, denoted by C∞*, is to compute the angle between any two lines on a projective plane. But before we look at how to do this on the projective plane, let's refresh our memory on how to compute the angle between two lines. Suppose I have two lines l and m, and I want to compute the angle theta between these two lines; suppose the line l is given by the three-vector homogeneous coordinates (l1, l2, l3) and the line m by (m1, m2, m3) respectively. We know from lecture 1 that for any line l represented by the homogeneous three-vector (l1, l2, l3), the normal vector of that line is given by the first two entries of the line in homogeneous coordinates, hence this normal vector is (l1, l2); similarly, any line m given by the three-vector (m1, m2, m3) in homogeneous coordinates has a normal vector given by its first two entries, (m1, m2). Finding the angle theta between these two lines is equivalent to finding the angle between these two normal vectors, in the two-vector coordinates that we obtain as the normals from the homogeneous coordinates, and we know this can be found from the dot product of the two vectors: (l1, l2)^T (m1, m2) equals the magnitude of (l1, l2) multiplied by the magnitude of (m1, m2) multiplied by cosine theta, where theta is the angle between the two normal vectors, which is equivalent to the angle between the two lines. As a result we can compute cosine theta from this expression. The problem with this expression, however, is that it is not defined under a projective transformation. What this means is that if I transform the lines l and m via any projective transformation, which I denote by H, into l' and m' respectively, where l' = H^{-T} l and m' = H^{-T} m, then I cannot directly make use of l' and m', or incorporate this projective transformation into the equation, to compute the angle between the two lines after the projective transformation; hence this expression cannot be used after an affine or projective transformation of the plane. So we make use of the conic C∞*, the dual of the circular points that we defined earlier, to handle this transformation and compute the Euclidean angle between the two lines after the transformation: we simply put C∞* between l transpose and m in the dot product. In this case we are no longer just looking at the normal vector (l1, l2); we are looking at the whole homogeneous line coordinates, l transpose times C∞* times m, where all three elements of the line vector are incorporated, and C∞* is the matrix that we defined earlier, whose off-diagonal entries are zero
and whose diagonal entries are one, one and zero. Substituting this, and also m = (m1, m2, m3)^T, into this equation, we get exactly what we saw earlier, l1 m1 + l2 m2, and similarly for the denominator terms we get exactly the expressions in the Euclidean space that we computed on the previous slide. One of the advantages of doing it this way, by including C∞* in the formulation, is that it becomes invariant to projective transformations, and here is the mathematical proof of why this is so. Recall that a line transforms as l' = H^{-T} l, where H is a general projective transformation, and that the dual of the conic transforms as C*' = H C* H^T under a point transformation x' = H x. Hence the numerator transforms as follows: we initially have l transpose C∞* m, and after l, C*, and m all undergo the transformation H, we get an expression in which the first term is equivalent to l', the middle term is equivalent to C∞*' in the transformed space, and the last term is the transformation of m into m'. We can see very clearly that there is an H^{-1} multiplied by H, which is equivalent to the identity, and similarly another term, H^T multiplied by H^{-T}, which also gives the identity; hence these two pairs of terms cancel out, and looking at the result we are left with l transpose C∞* m, which is equivalent to what we began with. It can also be verified that the denominator terms stay the same, because the denominator has the same form as the numerator, and the scales of l and m cancel out. Hence we can conclude that by expressing the Euclidean angle with this expression, which incorporates the conic C∞*, it is invariant to projective transformations. This means that if I am given two lines l and m with this angle theta between them, and I transform them using any projective transformation H, then the angle between these two lines computed this way stays the same; it is invariant on the projective plane. Moreover, l and m are orthogonal if the expression l transpose C∞* m equals 0.
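To make this concrete, here is a minimal numpy sketch (not from the lecture) of the angle formula cos(theta) = l^T C∞* m / sqrt((l^T C∞* l)(m^T C∞* m)) together with a numerical check of its projective invariance; the line coordinates, the homography H, and the helper name angle_between_lines are all made-up example values.

```python
import numpy as np

C_inf_star = np.diag([1.0, 1.0, 0.0])   # dual conic of the circular points, diag(1, 1, 0)

def angle_between_lines(l, m, C=C_inf_star):
    """cos(theta) = (l^T C m) / sqrt((l^T C l) * (m^T C m))."""
    num = l @ C @ m
    den = np.sqrt((l @ C @ l) * (m @ C @ m))
    return np.arccos(num / den)

# two example lines in homogeneous coordinates (arbitrary values)
l = np.array([1.0, 2.0, -3.0])
m = np.array([2.0, 1.0, 4.0])

# an arbitrary invertible projective transformation
H = np.array([[1.2, 0.1, 5.0],
              [0.3, 0.9, -2.0],
              [0.001, 0.002, 1.0]])

# lines transform as l' = H^{-T} l, and the dual conic as C*' = H C* H^T
l_p = np.linalg.inv(H).T @ l
m_p = np.linalg.inv(H).T @ m
C_p = H @ C_inf_star @ H.T

print(angle_between_lines(l, m))            # angle before the transformation
print(angle_between_lines(l_p, m_p, C_p))   # same angle, measured with C*' afterwards
```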
This can be easily proven from the equation earlier, which gives the angle theta between any two lines l and m: if the two lines are perpendicular to each other, then cosine of 90 degrees equals zero, hence we can write l transpose C∞* m = 0 and use it as a constraint on our observations of any two lines. Once the conic C∞*, the dual of the circular points, is defined or identified on the projective plane, the projective distortion may be rectified up to a similarity. What this means is that, given an image, if I am able to identify the projection of C∞* onto the image, I am able to remove the projective distortion directly, up to a similarity transform. Here is the proof. If the point transformation is given by x' = H x, where H is the general projective transformation, we have C∞*' = H C∞* H^T, and recall that we can decompose, or factorize, any projective transformation into three terms, where H_P is a pure projective transformation, H_A is an affine transformation, and H_S is a similarity transformation. Substituting these terms into the expression, we can group H_S together with C∞*: we proved earlier that C∞* stays the same after it undergoes a similarity transformation, hence we group them together and still call the result C∞*. Since there is no change to this dual conic, we are left with the H_P and H_A terms in the expression, and writing these out with respect to C∞* = diag(1, 1, 0), we obtain the final matrix that represents C∞*' after the projective transformation, which has the block form [K K^T, K K^T v; v^T K K^T, v^T K K^T v]. It is clear that the image of C∞* gives the projective component v and the affine component K, but not the similarity component, because the similarity component is absorbed into C∞* and leaves it unchanged; hence there is no way for us to figure out the similarity transform after C∞* has undergone a projective transformation. But once we observe C∞*' on the image, we can make use of this information to find the projective component v as well as the affine component K from this equation. Recall also that the decomposition is given by the three terms H_S, the similarity transform, H_A, the affine transform, and H_P, the projective transform; what we are interested in finding in our computation is v and K.
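As a quick numerical sanity check of this block structure (not part of the lecture), here is a small numpy sketch; K and v are arbitrary example values, the similarity part is taken as the identity since it leaves C∞* unchanged, and the block layouts of H_A and H_P follow the decomposition described above.

```python
import numpy as np

K = np.array([[2.0, 0.3],
              [0.0, 1.5]])          # example 2x2 affine component
v = np.array([0.01, -0.02])         # example projective component

H_A = np.block([[K, np.zeros((2, 1))], [np.zeros((1, 2)), np.ones((1, 1))]])
H_P = np.block([[np.eye(2), np.zeros((2, 1))], [v.reshape(1, 2), np.ones((1, 1))]])

C_inf_star = np.diag([1.0, 1.0, 0.0])
C_prime = (H_P @ H_A) @ C_inf_star @ (H_P @ H_A).T

S = K @ K.T                          # the K K^T block
expected = np.block([[S, (S @ v).reshape(2, 1)],
                     [(v @ S).reshape(1, 2), np.array([[v @ S @ v]])]])
print(np.allclose(C_prime, expected))   # True: same block form as in the text
```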
Having looked at the definition of C∞*, how it remains invariant under a similarity transform, and how it transforms under a projective transformation, let us now look at how to do metric rectification using C∞*. Suppose we identify C∞* in an image, denoted by C∞*'; this is the C∞* that has undergone some form of projective transformation, as defined in the expression on the previous slide. A suitable rectifying homography can be found from the SVD of this matrix. What this means is the following: C∞* is the dual of the circular points, and if I am able to identify this conic on my image, then I can decompose it using SVD, since it is a three-by-three symmetric matrix. Recall the usual SVD of a matrix A of size n by m: after the SVD we get A = U Σ V^T, where U is of dimension n by n, Σ is an n-by-m diagonal matrix consisting of all the singular values, and V is a matrix of size m by m. U and V are the left and right orthogonal matrices of A, and they are not of equal size here because A is a non-square matrix. But if we take the SVD of a square symmetric matrix C, the end result is U Σ U^T, where both U factors are the same: if C is an n-by-n square matrix, then U is n by n and Σ is the n-by-n diagonal matrix of singular values. What is interesting is that Σ also reveals the rank of C or A respectively: Σ is in general an n-by-m diagonal matrix, and if some diagonal values are non-zero and the last few are zero, then the number of non-zero entries is equal to the rank of the matrix. Here, after taking the SVD of the C∞*' matrix that we have observed from the image, we can decompose it into U diag(1, 1, 0) U^T, since it is a square symmetric matrix, and the singular values are one, one and zero. Recall that C∞* is a rank-deficient conic: it is the degenerate conic that defines the two circular points I and J using lines, so it has rank two, and hence the rank of C∞*' is also two. After the singular value decomposition we therefore see only two non-zero entries and a third entry that is zero, which reveals the rank of C∞*' as well. What is also interesting is that the middle matrix diag(1, 1, 0) is directly equivalent to C∞*, and hence we can take the left and right orthogonal matrix U and equate it to H, which is equivalent to the term H_P H_A seen earlier, because after the decomposition the first factor is U and the last factor is U^T. We can now equate these and find all the terms of K and v inside, up to a similarity ambiguity, because we cannot find H_S, since H_S is absorbed into C∞*.
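A minimal numpy sketch of this SVD step (my own illustration, not the lecture's code); since a measured conic is only known up to scale, one common choice is to fold the square roots of the two non-zero singular values into the recovered homography:

```python
import numpy as np

def rectifying_homography(C_star_prime):
    """Given the imaged dual conic C*' (3x3 symmetric, rank 2), recover a
    homography H such that C*' ~ H diag(1,1,0) H^T, up to a similarity."""
    U, S, _ = np.linalg.svd(C_star_prime)   # symmetric PSD: C*' = U diag(S) U^T
    # fold the non-zero singular values into H so that H diag(1,1,0) H^T matches C*'
    H = U @ np.diag([np.sqrt(S[0]), np.sqrt(S[1]), 1.0])
    return H

# applying np.linalg.inv(H) to the image then removes the projective and
# affine distortion, leaving only an unknown similarity.
```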
Now let me show you an example of how to do this metric rectification. Previously, when we talked about the line at infinity, we saw how to make use of the identification of the projection of the line at infinity on an image to do projective rectification, but I also mentioned that after that rectification step we are still left with an affine distortion. So now I will show how to make use of C∞*, the dual of the circular points, which is a degenerate conic, to do metric rectification of an affinely rectified image; this means the image is left with only an affine distortion, and we are going to identify the conic and remove this distortion. We saw earlier that the removal of the projective distortion turns parallel lines into parallel lines again: because of the projective distortion, parallel lines meet at a certain finite point in the image, but after the removal of the projective distortion parallel lines become parallel. However, we are still not able to recover the angles; the true angle here is supposed to be 90 degrees, but it does not appear so on the image. The next step is to identify C∞*' on this image and then, making use of it, compute H_A and remove it; after the affine distortion removal, the perpendicular lines become perpendicular again. Now, we have seen that C∞*' is equal to the expression derived earlier, which can be simplified since H_S, the similarity transform, is absorbed into the degenerate conic, and this can be rewritten by simply bringing H_P over to the left side of the equation, so H_P becomes H_P^{-1} and H_P^T becomes H_P^{-T} on the left side, and on the right side we are left with H_A C∞* H_A^T. Let us collectively write the term on the left side as C∞*'', where C∞*'' denotes the image of the conic after removal of the projective distortion, because by bringing the projective transformation onto the left side of the equation we are undoing the projective distortion. So C∞*'' still contains the affine transformation, but has the projective distortion removed. We can compute C∞*'' from the image in which we have removed the projective distortion, which we saw how to do using the line at infinity in the earlier slides. Because C∞*'' is defined to be the projection of the dual of the circular points with the projective transformation removed and the affine transformation remaining, the objective here is to identify C∞*'' on this image and to make use of it to compute H_A, such that we can remove the affine distortion from this image. We will see that we can compute C∞*'' from two pairs of orthogonal lines in this image, which is still distorted by an affine transformation but already has the projective transformation removed. Now suppose I have the lines l' and m' in the affinely rectified image, and this is the image that we are
looking at. We can identify an orthogonal line pair l and m on this image, since we know it corresponds to the real world where the lines are supposed to be perpendicular to each other. Of course, because this image is still subject to an affine distortion, the angle here is not going to be 90 degrees in the image, but in reality it is 90 degrees. So we identify this pair of lines l' and m', which we can directly substitute into the equation: let us denote l' = (l1', l2', l3')^T and m' = (m1', m2', m3')^T and substitute them into the equation we saw earlier, where l' and m' are these two lines and C∞*'' is what we are interested in solving for. Since H_A, the affine transformation, is given by the term whose only unknown is K, where K is a 2-by-2 matrix, which means there are four terms inside it, we can substitute it together with C∞* = diag(1, 1, 0), whose off-diagonals are all zero, into this equation. Now, l' and m' are known from the image, and we can easily compute them from any two points: the cross product of any two points on l' gives us the line l', and the cross product of any two points on m' gives us m', and we can observe both from the image. Substituting everything back into the orthogonality constraint, which is equated to zero because we know the two lines are perpendicular to each other, we see that we are only left with the unknown K K^T, which we can rewrite as a symmetric 2-by-2 matrix S; there are four elements but only three independent elements inside it. Once we have this equation we can expand it into the following form, since we only need three independent elements of S: writing S = [s11, s12; s21, s22], and since K K^T is a symmetric matrix, s12 must be equal to s21, hence we can rewrite the constraint as an equation in terms of s11, s12 and s22, the three unknowns, namely (l1' m1') s11 + (l1' m2' + l2' m1') s12 + (l2' m2') s22 = 0. Two such constraints from two orthogonal pairs of lines can be stacked together to give a 2-by-3 matrix: each constraint is a 1-by-3 row vector multiplying s = (s11, s12, s22)^T, so one pair of orthogonal lines gives us one equation in terms of the three unknowns. But s is only defined up to scale, which means we will only be able to define the third element
as a combination of the first two elements, and what happens is that we only need two constraints to solve for this null-space equation. So we stack the two constraints together: call each row a_i, so I have two equations a_1 and a_2, and I stack them together so that they multiply (s11, s12, s22)^T and equal zero; this becomes a 2-by-3 matrix multiplying a 3-by-1 vector, which is our familiar homogeneous linear equation A x = 0 from linear algebra. We can solve for the null space of A, and that null space is the solution that we get for the vector s. Once we have solved for s, since S = K K^T, K is a matrix square root of S, and it is given simply by the Cholesky decomposition. So once we have obtained K, we are able to get H_A, which we can use to remove the affine distortion from the image completely, and we are left with the similarity distortion, which we will not be able to remove unless we know the corresponding points in the 3D scene; a small sketch of this whole stage follows below.
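Here is a rough numpy sketch of this affine stage under those assumptions (two world-orthogonal line pairs measured in an already projectively-rectified image); the function name, the sign-fixing step, and the overall workflow are my own choices for the illustration, not the lecture's:

```python
import numpy as np

def affine_rectifying_homography(line_pairs):
    """line_pairs: two (l, m) pairs of homogeneous lines that are orthogonal
    in the world, measured in a projectively-rectified (affine) image.
    Returns H_A^{-1}, which removes the remaining affine distortion."""
    rows = []
    for l, m in line_pairs:
        l1, l2, _ = l
        m1, m2, _ = m
        # each pair gives (l1 m1) s11 + (l1 m2 + l2 m1) s12 + (l2 m2) s22 = 0
        rows.append([l1 * m1, l1 * m2 + l2 * m1, l2 * m2])
    A = np.array(rows)                      # 2 x 3 stacked constraints
    _, _, Vt = np.linalg.svd(A)
    s11, s12, s22 = Vt[-1]                  # null vector, defined up to scale
    S = np.array([[s11, s12], [s12, s22]])
    if S[0, 0] < 0:                         # fix the arbitrary sign so S is positive definite
        S = -S
    K = np.linalg.cholesky(S)               # S = K K^T (Cholesky square root)
    H_A = np.eye(3)
    H_A[:2, :2] = K
    return np.linalg.inv(H_A)               # apply to image points to undo the affine part
```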
In the previous example I showed how to remove the remaining affine distortion after we removed the projective distortion using the line at infinity, so this is a two-step process that we use to rectify an image with projective and affine transformations back to the original image; it removes the projective and then the affine transformation from the image. Now let's look at a one-step process: given an image with both projective and affine transformations, we are going to remove them completely, that is, recover the undistorted image in a single step. This can be achieved by identifying the projection of C∞* directly on the perspective image, as compared to what we saw earlier, where we identified the projection of C∞* on the affinely rectified image. In this case we directly make use of the equation derived earlier, C∞*' equals the expression that contains both the projective and the affine transformation, and we saw earlier that the two matrices can be multiplied together to get that expression; we are now interested in finding K as well as v, which represent the affine and the projective components respectively. We can do this by identifying orthogonal pairs of lines l' and m' directly on the perspective image, that is, the image containing both the affine and the projective transformation. We do the same thing: from this image we identify a pair of lines that are supposed to be orthogonal in the real world, which I call l' and m', and substitute them into the orthogonality constraint equation that we saw earlier. Each line pair gives us one constraint, which we can rewrite as a 1-by-6 row vector multiplying a 6-by-1 vector of unknowns (a, b, c, d, e, f): collectively, three of these correspond to S = K K^T, that is, the elements s11, s12 and s22 we saw earlier, and v contributes another three elements from the last row of H_P, a 3-by-1 vector, so there are three unknowns from v plus another three unknowns in S, six unknowns all together in this equation. We call c the vector (a, b, c, d, e, f) that represents the six unknowns in our constraint equation. Since these constraints can be stacked together, with each orthogonal pair giving one equation, and the six unknowns are defined only up to scale, we need five constraints all together to solve for this null-space equation. Let's call each row b_i: each pair of lines gives me one constraint, and I stack them all together to form the equation B c = 0, where B is a 5-by-6 matrix and c is a 6-by-1 vector as usual. This again becomes our famous null-space problem, the homogeneous linear equation A x = 0, which we can solve for c by simply taking the SVD of the stacked matrix, as sketched below. Once we have solved for these six unknowns, we can decompose them and put them back into the equation for C∞*' to recover the transformations. Note that example one, the two-step process, means that I remove the projective distortion with the line at infinity first and then remove the affine distortion using C∞*; this is also known as the stratified approach. The one-step process is to directly identify the projection of C∞* on the perspective image and directly compute the unknowns of H_P and H_A together. |
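A sketch of the one-step constraint stacking under those assumptions (at least five world-orthogonal line pairs measured directly in the perspective image); the particular ordering of the six unknowns and the half factors in each constraint row are just one convenient parameterization of the symmetric conic matrix, chosen for this illustration:

```python
import numpy as np

def dual_conic_from_orthogonal_lines(line_pairs):
    """Estimate the imaged dual conic C*_inf' from >= 5 pairs of lines that are
    orthogonal in the world. Each pair (l, m) gives one linear constraint
    l^T C m = 0 on the 6 independent entries (a, b, c, d, e, f) of symmetric C."""
    rows = []
    for l, m in line_pairs:
        l1, l2, l3 = l
        m1, m2, m3 = m
        rows.append([l1 * m1,
                     (l1 * m2 + l2 * m1) / 2.0,
                     l2 * m2,
                     (l1 * m3 + l3 * m1) / 2.0,
                     (l2 * m3 + l3 * m2) / 2.0,
                     l3 * m3])
    B = np.array(rows)                     # 5 x 6 (or more rows if available)
    _, _, Vt = np.linalg.svd(B)            # null vector = last right singular vector
    a, b, c, d, e, f = Vt[-1]
    return np.array([[a,     b / 2, d / 2],
                     [b / 2, c,     e / 2],
                     [d / 2, e / 2, f    ]])
```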
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_5_Part_1_Camera_models_and_calibration.txt | Hello everyone, welcome to the lecture on 3D computer vision; today we are going to talk about camera models and calibration. Hopefully by the end of today's lecture you will be able to describe the camera projection with the pinhole camera model, and you will be able to identify the camera center, principal plane, principal point and principal axis from the projection matrix. We will then look at how to use the projection matrix to get the forward and backward projection of a point, and we will explain the properties of an affine camera. Finally, we will do calibration to find the intrinsic and extrinsic parameters of a projective camera. Of course I didn't invent any of today's material; I took most of the contents of today's lecture from the textbook written by Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, in particular chapter 6. I also took some of the materials from chapter 3 of the textbook written by Yi Ma, An Invitation to 3-D Vision, and I took the calibration method from a paper written by Zhengyou Zhang, A Flexible New Technique for Camera Calibration, which was published in TPAMI in the year 2000. I strongly encourage every one of you to take a look at these materials after today's lecture. In the introductory part we looked at the basic definition of a camera: it is essentially a very simple device that maps the geometric entities in the 3D world onto a 2D image plane. We looked at the 2D projective space, denoted by P2; this is essentially a projective plane where we define geometric entities such as a point, a line, as well as the conic, and we also defined the hierarchy of transformations that maps geometric entities from the P2 space to the same space. Furthermore, we also looked at the geometric entities in the 3D projective space, denoted by P3, and at the hierarchy of transformations that maps these entities from the P3 space to the same space; these geometric entities include the point, the line, the plane, as well as the quadric. In today's lecture we will look at the projection matrix that maps a 3D geometric entity in the 3D projective space onto the projective 2D space. We will look at a specific type of camera model, which we call the camera model with a central projection; what this means is that all light rays that are projected onto the image plane converge at a single center of projection. Generally, the camera models with a central projection fall into two major classes. The first type is a camera with a finite center: what this means is that if I denote this center by c and express this coordinate in the R3 space, that is, in 3D space with respect to a world frame, it has a finite value. The second class of camera models with a central projection are those with a center at infinity; if I denote this particular camera center, it lies on the plane at infinity. We will find out more detail about these two classes of camera models, those with a finite center and those with a center at infinity. In particular we will look at two camera models: the first type, which we call the projective camera, is a representative camera model with a finite center, and the second type that we will see is the affine camera, where the center of projection lies on the plane at infinity. A projective camera can be
defined using the very basic pinhole camera model, which is what we are going to look at now. Let us first define a Euclidean coordinate frame, illustrated here with the y-axis, the x-axis, and the z-axis pointing out, where the center of projection is defined at the origin of this Euclidean coordinate system and denoted by c. Now consider the image plane, which is a plane parallel to the x-y plane of the Euclidean system we have defined, spaced a certain distance along the z direction from that x-y plane. We define this distance as f, which we will see is actually the focal length, and we call this plane either the image plane or the focal plane. Having defined this, if we look at the side view, we can represent the projection system as follows: a point X in the 3D world is joined to the camera center by a light ray, and the intersection of this ray with the image plane, which we call small x, is the image projection of this 3D point onto the image plane, following a camera model with a central projection. Now, by virtue of similar triangles, there are two triangles here: a smaller one and a bigger one. The base length of the smaller triangle is essentially f, which we defined earlier, and the base length of the larger triangle is Z; the height of the larger triangle is Y, which is the inhomogeneous coordinate of the 3D point being projected onto the image. By virtue of the similar triangles, the height of the smaller triangle is given by f over Z, the ratio of the base lengths, multiplied by the height of the bigger triangle. We can now formally express the 3D-to-2D projection in coordinate form: a 3D point (X, Y, Z) is mapped onto the 2D point whose y coordinate is f Y / Z, and following the same similar-triangle concept for the x coordinate, the x coordinate of the projection of this 3D point on the image is f X / Z. Since we are projecting onto a 2D space we can ignore the third coordinate, which is essentially the distance f of the image plane from the x-y plane of the Euclidean coordinate frame. This defines the projection of a 3D point onto a 2D space. We will further define some terminology in the basic pinhole camera model. We have looked at the camera center, otherwise known as the optical center; this is essentially the point where all the light rays projected onto the camera meet. There is another definition, which we call the principal axis: this is essentially the z-axis of the Euclidean coordinate frame we have defined, and this principal axis is perpendicular to the image plane. We also define what is known as the principal point: the principal point is essentially the point of intersection between the principal axis and the image plane. Finally we define what is known as the principal plane: the principal plane is
essentially the plane that lies on the x-y plane: this is the principal plane, the plane that lies on the x-y plane of the Euclidean coordinate frame and contains the camera center. Now we can see that the mapping from the 3D coordinates onto the 2D image plane can be expressed as a linear mapping if we write it in terms of homogeneous coordinates. Previously we wrote the coordinate of the 3D point as (X, Y, Z); writing it in homogeneous coordinates we add a 1 at the end, and we saw by virtue of similar triangles that the x coordinate on the image plane is f X / Z and the y coordinate of the image plane is f Y / Z. If we express these inhomogeneous coordinates in a homogeneous frame, we can bring the common denominator Z into the third coordinate of the homogeneous representation, and the point essentially becomes (f X, f Y, Z). We can factorize this homogeneous coordinate (f X, f Y, Z) into a 3-by-4 matrix multiplied by the 4-by-1 vector that represents the 3D point. This 3-by-4 matrix can be further simplified, because it is actually a diagonal matrix consisting of the focal length f and one: we can factorize it into a 3-by-3 matrix with diagonal entries of f, that is, [f, 0, 0; 0, f, 0; 0, 0, 1], multiplied by the 3-by-4 identity matrix [I | 0]. If you multiply these two matrices together, this is the matrix you end up with, and multiplying this 3-by-4 matrix with the 3D point recovers the 2D projection (f X, f Y, Z). Let us further denote by P the matrix that we have factorized, which simply represents the diagonal matrix multiplied by the 3-by-4 identity matrix, and denote small x as the 2D projection on the image and capital X as the 3D point. In short, we can simplify this equation to x = P X, where the small x is a 3-by-1 homogeneous coordinate, P is a 3-by-4 matrix, and X is a 4-by-1 homogeneous coordinate. We give this 3-by-4 matrix P a name: it is called the camera projection matrix.
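As a quick numeric illustration of x = diag(f, f, 1) [I | 0] X (my own example; the focal length and the 3D point, expressed in the camera frame, are made-up values):

```python
import numpy as np

f = 0.05                         # example focal length (arbitrary units)
K0 = np.diag([f, f, 1.0])        # diag(f, f, 1)
P = K0 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # P = diag(f, f, 1) [I | 0]

X = np.array([0.2, 0.4, 2.0, 1.0])   # a 3D point in camera-frame homogeneous coords
x = P @ X                            # homogeneous image point (fX, fY, Z)
x_inhom = x[:2] / x[2]               # (fX/Z, fY/Z), matching the similar-triangle result
print(x_inhom)                       # [0.005 0.01 ]
```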
We have seen earlier that the origin of the 2D coordinate frame is defined at the principal point of the image, which is often the center of the image, but in practice the origin of the coordinates in the real image plane might be defined anywhere and need not coincide with the principal point of the coordinate system. We can in fact define it anywhere: say, for example, we assign the image reference frame, with axes small x and y, to one corner of the image, which essentially means that the principal point is now at (p_x, p_y) with respect to the image reference frame, and what we need to do is compensate for this p_x and p_y with respect to the reference frame of the image. As a result, the projection from the 3D point to the 2D point that we saw earlier is no longer just the two terms f X / Z and f Y / Z alone; we have to compensate for the translation of the coordinate frame by an amount p_x and p_y in the respective directions. We can express this inhomogeneous relation in homogeneous coordinates and see that it becomes a linear mapping; what I mean by linear mapping is that it simply becomes a 3-by-4 matrix multiplied by a 4-by-1 homogeneous coordinate, and the two additional terms that define the principal point appear as p_x and p_y in the matrix. If we evaluate the multiplication of this 3-by-4 matrix with the 4-by-1 vector, we get the 2D homogeneous coordinates (f X + Z p_x, f Y + Z p_y, Z). Now let us further write this 3-by-4 matrix as the factorization of what we call the K matrix multiplied by the 3-by-4 identity matrix [I | 0], where the K matrix is the 3-by-3 matrix whose diagonal consists of the focal length f and one, and whose third column contains the principal point; essentially, if we multiply K by [I | 0], we get back the original matrix, so this is simply a reorganization or refactorization of the matrices. We will also give this K matrix a name: we call it the camera calibration matrix, and it is otherwise commonly known as the camera intrinsic matrix. You may have noticed earlier that this 3D point is actually expressed in the camera coordinate frame, the particular frame in which we have defined the location of the camera center, the principal axis, the principal point, as well as the principal plane. In practice, any 3D point that is given to us might not always be expressed with respect to this camera frame; it could be expressed with respect to an arbitrary frame which we call the world frame, where the camera frame is a certain rigid transformation away from the world frame, denoted by R and t; that is, the camera center c is expressed with respect to the world origin by a rigid transformation of R and t. This means that we should also consider the projection of a 3D point with respect to the world frame; let's call its homogeneous coordinates (X, Y, Z, 1), the capital X expressed with respect to the world frame. The camera projection expression that we derived earlier, small x = K [I | 0] X, assumes that X is defined with respect to the camera frame; if X is now defined with respect to the world frame, we have to compensate for the rotation and translation in this projection equation, and the aim is to express the point with respect to the camera frame instead. This can be easily done. Let us denote the inhomogeneous coordinate of the camera center, that is, the x, y and z coordinates of the camera center expressed in the world frame, by c tilde. Given any point, if we know this relation, and we also know that there is a rotation, meaning that this
particular frame, the orientation of the camera frame with respect to the world frame, need not be aligned as shown in this figure; it could actually be in any orientation, for example this could be our camera frame, not perfectly aligned with the world frame, so there exists a certain rotation that relates the orientation of the camera frame to the world frame. Now, given a point X defined with respect to the world frame, what we want to do is transform this point into the camera frame so that it is consistent with the projection equation we defined earlier with respect to the camera frame. The relation is simply given by a Euclidean (isometry) transformation, which we saw in lecture one, where the rotation is simply R, because the orientations of the camera and the world frame are related by R, and the translation amount t is simply given by -R c tilde. Applying this 4-by-4 matrix to the 3D point expressed in the world frame, we transform the point expressed with respect to the world frame into the camera frame, which we now denote as X_cam. We can now put this X_cam back into the projection equation, and we get x = K [I | 0] [R, -R c tilde; 0, 1] X, where the bracketed transformation applied to capital X is simply our X_cam, consistent with what we saw in the definition of the projection matrix earlier; because of the zero block in [I | 0], after multiplication the last row of the Euclidean transformation is eliminated, and we can simply write x = K [R | -R c tilde] X, where [R | -R c tilde] is our 3-by-4 matrix, K is our 3-by-3 camera intrinsic matrix, and X is the 3D point expressed with respect to the world frame. As we have also seen earlier, a rotation matrix is an over-parameterization of the Euler angles, which have three degrees of freedom: roll, pitch and yaw. Looking at this in the x, y, z axes, the yaw rotation matrix is obtained by first rotating around the z-axis by the yaw angle, which we denote by gamma; the second step, after the yaw rotation, is to rotate by the pitch angle along the y-axis, which is given by its own rotation matrix; finally we apply the roll angle along the x-axis, which is the rotation matrix that represents the roll. If we concatenate the yaw, pitch and roll together to get the final rotation matrix, we are able to represent any arbitrary rotation in 3D space; this is the rotation matrix we are interested in for the Euclidean transformation, a general arbitrary rotation matrix that represents the orientation transformation from the world frame to the camera frame.
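A small numpy sketch of this general mapping (not the lecture's code): the yaw-pitch-roll composition below is one common convention, and the intrinsics, angles, camera center and world point are all made-up example values.

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll), one common Euler-angle convention."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def projection_matrix(K, R, c_tilde):
    """P = K [R | -R c_tilde], projecting world points onto the image."""
    return K @ np.hstack([R, -R @ c_tilde.reshape(3, 1)])

# made-up example values
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = rotation_from_euler(0.1, -0.05, 0.2)
c_tilde = np.array([1.0, 0.5, -2.0])     # camera center in world coordinates
P = projection_matrix(K, R, c_tilde)

X_w = np.array([0.3, 0.2, 4.0, 1.0])     # a homogeneous world point
x = P @ X_w
print(x[:2] / x[2])                      # its pixel coordinates
```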
In the general case this rotation matrix could be a 2-by-2 rotation matrix in the 2D space: if we are talking about the R2 space, the rotation matrix is a 2-by-2 square matrix; or it could be a 3-by-3 square matrix when we are talking about rotations in the R3 space. In the R2 case there is only one degree of freedom, because essentially in the x-y plane there is only one rotation possible, the yaw angle; we can only have an in-plane rotation along the yaw angle, so the 2-by-2 rotation matrix in the 2D space has only one degree of freedom. But in the general 3D case, the R3 space, we have three degrees of freedom, which is what we saw earlier: the rotation is parameterized by the roll, pitch and yaw angles, which we collectively call the Euler angles. Regardless of whether it is a 2-by-2 or a 3-by-3 rotation matrix, it always has the following properties. The first property is that the determinant of R is either plus one or minus one, depending on how we define the coordinate frame. In the right-hand coordinate frame, x, y and z point such that when we sweep from x to y with the four fingers of the right hand, the thumb points towards the z direction; we call this the right-hand coordinate frame, and it is usually the coordinate frame we use to express Euclidean reference frames. There is another coordinate frame where, if we do the same sweeping action with the left hand, we end up with x here and y here, sweeping in this direction, with z pointing in the same direction but x and y swapped; if we are consistent and draw x and y in the same directions as in the right-hand coordinate frame, then z points in the opposite direction, and we call this the left-hand coordinate frame. The difference between these two reference systems is that rotation matrices expressed in the right-hand coordinate system always have determinant plus one, while rotation matrices defined in the left-hand coordinate frame always have determinant minus one. Since the rotation matrix is an orthonormal matrix, the transpose of the rotation matrix is equal to the inverse of the rotation matrix; and since it is an orthonormal matrix, the axes of the rotation matrix are orthogonal to each other, which means the third column of the rotation matrix is equal to the cross product of the other two columns; in fact, any column of the rotation matrix is equal to the cross product of the other two columns. We can see that each column of the rotation matrix actually defines an axis: column one defines the x-axis, column two defines the y-axis, and the cross product of these two gives us the z-axis of the reference frame; this works for any cyclic combination, so if we cross z and x together we get the y-axis, and so on. So the rotation matrix actually defines the basis of the coordinate space, and since the cross product of any two columns gives the third column, simply because the axes are orthogonal to each other, this also means that the dot product of any two columns of the rotation matrix is always zero, since they are perpendicular to each other.
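These properties are easy to verify numerically; a short self-contained sketch with a simple example rotation (a pure yaw of 30 degrees), where every printed check comes out True:

```python
import numpy as np

# a sample rotation: 30 degrees about the z-axis (yaw only, for simplicity)
t = np.deg2rad(30.0)
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

print(np.isclose(np.linalg.det(R), 1.0))                  # determinant is +1
print(np.allclose(R.T, np.linalg.inv(R)))                 # R^T equals R^{-1}
print(np.allclose(np.cross(R[:, 0], R[:, 1]), R[:, 2]))   # col1 x col2 = col3
print(np.allclose(np.cross(R[:, 2], R[:, 0]), R[:, 1]))   # col3 x col1 = col2
print(np.isclose(R[:, 0] @ R[:, 1], 0.0))                 # columns are perpendicular
print(np.isclose(np.linalg.norm(R[:, 0]), 1.0))           # each column has unit norm
```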
The fact that it is an orthonormal matrix also simply means that the norm of each column has to be equal to one. All these properties point out that the rotation matrix defines the basis of the coordinate system once we define the 3D points with respect to a coordinate frame. Recall that we have a 3D point X, and this 3D point is defined with respect to the world frame, whose origin is here; there is also a camera frame, defined as F_c, whose center of projection is given by c, and this frame relates to the world frame by R and t. If we express this coordinate, given with respect to the world frame, in the camera frame, we get X_cam, which is related to the coordinate in the world frame by R and -R c tilde; this is the relation we saw earlier. Putting it back into the general mapping of the pinhole camera, we can now factorize out the rotation matrix, which appears twice in the 3-by-4 matrix: this 3-by-4 matrix is [R | -R c tilde], and since the R in both blocks is the same, it can be factorized out in the form shown. We now end up with the general mapping of a pinhole camera: we can have a 3D point that is arbitrarily defined with respect to any world reference frame, where the relation between the camera coordinate frame and the world frame is a rigid transformation of R and t. Equivalently, we can write the projection matrix to take the rotation and translation between the world frame and the camera frame into account, in this form of the P matrix. We can see that the P matrix still contains K, the intrinsic parameters, and it also contains the rotation and the translation, given by R and -R c tilde, where c tilde is the inhomogeneous coordinate of the camera center with respect to the world frame. The intrinsic part K is a 3-by-3 matrix, and the 3-by-4 matrix consisting of R and t is what we call the extrinsic parameters. All together the camera projection matrix has nine degrees of freedom: three of the degrees of freedom go to K, because we saw that K consists of f, p_x and p_y, so there are all together three DOF in the camera intrinsics; another three degrees of freedom go to the rotation matrix, because as we have seen the rotation matrix is an over-parameterization of the roll, pitch and yaw angles, so only three degrees of freedom define any arbitrary orientation in 3D space; finally, there are an additional three degrees of freedom in the inhomogeneous coordinates of the camera center. All together this adds up to nine degrees of freedom. I said earlier that the parameters contained in the camera matrix K, which is [f, 0, p_x; 0, f, p_y; 0, 0, 1], are also called the internal camera parameters, because they depend only internally on the camera. We can also see this from when we were defining the basic pinhole camera: we first started off with the camera
frame, and then we expressed everything in terms of the camera frame, and we had already obtained the K matrix in this particular form: this is the 2D coordinate, and then we have K multiplied by [I | 0] and X_cam, which is defined with respect to the camera frame. Since K is defined with respect to the local camera coordinate frame, we also call K the internal parameters, because it does not depend on the world frame, or otherwise the intrinsics of the camera. There is another set of parameters, which we call the extrinsics, as I mentioned earlier: this set of parameters is just R and c, the camera center, which relates the camera frame and the world frame. Since R and c require the world frame for their definition, we say that they are not internal to the camera but rely on external information, and we otherwise call them the extrinsics of the camera. Instead of writing the 3-by-4 matrix as we saw earlier, in terms of the rotation and the camera center coordinate, a more convenient way is simply to collectively call that last term t, the translation that relates the camera center to the world frame; the 3-by-4 matrix can then simply be written as a Euclidean transformation [R | t], which consists of the rotation matrix and the translation vector, where the rotation matrix is a 3-by-3 rotation matrix and the translation vector is a 3-by-1 translation vector. Now, so far we have seen that the projection relation between a 3D world point and a 2D image is based on a square pixel assumption: what this means is that on the image we get, all the pixels are regular squares. Sometimes, due to manufacturing errors in the lens or the photosensors that we have in cameras, the pixels might not appear to be perfect squares; there might be some skew that occurs in the x direction, so in our camera model we have to compensate for these non-square pixels, or the skewness, in real cameras. Hence it is actually more accurate to have two additional things in our camera projection matrix. The first would be different focal lengths, because there is a shift in the projection in the x direction, and hence there is a need for different focal lengths in order to define the similar-triangle relation that we saw at the beginning of the lecture. The other is that, because the skew can happen in the x direction, we have to add another skew factor in the x direction. As a result, the camera intrinsic matrix now has f_x and f_y, two different focal lengths in the respective directions, replacing the single focal length used in both directions, as well as a skew factor s in the x direction; the principal point remains the same. This becomes our new camera intrinsic matrix, shown concretely in the sketch below.
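For concreteness, this full intrinsic matrix looks like the following (the numbers are arbitrary example values):

```python
import numpy as np

# full 5-DOF intrinsic matrix: two focal lengths, skew, and the principal point
fx, fy, s, px, py = 800.0, 780.0, 0.5, 320.0, 240.0   # made-up example values
K = np.array([[fx, s,  px],
              [0., fy, py],
              [0., 0., 1.]])
# together with R (3 DOF) and t (3 DOF), P = K [R | t] has 5 + 3 + 3 = 11 DOF
```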
As a summary, in general we now have all together eleven degrees of freedom. We have seen earlier that for the camera extrinsics, R has three degrees of freedom, given by the Euler angles, and t has three degrees of freedom, because all together we have t_x, t_y and t_z in 3D space. Now for K, the intrinsics: since we have two separate focal lengths in the respective directions, which we denote by f_x and f_y, we have two degrees of freedom here, plus a skew factor which we denote by s, so all together that is three degrees of freedom, plus two more for the principal point, so we have five degrees of freedom for the intrinsics. In total, five plus three plus three gives all together eleven degrees of freedom for our camera projection matrix P, which is given by K multiplied by [R | t]. So essentially a finite projective camera means a camera with a central projection, where all the light rays projecting onto the image converge at one single point of convergence, and this point of convergence, which we denote by c, has a finite value with respect to a world frame, as illustrated here. We have seen earlier that the projection matrix is given by the equation that consists of the intrinsics as well as the extrinsics of the camera, and we can now rewrite this projection matrix into another form, where we denote M as a 3-by-3 matrix and the remainder as a 3-by-4 matrix consisting of the 3-by-3 identity matrix and the inverse of the M matrix multiplied by the last column of the original projection matrix. What this means is that we rewrite P as the columns p1, p2, p3 and p4, where each p_i is a column of the original projection matrix, and we can see from this relation that if we multiply M back in, we get M for the first 3-by-3 block and p4 for the last column, which means that M, as defined in this factorization, is equivalent to the first three columns of the original camera projection matrix; in other words, this is simply a refactorization of the original camera projection matrix that we have seen. We will see later that for a finite projective camera this M matrix, which represents the first 3-by-3 block of the camera projection matrix, has to be non-singular in order for a finite camera center to exist. Of course, the first thing we can define from a finite projective camera is the camera center, and we can see that this camera center has to lie in the null space of the camera projection matrix, that is, P C = 0. What this simply means is that the camera center is projected to an undefined point, (0, 0, 0), and we will see a sketch of the proof of why the camera center has to be in the null space of the camera projection matrix.
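Before the proof sketch, the null-space claim can be checked numerically; a small sketch using a made-up finite camera:

```python
import numpy as np

# a made-up finite camera: P = K [R | -R c], using simple example values
K = np.diag([800.0, 800.0, 1.0])
R = np.eye(3)
c_tilde = np.array([1.0, 2.0, 3.0])
P = K @ np.hstack([R, (-R @ c_tilde).reshape(3, 1)])

# the camera center is the right null vector of P: the last row of V^T in the SVD
_, _, Vt = np.linalg.svd(P)
C = Vt[-1]
C = C / C[3]                        # normalize the homogeneous coordinate
print(C[:3])                        # recovers c_tilde = [1. 2. 3.]
print(np.allclose(P @ C, 0.0))      # P C = 0: the center projects to (0, 0, 0)
```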
2d image point this is the developed from the basic pinhole camera model that we have seen earlier on we can make use of this projection to project this line any point on this line which we denote by x it's being projected onto the image via the camera projection matrix that we have defined earlier on and now let's substitute this x lambda by this the span of a and c into this equation we will see that the projected points can be decomposed into two terms over here where the first term is simply the projection of a this 3d point onto the image and the second term here is the projection of the camera center to the image point and we know that a could be any point on this line and we know that any point on the light ray it has to be projected onto the image plane as p of a as p of a which is defined by this mapping over here so what this simply means is that the second term here that is given by the projection of the camera center it has to be in the null space it has to be equal to 0 in order for this relation x equals to pa to hold true hence we conclude that the camera center must be in the null space of the camera projection matrix so let's write the camera projection matrix into four columns that we have seen earlier on p1 p2 p3 as well as p4 that we have seen we can see that the first three columns of this projection matrix actually defines the vanishing point of the world coordinate x y and z axis respectively what this means is that the the point that is being given by the z axis for example the infinite point this is going to intersect the plane and infinity the plane at infinity at the infinite point over here which is actually given by 0 0 1 0 transpose because this last term here has to be equals to 0 for it to be an infinite point and the first three term simply says that this is in the direction of the z axis so it has to be zero zero one so this particular point here when it projects back to the camera it actually lies on this point which is p3 which we can see by looking at the projection so if i were to take this p1 p2 p3 which is essentially the camera matrix p multiplied by this point zero zero one zero we can see that the first column here multiplied by zero it will turn out to be zero similarly p two multiplied by zero it will turn out to be zero and only p p3 will remain so what this means is that the projection the ideal that is from the z-axis you would have to project to a vanishing point of p3 on the image so this is similarly for the x-axis here for example so in the x-axis we can see that this guy over here it intersects the plane at infinity the plane at infinity at a point of one zero zero zero transpose this has to be zero for it to be an ideal point and the first three and threes is simply the direction of x so we can see that if we were to multiply this guy over here with p1 p2 p3 and p4 which is the camera projection matrix multiplied by one zero zero zero and only the first column remains which is p1 so hence p1 here would be the vanishing point of the ideal point that is in the direction of the x-axis and uh what's interesting here is that the last column of the camera projection matrix is actually the projection of the origin of the the world coordinate frame and we can easily verify this by looking at this uh since the the origin of the world coordinate frame it has to be at 0 0 0 1 this particular point over here so if we were to multiply this pre-multiply this by the camera projection matrix 0 0 0 1 we can see that only the last term over here p4 
remains hence p4 has to be the projection of the origin of the world frame that is being projected onto the camera image now let's look at the row vectors of the camera projection matrix and we'll see that the row vectors actually corresponds to a particular world plane in the 3d coordinate frame so let's denote p1 transpose p2 transpose and p3 transpose as the respective row so p i transpose here is actually a four by one vector that consists of the four entries of each respective row in the camera projection matrix the first plane that we are going to look at is the principal plane and we'll see that the principal plane is actually given by the third row of the camera projection matrix we have seen earlier on that the definition of principal plane is a plane that lies on the x y plane of the camera coordinate frame as well as it must contain the camera center and since it contains the camera center what this means is that any point that lies on the principal plane it must be projected onto a line at infinity that is given by this equation over here x y and zero the third coordinate here has to be zero because this has to be an ideal point in order for it to lock at the line at infinity so looking at this particular equation over here p x equals to x y and zero if we were to rewrite p the projection matrix in terms of each row of the matrix you can see that is p1 transpose p2 transpose and p3 transpose multiplied by x and this must be equal to x y and zero since it must be projected into the line at infinity what we can see here is that since the third element of the projection must be at zero uh it simply means that the last entry over here which is the last row of the projection matrix multiplied by x it must be equals to 0 which gives rise to this equation over here p 3 transpose x must be equals to 0 and since we have defined earlier on that x to be a set of points or to be any point that is lying on the principal plane now what this means is that this relation here defines the incidence relation and this says that x which lies on the principal plane must also lie on this plane that has been being defined by p three transpose and hence this piece right transport must be the same as the principal plane so the other set of planes the other two planes that can be defined by the remaining rows of p1 and p2 are the axis planes and we'll see that these planes for example p1 is defined to be the plane that contains the camera center which is denoted by c over here as well as the line in the image coordinate frame where x equals to 0 and this essentially means that this is the y axis which is given by this guy so this means that p1 has to be this particular plane that contains this line where x equals to 0 as well as the camera center that is denoted by this point over here similarly p2 is defined by the camera center as well as the line where y equals to 0 and this is essentially equals to the x-axis of the image coordinate frame we can see that this is the line and the camera center is here so p2 is essentially defined by this plane that contains the x-axis as well as the camera center and we can see that this relation is true let's first consider a set of points any points that lies on p2 so this point let's denote it by x according to our definition this point is defined to lie on the plane p2 that means that the incidence relation p2 transpose multiplied by the dot product of p2 transpose and x it must be equals to 0 for x to be lying on p2 and hence we can see that if we were to project 
these points x that lies on p2 any points that lies on p2 it must all be projected onto this line which is the x axis at y equals to 0 as seen earlier on and this is true because we can see that the dot product of p2 transpose which is the second row of p we can see that let's say if we write this relation into this p 1 transpose p 2 transpose and p 3 transpose multiplied by x over here so the second row multiplied by this guy over here it must always be equals to zero which is given by the incidence relation since we define this set of points to be lying on the plane and the first element of this can be any point which is x and the last one p3 transposed the last row of the projection matrix multiplied by x can be also anything which is denoted by w so what this simply means is that any point that lies on p2 will be projected to the x axis since any point here the y entry element of this projection is going to be 0. so now we have proven that p2 must contain this line where y equals to 0 which is essentially the x axis the next step to the proof is that we must also prove that the this plane p2 contains the camera center this is easy because from the null space equation that we have seen earlier on that uh p the projection of a camera center it must be equals to zero equals to undefined point of one zero zero zero transpose over here and uh so it also follows that p two since p multiplied by c this means that p1 transpose p2 transpose and p3 transpose multiplied by c it must be equals to 0 0 0 and what this means is that the second row multiplied by c p 2 transpose multiplied by c must also be 0 hence this shows the incidence relation of c with the plane p given by the second row of the projection matrix over here hence this means that the camera center it must also lie on the plane p2 as well as that the line here which is the x-axis it must also lie on the plane p2 uh this proves our definition over here that p2 which is the second row of the camera projection matrix is it's the axis plane that contain the line of x-axis as well as the camera center now the last thing that we can get from a camera projection matrix is the principal point so since we have defined uh and this principle point is what we have defined earlier on is defined to be the point that intersects the principle axis the point of intersection of the principal axis and the image plane so this is the principal point over here and we have seen earlier on that p3 the third row of the camera projection matrix p1 transpose p2 transpose and p3 transpose so the third row over here gives us the principal plane well and what this means is that the first three element of this principal plane essentially gives us the direction of the normal vector of this principle plane so this is our principal plane over here which is given by p3 transpose and that means the direction of the principal axis which is perpendicular to the principal plane is essentially given by the first three elements of the third row of the camera matrix and we can also treat this direction over here as an ideal point that is given by the direction of the principal axis of our camera coordinate frame this ideal point over here in the direction of the principal axis given by p hat 3 over here this point here actually projects onto the image at the principal point this can be mathematically given by this projection over here where this is the p3 head here represents the ideal point and it's projected by the projection matrix onto x0 which represents the principal 
point. we can rewrite this relation as capital M multiplied by m3, where capital M here represents the first three by three entries of the camera projection matrix and m3 represents the third row of this three by three M matrix, which is rewritten as m1 transpose, m2 transpose and m3 transpose over here. earlier on, in the camera projection matrix that we have defined from the basic pinhole camera model, we have seen that mathematically any point in the 3d space can be mapped via this projection matrix onto the image, but in reality only half of these points in the 3d space, in particular those lying in front of the camera, can be observed on the image. we can define this half of the space using the principal axis vector, and the principal axis vector is given by v equals the determinant of M multiplied by the last row of M, where M is the first three by three block of the camera projection matrix written as P = [M | p4]. we have seen that m3 here really defines the direction of the principal axis: p can be rewritten into rows p1 transpose, p2 transpose and p3 transpose, and the last row represents the principal plane, and we have said earlier on that the first three entries of the principal plane denote the direction of its normal vector, which we earlier defined as p hat 3. now, if we rewrite the projection matrix into M and p4, the first three elements of the last row are exactly the last row of the M matrix, which we denote by m3 transpose, which means that m3 is indeed the direction of the principal axis that we have seen earlier on. however, we must also note that this vector m3 is only defined up to sign with respect to a world frame; we do not know whether it is pointing in one direction or the other. this ambiguity can be resolved using the sign of the determinant of M, which is essentially a signed quantity. if we write M as m1 transpose, m2 transpose and m3 transpose, we can treat the first row as i, j and k, and then computing the determinant of this is equivalent to taking the cross product of the other two rows. geometrically, what this means is that i have two vectors on the principal plane, and the cross product of these two gives me another vector whose sign tells me the direction; if one of the vectors points the other way, the determinant is going to be negative and the resulting vector points in the opposite direction. so by doing this i will be able to figure out which way the principal axis is actually pointing and hence the set of points that is lying in front of the camera.
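Before the summary, here is a minimal numpy sketch of how these properties can be read off a projection matrix. The intrinsics K and pose R, t below are made-up values for illustration only, not numbers from the lecture.

```python
import numpy as np

# build a finite projective camera P = K [R | t] from made-up intrinsics/extrinsics
K = np.array([[800.0, 0.5, 320.0],
              [0.0, 820.0, 240.0],
              [0.0, 0.0, 1.0]])            # fx, fy, skew s, principal point
theta = np.deg2rad(20.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.1, -0.2, 2.0])
P = K @ np.hstack([R, t[:, None]])          # 3x4 projection matrix

M, p4 = P[:, :3], P[:, 3]                   # P = [M | p4]

# camera center: right null space of P, equivalently C = -M^{-1} p4
C = -np.linalg.inv(M) @ p4
print(np.allclose(P @ np.append(C, 1.0), 0.0))   # True: P C = 0

# columns: vanishing points of the world axes and image of the world origin
v_x, v_y, v_z, origin_img = P[:, 0], P[:, 1], P[:, 2], P[:, 3]

# third row of M: principal point x0 = M m3, principal axis v = det(M) m3
m3 = M[2, :]
x0 = M @ m3
x0 = x0 / x0[2]                             # inhomogeneous principal point
principal_axis = np.linalg.det(M) * m3      # sign fixed by det(M)

print("camera center:", C)
print("principal point:", x0[:2])
print("principal axis:", principal_axis / np.linalg.norm(principal_axis))
```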
so a summary of the properties that we can obtain from the camera projection matrix is given by this table over here. we have seen that the camera center is essentially given by the right null space of the projection matrix, since we know that the camera center has to be projected onto the undefined point zero zero zero. we will also see that this null space can be written in terms of the inverse of the M matrix, where M is the first three by three block of the projection matrix; since this is a finite camera, M is an invertible three by three matrix, and the center is minus M inverse p4. we can actually verify this: if we take p multiplied by the homogeneous vector formed by minus M inverse p4 and 1, this essentially gives us minus p4 plus p4, which has to be the zero vector, so it satisfies the null space equation. then towards the end of the lecture we will also look at the camera at infinity, where the first three by three block of the camera projection matrix is no longer invertible and the camera center lies at infinity, given by this other equation over here; we'll look at this in more detail in a moment. we also looked at the columns of the camera matrix: the first three columns define the three vanishing points that correspond to the directions of the axes of the world frame, and the fourth column corresponds to the projection of the origin of the world frame onto the image. we've also seen that the third row of the camera matrix represents the principal plane, and there are two additional axis planes, defined by the first and second rows of the camera projection matrix, that contain the camera center as well as the y-axis or the x-axis of the image coordinate frame. we also looked at the principal point: essentially this is the projection of the ideal point given by the normal vector of the principal plane, which is the first three entries of the third row. then finally we looked at the principal ray: this principal axis defines the direction in which the camera is looking, and by defining this direction we'll also be able to figure out the half space of points lying in front of the camera that can be projected onto an image for observation |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_10_Part_2_StructurefromMotion_SfM_and_bundle_adjustment.txt | so after having computed the scene graph, the next thing to do in large scale 3d reconstruction would be structure from motion to get the sparse 3d reconstruction. in general there are three paradigms of structure from motion: incremental structure from motion, global structure from motion, as well as hierarchical structure from motion. we'll look at incremental structure from motion in more detail, and i'll just briefly go through global and hierarchical structure from motion in this lecture. as the name implies, it does structure from motion in an incremental way. first we will choose from the scene graph any two views where the translation is non-zero; this means that we omit any pair of views related by pure rotation, and this can be checked from the two-view geometry that we used to compute the scene graph. there are many pairs of images in the scene graph that might fulfill this first criterion, and the best choice out of all these possible choices would be the one that has the highest inlier count. the reason is that we learned in an earlier lecture that the computation of the fundamental or essential matrix is essentially a least squares computation, and a least squares computation will be much more accurate when we have more observations from the correspondences; hence we want to choose the initial pair, which we also call the seed pair of images, for our incremental structure from motion as the one with the highest inlier count. after we have chosen this seed pair, the next thing to do would be to use the eight-point algorithm to compute the fundamental matrix or the essential matrix for a non-planar scene; alternatively we can use the four-point algorithm to compute the homography for a planar scene. in both cases we can decompose the fundamental matrix, essential matrix or homography into the camera matrices for these first two views, and we will also set the scale to one. what this means is that after computing the fundamental matrix, or especially the essential matrix which we can decompose into rotation and translation, we'll simply set the scale of the translation to one; in the case of the fundamental matrix, we recall from that lecture that it can be decomposed into two camera projection matrices p and p prime, where we simply fix the scale of p prime to one. the next step, after we have obtained p and p prime, the two camera projection matrices of the seed views, is to do a triangulation based on the inlier correspondences from RANSAC, the robust estimation of the fundamental matrix, essential matrix or homography, and we'll do this using the linear triangulation algorithm that we have seen earlier on to get the 3d points. so for these two views we have, for example, x_1j in the first view, where we use i and j to denote the view index and the point index, and its correspondence in the second view would be x_2j, and we make use of these image correspondences as well as the two camera projection matrices to do a linear triangulation to get the 3d point X_j. once we have done this we will apply bundle adjustment, which we will talk about in detail later, to refine the camera matrices as well as the 3d points by minimizing the reprojection errors.
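The linear triangulation step mentioned here can be written down compactly. The sketch below is a generic DLT triangulation in numpy (the variable names and toy cameras are my own, not from the lecture), recovering a 3D point from one correspondence and two projection matrices.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.

    P1, P2 : 3x4 camera projection matrices of the two views.
    x1, x2 : corresponding image points (x, y) in the two views.
    Returns the inhomogeneous 3D point X such that x1 ~ P1 X and x2 ~ P2 X.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],   # x * (p3^T X) - (p1^T X) = 0 for view 1
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],   # same two equations for view 2
        x2[1] * P2[2] - P2[1],
    ])
    # the homogeneous solution is the right singular vector of A with the
    # smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X_h = Vt[-1]
    return X_h[:3] / X_h[3]

# toy usage: two cameras looking at a known point
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point(P1, P2, x1, x2))   # ~ [0.3, -0.2, 4.0]
```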
subsequently, after we have these first two views and the reconstructed 3d points, we will add more views into the reconstruction, and this is also why this paradigm of structure from motion is called incremental structure from motion: we start off from the two seed views and then we incrementally add new views into the reconstruction. an example here would be that we start off with p1 and p2, reconstruct all these 3d points with the seed views p1 and p2, and then do image correspondences for p3, which is linked to p2 in the scene graph. for example, let's say i choose this as my seed pair, and this is my p1 and this is my p2; i will then choose, from all the other views that link to p2 in the scene graph, one view, and suppose i chose this view over here because it is linked to p2, i will call it p3 and add it to the structure from motion pipeline. once i have decided on p3, the third view from the scene graph, the next thing i would do is to establish the 2d-2d correspondences, and this is done by image correspondences: i can extract the features from here, for example, and then establish these correspondences from p3 to p2. once i have established the 2d-2d correspondences, i can see from these correspondences that, for example, these two points correspond, and i know that the corresponding point in the second view was used to reconstruct this particular 3d point over here; from this relation i'll be able to establish a 2d-3d correspondence, and by doing this for all the other 2d image features between p3 and p2 i'll be able to establish a set of 2d-3d correspondences from p3 to the existing 3d points in the scene. once we have established the 2d-3d correspondences, we can use the PnP algorithm to compute p3, the pose of the third image with respect to the 3d scene, and this would be with respect to the world frame as well, since we are expressing all the 3d coordinates of the points with respect to a 3d world frame. once we have done this, the next thing we can do is to establish other correspondences between p2 and p3 such that we can do linear triangulation on these point correspondences; since p3 is now already computed by the PnP algorithm, we can use it to triangulate additional points and add them to the scene. we can do this for subsequent views, for example a fourth view, where we establish the 2d-2d correspondences first and use them to establish the 2d-3d correspondences to compute p4 using the PnP algorithm; similarly we can find additional correspondences between p3 and p4 and do linear triangulation to get more points, and we do this for many other views until we have covered every image in the scene graph.
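The registration of a new view from 2D-3D correspondences (the PnP step above) can be solved in several ways. As a rough stand-in, the sketch below estimates a full 3x4 projection matrix by linear DLT resection; this is not the calibrated PnP solver the lecture refers to, but it illustrates the same idea, and all names and numbers are made up.

```python
import numpy as np

def resect_camera_dlt(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P from n >= 6 2D-3D correspondences
    by linear least squares (DLT resection). This is an uncalibrated stand-in
    for PnP: each correspondence contributes two rows of the system A p = 0."""
    A = []
    for X, x in zip(points_3d, points_2d):
        Xh = np.append(X, 1.0)
        A.append(np.concatenate([np.zeros(4), -Xh, x[1] * Xh]))
        A.append(np.concatenate([Xh, np.zeros(4), -x[0] * Xh]))
    A = np.array(A)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)             # P up to scale

# toy check: recover a known camera from noiseless correspondences
P_true = np.array([[700.0, 0, 320, 10],
                   [0, 700.0, 240, -5],
                   [0, 0, 1, 2.0]])
X = np.random.default_rng(0).uniform(-1, 1, (8, 3)) + [0, 0, 5]
x = (P_true @ np.c_[X, np.ones(8)].T).T
x = x[:, :2] / x[:, 2:3]
P_est = resect_camera_dlt(X, x)
print(P_est / P_est[2, 3] * P_true[2, 3])   # matches P_true up to scale
```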
now, once we get the initial reconstruction from the two-view fundamental matrix, essential matrix or homography, as well as PnP to add more views in the incremental structure from motion, finally we can do a non-linear refinement in the multi-view setting over the multi-view camera poses as well as the 3d structure with what we call bundle adjustment. essentially what this means is that we are going to minimize the reprojection error: for every single camera pose, we project the 3d points that are seen in that particular image and minimize the error with respect to the corresponding observations of those points in that view, over the camera matrices as well as the 3d points; we'll talk about this in more detail in the next few slides. the second paradigm of structure from motion would be global structure from motion, and what this means is that in the first step we will still make use of the scene graph, but the question can be posed in this way: for every edge in the scene graph we compute the relative pose, represented by r and t, so these are all the relative poses between every pair of images that is linked by the scene graph edges. here we can easily compute these relative poses from either the fundamental matrix, the essential matrix or the homography, depending on the two-view configuration; in the usual case we wouldn't use the homography, because most scenes in the 3d world are non-planar and the homography can only be applied to planar scenes, and a preferred, much more accurate choice is actually the essential matrix when the camera is calibrated, because we wouldn't have a projective ambiguity here. so once we have computed all the relative transformations, which we can decompose from the essential matrix, so the essential matrix gives us r and t for every one of these pairs of images in the scene graph, we obtain this particular graph over here, where each one of the nodes represents a camera location, so there is an absolute pose at each one of the points here, and what we are given are the edges, the relative poses r and t between the nodes. let's say i denote this as r1 and t1, for example; this would be the global pose of that particular camera with respect to a global frame fw, and this would be r2, t2, and the relative edge over here would be denoted as r12 and t12 between the two views one and two. similarly we can denote this over here as r3, t3, the absolute pose with respect to the world frame, and the relative pose between these two views would be denoted as r23 and t23.
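Since the relative poses here are pulled out of the essential matrix, a standard SVD-based decomposition sketch may help; the four candidate (R, t) pairs are returned and the cheirality check (triangulated points must lie in front of both cameras) that picks the right one is omitted. This is a generic recipe, not code from the lecture.

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix into the four candidate relative poses
    (R, t); t is recovered only up to scale, here with unit norm."""
    U, _, Vt = np.linalg.svd(E)
    # flip the columns/rows attached to the zero singular value so that the
    # recovered rotations are proper (det = +1); E itself is unchanged
    if np.linalg.det(U) < 0:
        U[:, -1] *= -1
    if np.linalg.det(Vt) < 0:
        Vt[-1, :] *= -1
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                      # left null vector of E, up to sign/scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```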
so we'll do this for every single node in the graph, and the problem now becomes an absolute pose problem: given all the relative measurements, all the relative rotations and translations between every pair of views in our scene graph, can we find a way to compute the absolute poses of all the cameras in the scene graph? the way to do this can be summarized as follows. we first estimate the global rotations by minimizing this cost function over here. what we are trying to do is that ri and rj here represent the global rotations of any pair of images, for example r1 and r2, so ri is this guy over here and rj is this guy over here, and by computing the relative transformation, which is given by ri rj transpose, we'll be able to compute the relative rotation between these two poses, which should agree with r12; since r12 is what we observe, and these are the unknown global poses that we have to find, in other words what we want to do is to find all the global poses such that, when we compute the relative transformation between these two rotations in this way, its difference from the observed relative rotation between views i and j is minimized. in order to solve this optimization, the first step is to convert the rotation matrix, which is a three by three matrix in the special orthogonal group, in short the SO(3) group, into omega, which is a three-dimensional vector, such that the three by three rotation matrix is given by this formulation over here, the exponential of the skew-symmetric matrix obtained from omega, which is in the R3 space; we call this the exponential map, that is, the mapping function that takes in omega and outputs R. the inverse of this exponential map is denoted as the logarithmic map, where the three by three skew-symmetric matrix formed by omega is equal to the log of R. more specifically, what we wish to do here is, given this rotation which is a three by three matrix, to reformulate this cost function in terms of omega, where omega is just a vector of dimension 3, so that we can do the optimization over these three parameters. in other words, we simply compute the logarithmic map: given rij, for example, we compute the logarithmic map and convert it into omega, and similarly for the unknowns ri and rj, all the global rotations, we can put them through the logarithmic map to get omega; more specifically, given a rotation matrix that looks like this, which is a three by three matrix, omega can be obtained from this equation over here. after we have converted the rotation matrices into omega using the logarithmic map, we can rewrite the relation rij equals ri rj transpose into a linear equation in terms of omega, where omega ij is the equivalent of rij after applying the logarithmic map, and we get omega ij which can be written as omega j minus omega i, representing the relative transformation between the two global rotations.
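A small numpy sketch of the exponential and logarithmic maps described here (Rodrigues' formula); the edge case where the rotation angle is near pi is not handled, so treat it as illustrative only.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix [w]_x built from a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_map(w):
    """Exponential map so(3) -> SO(3) via the Rodrigues formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def log_map(R):
    """Logarithmic map SO(3) -> so(3), returning the rotation vector omega."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    # off-diagonal differences give the rotation axis, scaled to length theta
    return theta / (2.0 * np.sin(theta)) * np.array([R[2, 1] - R[1, 2],
                                                     R[0, 2] - R[2, 0],
                                                     R[1, 0] - R[0, 1]])

# round-trip check
w = np.array([0.1, -0.2, 0.3])
print(np.allclose(log_map(exp_map(w)), w))   # True
```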
what we can do here is to rewrite omega j and omega i into this linear form over here, where omega global is simply a vector stacking omega 1, omega 2 and so on all the way until omega n, where n denotes the number of global poses in the scene graph, and a ij is simply made up of identity blocks, where a block is the identity (with the appropriate sign) if it corresponds to the selection of that particular pose at that location, and zero otherwise. now this minimization over rij can be solved as a least squares problem, because we have simply reformulated it into a linear system of equations, and it can be rewritten as A omega global equals omega relative, where A and omega relative are made up by stacking all of these equations, all the relative poses, together; every rij gives us one equation, and we can do this for every single edge that we have in our scene graph, and by stacking all these equations together we get A omega global equals omega relative. the unknowns here are omega global, which we can solve for using linear least squares, and once we have computed omega global we can convert it back into rotation matrices by using the inverse of the logarithmic map, which is the exponential map, as shown in this equation over here. similarly, we can solve for the global translations using a similar method, but one complication is that the scale of every relative translation is unknown. we can formulate it as tij, the relative translation between any two views, let's call them ti and tj; this relative translation tij is computed only up to scale, and we will normalize that scale to one, i.e. after we compute t from the essential matrix, for example, we simply normalize this observation to have unit norm. now we want to make sure that the difference between the observed tij and the relative translation computed from the global poses ti and tj is minimized, so we formalize this as taking ti minus tj and comparing it with the tij observed from the computation of the essential matrix, where of course we also have to normalize this ti minus tj; we follow the same method as before, as in the rotation case, and cast this into a least squares problem of Ax equals b, and we solve for x, which consists of all the unknown global translations. then, after we have obtained all the global poses in the scene graph, the next thing we can do is to compute the 3d scene points using the linear triangulation algorithm with the image correspondences as well as all the global poses that we have obtained from the global structure from motion technique described earlier, and once we have obtained all the camera poses as well as the 3d points we can apply bundle adjustment to refine the 3d reconstruction; more detail of bundle adjustment will be discussed later. the third paradigm which we will look at would be hierarchical structure from motion; i will very briefly go through this technique in today's lecture. the first thing that we do here is, from the scene graph, hierarchical clustering to get clusters of views that are closely related to each other.
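A rough numpy sketch of the rotation-averaging step just described: relative rotation vectors are stacked into a linear system, the first camera is pinned as the gauge, and the solution can be mapped back to rotation matrices with the exponential map from the sketch above. It assumes the linearization omega_ij ≈ omega_j − omega_i stated in the lecture, which only holds approximately for large rotations; all names are my own.

```python
import numpy as np

def average_rotations(n_views, relative_omegas):
    """Solve A @ omega_global = omega_rel in the least squares sense.

    relative_omegas : dict mapping (i, j) -> 3-vector log(R_ij), the measured
                      relative rotation, modeled here as omega_j - omega_i.
    The first camera is fixed to omega_0 = 0 to remove the global gauge.
    Returns an (n_views, 3) array of global rotation vectors.
    """
    rows, rhs = [], []
    for (i, j), w_ij in relative_omegas.items():
        for k in range(3):                      # one scalar equation per axis
            a = np.zeros(3 * n_views)
            a[3 * j + k] = 1.0                  # +omega_j
            a[3 * i + k] = -1.0                 # -omega_i
            rows.append(a)
            rhs.append(w_ij[k])
    # gauge fixing: pin the first camera at the identity rotation
    for k in range(3):
        a = np.zeros(3 * n_views)
        a[k] = 1.0
        rows.append(a)
        rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol.reshape(n_views, 3)

# usage sketch (hypothetical measurements w01, w12, ...):
#   omega_global = average_rotations(4, {(0, 1): w01, (1, 2): w12})
#   R_i = exp_map(omega_global[i]) for each view i
```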
each one of these clusters can be seen as a strongly connected component whose views are closely related to each other. we apply hierarchical clustering where the edges are weighted by the number of inliers between a pair of images, computed from the robust two-view geometry that we have discussed earlier on. once we get all these clusters, the next thing we do in hierarchical structure from motion is to run incremental structure from motion on each one of these clusters independently; here is an illustration where for this cluster we apply incremental structure from motion to get a reconstruction of it as well as the camera poses with respect to a world frame which we call fw1 over here, and then we do the same on the next cluster, so all of its 3d reconstruction as well as the camera poses would be with respect to another local world frame, and so on and so forth. then finally, once we have reconstructed all the 3d points and the camera poses of each respective cluster, the final step is to merge all these clusters together to form a global 3d reconstruction, and this merging of the clusters is done using a similarity transform. this is similar to the absolute orientation algorithm that we have seen earlier on when we talked about the PnP problem: when we have all the 3d-3d correspondences, which can be established from the 2d-2d correspondences between clusters, we can apply the absolute orientation algorithm to get the rotation and translation, and finally we just need one point correspondence to compute the scale difference between the two clusters before we can merge them all into a single consistent 3d model that looks something like this here |
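The cluster merging can be illustrated with the closed-form similarity (Umeyama-style) alignment below, which estimates rotation, translation and a single scale from 3D-3D correspondences in one shot. This differs slightly from the lecture's recipe of absolute orientation plus a one-point scale estimate, so take it as an alternative sketch with made-up names.

```python
import numpy as np

def similarity_align(src, dst):
    """Closed-form similarity transform (s, R, t) with dst ≈ s * R @ src + t.

    src, dst : (n, 3) arrays of corresponding 3D points from two clusters.
    Umeyama-style estimate via SVD of the cross-covariance matrix.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # keep R a proper rotation
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src    # scale between the clusters
    t = mu_d - s * R @ mu_s
    return s, R, t
```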
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_2_Part_2_Rigid_body_motion_and_3D_projective_geometry.txt | so now let us look at the projective geometry and transformation in the 3d space after we have looked at the se3 transformation for rigid body motions and many properties of the entities in the projective three space are actually straightforward generalization of those in the projective 2d space an example here is that we'll see in a few slides time that the homogeneous coordinates of a point in the p3 space euclidean space is actually augmented simply augmented with an extra dimension in the p2 space an example here is that in the p2 space let's say x is in the projective two space then this would be x1 x2 and 1 here so in the p3 space x in the p3 space here would be simply the augmentation of x1 x2 with the third dimension and one here to represent this so it was a straightforward extension of this nonetheless additional properties may appear by the virtual of the extra dimension an example here would be two lines always intersect on a projective plane so as soon as it's on the plane they will always intersect at a certain point even at infinity they are going to intersect at the infinite point as what we have seen in the previous lecture but in the 3d space they did not necessarily intersect so in the 3d space even though the lines might not be parallel they might not necessarily intersect so this line here could be under this line in the in the z-axis according to the z-axis and they might not intersect at all and let us first uh define the how to describe a point in the projective free space as i have mentioned earlier that this would be a four vector given by x1 x2 x3 and x4 where the scale doesn't matter so this we can convert the homogeneous representation here with a four vector into a inhomogeneous one by simply dividing by x for the last element of this homogeneous coordinates so this property is the same as what we have seen in the p2 space for a representation of the points where it's simply x 1 x 2 and x 3 in the p 2 space and we'll simply take x 1 divided by x 3 and x 2 divided by x 3 to convert it into the inhomogeneous coordinate and homogeneous point with the last element to be equal to 0 represents the point at infinity so this would be x1 x2 3 and 0 and it's analogous to the the p 2 space where at the point infinity where the last element here would be equals to 0. 
so we simply extrapolate this thought by adding us one more dimension to the homogeneous representation and the projective transformation acting on the p3 space is a linear transformation on x by a non-singular four by four matrix so this definition here is also a direct extension of the transformation the linear transformation on the p2 space as what we have seen in the previous lecture h x prime here will be given by x multiplied by h multiplied by x where h here is a 3 by 3 matrix the x and on the 3 by 1 homogeneous coordinate so here uh h will become four by four uh this is a direct extension of this uh p2 space or linear transformation this would be a four by four matrix and the x over here would be a four by one uh vector that represents the homogeneous coordinates of a point in the projective three space and uh here the metric h is homogeneous and has 15 degrees of freedom all together it's four by four so there are 16 elements in this h matrix and less one for scaling this is similar property as the homogeneous transformation or the linear transformation of h in the p2 space and as in the p2 space the map is a coordination that means that in the 3d space here if i have any line and or i have any points that are called linear that forms a line after mapping each one of the points by h over here uh it's going to form uh it's also going to sit in the line be collinear in the line and this preserves the incidence relation of uh such as the intersection of a point on the or with a line or a line with a plane in the order of contact and the we'll also define the plane in the 3d space so this is an entity that doesn't exist in the 2d space because everything is actually projected onto the plane in the 2d space in the projective 2d space but in the projective 3d space this would become an entity itself and a projective a plane in the projective 3d space may be written in this form here as a this polynomial equation over here where at the coefficient pi 1 pi 2 pi 3 and pi 4 and the x y z coordinates over here and homogeneousing this would simply be dividing x by x 1 over x 4 and y x 2 over x 4 and then z x 2 x 3 over x 4 by substituting these three terms here into this equation here this is what we will get here in the homogeneous coordinate form and we can factorize this by uh putting pi 1 to pi uh 4 here into the into the vector here and then uh we will get this uh we will get this homogeneous equation over here the dot product of this pi of the plane uh multiplied by the point x would be equal to 0 and this expresses the point is in the plane or on the plane when the dot product of this 2 is equals to 0 hence this this is similar to the line point uh dot product in the p2 in the p2 space here this is analogous hence uh we can see that since uh the dot product of the line and the point tells us the duality constraint or the duality principle of the line and the point we can also see that these two are interchangeable hence plane and points are dual in the 3d space and we can define that only three independent ratios of this pi 1 pi 2 pi 3 and pi 4 of the plane coefficients are significant therefore a plane has three degrees of freedom in the 3d space and the first three components of pi corresponds to the plane normal this means that pi 1 pi 2 pi 3 it makes up the vector the normal vector of this particular plane here and using the inhomogeneous notation which we have seen earlier that it's a pi 1 x plus pi 2 y plus pi 3 z plus pi 4 equals to 0. 
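A tiny numeric check of the point-plane incidence relation just described; the plane coefficients are arbitrary example values, not from the lecture.

```python
import numpy as np

# a plane pi = (pi1, pi2, pi3, pi4): homogeneous points X on it satisfy pi . X = 0,
# and the first three entries of pi form the plane normal
pi = np.array([1.0, 2.0, -1.0, 3.0])        # arbitrary example plane

# construct a point on the plane: pick x, y freely, solve the plane equation for z
x, y = 0.5, -1.0
z = (pi[0] * x + pi[1] * y + pi[3]) / -pi[2]
X = np.array([x, y, z, 1.0])
print(np.isclose(pi @ X, 0.0))              # True: incidence relation holds

# scale does not matter (homogeneous representation): 5*pi is the same plane
print(np.isclose((5.0 * pi) @ X, 0.0))      # True

n = pi[:3]                                   # the plane normal vector
print(n)
```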
so this homogeneous uh so in homogeneous notation the first 3 over here can be factorized into this form here where x is x y z and x 4 is 1 and then d here is equals to pi 4 in this formulation over here so this the first term over here would be the normal vector and the last term here can be used d here can be used to form the distance of a plane from the origin so here let's say my origin the reference plane uh point is here so this will give me the normal vector or that is the the the distance the distance between the origin and the normal form by this plane over here and this is given by d divided by the normal vector the norm of the normal vector and under the point transformation of x prime equals to h multiplied by x the plane transformed in this way the plane of pi prime is equals to h inverse transpose multiplied by pi so i won't uh derive this but i'll leave this derivation to to you to prove basically we can make use of the relation of pi transpose x equals to 0 to prove this and in the projective 3d space there are numerous geometrical relations between the planes points and lines so a plane is defined uniquely by the join of three pla three points that are not collinear so any three points in the 3d space will always define a plane and all the joint of a line and a point they are not incidents so uh if i have a line and a plane they are not incidents on each other i can use this tool to define a plane and we'll see in more detail later on and two distinctive planes always intersect at a unique line so this is analogous of a line on the 2d projective plane that you always intersect at a point and three distinctive planes will intersect the unique point so we'll see the explanation for all these three points later and so first let's let us look at the definition of three points defining a plane suppose that we have three points x1 x2 x3 or generally denoted by x i and they are incident uh with the plane pi this means that uh the three points are sitting on the plane in any case any three points that are sitting on the plane are definitely not going to be collinear and vice versa so if they are not collinear they will also imply that it's always going to form a plane where each point here would satisfy this point plane equation over here the incidence relation between the point and plane equals to 0 for 1 i equals to 1 2 and 3 and stacking all of them together this would be the three points together this would be the equation that i will get so if you take this out you'll be you'll get three equation here uh x1 transpose pi equals to 0 x2 transpose pi equals to 0 and x3 transpose of pi equals to zero here and stacking them all this is the relation that we will get and the three by four matrix of this has a rank of uh three when the points are in the general position this means that they are linearly independent i would have three unique constraints here hence this matrix would have full rank over here and the plane pi defined by the points is obtained uniquely up to scale as the one-dimensional right null space since this is equals to zero this is equivalent to a homogeneous linear equation it's a three by four matrix by a four multiplied by a four by one vector equals to zero so this is equivalent to the famous equation in linear algebra the homogeneous linear equation of ax equals to 0. 
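As the next part of the lecture explains, the plane through three points is the right null space of the stacked point matrix; a minimal sketch of that computation via the SVD, with made-up example points:

```python
import numpy as np

def plane_from_points(X1, X2, X3):
    """Plane through three homogeneous points: the right null space of the
    3x4 matrix stacking X1^T, X2^T, X3^T (rank 3 for points in general
    position), taken here as the last right singular vector."""
    A = np.stack([X1, X2, X3])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

# three non-collinear points (homogeneous coordinates)
X1 = np.array([1.0, 0.0, 0.0, 1.0])
X2 = np.array([0.0, 1.0, 0.0, 1.0])
X3 = np.array([0.0, 0.0, 1.0, 1.0])
pi = plane_from_points(X1, X2, X3)           # ~ (1, 1, 1, -1) up to scale
print(np.allclose([pi @ X1, pi @ X2, pi @ X3], 0.0))   # incidence holds
```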
so to solve for x here in the null space i need to solve for the null space of a as the null space has the right null space of a and this can be uniquely obtained since it has a rank of three and in the case where this three by four matrix here has a rank of only two this means the points are linearly dependent on each other then the null space would be two dimensional the null space of x one x two and x three multiplied by pi equals to zero would have a rank of will have a null space of two dimension then this means that the points are collinear this means that all these three points here they form a single line and hence they are dependent on each other and this defines a pencil of planes with the line of collinear points as the axis so since all these three points must lie on the plane any plane and we can see that the plane can actually rotate around this particular line so any plane that is on this pencil of planes rotating around this line over here would contain these three planes in the or will contain these three points in the plane and hence this is a rank division configuration and where the null space is only two dimensional and the second property that we talk about is that the three planes will define a point so uh in the case where the rank of a here is a this guy here is equals to three then this means that all the three planes are unique and they intersect at a certain unique point over here and in the case where it's a rank division uh it's equals to two the rank is equal to two then uh we can see that these are the few these are the four configurations that uh might be happening so uh it means that there is no unique point of intersection but they are most they are likely to intersect at either a line or a pair of line or three lines over here and then there's also the configuration where the rank equals to one where the plane are simply parallel to each other and in a case where the rank of a equals to three we can see that three planes you intersect at a unique point of here we given by x over here so this what it means is that this point here will be sitting on all the three planes and this would define the point plane duality principle and the points on the plane x may be written as this equation over here where x here is the point that is in the plane of pi so here i'm defining a point denoted by a capital x here and this can be denoted as the linear combination of m a m matrix a 4 by 3 matrix over here multiply by a vector so in this case here we can see that this 4 by 3 matrix is a of m so this matrix of m multiplied by x 1 x 2 and x3 it becomes a linear combination so this means that if i have a vector of m 1 and then a vector of m 2 and a vector of m 3 so this is a four by three this means that each one of this m is actually a four by one vector i would end up do the multiplication of this which is x over here would be simply a linear combination of m m1 plus m2 plus x3 multiplied by m3 over here and this is going to give me my point of x over here and a linear combination of m this means that m is defining the whole subspace within the plane any point within the plane is going to be a linear combination of any three of the any three of the uh this this basis vector of m1 m2 and m3 three over here and they are going to form a linear subspace in the uh in in the space of subspace of pi and since the linear combination of m is in the subspace of pi this means that the dot product of this uh the basis of m here is going to still be living in the subspace of pi here 
hence the dot product of pi with m is always going to be 0 because it's this means that the m over here intuitively or geometrically since m over here is the linear combination of m is going to give a point that lives in the plane on the plane a poi plane relation uh incidence relation will tell us that the dot product of the plane and the point is going to be equals to zero hence uh x here is just a coefficient or uh it's just a linear coefficient that tells us the linear combination of m to be in the any point in the plane over here and m here is not unique suppose that the plane is given by the four by one vector here a b c d and a is non-zero then m can be simply written as this uh form over here where p is given by this guy so i'll leave it to you to prove that to just simply substitute this pi over here and transpose multiply by m to show that this is indeed equals to 0 over here and this is m transpose equals to this is indeed a solution to this and this now after looking at the definition of the plane in the projective three space let's look at the line definition in the projective three space a line is defined by the joint of two points or the intersection of two planes so in this case here we can see that the line is here defined by the intersection of any two planes or it can be defined by the join of any two points over here and line have four degrees of freedom in the 3d space so a general sketch of proof would be why it has four degree of freedom is that we can see that after we fix this plane over here after we fix this plane over here this line here that is defined by the intercept the two planes over here can be defined by two points on the on this line over here and these two points each one of these would have two degrees of freedom on the plane it can move anywhere x y direction in the two planes and this will also move x y direction hence 2 plus 2 will give us 2 degrees of freedom we can also think of this as the two points here on the projective plane over here so in this case here this could also move x y and this could also move x y and hence the line has four degrees of freedom and but it's awkward to represent the free space line with a homogeneous five vector since it has four degree of freedom so we'll look at two alternative representations here instead of representing a line with five vectors here suppose a and b are two non-incidence space points in the 3d space the line joining these two points is represented by the spam of the row space of two the two by four matrix w composed of a transpose and b transpose as this so the span of the row space the span of this guy over here a and b here would be given would give us the points that are joining there are lines on this particular line here and we can see that the span of w transposer is indeed given by the linear combination of a and b so since a and b here are two points on the line any linear combination of this any linear combination of these two is going to give us any point on this particular line as well since the direction of a and b is pointing towards uh is a line with the line with this line so any linear combination is going to be in the subspace of the line and hence the span of w transpose is going to be give us a pencil points on the line and the span of the two dimensional right now space of w is the tensor of planes with the x line as the axis and what this means is that mu and a over here let's re denote this as a transfer a prime and another point a prime and another point b prime so this is 
like uh we are moving along this line here by a certain magnitude by a certain scale of uh mu over here and we are moving along this line by a certain scale of alpha over here so it becomes a prime and v prime and the span of these two a prime and b prime is still going to be equal to the span of a and b over here hence they are going to define the same line so what this means is that suppose we have a plane uh p and another plane q over here which is the basis of the for the null vector of this uh w the uh that is representing the line over here then uh we would have w multiplied by p would be equals to zero since p is in the null space of w and that would be equal to 0. similarly w multiplied by q here is also going to be 0 since q is also in the null space of p and consequently we would have a transpose p equals to 0 and b transpose p equals to 0 given by the definition of this guy here so multiplying this would be equals to 0 and multiplying this will also be equal to 0 given by this guy here so that the plane is a p is actually a plane that containing the point both the point a and b since by this equation here shows the incidence relation of the a and the plane p as well as b and the plane of p here similarly q is also a distinctive plane since this mass or true as well as a transpose of q must be equals to b transpose of q equals to 0 since q is also a basis in the null space of w and q that means that q is a distinctive plane also containing both a and b given by the plane point incidence relationship over here and hence a and b here must lie on both of the planes p and q since it fulfills this incidence relation for p as well as for q hence it must be lying on both the plane p and q and so the line here defined by w here which contains a and b these two points of a and b must lie at the plane intersection hence it defines this particular line over here and any plane of the pencil with the line that axis is given by the span of this uh the two planes of p and q this means that uh any uh linear combination of the two planes would all be given us will be giving us another plane that is intersecting the two planes p and q here at the line of a and b and the dual representation of the line as the intersection of two planes p and q follows in the similar manner so we can define w star transpose as the dual previously we define w as a transpose and b transpose as the span over two points a and b now we are defining this as the intersection of the two planes p and q in the similar manner which we call w transpose here and this dual representation has the property that the span of w transpose is the pencil of planes of p and q so this is directly the following the duality principle where w here defined by a transpose and b transpose two points uh it's given by the span of this two a and b here so it's lambda plus a plus mu of b to be any point on this particular line defined by the two points over here and the span of the two-dimensional space of w is the pencil of points on the line so this also directly is a duality of the span of the two-dimensional null space of w uh defined by the two point is the pencil of planes on the line so the this shows that there is a duality principle between point and planes and the two representations are represented by w star w transpose equals to w multiplied by w star transpose equals to 0 since the the poi should lie on the line so this actually this actually gives rise to a transpose of p equals to b transpose of p equals to 0 since a and b should lie 
on the plane and as well as b a transpose of q plus b transpose of q should be equal to 0 as well now we can also see that the join and incidence relation can also be computed from the null spaces the plane of pi defined by the join of a plane a point x and line w so i have a line here which i denote as w and a point that are not coincidence with each other this means that the point doesn't lie on the line then i can use the line and this point w and x here to define the plane of pi over here and this is uh given by the null space of the uh of this uh matrix over here defined by w and x prime over here which is obtained by the null space of this guy this particular relation here can be easily proven here where since we define w over here as any two points this span of any two points which we call a and b this means that m over here is actually equals to a transpose b transpose and x transpose defined by these three points a b and x and therefore uh these three points transpose a b transpose and x transpose multiplied by pi should give us zero since these three points are lying on the plane itself and the duality of this also holds this means that i have a plane defined by pi transpose as well as w here w here is defined by two planes which we define this to be w star that is equals to p transpose q transpose as well as pi transpose here and then if we put it into multiplied by a point x which has to be lying on all the three planes so since this pencil of this line is defined by a pencil of planes this means that the point would have to define by pq and you will have it due to need to sit on pq as well as pi hence the dot product of this p transpose q transpose and pi transpose multiplied by x is going to be zero now let us look at the second way of representing line which we call the plucker line coordinate and the line coordinates are the six point zero vector is a six by one uh vector represented by l over here where the first three element is the direction of the line suppose that we have a line that is span by a and b two points of a and b over here and we will first compute the direction of this it's directly given by b minus a and that would be the three-dimensional vector that occupies the first three elements of the plucker line and the second the set of uh the the plucker coordinate or which is the last three coordinates of the plucker coordinate is simply given by the cross product of these two points so this is the equivalent to the movement moment vector that is uh given by the vector a and b over here the cross product of this so and the directional vector of this and this will form the plucker line uh coordinate we'll see more of this glucose line coordinates representation the usage of this tuker line coordinate representation when we look at the generalized camera where it's a non-central projection camera in the later lectures and now suppose that we have two lines l and l hats are the joints of the points a b as well as a and b hat respectively then the lines intersect if and only if so i have two points here two lines uh here uh which represented by a and b and then a hat and b had two respective lines here and then the two lines will intersect uh if the four points are coplanar this means that they're all lie on the plane this means that these two lines would become coplanar as well hence it's a equivalent to 2d the projective 2d space where the two lines will always intersect at a certain point and to check this we can see that the determinant needs to be of this a b 
The proof of this determinant condition is as follows: the determinant expands into a sum of two dot products. I will skip the full derivation here, but if you wish you can prove it for yourself by substituting A, B, A hat and B hat in their full vector form and working out the determinant. You will see that, up to sign, it equals the sum of two dot products, the direction of one line dotted with the moment vector of the other plus the direction of the other dotted with the moment vector of the first, where m and m hat are the moment vectors in the Plucker coordinates of the lines through A, B and A hat, B hat respectively. When the two lines are coplanar, meaning they lie on the same plane, this sum is zero, and since it equals the determinant up to sign, the determinant is zero as well. Other properties can be derived directly from the duality principle of a point and a plane. Suppose two lines are the intersections of the planes P, Q and P hat, Q hat respectively; then the determinant of the matrix formed by these four planes is equal to zero if and only if the lines intersect. This extends directly from the previous proof, where the lines were defined by the points A, B and A hat, B hat: since there is a duality between points and planes, the same relation holds when we replace the points with planes, as long as the lines defined by P, Q and P hat, Q hat intersect. Similarly, if the line L is the intersection of any two planes and L hat is the join of any two points, then the mixed relation also holds, since by duality one set can be replaced with points while the other remains planes. Now finally let us look at the definition of quadrics and dual quadrics. A quadric is a surface in the projective three space defined by the equation X transpose Q X equals zero. This is similar to the equation of the conics that we have seen in the previous lecture, except that instead of a three by three matrix and a three by one vector we now have a four by four matrix and a four by one vector. Q has similar properties to the conic C that we saw in the previous lecture, and often the matrix Q and the quadric surface it defines are not distinguished; we will simply refer to Q as the quadric rather than the quadric surface. As mentioned earlier, the quadric inherits its properties directly from the conics. We have seen in the previous lecture that a conic is a three by three symmetric matrix with five degrees of freedom, because it has six unique elements and, minus one for scale, is left with five degrees of freedom. In the case of a quadric, it is a four by four symmetric matrix, which means it has ten unique entries, minus one degree for scale, and is therefore left with nine degrees of freedom. As a result, nine points in general position are sufficient to define a quadric, and if the matrix Q is singular then the quadric is degenerate and may be defined by fewer points.
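The statement that nine points in general position define a quadric can be illustrated directly: each point gives one linear equation in the ten unique entries of Q, so nine equations leave a one-dimensional null space. A minimal numpy sketch, with hypothetical points sampled on the unit sphere, is shown below.

```python
import numpy as np

def quadric_from_points(pts):
    """Fit a quadric Q (4x4 symmetric, up to scale) to nine 3D points:
    each point contributes one linear equation X^T Q X = 0 in the ten unique entries of Q."""
    rows = []
    for x, y, z in pts:
        rows.append([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z, 2*x, 2*y, 2*z, 1.0])
    A = np.array(rows)                       # 9 x 10 design matrix
    _, _, Vt = np.linalg.svd(A)
    q11, q22, q33, q12, q13, q23, q14, q24, q34, q44 = Vt[-1]
    return np.array([[q11, q12, q13, q14],
                     [q12, q22, q23, q24],
                     [q13, q23, q33, q34],
                     [q14, q24, q34, q44]])

# nine hypothetical points on the unit sphere x^2 + y^2 + z^2 = 1
rng = np.random.default_rng(0)
pts = rng.normal(size=(9, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
Q = quadric_from_points(pts)
print(Q / Q[0, 0])                           # ~ diag(1, 1, 1, -1): the unit sphere is recovered
```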
This is a direct extension of the properties of a conic: we saw that if the conic is degenerate, meaning the rank of C is less than three, then it is composed of two lines or a repeated line, and it can be defined by fewer points, only two points for a repeated line or four points for a pair of lines, instead of five. It is the same case here: if Q is singular, then the quadric can be defined by fewer points. The intersection of a plane pi with a quadric is a conic. We can see this by taking a quadric surface Q and intersecting it with a plane, which gives a contour that is exactly a conic section. Recall that the points on the plane can be parameterized by the complement space: any point on pi can be written as X equals M small x, where M is a four by three matrix whose columns span the plane, pi is the null space of M transpose, and small x is the vector of coefficients of the linear combination of the columns of M that gives a point lying on pi, as we have seen earlier. Therefore points on the plane pi are on the quadric if they fulfil X transpose Q X equals zero, and we can directly substitute X equals M small x into this equation to get small x transpose M transpose Q M small x equals zero, where small x is the coefficient vector of the linear combination of the columns of M that forms the point on the plane, as defined in the previous slides. Here we can group M transpose Q M together and call it C: since M is four by three and Q is four by four, M transpose is three by four, so M transpose Q M is three by three, and small x is three by one. This is exactly the incidence relation for a point to lie on a conic, hence the intersection of the plane pi with a quadric defines a conic section. Under the point transformation X prime equals H X, a quadric transforms as Q prime equals H inverse transpose Q H inverse. This can be easily proven: if X prime transpose Q prime X prime equals zero, then this can be rewritten as X transpose H transpose, which gives X prime transpose, multiplied by Q prime multiplied by H X equals zero, hence we can collectively call H transpose Q prime H as Q to get the equation X transpose Q X equals zero.
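A small numpy sketch of these two facts is given below, assuming a hypothetical quadric (the unit sphere) and plane; the plane basis M is built from the orthogonal complement of pi, which is one convenient choice and not necessarily the parameterization used on the lecture slides.

```python
import numpy as np

def plane_basis(pi):
    """A 4x3 matrix M whose columns span the plane pi (pi^T M = 0),
    so points on the plane can be written as X = M @ x."""
    _, _, Vt = np.linalg.svd(pi.reshape(1, 4))
    return Vt[1:].T                                 # the 3 right singular vectors orthogonal to pi

def conic_of_intersection(Q, pi):
    """3x3 conic C = M^T Q M of the intersection of the quadric Q with the plane pi."""
    M = plane_basis(pi)
    return M.T @ Q @ M

Q = np.diag([1., 1., 1., -1.])                      # unit sphere x^2 + y^2 + z^2 - 1 = 0
pi = np.array([0., 0., 1., 0.])                     # the plane z = 0
C = conic_of_intersection(Q, pi)
print(C)                                            # a circle conic, up to the choice of basis M

# quadric transformation under a point transformation X' = H X (here a simple translation)
H = np.eye(4); H[:3, 3] = [1., 2., 3.]
Q_prime = np.linalg.inv(H).T @ Q @ np.linalg.inv(H) # Q' = H^-T Q H^-1
```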
Therefore Q equals H transpose Q prime H, and by moving the H terms across to make Q prime the subject, we get Q prime equals H inverse transpose Q H inverse. The dual of a quadric is also a quadric. Dual quadric equations are defined on planes instead of points: since there is a duality between points and planes, instead of defining the quadric by the points X that lie on it, we can define it in terms of the point-plane duality, and we get the dual quadric denoted by Q star. Q star is the adjoint of Q, or Q inverse if Q is invertible, where the adjoint of a matrix A is defined as C transpose, with C the cofactor matrix of A. Under the point transformation X prime equals H X, a dual quadric transforms as Q star prime equals H Q star H transpose; I will leave it to you to prove this, and the proof is done in the same way as shown here. Here are some common quadric surfaces: the ellipsoid, the hyperboloid of one sheet, the hyperboloid of two sheets, and the elliptic cone, which is a degenerate quadric whose matrix is rank deficient; there are also the elliptic paraboloid and the hyperbolic paraboloid, which are of full rank. The hierarchy of transformations is also a direct extension of the P2 space to the P3 space: instead of a three by three transformation matrix we now have a four by four transformation matrix. The highest level of the hierarchy is the projective transformation, with fifteen degrees of freedom, meaning every entry is unique minus one for scale; it can be written in the same block form with A, t, v transpose and small v, where A is now a three by three matrix and t is a three by one vector, and its invariant properties include the intersection and tangency of surfaces in contact. Then there is the affine transformation, again a direct extension of the 2D projective geometry, which we can write with A and t where the last row is zeros and one. Similarly we also have the similarity transformation, with seven degrees of freedom, where the rotation matrix and the translation vector are what we learned earlier: the rotation matrix is in the SO(3) space, the translation vector is in the three-dimensional Euclidean space, plus one degree for scale. Finally, the Euclidean transformation has six degrees of freedom; this is equivalent to the rigid body motion, the rigid body transformation in SE(3), that we learned at the start of this lecture. In summary, we have looked at the concept of the SE(3) group and used it to describe rigid body motion in 3D space; we have looked at how to represent points and planes in the projective three space and described the point-plane duality; then we looked at how to describe a line in the P3 space using two representations, the null space and span matrices as well as the Plucker coordinates; and finally we extended the conics of the P2 space into the quadrics. Thank you. |
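As a quick illustration of the transformation hierarchy summarized at the end of the lecture above, here is a small numpy sketch that builds one example 4 by 4 matrix for each level and notes its degrees of freedom; all numerical values are hypothetical.

```python
import numpy as np

theta = 0.3                                   # hypothetical rotation angle about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.],
              [np.sin(theta),  np.cos(theta), 0.],
              [0.,             0.,            1.]])   # R in SO(3)
t = np.array([1., 2., 3.])                    # translation in R^3

def euclidean(R, t):                          # SE(3), 6 dof: rigid body motion
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = t
    return T

def similarity(R, t, s):                      # 7 dof: rotation, translation and isotropic scale
    T = euclidean(R, t); T[:3, :3] *= s
    return T

def affine(A, t):                             # 12 dof: arbitrary 3x3 block A, last row (0, 0, 0, 1)
    T = np.eye(4); T[:3, :3] = A; T[:3, 3] = t
    return T

H_projective = np.random.randn(4, 4)          # 15 dof: a general 4x4 matrix, defined up to scale
print(euclidean(R, t))
print(similarity(R, t, 2.0))
```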
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_6_Part_1_Single_view_metrology.txt | hello everyone welcome to the lecture on 3d computer vision and today we are going to talk about single view metrology hopefully by the end of today's lecture you'll be able to describe the action of camera projection on planes lines conics as well as quadric and you will be able to explain the respective effects of fixed camera center increased focal length and pure rotation on the image and we'll look at how to calibrate the intrinsics of a camera with the image of absolute conics finally we'll look at the definition of vanishing points and vanishing line and use them to find the geometric properties of the scene and camera and of course i didn't invent any of today's material i took most of the content of today's lecture from the textbook written by richard hartley and andrew zizerman multi-view geometry in computer vision in particular chapter 8. i strongly encourage everyone of you to take a look at this chapter after today's lecture so just a quick recap of what we have done last week we basically described the characteristic of a projective camera based on a simple pinhole camera model and with the further projection matrix so this is essentially a three by four matrix we have looked at the its action on a point in particular we have looked at how it uh transformed a point in forward projection that means that given in a 3d point in the real scene we have looked at how this camera projection matrix which you denote as p map this into the image so this is what we call the forward projection given a point on the on on the 2d image we have looked at how to make use of the projection matrix would back project this particular point onto a light ray and today we'll continue to look at the action of the camera projection matrix on other entities such as plane lines conics and quadratic in particular we'll look at the forward and backward projection properties of these entities from a camera projection matrix let's first look at the action of the projective camera on 3d planes so suppose that we have a 3d plane in the world which we call pi now let's first assign the world for a coordinate frame that defined this particular 3d plane and in fact we have looked at this in last lecture during camera calibration a convenient way to assign the world coordinate onto a 3d plane would be to place the xy plane of the world frame so this is the world frame over here we are going to align the xy plane onto the 3d plane pi itself such that the 3d coordinates of any point let's call x pi over here the lines on the plane can be rewritten as x y 0 1 0 here is because of the x y plane lying on the plane so there's no value for the z-axis and if we were to expand this particular expression um metric multiplication here we can see that basically the zero in in this 3d coordinates of this particular point on the plane x on p3 the third column of the camera projection hence eliminating it so as a result we can rewrite this particular expression over here with just p1 p2 and p4 this is the first second and fourth column of the camera projection matrix multiplied by x y and one we can ignore the z axis because it's always zero since we eliminated p3 of the 3d point on the plane which we call x pi over here we can see that the end result would be simply a homography between the 2d point and the 3d point on the plane so the reason is because now we have the same dimension small x over here is 
actually x y and one the homogeneous coordinates and since we removed the z axis here we will also have a three by one matrix over here and these two entities the 2d and 3d entity are simply related by a three by three homography matrix which is given by the first column second column and the last column of the camera projection matrix it also becomes obvious that since we have two planes the image plane as well as the world plane is uh x pi over here we have looked at in lecture three that the homograph that relates these two planes can be easily found from four point any four point correspondences on the image and the world plane now let's look at the action of the camera projection on 3d lines so suppose that we are given a 3d line which we call capital l we can see that uh geometrically when the line is being projected onto a camera center which this plane over here has the image plane the projection of the 3d line forms a plane which is this guy over here it forms a plane because this is projected onto the camera center and let's denote this particular projection the plane that is formed by the projection of this 3d line l over here as pi and the intersection of this plane pi with the image plane would form the forward projection of the 3d line onto the 2d image which we denote as a small l over here now let's formulate this in a more mathematical way suppose that we are given two 3d points in the space that lies on the 3d line l so we have a and b over here that lies on the 3d line l and we we can then define the whole family of points that lies on this 3d line l as a span by the 2d points a and b so we can simply write x mu which is uh one family of uh solution points over here uh x mu that's simply equals to a which is the first point plus mu uh b and that means that x could be any point that is lying on this particular 3d line over here we can simply make use of this equation over here to represent the the 3d line l and let's look how the camera projection matrix p would act on this particular 3d line over here because we know that given a point in 3d which we call x the camera projection simply it's a multiplication by the 3d point x that will give rise to the particular image point over here so this is capital x and this is small x well the p would forward project this particular 3d point onto the image and since we have defined the whole family of points that sits on the 3d line over here we can also directly make use of this relation over here on the whole family of points so i would have a end result of the whole family parameter rise by the same parameter mu over here but now this would be the whole family of points that is sitting on the projection of this 3d line on the image and this would be simply equals to p multiplied by the big x which is also parameterized by mu over here what so what this simply means is that i'm taking every point that is sitting on the 3d line and project it onto the 2d image via the camera projection matrix p and we can evaluate this in two we simply uh take this equation over here where every point on the line is span by a plus mu b so we can substitute this a plus mu b into the equation as a result we can see that this expands to two terms mu p and b so this becomes the projection of the point a onto the 2d image which you can rewrite this at small a over here in terms of the notations to be consistent we're projecting a capital x which represents the 3d point onto the 2d image point which we denote by small x over here we are now similarly 
projecting a capital a denoting the 3d points on the line and we are projecting it onto a small a over here which is 2d image point so we can rewrite this pa as small a plus so mu remains because it's a parameter that parameterize the span over the line so mu here remains and we can see that there's this term over here p multiplied by b which will follow the same convention this simply means it's a projection of the 3d point which will denote by capital b onto the 2d image which you denote by small b and this is this particular equation after the projection it will be simply the family of 4d points that sits on this 2d lines which corresponds to the projection of l under the camera projection matrix p now given a 2d line let's look at the backward projection of this 2d line we are given this line here which you denote that small l over here and we'll look at the back projection of this particular line by the camera matrix p because we know that the camera projection matrix here is actually a three by four matrix so what it means here is that this particular matrix here it has a maximum rank of three if this is an infinite camera a final camera which we have looked at last week this wouldn't be rank three it would be actually only rank two but in this case you are only talking of the projective camera where the rank of p is at most three because it's a non a square metric what it simply means here is that the inverse of p doesn't exist hence we cannot directly recover the 3d line from the inverse of this small l over here let's also look at the geometric integration behind this rank deficiency we can see that when we project this particular 2d line on the image back into the 3d space we'll get a plane that looks something like this that we denote by pi and what happens here is that in addition to original line that created this particular projection small l over here or we can also have a whole family of lines as long as it sits on this particular plane pi we can see that this line here will do a forward projection onto the same 2d image over here similarly we if we have a line here you will also forward project to the same line this means that what we get here by doing a back projection is a whole set of solutions that is denoted by the plane which is the projection of this particular 3d line onto the 2d image we'll see later that this particular projection a plane is actually given by this particular equation which is equals to pi equals to p transpose of l now let's look at the proof of y pi is equals to p transpose of l so pi here is this particular plane of the projection of this line which we denote by l onto the 2d space by the camera projection matrix p we know that a point which we denote by small x over here it sits on the line which we denote by l on the image if and only if the dot product of x and l equals to zero so we have seen this relation in lecture one do we also know that the corresponding point that sits on the 3d line capital l at the point this 3d point which we denote by capital x over here it's going to map onto the image that corresponds to this small x over here via the equation that we have looked at many times since last lecture p multiplied by capital x so if we were to substitute this relation back onto this here we can see that it simply means that we have this equation p of x transpose multiplied by l equals to zero and the transpose here would be simply evaluated as x transpose p transpose and l equals to 0 which basically can be used to redefine the back 
projection: since X transpose multiplied by P transpose l equals zero, and a point sits on a plane if and only if their dot product is zero, P transpose l can be taken as a plane, and hence the back-projected plane, which we denote by pi, must be equal to P transpose multiplied by l, the product of the transpose of the camera projection matrix and the 2D line. Now let us look at the action of the projection matrix on conics. We will first look at the back-projection of a conic under the camera matrix P. Suppose the conic on the image looks something like this; we denote the conic by the matrix C, and the camera centre by the boldface vector c to distinguish the camera location from the conic matrix. Intuitively, or geometrically, every point that lies on this conic on the image back-projects to a ray in 3D space, so if we take all the points lying on the conic and back-project them into the 3D space, we essentially get a cone. We have seen in lecture two that a cone is a degenerate quadric: it is a four by four matrix that does not have full rank, whereas C is the three by three conic lying on the image plane. We will see that this cone, the back-projection of the conic into 3D space, is given by Q_co equals P transpose C P, where P is our camera projection matrix. We will also see that the camera centre, the boldface c, is the null space of this cone, since it is the vertex of the cone: every light ray running along the cone must converge at the centre of projection, which is the camera centre, and this essentially means we have the equation Q_co, the cone, multiplied by c equal to zero, since c must lie in the null space of the 3D cone. The proof is as follows. Suppose we have a conic C and a point, which we call small x, on the image; we know that this point lies on the conic if and only if the quadratic equation of a conic that we defined in lecture one holds true, that means x transpose C x must be equal to 0.
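Before continuing with the proof, here is a quick numerical sanity check of the cone formula stated above; the camera calibration, pose and image conic below are hypothetical values, not numbers from the lecture.

```python
import numpy as np

# hypothetical calibration and pose
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
R = np.eye(3)
t = np.array([0.1, 0., 2.0])
P = K @ np.hstack([R, t.reshape(3, 1)])        # 3x4 camera projection matrix

C = np.diag([1., 1., -(200.0 ** 2)])           # an image conic: circle of radius 200 px
                                               # about the origin of the image plane
Q_co = P.T @ C @ P                             # back-projected cone, a 4x4 degenerate quadric

centre = np.append(-R.T @ t, 1.0)              # homogeneous camera centre (null space of P)
print(np.linalg.matrix_rank(Q_co))             # 3: the cone is rank deficient, i.e. degenerate
print(Q_co @ centre)                           # ~ 0: the camera centre is the vertex of the cone
```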
if this is fulfilled that means the point of small x actually lies on this particular 2d conic which we denote by c over here we also know that from the camera projection big x the corresponding 3d point it maps to the 2d image small x over here via this projection function that we have seen many times p multiplied by x so essentially this guy here is equals to small x and which means that i'm projecting the big x the 3d point here onto the 2d image via the camera projection matrix now we have two equations over here we can substitute this small x the reprojected point via the camera matrix into this particular equation which says that the reprojected point of the 3d point capital x which we denote by small x must lie on the conic in on the image so if we were to substitute this small x into this guy over here we can see that uh we'll get p x transpose c multiplied by p of x equals to 0 which can be further evaluated as x transpose p transpose c multiplied by p x equals to 0 which is essentially this particular equation over here and what's interesting here is that since we have now transformed this particular conic equation in terms of the geometric entities on the 2d image space in the 3d world space and we can see that this essentially would be equivalent to a quadratic equation that says that capital x over here must be lying on a quadratic entity which is given by p transpose c multiplied by p and hence we can take this particular entity over here p transpose multiply by p as a quadratic and geometrically we have seen that this particular quadratic here is equivalent to a cone that is fall from the back projection of the 2d conics into the 3d space and hence this completed our proof that a 2d conic is actually back projected into the 3d space as a degenerate 3d cone by this particular equation here p transpose multiplied by c multiplied by p let's say this is the image over here so there's a conic over here the camera center it back projects to this cone which is given by qc0 equals to p transpose of c p transpose we can see that geometrically because the camera center is the vertex of this particular cone over here hence it actually lies in the null space of the quadratic and hence this equation q co multiplied by c which is the camera must be equals to zero since c is on the vertex of the cone which means that it lies in the null space of degenerate quadratic over here and we can rewrite this equation into what we have seen earlier on which is p transpose of c multiplied by p and now multiplied by c equals to zero and we can see from this particular equation here this indeed equals to the null space because we have looked at p multiplied by c this is the projection of the camera center is equals to zero from our definition of a camera center in last week's lecture so let's look at an example uh how to express this formulation of the back projection of a konig into a degenerate quadrant which is a cone so suppose that we have a camera projection matrix that is equals to k multiplied by identity over here this guy here is a three by four matrix and this is our camera in trin6 this is actually our extreme six and this intrinsic is actually a three by three matrix so this is in a canonical frame that means that uh since we have an identity as the exchange six over here what this simply means is that the world frame is aligned with the camera coordinated frame and then given this particular projection matrix a iconic on a 2d image which we denote by c over here it actually backs projects 
to a cone which is given by q of co equals to p transpose c multiplied by p so we can see that if we were to take the transpose of this guy over here is equals to k transpose and 0 transpose multiply by c and multiply by p over here so we will get this equation you can see very clearly that this particular four by four uh matrix is less than full rank and the reason is pretty obvious from here because we have the last row as well as the last column that contains only zeros and hence this shows that qco is a degenerate quadratic which is essentially a cone this example shows us that the cone here is a degenerate quadrant now let's also look at what's the effect of the camera projection matrix on other smooth surfaces so suppose that we are given any arbitrary smooth surface which we denote as over here an example is this particular shape over here it's an arbitrary shape that consists of smooth surfaces we can see that the projection of this particular smooth surface which we call as over here it's going to be projected onto a image with central projection such that the outline or the contour of this particular uh image over here it's going to be forward projected onto the ud image plane and we can also see geometrically that every point that intersects the this particular light array over here is actually defined by the light ray that is tangent to the smooth surface and passes through the camera center so just now we have look at in this case here tangent to the smooth surface this is actually our forward uh projection because we are projecting a 3d entity onto a 2d so this is uh from p3 space we project to p2 space and similarly for backward projection we can see that if we were to join a line from the camera center to the tangent of this surface over here this would define our back projection of this contour that is given on the 2d image over here and we call this the backward projection in general so this is from a p2 space we are going to map it into a p3 space let's call this particular surface on the that defines the tangent that defines the light rays that is tangent to the surface and passes through the camera center as the contour generator this outline here is going to be the contour generator this literally me this particular outline here generates what we call the contour of the shape on the 2d image over here and we'll call this the projected contour of this shape onto the 2d image over here the apparent contour which will denote by gamma so this is a projective camera we should know by now that why this is called a projective camera the reason is because all the light rays passes through the camera center over here and similarly for a fine camera we can also define the same thing but in this case here we can see that the backward projection over here will be a light ray that is tangent to the surface as well as the camera plane the reason is because we do not have the center of projection anymore all the light rays are going to be parallel to each other here but this still doesn't stop us from defining the contour generator as well as the apparent contour so the apparent contour is also called the outline or a profile of the shape that's being projected onto the 2d image and we have seen that the contour generator it actually only depends on the relative position of the camera center and surface and it doesn't depend on the image plane it's pretty obvious here so when we define the contour generator we're looking at this outline over here this is what we call the contour 
generator on the 3d shape and this particular contour generator here we have defined earlier on that this has to be the tangent of this light ray that is tangent on the surface and it must also pass us through the camera center over here so as a result we can just simply define this set of light rays that starts from the camera center and passes through every tangent of the 3d shape and this set of light rays defines what we call the contour generator it's pretty obvious from here that it has nothing got to do with the image plane so the image plane plays no role in defining the contour generator and but on the other hand the apparent contour on the image is actually defined by the intersection of image plane with the rays to the contour generator and it actually does depend on the position of the image plane because we can see over here is that this is the apparent contour that we denote by gamma over here so since we know that this is the contour generator and this generator is defined by a set of light rays that passes through the camera center as well as the 3d shape surface depending on where we place this camera image plane we can see that the section of the light ray on and the camera image plane would be different every time uh depending on the location of a particular image plane hence we will say that the apparent contour which is the projection of the contour generator in of the 3d shape onto the 2d image it does depend on the position of the image plane so now let us define the forward projection of a quadratic under the camera matrix projection p which onto the image and this will become a conic we'll use a dual conic or a line conic to define the forward projection of the quadratic and this is given by c star equals to p q star p transpose the reason why we use the dual conic or the line chronic is because we know that each one of this line that is tangent to the conic in back projects to a plane and this particular plane over here would also be the tangent that defines the quadratic in the p3 space and here's the proof of this expression we know that in lecture one the line that is tangent to the conic outline satisfy this equation l transpose c star l equals to 0 and we also seen earlier that lies from the image back project to a plane given by pi equals to p transpose l that are tangent to the quadratic and thus it satisfy this quadratic equation pi transpose q star pi equals to zero then uh by simply substituting the equation of the plane pi over here given by p transpose l into these two entities over here we'll get this expression l transpose p q star p transpose l where we can simply group this p q star p transpose into c star over here which satisfy the first equation over here and as a result we can see that c star is actually equals to p of q star and p transpose and this becomes the projection of the quadratic onto an image which gives a conic now having a look at the action of the camera projection on several geometric entities uh such as the poi plane line conics and quadratics let's move on to look at the effect of a changing the parameters within the camera in 36 as well as the extrinsic value or on the projection of the 3d entities onto the 2d image we know that any object in the 3d for example this 3d shape of a house the outline of a house over here we know that any object in 3d space and the cam camera center defines a set of light rays because lights drivers is in straight line so what it means is that suppose that we have a 3d shape here any point on the 
3d shape we should define as x lights from the sun for example a source it's going to be reflected in straight line and they are all going to converge at a single center of conversion which is the camera center and an image is obtained by the intersection of every one of this light ray with a image plane so see here the camera center and the the orientation of the image plane so essentially this defines the rotation and the translation of the camera this is the extrinsic of the camera and we can see quite obviously here that if we were to change the camera center from this point to suppose this point fixing the orientation of our image plane the light rays are going to move in a different direction and hence the projection onto the image is going to change despite the fact that our camera orientation as well as the 3d object remains fixed we can also see that if the camera center remains fixed if we were to change the orientation of the image over here we can see that the projection is also going to be uh different from the the previous one because the intersection of the light ray with the image plane it's going to change as well as we can see that this length over here define how far the image plane is from the camera center is what we call the focal length so if we were to change this focal length we can also pretty much see that from here if you have a focal length f1 and the projection and in comparison to this image plane over here if we have a denote this focal length as f2 the projection onto the image is going to stay pretty much different so what we are going to look at now is that how to formulate this in a more mathematical way we are going to look at how the projection onto the image by a 3d entity it's going to change if we were to change the intrinsics or if we were to vary the intrinsics as well as the or extrinsic value of the camera we have seen earlier on from the illustration that the image obtained from the same camera it may be mapped onto another by a projective plane transformation which is called the homography the reason is because they're all passing through the same light ray for example this point over here it's passing through the same light ray and this converges at the camera center and we have seen earlier on if the projection is onto a plane that means that uh with the same camera center that means that there is a relation between these two points and this is actually related by a homography so we will have h x equals to x prime in other words we can say that uh this is projective equivalent so there will be a same set of projective properties and hence a camera can also be thought of a projective imaging device and we can we will see that this can be used to measure the projective properties of the cone of ray with the vertex of the camera center let's try to formulate this relation mathematically suppose that we uh we are given two images i and i prime with the same camera center so this means that the camera standard statistics which is illustrated by this diagram over here we'll just look at the change of maybe focal length or orientation of the camera and so what this means is that the light ray for the same point is going to converge to the same camera center because the camera center stays fixed and uh here let's denote the first image that we have seen here as let's say this image here as i prime and this image here as i with the same camera center we also further denote the two camera matrix as ps and p prime over here where the camera center 
remains the same this is the only thing that stays fixed we can bring this guy here uh k and r over here to this side of the equation making this side of the equation i and minus c tilde the subject so since k and r they are both three by three matrixes so r is an octagonal matrix which the inverse equals the transpose the the multiplication of these two three by three matrices are also going to be a full rank uh three by three matrix so we can take the inverse and bring it over to p hence we get the i and minus c tilde the subject so we can substitute this uh substitute this guy over here as k r inverse multiplied by p hence we will get this equation that relates the camera projection matrix of p prime and p with basically the intrinsic value and the orientation of the camera since the camera center remains the same and now let's further look at the the effect of this expression over here on a 3d uh point that is denoted by x suppose that i'm going to do a forward projection of this particular 3d point onto the image of i prime the n effect would be p prime multiplied by x over here and now let's uh substitute this guy uh into p prime and we will see that we'll get this equation over here which consists of the intrinsics and the rotation of both cameras as well as the projection matrix of the first camera p and since well we know that p multiplied by x is going to be a 3d point here it's going to be a forward projection onto the image by the camera matrix p here and we denote this by small x so this would be the essentially the relation hence we'll get this relation where x prime is equals to k prime r prime multiplied by k r inverse multiplied by x we can see also that since this here becomes a three by one homogeneous matrix x prime and on the right hand side we also get the three by one homogeneous vector of x so this simply means that we are now in the same projective space which is the p2 space and x and x prime would be simply related by a homography that means that k prime r prime multiplied by k r inverse they are simply a homography by the way there's a missing prime over here so we should rewrite it as x prime equals to h multiplied by x where h is simply given by oh that's also a missing prime over here so uh where h is simply given by k prime r prime multiplied by kr inverse previously we have looked at the effect of a fixed camera center so essentially that ended up to be a homography that relates two points at the change of the intrinsics as well as the rotation of the camera let's suppose that everything else in the camera matrix stays fixed the only thing that we are going to change here would be the focal length in our camera matrix so suppose that uh we have a camera matrix that's given by f uh it could be f x or f y but let's uh the make the illustration simpler suppose that we have uh the there's no skew that means that we have a uniform focal length over the x and y axis so suppose that we are only going to change this particular parameter in our camera matrix which is given by this guy over here uh we are going to only change the focal length inside here and everything else remains the same we'll see that this corresponds to the displacement of the image along the principal axis where the effect on the image is simply image modification so this this can be intuitively observed if we look at it geometrically suppose this is our camera uh center c over here and this is our image plane so we we have seen earlier on that the intersection of the image plane with the 
principal axis is the the length between this camera center and to the intersection of the principal axis and the image plane which is called the principal point it's defined by the focal length so if we were to move this plane in and out we can see that basically a point here it's going to get magnified so if this is the location over here if i were to move it out over here then the basically the relative distance let's say if i have two points over here i have two points x1 and x2 over here so we can see that the relative distance between these two points and the relative distance between these two points after an increase of focal length it's going to be increased as well hence this is equivalent to a magnification of the forward projected object on the 2d image let's now look at how to formulate this mathematically suppose that we have the two image points of the same 3d point x which you denote by x and x prime over here before and after zooming so x before zooming would be equals to x uh equals to k identity zero multiplied by x so this we are we are looking at the canonical uh representation of a camera projection matrix whether world frame is aligned with the camera frame this is our intrinsics this is our extreme six and this whole thing here is actually p multiplied by x and this is the projection into the image and after magnification let's denote it by x prime equals to k prime so in this case here uh it's only the focal length that is changed so we are changing the focal length from f to f prime for example we'll denote this as the change in our intrinsic uh value uh at a canonical frame where the principal point is constant at the maybe p x p y or maybe at zero it doesn't matter over here so this is here the intrinsics and streams here denote p prime multiplied by the same point because this is the projection of that same point onto the two images before and after magnification what we want to do here is that we want to bring this guy into this equation so uh we'll apply this particular trick over here since we know k inverse multiplied by k over here equals identity it has no effect so we'll add a k inverse over here and we'll bring this guy here this p matrix here directly here because these two are going to cancel off and effectively we are going to result to the same equation as x prime that we have seen earlier on so now let's evaluate this guy over here so since we have this k prime multiplied by k inverse we'll let it remain over here but we'll look at this effect over here so this particular effect over here is simply uh p the original projection matrix before magnification multiplied by x and this is equivalent to the projection of the 3d point into the 2d image and hence we get this particular relation over here as a result of this operation over here by simply magnifying the focal length over here we can see that this resulted in another homography where x prime is actually equals to k prime k inverse x where this on the left and right hand side these are all three by one homogeneous coordinate this means that we are doing a p2 space mapping into a p2 space as well hence k prime k inverse here must be equals to homography which gives this same form of relation that we have seen earlier on so this x prime equals to h multiplied by x where h over here is simply k prime multiplied by k inverse just now we saw the general form of representing the change in the focal length as k and uh into the intrinsic value now let's be more explicit about this if the focal length is 
the only thing that differs within k so we are not talking about the principle point we say that the principle point here remains the same so we can further evaluate this term which is our homography that we have looked at earlier on as this particular form here suppose we denote the magnification factor as this small k over here it's going to be equals to f prime which is our new focal length after magnification and divided by the original focal length uh by evaluating this if we have f prime one and then x 0 so y 0 over here which is the principal point and then we have uh k the original okay which is f over here and x 0 y 0 the principal point remains the same if we were to work this out this is the form that we will get where small k equals to f prime over f and that's the magnification factor if we were to express this particular equation over here in terms of making k prime the subject so this simply means that we are going to bring k inverse uh over to this side which becomes a multiplication with k the metric multiplication here can be further evaluated into k multiplied by the original focal length of the camera so if we were to represent this as k where a here is simply uh the diagonal of the original uh focal length we can see that uh this after multiplication into here because uh this this this is going to become simply k multiplied by a where this guy here has no effect on this because we can see uh from the multiplication is that k i multiplied by x tilde that's going to give us k multiplied by this x tilde over here and then uh plus 1 minus k x tilde where this can be further evaluated as plus x to the minus k x to the where this guy here cancels off and this is simply x zero we can simply rewrite this as x zero and hence as a result of this form of expression over here we can see that the change from k to k prime will be simply a magnification factor on k itself now let's move on to look at the effects on the images by pure camera rotation what this means is that we are going to assume that all other intrinsics and extrinsic parameters in the camera matrix remains the same except for the only change would be our camera rotation let's denote the 2d projection of the same 3d point x x and x prime on two images so in this illustration the only row the change here is the rotation between any two camera frames image one and image two and uh we are going to look at the same 3d point that project onto x and x prime on the first and second image we will denote this projection as before as uh x equals to k uh multiplied by identity zero so this is the canonical uh representation of our camera projection of the 3d point x and then this same 3d point is going to forward projected onto the second image which you denote as x prime over here equals to k multiplied by r so the only difference here is that there is a rotation because we are rotating this camera by an amount of r over here and multiplied by the same 3d point over here now we are going to put this guy over here this guy over here into this the second equation over here and we are going to apply the same tricks as before so uh notice that k inverse k here is going to be identity so this by simply inserting this guy k inverse over here it allows us to bring this canonical representation of the camera projection p over here into the second equation and we'll get k inverse k multiplied by the identity three by four matrix over here this term over here and see that this essentially is equal to the first camera projection 
multiplied by X, which is simply the projection of this 3D point onto the first image, which we denote by small x. Hence, as in the first two cases, we obtain the relation x prime equals K R K inverse x, and this is again a mapping from the P2 space onto the P2 space, where the term K R K inverse is simply a three by three homography. So we get the same form of expression, x prime equals H x, where H now equals K multiplied by R multiplied by K inverse, and we can see that it is a function of the rotation matrix. Here are some properties of this homography, which is commonly called a conjugate rotation in the literature. What is interesting is that because H is the conjugation of the orthogonal rotation matrix R by the upper triangular matrix K, that is K R K inverse, the homography has the same eigenvalues as the rotation matrix up to a common scale mu; since these are three by three matrices there are three eigenvalues, and the eigenvalues of H equal the eigenvalues of R up to that scale. So if we have two images that undergo pure rotation only, we can observe corresponding points between the two images, compute the homography from four point correspondences, and then apply an eigenvalue decomposition to this homography to get its three eigenvalues, which are equivalent to the eigenvalues of the rotation. What is even more interesting is that the three eigenvalues share the same common scale, so we can normalize the scale away, and what is left is the set one, e to the power of i theta, and e to the power of minus i theta, where theta is the rotation angle of the rotation matrix. What this means is that under pure rotation, by using four point correspondences between the two images, we can compute H, which gives us the rotation.
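Here is a small numpy check of this conjugate-rotation property; K and the rotation angle are hypothetical, and the scale factor simulates the fact that a homography estimated from correspondences is only known up to scale.

```python
import numpy as np

K = np.array([[700., 0., 300.],
              [0., 700., 250.],
              [0., 0., 1.]])                   # hypothetical intrinsics

theta = 0.4                                    # rotation of 0.4 rad about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.],
              [np.sin(theta),  np.cos(theta), 0.],
              [0.,             0.,            1.]])

H = 2.5 * K @ R @ np.linalg.inv(K)             # conjugate rotation, known only up to scale

w = np.linalg.eigvals(H)
w = w / np.abs(w)                              # divide out the common scale mu
print(np.sort(np.abs(np.angle(w))))            # ~ [0, 0.4, 0.4]: theta is recovered
```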
Now the last case, which we will only briefly talk about here and look at in a lot of detail in the next lecture on the fundamental matrix, the essential matrix and epipolar geometry, is the scenario where the camera centre is changed. Previously we looked at a change in the intrinsic values, a change in rotation with a fixed camera centre, and a change of both K and R at the same time; in this last case the camera centre itself moves. What is interesting, comparing it with the previous three cases, is that if we change K or R independently, or both at the same time, we cannot tell anything about the 3D structure. The reason is that in all three of those cases the relation between the two images lives in the P2 space: the two images before and after the change in intrinsics or rotation are related by a homography determined entirely by the image points, and hence no information about the 3D structure can be obtained from zooming or pure rotation with a fixed camera centre. Things are different if we move the camera centre. Say we have two images, the first with camera centre c, and the second camera undergoes a rotation and translation; to keep the scenario simple let K stay fixed, and to make it even simpler let the rotation also stay the same, so that the only change is the camera centre from c to c prime. The relation between the two image points x and x prime is now given by the intersection of the two light rays, the ray through c and x and the ray through c prime and x prime, both passing through the same 3D point which we denote by capital X; with the change of camera centre, this 3D point projects onto the image plane of the second camera at the point x prime. So the relation between x and x prime is no longer simply a homography, because in addition to depending on the two camera centres it also depends on the 3D point in the scene. We can see this from another 3D point: if we move the 3D point further along the same light ray, to a greater depth, it is still projected onto the same image point in the first camera, but because of this change in the 3D structure, with all the camera intrinsics and extrinsics unchanged, the new 3D point is projected onto a new 2D point in the second image; this is the motion parallax. Hence we can conclude that by changing the camera centre, the observations in the two images before and after the transformation of the camera centre tell us something about the 3D points as well as the camera motion. We will look at this in more detail in the remaining lectures of this semester: basically, given a set of correspondences between two images, how to solve for the change in rotation and translation, and how to recover the 3D structure, which is what we call structure from motion. |
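Following up on the point made at the end of the lecture above, that a change of camera centre makes the image mapping depend on scene depth, here is a small numpy sketch with hypothetical cameras: two points on the same ray of the first camera project to the same pixel in the first image, but to different pixels once the centre moves.

```python
import numpy as np

K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])  # hypothetical intrinsics

def project(K, R, t, Xw):
    x = K @ (R @ Xw + t)
    return x[:2] / x[2]

R = np.eye(3)
t1 = np.zeros(3)                     # first camera at the origin
t2 = np.array([-0.5, 0., 0.])        # second camera: same rotation, translated centre

# two scene points on the same ray of the first camera, at different depths
X_near = np.array([0.2, 0.1, 4.0])
X_far = X_near * 2.0                 # twice as far along the same ray

print(project(K, R, t1, X_near), project(K, R, t1, X_far))   # identical in image 1
print(project(K, R, t2, X_near), project(K, R, t2, X_far))   # different in image 2: parallax
```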
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_11_Part_2_Twoview_and_multiview_stereo.txt | so now let's look at the derivation of the homography that performs stereo rectification on a pair of images. We consider a 3D point, denoted by capital P, which corresponds to small p and p prime in the image coordinates of the respective left and right images. We further denote o and o prime as the optical centre of each camera; in the notation of the Richard Hartley and Andrew Zisserman textbook these would be c tilde and c tilde prime, where c tilde is the three by one inhomogeneous coordinate vector of the camera centre. We further denote the known camera matrices as M equals K multiplied by identity and zero, which means we are defining the first camera, the reference camera, in the canonical reference frame where the camera coordinate frame coincides with the world coordinate frame, and M prime equals K prime multiplied by R and t, where R and t is the relative transformation from o to o prime. Now, based on the given information, we compute the camera-normalized epipoles, denoted by e hat and e hat prime, in each image. We know that the epipole in the first image, which we denote by e, is the projection of the camera centre of the second camera, denoted by o prime, into the first image; this is given by e equals M multiplied by the homogeneous coordinates of the second camera centre, o prime and one, and we know that o prime can also be rewritten as minus R transpose multiplied by t.
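As a small numpy sketch of these epipole computations, with hypothetical K, K prime, R and t rather than values from the lecture:

```python
import numpy as np

# hypothetical calibration and relative pose from the first to the second camera
K = np.array([[600., 0., 320.], [0., 600., 240.], [0., 0., 1.]])
Kp = K.copy()
R = np.eye(3)                             # near-parallel cameras for simplicity
t = np.array([-0.2, 0., 0.])

M = K @ np.hstack([np.eye(3), np.zeros((3, 1))])      # M  = K [I | 0]
Mp = Kp @ np.hstack([R, t.reshape(3, 1)])             # M' = K'[R | t]

o_prime = -R.T @ t                        # centre of the second camera in world coordinates
e = M @ np.append(o_prime, 1.0)           # epipole in image 1: e  = K o'
ep = Mp @ np.append(np.zeros(3), 1.0)     # epipole in image 2: e' = K' t
e_hat, e_hat_prime = o_prime, t           # camera-normalized epipoles
print(e, ep)                              # for this sideways translation both epipoles are
                                          # already points at infinity along the x axis
```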
Evaluating this, where we substitute the definition of M, the camera projection matrix of the first camera in the canonical reference frame, as K identity and zero, and multiply it with the homogeneous camera centre of the second frame, we end up with e, the epipole in the first camera frame, equal to K multiplied by o prime, and this implies that the camera-normalized epipole, denoted by e hat, is given by o prime. Similarly, the epipole in the second view is the projection of the camera centre of the first view into the second frame, which we denote as e prime: it is the multiplication of the camera projection matrix of the second view with the homogeneous coordinates of the camera centre of the first view, o and one. Substituting M prime, which we defined earlier as K prime, R and t, into this equation and expanding the expression, we see that e prime, the epipole in the second view, equals K prime multiplied by the translation vector, and this simply means that the camera-normalized epipole is e hat prime equals t, the translation vector between the two views. Having defined the two epipoles in the respective views, the next step is to compute a projective transformation, which we denote as H, that maps e hat to the infinite point one zero zero transpose; the reason for the one is that it has to have some value along the x-axis, and the zeros are because we want the epipoles, and hence the epipolar lines of the two views, to be aligned on the same scan line. A good choice is a rotation matrix, because we recall from the lecture on homographies that a homography exists when the scene is planar, and also under pure rotation of the camera; in this particular case we can choose the homography to be an orthogonal matrix, a rotation matrix, whose rows we denote r1 transpose, r2 transpose and r3 transpose. A good choice for the first row of this rotation matrix is r1 equals o prime divided by the norm of o prime, and the reason is that when we take H multiplied by e hat, the first row forms a dot product with o prime itself, the normalized o prime dotted with o prime, which gives a nonzero value, the norm of o prime, in the first coordinate. For the next row, a good choice is a unit vector orthogonal to r1, because we want H to be an orthogonal rotation matrix, which means any two rows have to be orthogonal to each other; one good choice is the definition given on the slide, and you can verify for yourself that the dot product of r1 and r2 is zero. Finally, since r1, r2 and r3 form the three orthogonal basis vectors of the rotation matrix, having defined r1 and r2 we need r3 to be orthogonal to both r1 and r2, which means the dot product of r1 and r3 as well as the dot product of r2 and r3 are both equal to zero, and r3 is naturally given by the cross product of r1 and r2, since the cross product of two vectors gives a vector that is orthogonal to both of them.
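A minimal numpy sketch of this construction is shown below; note that the specific formula used for r2 is one common convention and is an assumption here, not necessarily the choice shown on the lecture slides.

```python
import numpy as np

def rectifying_rotation(e_hat):
    """Rotation whose rows r1, r2, r3 map the camera-normalized epipole e_hat
    to a point at infinity along the x axis."""
    r1 = e_hat / np.linalg.norm(e_hat)
    r2 = np.array([-e_hat[1], e_hat[0], 0.])
    r2 = r2 / np.linalg.norm(r2)              # a unit vector orthogonal to r1 (assumed choice)
    r3 = np.cross(r1, r2)                     # orthogonal to both r1 and r2
    return np.vstack([r1, r2, r3])

e_hat = np.array([0.5, 0.1, 0.05])            # hypothetical normalized epipole
H = rectifying_rotation(e_hat)
print(H @ H.T)                                # ~ identity: H is a rotation matrix
print(H @ e_hat)                              # ~ (norm of e_hat, 0, 0): mapped onto the x axis
```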
well as the dot product of r2 and r3 would also be equals to 0 and r3 here would naturally be given by the cross product of r1 and r2 the cross product of two vectors would give the a vector that is octagonal to both the vectors so hence the homography can be now defined by r1 r2 and r3 that we have computed over here so next we will find a good projective transformation h prime that maps the epipole in the second image which we denote by e hat prime over here to an infinite point as well which we denote as one zero zero transpose and uh a good choice of this h prime that maps e prime to infinite point would be given by h which is the homography that we have computed earlier on uh multiplied by r transpose over here so uh i'll leave it to you to as an exercise to show that this is true so essentially what you need to do here would be you need to verify that h multiplied by e will give you a vector of one zero zero and h prime over here multiplied by e prime which is essentially equals to h rotation matrix multiple transpose multiplied by e prime is also going to give you 1 0 0 for this relation to hold true and finally once we have obtained both h and h prime from the computation we can apply this two homographies on the respective images to rectify them into a parallel path that looks something like this so what this essentially means is that given the original left image for example we will apply h over here on x over here such that we map every pixel into the new rectified image space so in reality or in practice this is not is not done in this particular way as we have mentioned in the lecture in homography because this will lead to hole in this rectified image over here so in practice what we do is that we compute the inverse of h and for every pixel on the rectify image we'll compute the corresponding pixel by taking the inverse of the homography and we'll use this as a lookup table so this means that for every pixel on this rectified image i will be able to find the corresponding pixel in the original image over here and i'll simply read off the value or the rgb value from this original image and fill it in to my rectified image and this as a result by doing this we'll avoid having holes in the rectified image we'll do the same with the second image by applying the h prime inverse on every pixel over here to search for the corresponding pixel value in the original image such that we can simply take the value and put it into the rectified image over here to avoid having holes in the rectified image now once we have computed the stereo rectification this means that the for any arbitrary pairs of images which is related by rotate rotation and translation uh with a camera intrinsic value of k and k prime respectively we can convert this into a parallel pair of images such that the epipolar lines lies on the same scan line and the next thing that we need to do would be to do a correspondence search for every pixel on the reference image we want to find the the patch on the corresponding scan line in the second image such that it has the highest visual similarity with respect to this particular patch in the reference image as mentioned earlier on we'll do this by sliding a window on this corresponding image over here along the scan line and for every uh sliding window over here we'll compare the visual similarity with the patch from the reference image we'll do this by computing what we call the matching cost and here the interesting thing here is that the patch that has the highest 
visual similarity will be taken as the correspondent of this reference patch in the ref in the first view and this simply implies that at a particular part of the sliding window we will get the lowest matching cost and this means that this is the right match for the patch in the reference view and this would be going to compute the disparity because this patch over here if we denote it as x over here and this patch over here which we know as x prime there's a slight shift because this is a stereo pair looking from two a slightly different viewpoint so the disparity here would be given by x minus x prime and it will correspond to this particular value over here and there are of course several methods that can be used to compute the matching or the photo consistency cost we'll look at four different techniques to do this the normalized cross correlation and the sum of square differences sum of absolute differences as well as the mutual information so the for the normalized cross correlation usually to be more precise we defined it as the zero mean normalized course correlation in short ncc and it's defined as follows suppose that we have two patches f and g so f here simply refers to one of the patch in the reference view the first image and g is the corresponding patch that we are considering when we take the sliding window along this particular scan line in the second view and we'll denote this as g so here mathematically f over here depending on the size of the patch that we consider so suppose that we consider a three by three patch uh denoted as omega over here f would be uh represented as the vectorized version of the three by three image patch that is uh considered and similarly for g over here it would be also a three by three image patch that we are considering at the current stage of the sliding window we'll vectorize this into g and f bar over here would be the mean of the image patch intensity uh suppose that this is a grayscale image where every pixel over here uh has a single value that ranges from 0 to 2 5 5 for example what we can do here is that in this case we have 9 values in this particular vector f bar over here simply means the average values of all the pixel intensities in this particular vector over here similarly for g bar it would be the average value of all the entries in the in this vector and for sigma f and sigma g so this means the standard deviation of the vector over here so we'll simply take every entry here minus away the mean and square it and sum everything together and finally divided by the total number of entries in the vector over here so the normalized cross correlation will be in the range of minus 1 to 1 and what this means is that the higher the value the more similar the two patches will be and since the normalized cross core relation over here it's normalized with the image variance this means we are dividing it by the product of the variance of the pixel intensity in both the vector of f and g which corresponds to two image patches that we are comparing over here and uh since we normalize this uh via the image intensity what it means is that the normalized cross correlation is invariant to gains empires so this means that it's invariant to lighting changes so if i have a patch image patch in the first image and the second patch which is f denoted by f and g as defined earlier on if there is a slight change in the image intensity over here we are still able to compute matching cost pretty efficiently over here but the main failure of this 
normalized cross correlation is on surfaces that lack structure and on repetitive textures this means that if g is slid across a uniform or homogeneous surface over here there won't be much variation as what we have seen earlier on in the graph of the matching cost so in this case if we were to slide across the different disparities along the scan line we would expect a greater change in the normalized cross correlation matching cost if the image texture is not uniform along this particular scan line but if it is uniform or homogeneous along this particular scan line we can expect a curve that looks like this almost flat and this would not be too good for us to figure out the best matching cost in order to find the disparity between the two views another way of computing the matching cost would be the sum of squared differences so as the name implies this would simply be given by the square of the differences between f and g which represent the vectors of the respective patches so we will take f minus g and then take the square norm of it but the problem of taking this l2 squared distance naively is that the difference over here could grow unbounded because there is simply no bound to the difference or the square of the difference as compared to what we have seen earlier on in the normalized cross correlation where because we are normalizing over a certain value we can guarantee that this lies within minus one and one but in this particular case of the l2 squared distance we cannot guarantee this to be in a certain range we can fix this by mapping this l2 squared difference into an exponential function that looks something like this so this will guarantee that the range lies in 0 to 1.
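here is a minimal python sketch of the zero mean normalized cross correlation described above, together with a hypothetical sliding window search along a rectified scan line; the function names, window size and disparity range are illustrative assumptions, and image boundary handling is omitted.

```python
import numpy as np

def zncc(f, g, eps=1e-8):
    # zero-mean normalized cross-correlation between two equal-sized patches;
    # returns a value in [-1, 1], where larger means more similar
    f = np.asarray(f, dtype=np.float64).ravel()
    g = np.asarray(g, dtype=np.float64).ravel()
    f0, g0 = f - f.mean(), g - g.mean()
    return float(f0 @ g0 / (np.linalg.norm(f0) * np.linalg.norm(g0) + eps))

def best_disparity(ref, other, r, c, w=1, max_disp=64):
    # slide a (2w+1)x(2w+1) window along the same scan line of the other image
    # and return the disparity with the highest similarity score
    f = ref[r - w:r + w + 1, c - w:c + w + 1]
    scores = [zncc(f, other[r - w:r + w + 1, c - d - w:c - d + w + 1])
              for d in range(max_disp)]
    return int(np.argmax(scores))
```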
since this is a exponential of a minus l2 distance this is somehow similar to the exponential function or the gaussian distribution and where sigma over here it's a hyperparameter that we define as the standard deviation now one of the disadvantage of using the sum of square difference is that it's extremely sensitive to outliers the reason is because we are taking the square difference of f and g and what this means is that this would give us the a curve that looks something like this where this is the this the difference between f and g the norm of the difference between f and g and since we are taking a square it will end up to be a quadratic curve that looks something like this and what it means is that if there's an outlier where f minus g happens to be a very big value because uh one of the one of the this values over here might be wrong and so what this means is that we'll end up with a very big value of f minus g over here because the order over here will go quadratically uh with the magnitude of the difference between f and g which means that this is not good if there's any outlier in our image and one of the way to resolve this would be to use a normalized variant of the ssd so what this means is that in order to prevent f minus g from getting too large because of the outliers we can normalize it use uh by the mean so this means that uh if i have some values of f that one of it is outlier that lies very far away i can compute the centroid which is the f mean over here and then subtract off every value that means that i'm shifting the reference frame to become the mean of this f vector over here and normalize it using the standard deviation of all the points from the centroid so this means that uh i'm going to scale this distribution down a smaller value over here and by doing this the difference would be bounded and it wouldn't go too large as in the case where we simply naively take the differences of f minus g over here now it's interesting that the this normalized variant of the ssd is equivalent to normalized uh cross-correlation except for it's the negative of the uh normalized cross correlation uh if we substitute all the parameters into this definition of the nssd over here we can see that this essentially evaluates to this particular expression over here which is equivalent to 1 minus the normalized cross correlation where this is in the range of minus 1 to 1. 
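the three variants discussed here can be written compactly as follows, in a minimal python sketch assuming equal sized grayscale patches; sigma is the hyperparameter mentioned above, and the 1/(2n) scaling in the normalized version is a choice made so that the value comes out as exactly 1 minus the zero mean ncc.

```python
import numpy as np

def ssd(f, g):
    # sum of squared differences: unbounded and very sensitive to outliers
    d = np.asarray(f, np.float64).ravel() - np.asarray(g, np.float64).ravel()
    return float(d @ d)

def ssd_exp(f, g, sigma=10.0):
    # map the squared distance through an exponential so the score lies in (0, 1]
    return float(np.exp(-ssd(f, g) / (2.0 * sigma ** 2)))

def nssd(f, g, eps=1e-8):
    # subtract each patch mean and divide by its standard deviation before the SSD
    f = np.asarray(f, np.float64).ravel()
    g = np.asarray(g, np.float64).ravel()
    fn = (f - f.mean()) / (f.std() + eps)
    gn = (g - g.mean()) / (g.std() + eps)
    d = fn - gn
    return float(d @ d / (2.0 * d.size))   # equals 1 - zncc(f, g) for these patches
```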
so what we are doing here is that we are flipping the sign and essentially what it means is that for normalized cross correlation the higher the value the more similar the two patches are but in this particular case of the nssd or ssd we are looking at the case where the lower the value the more similar the two patches f and g there's also a third variant of the matching cost which we call the sum of absolute differences in short sad and this is very similar to ssd except that instead of using the l2 norm we will use the l1 norm over here and to some extent it is much more robust to outliers compared to the l2 norm counterpart the reason is because as i have mentioned earlier on in the l2 counterpart this is equivalent to a quadratic curve of f minus g squared over here so the larger the difference the larger this value is going to become and as a result it's very sensitive to outliers because an outlier is going to cause f minus g to be large as well as the final result here to be extremely large and this will outweigh all the other measurements that we have but in the case of the l1 norm what this simply means is that we will have a curve that looks something like this in a v shape where this is the l1 norm of f minus g over here so we can see that in comparison to the quadratic curve counterpart for a very big value of f minus g we would have a substantially lower value compared to the l2 norm counterpart and what this means is that by taking the l1 norm we would have suppressed the sensitivity towards the outliers so the last matching cost that we'll look at is the mutual information obtained from information theory and it is a measure of how dependent two random variables x and y are and it's given by this equation over here the mutual information of the two random variables x and y is the marginalization over all the range of x and y where p x y over here is the joint probability of the two random variables multiplied by the log of the joint probability divided by the marginal probability of x multiplied by the marginal probability of y we make use of this to define the matching cost which is the photo consistency measure by taking the negative of the mutual information so here since the mutual information measures how dependent the two variables are by taking the negative of this mutual information a higher value of this photo consistency cost means that the two patches f and g are very dissimilar and a lower cost means a better match |
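a minimal python sketch of the last two matching costs, assuming 8 bit grayscale patches; the number of histogram bins is a hypothetical choice and in practice the joint distribution would be estimated more carefully.

```python
import numpy as np

def sad(f, g):
    # sum of absolute differences: the L1 norm grows only linearly with the error,
    # so a single bad pixel influences the cost far less than in SSD
    d = np.asarray(f, np.float64).ravel() - np.asarray(g, np.float64).ravel()
    return float(np.abs(d).sum())

def mutual_information(f, g, bins=16):
    # estimate the joint and marginal intensity distributions with a 2-D histogram
    f = np.asarray(f).ravel()
    g = np.asarray(g).ravel()
    joint, _, _ = np.histogram2d(f, g, bins=bins, range=[[0, 256], [0, 256]])
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0                                   # skip log(0) terms
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

def mi_cost(f, g, bins=16):
    # photo-consistency cost: negative mutual information, so lower means more similar
    return -mutual_information(f, g, bins)
```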
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_8_Part_1_Absolute_pose_estimation_from_points_or_lines.txt | hello everyone welcome to the lecture on 3d computer vision and today we are going to talk about absolute pose estimation from points or lines hopefully by the end of today's lecture you'll be able to define the perspective n point or otherwise known as the pnp camera pose estimation problem we'll look at how to estimate the camera pose of an uncalibrated camera from n point or line 2d to 3d correspondences then we'll look at three different algorithms for calibrated camera pose estimation in particular the grunert three-point algorithm the quan four-point algorithm and the efficient pnp algorithm to estimate the pose of a calibrated camera finally we'll look at how to describe the degeneracies of the camera pose estimation problem and of course i didn't invent any of today's material i took a lot of the slides and content of today's lecture from the textbook written by richard hartley and andrew zisserman in particular chapter seven this is for the case of the uncalibrated camera and for the three-point grunert algorithm i took it from the paper which is a review of the original version of the paper which was written in german review and analysis of the solutions of the three-point perspective pose problem published in ijcv in 1994 and i took the quan algorithm the four point algorithm from the linear n-point camera pose determination paper published at tpami in the year 1999 and i took the efficient pnp problem the epnp problem and solution from this paper written by vincent lepetit published at ijcv 2009 and the degeneracy analysis is taken from the paper space resection failure cases published at photogrammetric record in 1966.
now suppose that we are given a set of 3d points denoted by capital x1 all the way to capital xn defined in the world frame what this simply means is that all these 3d points which we write as xn using a homogeneous coordinate x y z w this set of coordinates it's defined with respect to a assigned frame which we call the world frame represented by f w over here and for each one of this point were also given its corresponding 2d image points denoted by small x1 o to xn now this what this means is that suppose that we are looking at this 3d point x2 we are also given the 2d image coordinate which is the projection of this particular 3d point onto the image over here which we denote as small x2 over here so small x n in general would be also represented by a homogeneous coordinate x y as well as alpha that is defined in the camera frame which we denote as fc so we call this the 3d 2d correspondences given this set of 3d 2d correspondences the task of camera post estimation problem is to find the camera pose which is the rotation and represented by the rotation matrix as well as the translation vector of the camera frame in the world frame so this is the objective and this particular problem is also known as the perspective endpoint problem or simply in short the pmp problem and the reason why it's called the perspective endpoint problem is because in this particular case we're looking at the camera model where it's a projective camera model where all the light ray simply converts to the camera center a single point of convergence and in the later subsequent lectures we'll be looking at what we call the non-perspective non perspective endpoint camera post estimation problem and in that particular case we'll be looking at the general camera a general camera could be any of with a lens of any shape where all the light rays simply do not pass through a single center of projection now let's start by looking at the first case where the camera is an uncalibrated camera and what this simply means is that we have a unknown intrinsic value the intrinsic of the camera is unknown which we denote by k over here as a result what this simply means is that since the camera intrinsic is unknown and our task here is also to find out the unknown rotation and translation of the camera frame with respect to the world frame since all these parameters are unknown so we simply put them all together into the camera projection matrix which is a three by four matrix where 12 entries in the 3x4 matrix are unknown so the objective now becomes we are supposed to find the 12 unknown parameters in our camera projection matrix such that the given 3d point which we denote as capital x is projected onto the given 2d point which we denote by small x over here with the camera projection matrix and we know from earlier lecture in homography that we can given an equation that looks like this gamma x i equals to p of x i uh in order to eliminate the unknown scale here we can actually take the cross product of both sides which is given by this guy over here and simply equate it to 0 and what it simply means here is that the unknown scale can be cancelled out since this is a scalar value that can be factorized out from this matrix multiplication operation which is equals to 0 and expanding this particular equation over here we will get this equation in trix form where the scalar value of x y and w here are simply the entry of the 2d point and we will simply rewrite or reshape the 3x4 matrix here the 3x4 matrix here into a 12 by 1 
vector and this particular matrix over here is simply 3 by 12. and similar to the homography case we'll see that we have three equations here from our camera projection equation as shown here only two of them are linearly independent what this mean is that we can simply remove the third equation over here and end up with a 2 by 12 matrix multiplying by a 12 by 1 vector that will always give us zero so this simply means that we end up with a homogeneous linear equation of the form of ax equals to zero and given and 2d to 3d point correspondences we can simply pluck it this known correspondences to find each one of this constraint in this particular matrix over here and given n of this point 2d to 3d point correspondences we'll end up with two n constraints that can be used to form a 2n by 12 matrix which we call a and uh this together with the 12 by 1 vector forms a homogeneous linear equation of ap equals to 0 where all the entries in the matrix a are known and and the only unknown in this particular case would be the 12 entries in our team vector over here given a homogeneous linear equation that looks like this where a here is 2n by 12 and p here it's 12 by 1. this simply means that we have 12 unknowns in this particular p vector over here so in order for a non-trivial solution to exist for this homogeneous linear equation the rank of a b better be equals to 11 such that we have a non-trivial solution for this particular homogeneous linear equation over here so since earlier on we look at this equal this particular equation over here well each one of the point correspondences gives us two constraints and since we have all together 11 unknowns in the p vector over here well what this simply means is that we need a minimum of 5.5 correspondences to get these 11 constraints where the 0.5 correspondence here simply means that since each point one point gives us two constraint so the half point here simply means that we are taking one of these constraints from a point correspondences that is given to us effectively a minimum of six point correspondence are needed where the last point correspondence only one equation is used to form the a matrix over here and the now the solution after getting the minimal equation over here of ap equals to 0 we can simply solve for the unknown p using the right now space of a so what this means is that we can take the svd of a which will give us the left null space or the left now space multiplied by the singular value multiplied by the right null space and b here can be rewritten as the collection of the right singular vector v1 all the way to v12 and the solution of p would simply corresponds to the last singular vector in v which corresponds to the least singular value in sigma so in earlier lecture we have seen that for homogeneous linear equations such as ap equals to 0 over here to have an exact solution the metric a must be made up of measurements that are noise free and we know that in practice this is not true because a camera is a real device that is often corrupted with noise and this means that the data that we collected which is the 2d to 3d correspondences to form the a matrix over here it's always noisy and in order to deal with uh such noisy data we know that in gen we have learned in the previous lecture that in general we need more than the minimum number of point correspondences to compute the solution for this p vector over here so as a result uh here it's always better to use more than six point correspondences and uh the solution of the 
camera projection matrix can be obtained uh from either the algebraic or uh minimizing the geometric error as we have seen earlier on when we look at the case of the homography so to minimize the algebraic error because we know that since a here is corrupted with noise in general uh you will never fulfill this homogeneous linear equation where the product of a multiplied by p will never be equal to zero and so what we can do best would be to minimize the norm of ap such that it is as close to zero as possible and uh of course if we were to uh naively minimize this uh the product of amp then we are not guaranteed to have a meaningful solution of p because in in this particular case if we do not subject p here to a certain constraint what it means is that a trivial solution could be found and that would be simply to equate p equals to 0 and this is the solute that we do not want so in practice it's always good to subject this minimization the algebraic error minimization to a certain normalization constraint and in this particular lecture we will use the first option over here where we will just constrain the norm of the vector p to be equals to 1. so this is similar to what we have seen earlier on in the lecture of homography and fundamental matrix so in the lecture of homography we do this we want to minimize the norm of ah subjected to the norm of h to be equals to one and in the fundamental matrix uh case this is what we do we have af over here equals to zero and to minimize the algebraic error means that we will minimize the norm of af and subjected to the norm of f to be equals to 1. otherwise all these cases here by just doing this minimization without any constraint then it's likely that the solution would be equals to 0 and this is what we want to avoid so a second option here in this particular case which is not applicable to what we have looked at earlier on in the case of the homography as well as the fundamental matrix would be to subject the normalization of the p matrix in particular the first three entries of the last row which means that i have a p matrix over here if i were to write it in this way so this is the capital m the three by three matrix and this is the three by one uh vector over here so p to the p3 over here uh it means that i'm taking the last row of this guy over here so m three by three the last row of this guy over here and i want to subject the norm of this to one and the reason why this can be done is because as we have seen in the earlier lecture that the depth given x and p p is the camera projection matrix x is a point in 3d space this depth here is actually given by the sine of uh over the determinant of m and multiply by w w it's the so if we were to project this guy over here p x into small x and small x here would be given by x y and w so w is the the scale here that we are looking at in the image coordinate point divided by t and the norm of m three so m here refers to this three by first three by three and three of the p matrix and m three refers to the last row of the first three by three entries in the p matrix over here and t here it's the last entry of the homogeneous coordinates of the 3d point so we have seen that the depth is uh equals to the w the sum multiplied by w divided by t and the magnitude of the last row of the first three by three entries of the p matrix and hence in order to normalize this we can subject the last row of the first three by three entries in the p matrix to have a norm of one and after we have uh solved for p using 
the svd we know that p here the solution of p is actually equals to the a certain scale lambda which we call lambda multiplied by the by the solution v this corresponds to the least singular vector in the right null space uh multiplied by a lambda represents a whole family of solution for this uh p vector over here so we can actually uh take the entries of this out by one vector over here so we can take the three entries that corresponds to this uh p3 over here and take the norm to be equals to one so this gives one equation in terms of one unknown lambda over here where lambda can be solved but in this lecture we will stick to the first option here although the second option here can also be applied so after initial a solution has been obtained from the algebraic approach the next step would be to refine the solution using the geometric error and the geometric error here is given by the reprojection error where suppose that p here is the initial camera projection matrix that we found from minimizing the algebraic error we can make use of this as an initialization to minimize the geometric error and using this particular projection matrix that we have found earlier on from the algebraic error we can project any 3d point x back onto the image and which we call x hat over here since the camera projection matrix is an estimated one what this means is that the reprojected 3d point onto the image which call x hat it is unlikely to coincide with the measured point which we call x i uh here and uh hence there is a certain error in between here so this error is given by the euclidean distance between the measured point and the reprojected point from our estimated camera projection matrix over here and the objective now is that we want to find the correct projection matrix p the three by four camera projection metric such that the total reprojection error for every point it's minimized and this will guarantee uh the the best solution for the camera uh projection matrix and in comparison to the algebraic error as we have mentioned earlier on in our lecture in homography as well as the lecture in fundamental matrix is that the reprojection error the or the geometric error use a much more accurate result because it's physically meaningful we are actually physically minimizing this error over the prediction of the camera projection matrix whereas in the algebraic case by simply minimizing this norm there's no physical meaning behind this except for we are just trying to uh mathematically force this norm to uh be close to zero as much as possible but we are not physically minimizing anything as the the reprojection that we are seeing here and this particular minimization or we have also mentioned earlier on in the lecture of homography that this minimization can be done using uh unconstrained optimization algorithms such as lower markov another option would be to use the gauss newton algorithm to do this we'll see these two algorithms in more detail when we talk about bundle adjustment in the subsequent lecture so since our objective is to minimize the norm of ap over here subjected to the constraint that the norm of p should be equals to 1. 
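putting the algebraic step together, here is a minimal python sketch of the direct linear transform just described; the function name is an assumption of mine, and it assumes the 2d to 3d correspondences have already been conditioned by the data normalization that is discussed next.

```python
import numpy as np

def dlt_projection_matrix(X, x):
    # X: (n, 3) 3-D points, x: (n, 2) image points, n >= 6, already normalized;
    # returns the 3x4 projection matrix estimated from the algebraic error
    A = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)                       # homogeneous 3-D point
        # the two linearly independent rows of the constraint x cross (P X) = 0
        A.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    A = np.asarray(A)                                 # shape (2n, 12)
    _, _, Vt = np.linalg.svd(A)
    p = Vt[-1]                                        # right singular vector of the smallest singular value
    return p.reshape(3, 4)                            # p already has unit norm, so ||p|| = 1 holds
```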
we know from the lecture in homography that a here better be in a good condition such that when we take the svd of a it can yield a good solution for the p vector over here but unfortunately we know that a is made up of the 2d to 3d correspondences that might be defined very poorly with respect to the respective frame of reference so for example for the 3d points over here we might be give a set of 3d points that are defined with respect to a world frame that is very far away from it and with some of the points might be very close and we can see that from the lecture in homography we we know that if this is the case then we know that this would greatly affect the magnitude of the respective entries in the metric a and as a result it will also affect the svd result that is used to compute the solution for p and we also know that this holds through in the 3d domain as well as in the image frame so we might get some points that are here and some points that are here and it's defined or defined with respect to a camera frame so in this particular case we we know that the a matrix it will affect the magnitude of the a matrix to differ greatly from one another and this will in turn affect the results svd that is used to compute the solution for the p vector so we also seen that in the homography lecture that a simple solution can be done to overcome this particular problem and that would be to do data normalization so in that particular case in the homography lecture we are dealing with the 2d to 2d correspondences for case of formography in that case the data normalization would have to be done on the 2d domain for both of the images and in our case here we are given 2d to 3d correspondences so data normalization naturally has to be done on both the 2d domain as well as the 3d domain so uh in in the case of the data normalization for the 2d data point it will be exactly the same as what we have seen earlier on in the 2d homography case so we have to define a transformation matrix such that all the 2d points in the image such that the centroid of all this let's say i call this centroid as a c over here to be uh translated to the origin or of the reference frame and the points over here in order to prevent this kind of situation where they are badly uh distributed in the image space uh we have to scale the points such that the root mean square distance the total root mean square distance to the origin equals to square root of two and we have seen in the lecture of homography that this can be easily done by defining a scale factor which we call as over here to be equal to square root over the mean distance of all the points from the centroid similarly uh we have to do the same data normalization in the 3d domain we'll still do the same the first step where we define a transformation metric over here so in the previous case where we define t norm over here since this is in the image domain this would be a three by three matrix now since we are looking at the 3d domain well this transformation matrix which called u norm over here would become a four by four matrix where the the first step of this transformation would be the same as the in the 2d domain where we want to translate the centroid of all the 3d points suppose that all these are the given 3d points what we want to do here is that we want and suppose that this is the centroid that we are looking at in the in the 3d space and this is the centroid which i denote as c we want to define a translation such that the centroid over here is 
translated to the origin of the frame of reference and next we want to scale the root mean square of the 3d points such that the total root mean square distance from the origin or from the centroid is equals to square root of 3 3 over here and this would prevent the case where we have some data that is uh scattered very far away from each other or very far away from the reference frame and in order to achieve this we will simply define a scale scaling factor over here we should call s as before but in this case since we want the total root mean square distance to be square root of three then we can define s to be equals to square root of three divided by the mean distance of all the points from the centroid so in summary the post-estimation steps to estimate the camera pose of an uncalibrated camera with unknown intrinsic value can be summarized in this particular table over here the objective here would be suppose that we are given endpoints where n equals two or more than six points of 2d to 3d correspondences which we denote as big x and small x over here so we are given endpoints of this the objective of this is that we want to estimate the camera projection matrix which is a three by four matrix such that from here we can actually decompose it to become the intrinsic value as well as the extrinsic value over here this particular camera projection matrix should achieve the objective of minimizing the total reprojection error of every 3d point that we are given that is being projected onto the 2d image with respect to the 2d point correspondences that we are given as well and we have seen earlier on that this can be done first by a linear method which minimizes the norm of the of a multiplied by p subjected to a constraint of the norm of p to be equals to 1 and this would be equivalent to finding the null space equation solution of ap equals to 0 subjected to p equals to 1. 
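the 2d and 3d normalization steps described above can both be written with one small helper, sketched below in python; the function name is mine and the scale follows the definition in the lecture, the target distance divided by the mean distance of the points from their centroid.

```python
import numpy as np

def normalisation_transform(pts, target):
    # pts: (n, d) inhomogeneous points, d = 2 for image points (target = sqrt(2))
    # or d = 3 for world points (target = sqrt(3)); returns a (d+1, d+1) similarity
    pts = np.asarray(pts, dtype=np.float64)
    c = pts.mean(axis=0)                                 # centroid, moved to the origin
    s = target / np.linalg.norm(pts - c, axis=1).mean()  # scale from the mean distance to the centroid
    d = pts.shape[1]
    T = np.eye(d + 1)
    T[:d, :d] *= s
    T[:d, d] = -s * c
    return T

# e.g. T = normalisation_transform(x2d, np.sqrt(2.0)) for the 2-D points and
#      U = normalisation_transform(X3d, np.sqrt(3.0)) for the 3-D points
```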
and but before we do this we also saw that the a matrix here has to be well conditioned by using the data normalization technique and in order to do the data normalization we have to do it in both through 2d as well as 3d domain so we have to compute a similarity matrix which we call t that we have seen earlier on how to compute this to normalize the points and as well as the normalization metric in 3d which called u over here to normalize the 3d points so the normalization is simply done by computing this t matrix and then pre-multiplying all the respective 2d points to get x to the over here which is the normalized set of point and we do the same thing for the 3d point so after normalization we minimize this by simply minimizing a p equals to zero and by simply taking the svd solution from a and so once we have the initial estimate from this algebraic error we can proceed on to refine the estimation of p using the geometric error which is also the reprojection error so here what we want to do here is that we want to do the arc mean over the camera projection matrix all the 12 parameters in the camera projection matrix such that the given 3d point reprojects to be as close as the corresponding 2d point as possible and we can do this using the either the macro algorithm since this is an unconstrained optima continuous optimization problem we can do this over uh level macro algorithm or we'll also see later that gauss-newton algorithm can also be used here then finally in the last step after we have obtained the camera projection matrix from both the algebraic initialization as well as the geometric error minimization we have to perform the last step which is the denomination to recover back the correct projection matrix with respect to all the original 2d to 3d correspondences that we have obtained earlier on to do the denomination we will simply just multiply the inverse of the 2d trans similarity transform by the predicted camera projection matrix from the geometric error and finally multiplied by the similarity transformation matrix for the 3d point denoted by u over here and uh so after we have found the camera projection matrix p uh we know that this is a three by four matrix which is made out of the intrinsic value as well as the rotation and translation and in order for us to reach our objective of finding the camera pose with respect to the world frame which is essentially the rotation and translation which is also the known as the extrinsic value of the camera projection matrix we have to do a decomposition of the three by four camera projection matrix into the intrinsics which is k as well as the rotation and the translation vector over here so we have seen earlier on in lecture four that this guy over here the first three by three entries in our camera projection matrix is actually equals to k multiplied by r and what this means is that we can make use of m to uh to recover k and r using the rq uh decomposition and once k and r has are both filed from the rqd composition we can make use of the rotation matrix that is found in the rqd composition to find the camera center by simply equating minus r and multiplied by c c tilde here is the inhomogeneous coordinate of the camera center and uh with the last column of p that was found earlier on so we can there are three unknown over here and we have three equations we can easily solve for all the three unknowns from this equation over here and and finally what we also want to solve would be the exact solution for the translation as 
well so notice that when we compute p from minimizing the norm of ap in the algebraic error or even in the geometric error that the scale is missing because say that we are subjecting this to the norm of p to be equals to one what this simply means is we ignore the scale of the camera projection and this uh means that t here the translation vector here it's obtained up to scale at this point that means that we do not know the scale of the translation vector here and fortunately we are given the 3d points x so each 3d point we know its exact location away from the world frame over here and what this means is that the 3d point gives us the absolute scale information which can be used to recover the scale of the translation vector over here and this can be easily done by looking at the ray equation that we have seen earlier on in the lecture this actually means uh the set of points lies on the light ray so x here is actually a light ray where uh the beginning over here is the camera center c and the pseudo inverse multiplied by x x is the is the image point over here and if we will take the pseudo inverse multiplied by the image coordinate over here we will get any point on this light ray over here so x which is the 3d point it actually lies somewhere on this light array over here so let's denote this by x and in order to find because we are we know this information from we know x the exact x from the given 2d to 3d correspondence and we know that this x here is parameterized by a single scalar value which we call lambda here that's the that value over here so since we know this because it's given in the 2d 3d correspondences we can make use of this known value and this one equation over here that represents the light ray to solve for the unknown that value and once this is solved we can put this back into the translation because now the translation is uh without scale so we can actually take t divided by the norm of t here and multiply by the lambda that we have obtained from this equation over here to get the new translation vector over here now in the case where we are given line to 2d to 3d line correspondences which means that instead of just now what we have looked at is that in on the image we are given a set of 2d points and we know the three it's 3d correspondences or also a point in the 3d space we want to find the rotation and translation from here the cam which means that we want to find the camera pose but now suppose that we are given a set of 2d lines we are given all these lines in the 2d image as well as its 3d correspondences here which is also lying the objective would remain the same given this 2d to 3d line correspondences we want to find the camera post the rotation and translation such that the world frame is aligned with the camera frame so as a start we'll first represent the 3d line suppose that this is our 3d line will represent the two end points or any two points on this 3d line using a 3d homogeneous coordinate which we call x0 and x1 and we also know that from the previous lecture we know that a line back projects to a plane so suppose that this is my line x 0 x x1 my 3d line and we have a image plane over here at this image plane it corresponds to a projection of this 3d line on the image plane and we know that this particular two uh 2d line or this particular image projection it back projects to become a plane so in in the 3d space so this is our the plane that is in the 3d space and we also know that from lecture five that this black projected plane can be 
represented by the equation of p transpose of l over here where l uh it's the image line or the image projection of this of this 3d line so this is the 2d line correspondence that we have over here and we also we also know that this p transpose l represents the back projection of the this 2d line to a plane in 3d space and we since we also know that the 3d line is supposed to lie on this particular plane over here what it means is that any one of the two 3d points used to represent the 3d line the dot product of date with the plane equation must be equal to 0 because there is a coincidence relation here between the two points and the plane formed by the back projection of the 2d line so we can write this relation mathematically in terms of the dot product of the plane so this first is the plane equation we can take a dot product of a plane uh equation with any one of the 3d points and that should be equals to zero and now uh since we get this equation over here uh what we can see here is that each choice of j which means that each choice of the three 3d points that is the of the two 3d points that is used to represent the line would gives would give us a single linear equation in the entry of p over here so what this means is that we'll get this equation where l the 2d line and the 3d point over here that represents the 3d line these are known values and the unknown is p over here we can evaluate this into a linear equation uh which is when we expand this out and then rearrange p to be on the right side and the l and x to be on the left side we can see that uh it's simply a system of linear equations with respect to the unknown entries in the p matrix over here and we also can verify this after we multiply it out we will we'll get this equation a of p equals to 0 where a here is simply made out of l the 2d lines as well as the 3d points of x j any one of the 3d points and this would be a 3 by 12 matrix and this guy over here would be a can be reshaped into a 12 by 1 vector from the 4 by 3 matrix of the camera projection matrix and similar to the last previous cases that we have seen in in the point correspondences as well as the cases that we have seen in the homography that out of these three equations only two are linearly independent so we can rewrite this into a 2 by 12 matrix multiply by p which is a 12 by 1 vector and equals to 0 and we'll get the homogeneous linear equation here so uh what this means is that given n number of correspondences n number of 3d to 2d line correspondences we can form a 2n by 12 a matrix by stacking every uh two constraints from a single 2d to 3d line correspondence together to form the a matrix where p here remains at the 12 by 1 and this here represents the camera projection matrix which is unknown here and this is what we want to find out so similar to what we have seen earlier on in the case of our 2d to 3d point correspondences we also want to minimize the norm of a multiplied by p subjected to the normalization constraint so in this case here of the 2d to 3d line correspondences we can also make use of either one of the two constraints uh the first one would subjected to the norm of p to be equals to one which is the preferred choice here and the second case that we have looked at would be to subject the third row the last row of the first three by three entries of the camera projection matrix to be equals to one and uh so the in in order to minimize this we can apply the same technique that we have seen earlier on by taking the svd of a so this 
would be the left singular null space multiplied by the singular value and the right singular null space and the equation the final solution here would be given by the last column of the right singular uh null space that corresponds to the least singular value in the sigma matrix over here so once we have found the the solution an initial solution from the algebraic error as before we know that minimizing the algebraic error of the given by the norm of ap here it simply doesn't mean anything physically except for we are just minimizing the mathematical equation such that a multiplied by p gives us 0 but in the physical sense in geometric sense there's actually no meaning to this particular error so once we have found the initial solution for the p vector over here which is a 12 by 1 vector over here we will reshape it back to form a three by four camera projection matrix and we'll minimize the geometric error because now it's not a point correspondence anymore and now it's a 2d to 3d line correspondence so one of the way to define the geometric error is given by the paper which i have published a few years ago at the eccv so specifically this geometric error can be as follows so given a 3d line that is defined by the two end points x i zero and x i one over here uh we can reproject this two end points back onto the image where this reprojected endpoints is given by x hat i zero that is uh directly computed from p multiplied by x i zero and we can do the same projection of this the second 3d point onto the image and what we want to do here is that we know that the observed 2d line is given by this line over here that we represent it as l i over here so what we want to do here is that we want to minimize the closest distance that the reprojected endpoint make with the 2d line is represented by d0 on one end and second perpendicular distance which is made by the second point to the to the line and this is also the closest distance from the second uh reprojected point to the line denoted by d1 over here both of them should be as small as possible and once d0 and d1 are zero what this means is that the line that is formed by the reprojected endpoints would be lying exactly on the observed line and this is the ideal case that we want so in other words we want to minimize the two areas over here but we can see that uh there is a there is an inherent bias towards shorter line so if this line is very short what this means is that uh the error that is formed by this would be smaller compared to a very very long line so if i have a very long line i have a very long line a very small angle over here it will actually make a very huge error at the two end points so uh instead of just simply adding out the two areas of that mid that is made by the endpoint we'll also normalize it by the length of the uh 3d line so as a result this is what we did how we define the reprojection error for line correspondence |
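as a rough sketch of this line reprojection error in python: the two 3d endpoints are projected with the current p, the perpendicular distance of each projected endpoint to the observed 2d line is computed, and the sum is normalized by the length of the reprojected segment to reduce the bias towards short lines; the exact normalization used in the paper may differ, so treat this as an illustration only.

```python
import numpy as np

def line_reprojection_error(P, X0, X1, l):
    # P: 3x4 projection matrix, X0/X1: 3-D endpoints of the line (inhomogeneous),
    # l: observed 2-D line (a, b, c) with ax + by + c = 0
    def project(X):
        xh = P @ np.append(X, 1.0)
        return xh / xh[2]                         # image point with last entry 1

    x0, x1 = project(X0), project(X1)
    l = np.asarray(l, dtype=np.float64)
    n = np.linalg.norm(l[:2])                     # so the distances are in pixels
    d0 = abs(l @ x0) / n                          # perpendicular distance of each reprojected endpoint
    d1 = abs(l @ x1) / n
    seg = np.linalg.norm((x1 - x0)[:2])           # length of the reprojected segment
    return (d0 + d1) / max(seg, 1e-8)             # normalize to reduce the bias towards short lines
```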
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_12_Part_3_Generalized_cameras.txt | now after looking at the two view geometry for generalized cameras let's proceed on to look at the absolute pose estimation problem of a generalized camera suppose that we are given three sets of 2d to 3d correspondences which we denote as capital x1 x2 and x3 for the 3d points that are defined in the world reference frame which we denote as fw and their corresponding 2d coordinates which we denote as small x1 x2 and x3 the task of the generalized pose estimation problem would be the same as the pose estimation problem for the pinhole camera and that is to find the rigid transformation r and t that brings the multi-camera frame fg into the world frame here's an image to denote the problem of the generalized pose estimation and as mentioned earlier on we are given three 3d points denoted by capital x1 x2 and x3 in the world coordinate frame denoted by fw over here and we are also given the corresponding image points which we denote as small x1 small x2 and small x3 over here so in this case x1 x2 x3 are defined locally with respect to the camera frame and we also know the extrinsic value of each camera with respect to the general camera frame which is denoted by fg over here so the task would be to find the relative transformation that relates the world reference frame and the local general camera frame we've seen earlier on that any point expressed in the multi-camera frame along the light ray can be given by this equation over here where q i cross q i prime is the nearest point suppose that this is my general camera and this is the light ray that i'm interested in the unit direction of this light ray would be given by qi with respect to the reference frame of this general camera and qi prime would be the cross product of any point on this line with the unit direction itself so q i crossed with q i prime would be the nearest point which turns out to be the foot of the perpendicular from the reference frame to the light ray itself so this point over here would be q i cross q i prime and we know that q and q prime over here can be easily computed from the extrinsic values r c i and t c i as well as the camera image coordinates x and y and the camera intrinsic value which is given by a three by three matrix k over here and this point that is closest from the ray to the reference frame would be taken as a reference for all the points on the light ray and this would be offset by a certain scalar amount in the direction of the light ray where lambda over here is what we refer to as the signed distance from the reference point q i cross q i prime to that point note that the signed distance over here always has to be positive for the point to appear in front of the camera since we are given the three 2d to 3d point correspondences this means that the distances between the three points which we denote as x1 x2 and x3 over here the three distances between them which we will denote as d12 d23 and d13 would have to be consistent regardless of the reference frame of these three 3d points so we can write this constraint in this particular form here where x i refers to any one of the points minus xj and the square norm of this which is the euclidean distance between the two points i and j with respect to the world frame this is defined with respect to a world frame has to be the same distance between the same set of points i and j but defined with
respect to the camera frame the general camera frame so we'll make use of this to formulate the constraint photo that helps us to solve the unknown scale distance and this can be easily done by substituting the line equation that we have seen earlier on here that expresses any point on this particular light ray into the equation on the right hand side here so we can see that since we already expressed this 3d point in the local frame with respect to the unknown lambda and the plucker line coordinates we can substitute it into this particular constraint to get this equation over here since we are given the points that are defined with respect to the world frame we can easily compute this distance over here these are actually known values over here and we also know the plucker line coordinates so all the plucker line coordinates are known as well and what this means is that in this particular equation the only unknown would be lambda and since we have three constraints over here and for each point we would have a lambda that is corresponding to that particular point this means that in total we will have three signed distances the three three lambdas which are known over here hence as a result we will get three equations and three unknowns which will allow us to solve for the three unknowns in a unambiguous way so expanding this constraints that we have seen here we will get this three polynomial equation here which we call a polynomial equation a b and c where k in these polynomial equations are coefficients that are made out of known plucker line coordinates q i and q i prime as well as the known 3d world points x i which is used to compute the distance between any two pairs of points and uh here since the only unknown would be lambda 1 lambda 2 and lambda 3 since we have three equations and three unknowns we can solve for the three unknowns here in a unique way so i make use of the elimination method to solve for the unknowns of lambda 1 lambda 2 and lambda 3 from the three polynomial equations that we have seen earlier on and we'll do this by first eliminating lambda 1 from the first two equations over here so in this case we saw that there are lambda 1 and lambda 2 and lambda 3 in this particular two equations over here will eliminate a wave lambda 1 that appears in both of the equations so this can be done by simply making lambda 1 the subject of both equations and then equating them together to cancel off lambda 1 and as a result we'll get the polynomial equation that is with respect to lambda 2 and lambda 3 over here which we denote as f lambda 2 and lambda 3 equals to 0. so we can see that once we get this particular equation the two unknowns would be lambda 2 and lambda 3 we can see that from the third equation there are also two unknowns over here which is lambda 2 and lambda 3 as well so this means that we can further make use of this polynomial equations in terms of lambda 2 and 3 after we have eliminated lambda 1 from equation a and b to eliminate one of the unknowns in lambda 2 and lambda 3. 
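to make the constraint concrete, here is a minimal python sketch that evaluates the three pairwise distance residuals for a candidate triplet of signed distances; the helper names are mine, and a correct solution for lambda 1, lambda 2 and lambda 3 drives all three residuals to zero.

```python
import numpy as np

def point_on_ray(q, q_prime, lam):
    # point in the generalized camera frame: the closest point on the ray to the
    # origin plus the signed distance lam along the unit direction q
    return np.cross(q, q_prime) + lam * q

def distance_residuals(Q, Q_prime, lams, X_world):
    # Q, Q_prime: (3, 3) Pluecker coordinates (one ray per row), lams: candidate
    # signed distances, X_world: (3, 3) known 3-D points in the world frame;
    # a correct triplet (lam1, lam2, lam3) makes all three residuals zero
    res = []
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        pi = point_on_ray(Q[i], Q_prime[i], lams[i])
        pj = point_on_ray(Q[j], Q_prime[j], lams[j])
        d_cam = np.sum((pi - pj) ** 2)                      # squared distance in the camera frame
        d_world = np.sum((X_world[i] - X_world[j]) ** 2)    # squared distance in the world frame
        res.append(d_cam - d_world)
    return np.array(res)
```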
so here we choose that we to eliminate away lambda 2 as a result we'll get a univariate polynomial equation in terms of only lambda 3 where the coefficients a b c d all the way to i over here are coefficients that are made up of k from these three sets of system of polynomial equations which are known from the known fluke line coordinates as well as the 3d point coordinate hence now we can easily solve for this unknown lambda we get one equation and one unknown over here which we can solve for this uh unknown lambda 3 by solving this univariate polynomial equation we have seen how to solve this in our earlier lectures on the single pinhole camera model uh we can do this by doing using what we call the companion matrix i won't go through this again you should refer back to the previous lecture on how to formulate this companion matrix and we can see that this would be a 8 by 8 square matrix where we can compute the eigenvalues and we'll get eight eigenvalues that corresponds to the eight solutions from this eight degree polynomial equation that we have obtained and now once we get the 8 solutions for lambda 3 we can back substitute into this equation uh over here to get lambda 2 and here it turns out that by doing the back substitution we'll end up with a quadratic equation for lambda 2 which we can easily solve in closed form by completing the square over here so here a b and c they are known variables from the coefficients of the three polynomial equations that we obtained earlier and after we get lambda 2 we where there are two possible solutions that we can get from lambda 2 over here we can do a further back substitution to find uh lambda one so it turns out that lambda one also has this uh same form over here this means that lambda one will also have two solutions and as a result we will get a total of 32 solutions because it from the 8 degree polynomial equation that we solved earlier and then 2 from lambda 2 and another further two solutions from lambda 1 so all together we will get 32 solution and it turns out that a solution triplet here can be discarded if any one of the lambda is imaginary or a negative value because here we are solving for the roots of the polynomial equation hence as a result there's no constraint on whether this should be a real or imaginary or a positive or negative value it could be any of these values but we know that lambdas here since they are the sign distance it always has to be a positive value hence as a result those imaginary solutions as well as the negative solutions as long as it appears in any one of the lambda in that particular triplet we will discard them and it turns out that in practice many of these solutions in fact fall into the category of imagine or negative value then once we have obtained the possible solutions this means that all the lambda lambda 1 lambda 2 and lambda 3 the set of solutions that we have for these three lambdas over here so because this lambda over here as we have seen in this diagram over here lambda simply means this distance over here so this is lambda 1 and here we have lambda 2 and similarly here we have lambda 3. 
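in practice the eight candidate values of lambda 3 can be obtained exactly as described, by building the companion matrix of the degree eight polynomial and taking its eigenvalues; the python sketch below also filters out the imaginary and negative candidates, under the assumption that the coefficients a through i have already been computed from the known plucker coordinates and 3d points.

```python
import numpy as np

def real_positive_roots(coeffs, tol=1e-8):
    # coeffs: polynomial coefficients [a, b, ..., i] from the highest to the lowest
    # degree; builds the companion matrix, takes its eigenvalues as the roots and
    # keeps only the real, positive ones (the admissible signed distances)
    coeffs = np.asarray(coeffs, dtype=np.float64) / coeffs[0]   # make the polynomial monic
    n = len(coeffs) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)                 # ones on the sub-diagonal
    C[:, -1] = -coeffs[::-1][:-1]              # last column: negated low-order coefficients
    roots = np.linalg.eigvals(C)
    real = roots[np.abs(roots.imag) < tol].real
    return real[real > 0]
```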
so what this means is that after we have obtained lambda 1, lambda 2 and lambda 3, we can easily compute the 3d points with respect to the reference frame of the general camera. once we get this, we have x1, x2 and x3 with respect to the world frame, and we also have the same points with respect to the camera reference frame, which we denote as x1 g, x2 g and x3 g with respect to the general camera frame. once we have these two sets of points the rest of the solution is easy: we simply do an absolute orientation to get the relative transformation between the two frames, and we have seen this absolute orientation algorithm when we talked about the pnp problem. now, since there can be more than one feasible solution where the triplet lambda 1, lambda 2 and lambda 3 is neither negative nor imaginary, we have to choose the correct solution, and we will choose the one that gives the highest inlier count. what this means is that suppose i have several solutions: r1, t1 is one solution, r2, t2 is another solution, all the way until rn, tn, the nth solution, for example the remaining solutions out of the 32 initial solutions after we discarded the imaginary and negative values of the lambdas. for each remaining solution we can compute the relative transformation between the general frame and the world frame, which means that we can use this transformation to transform the 3d points from the world frame into the camera frame and then project them onto the image. for each of these 3d points, after it has been reprojected onto the image, we compute the reprojection error, and we count those reprojection errors that are less than a certain threshold. out of the remaining solutions, the solution that gives the highest number of inliers is considered to be the correct solution. the next thing that we will talk about is generalized pose estimation from line correspondences. the setting is the same as the generalized pose estimation problem for point correspondences, except that instead of using 2d to 3d point correspondences we replace them with 2d to 3d line correspondences, which we denote as lj w, where w indicates that this particular line is a 3d line expressed with respect to a world frame fw. we also have the image correspondence of this 3d line, which we denote as lj c, meaning that i have a 2d line detected in camera c, and i denote this line, the j-th line in this particular image, as lj of c. the objective here is that given this set of line correspondences we want to find the pose of the multi-camera system with respect to the world frame. here i have a multiple camera setup where i have a reference frame fg, and i have a world frame fw where the 3d lines are expressed, and the objective is to find the relative transformation between the world frame and the camera frame, which is given by this four by four transformation matrix over here (there's a mistake on the slide, this should be one by three here). this is an illustration of the problem of generalized pose estimation from line correspondences: as mentioned earlier, i have a camera system, for example in this case i have three cameras, which are
rigidly mounted onto a rigid body and the reference frame that i chose would be lg over here to represent the reference frame of the general camera where i know the extrinsic value of every one of this camera with respect to the to this particular reference frame then the next thing that i would be given would be the 3d lines that is with expressed with respect to the world frame shown here now the problem is given a set of 3d to 2d line correspondences i want to find the relative transformation that brings the camera frame to the world frame so we'll have to represent the line correspondences using a plucker line coordinate so in this particular case over here notice that we will not be representing the light ray of a point in the image but we will directly use the plucker line representation to represent the lines in the 3d scene as well as the image coordinates and we'll do that by first defining the two end points of a line in the 3d scene suppose that this is my line i would have two end points of the line which are denoted by a homogeneous coordinates given by p a w and p b w here so this w here represents that these end points are actually expressed with respect to the world frame because we are given these 3d lines with respect to the world frame so here uh this would be a four by one uh homogeneous coordinate denoted by p a x p a y and p a z and one over here similarly for the second endpoint it can be represented as a four by one homogeneous coordinate as shown in this equation over here so uh having these two end points we can now proceed on to define the plucker coordinate that represents the line segment of l w over here more specifically the six vector plucker line coordinate of this 3d line segment it can be expressed as a six-dimensional vector as we have defined uh earlier on uh here we'll denote this the first three as the moment vector of new w transpose and then the next three entries would be the unit direction of this plucker line which is given by this guy over here so we can see that the unit direction suppose that we have this line where the end points are represented as p a w and p b of w so v w would be the unique vector v w over here would be the unit vector and this can be simply obtained by the subtraction of these two uh vectors of these two points over here so i'm going to take pv minus pa divided by the norm of pv minus a because recall that this vw is going to be a unit vector of the plucker line and then the next thing that we need to compute for the blocker line would be the moment vector so this is going to be any point on the line cross product with the unit vector and we are going to arbitrarily choose any point which we can conveniently choose to be pa over here so we're going to take pa pause it with the unit directional vector of vw and this would give us uw that forms the six dimensional plucker line now uh we know that lw over here it's a known entity the this is because we are given in the generalized post estimation problem we are given the 3d lines with respect to the world frame fw and l w can also be expressed in terms of the camera reference frame so what this means is that we are given these lines which are lw with respect to a world frame fw but now what we want to do is that we want to express this in terms of lc it's the same line over here but we call it lc because we want to express this line with respect to the generalized camera frame and we know that since we know lw with respect to the world frame and the world frame is related to 
the generalized camera frame via a rotation and translation which is unknown at this moment but we can express lc this particular 3d line over here with respect to the generalized reference frame according to the transformation that is given by the rotation and translation over here so as mentioned earlier on that this rotation and translation uh it's going to be given by the four by four matrix over here which we call t g in w so this means that this particular four by four projective transformation over here it's going to transform any point expressed in the generalized reference frame into the frame of the world we denoted by fw over here and this means that we are going to use this to express lc which we can see that this can be easily expressed in terms of this particular transformation of the pluco line coordinates so here instead of a four by four matrix of the transformation matrix we have a six by six matrix here which is by definition the transformation matrix of a plucker line and we're going to transform this plucker line which is expressed in the world frame into the camera frame hence we're going to make use of this rotation of the wall frame into the camera frame and the translation vector of the world frame into the world frame and we can further see that this transformation matrix that we have defined earlier or consists of rotation of the world frame to the camera frame and the translation of the world frame into the camera frame can be further factorized into two transformation matrices over here so the first would be what we are interested in finding this is the unknown transformation from the world frame to the generalized frame which is what we have defined earlier on and as well as the transformation this is the extrinsic value of the camera center with respect to the generalized reference frame so here we denote it as rg in c because this means that the any point that is redefined in the general frame we are going to rotate it and transform it into the camera frame so pictorially what this means is that i have a generalized camera where i have a reference frame here which is my generalized reference frame and i have a camera here which also consists of a coordinate frame which i denote as fc now i also have a world frame which i denote as fw so the relation of world into the camera frame is that i want to bring this world frame into my camera frame and this can be easily done by the product of the two matrices that defines the transformation between the world frame into the generalized camera frame and the generalized camera frame into the camera frame which we denote as r w in g and t w in g as well as r g c and t g in c and we all know that this is the objective that we want to find this is the unknown that we want to find in the generalized post estimation problem so since now we have lc uh expressed with respect to the camera frame and we also know that this uc over here can be expressed in this in this particular equation over here so essentially this is just the first three by three entries of lc where we take this guy over here the first three rows multiplied by lw over here to get you see since lc is a six by one vector this uc here will be the first three by one vector which is in fact this given by this equation over here and we also know that uh as mentioned earlier on in the lecture that uc and vc here they are perpendicular to each other which means that the dot product of this must be equal to zero we'll see later in the next slide that we'll make use 
of this constraint to define the 2d to 3d line correspondence that will help us in solving the pose estimation problem so here vc will be the last three entries of lc and we can see from this equation over here this would be simply equal to r c w multiplied by the last three entries of l w which is given by uh this guy over here so vc is simply equals to rwc multiplied by the last free entry of lw which is vw over here and this is the unit vector of the 3d line with respect to the camera frame fc we can see from this illustration over here on the relation between the directional vector and the moment vector of the respective plucker line defined in the world frame as well as the camera frame so this is the unknown here that we want to find the relative transformation between the world frame and the general camera frame and we are given lw so this guy here is given we are given lw that's defined with respect to the world frame and this will give us vw as well as uw over here and uh we are also given the extreme six so this is our extreme six parameter that relates the camera frame to the local camera frame to the general camera frame and here we know that we can transform lw into lc that means that we are transforming lc into the frame of the camera fc over here and uh we can we know that after comp transforming this lc this lc would be a function of lw which is known and the extrinsic value which is also known as well as the unknown parameters of the rotation and translation that relates the world frame to the general camera frame so we also know that the image coordinates at the end points so this means that for a 3d line that we have seen earlier on this would be projected onto the local image as the 2d line over here and we are also given this image coordinates of the 2d line so we'll do the same thing that will parameterize this using a plucker line using the two end points of this 2d image line which we denote as homogeneous coordinates of pa and pb in these two equations over here so similar to the 3d line we can also denote this 2d line using uh the gluca coordinates and this simply would be also consisting of a unidirectional vector that we can compute from the difference between the two endpoints so pb minus pa over here uh where p a and p b hat would be the camera normalized image coordinates as we have seen in the case where we compute the generalized epipolar geometry and divided by of course the norm of this because we want vc here to be the unit norm uh vector and uc which is the first three entries of the plucker line can also be easily computed by the cross product of any point on the 2d line with the unidirectional vector so conveniently we can just choose the first endpoint to be that point on the 2d line so once we have the 2d image lines represented using the plucker line coordinate we can then uh substitute it into the constraints that we got from the 3d lines that we have seen earlier on so as mentioned earlier on that the dot product of the unit directional vector and the moment vector of any plucker line must be zero since they are perpendicular hence we get this relation over here from the cross product of the line in 3d so uc transpose here refers to the moment vector of the line in the camera coordinate and r wc over here represents the rotation matrix and the unit directional vector of the 3d line so what this means is that i'm rotating this unit directional vector in the world frame into the camera frame and this is equivalent to vc that we have seen earlier on 
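a minimal sketch of the plucker construction described in this part, assuming the moment-first ordering l = [u; v] used in the lecture; the same helper covers both the 3d line built from its two world-frame end points and the 2d image line built from its two camera-normalized end points (passed as 3-vectors with the homogeneous scale kept at one):

```python
import numpy as np

def plucker_from_endpoints(pa, pb):
    """6-vector plucker coordinates [u; v] of the line through points pa and pb."""
    pa, pb = np.asarray(pa, float), np.asarray(pb, float)
    v = (pb - pa) / np.linalg.norm(pb - pa)   # unit direction of the line
    u = np.cross(pa, v)                       # moment: a point on the line crossed with v
    return np.concatenate([u, v])             # u.dot(v) == 0 by construction
```

the perpendicularity u dot v equals zero is exactly the property that the pose constraint below exploits.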
so the dot product of uc and vc must always be zero, because they come from the same plucker line, one is the moment vector and the other is the unit directional vector, and they are perpendicular to each other. we further know that the small uc in the image coordinates is parallel to the capital uc of the 3d line, because this is a projection: capital uc is the moment of the line formed by the two end points in the camera frame. since vc is the unit direction of the plucker line in 3d, and capital uc is the cross product of a point on the line with this direction, capital uc has to be perpendicular to the plane over here, so it is the normal of the plane formed by back projecting the line from the image. similarly the moment vector of the 2d image line, small uc, is also a normal vector of this back projection plane. this means that these two vectors, which we denote as capital uc and small uc, must be parallel to each other, and we make use of this relation to substitute the known small uc in place of the capital uc. what we get is this relation over here, where the only unknown is the transformation between the world frame and the camera frame. here we simply decompose this rotation into the product of two rotation matrices: one is the extrinsic rotation matrix that relates the camera frame with respect to the generalized camera frame, and the other is the unknown rotation that further relates the generalized camera frame with respect to the world frame. we can make use of this constraint to solve for the three by three unknown rotation, and interestingly we can rearrange this constraint, which contains the three by three unknown, into a homogeneous linear equation a r equals to 0, where r is a 9 by 1 vector and a is a 1 by 9 matrix. this 1 by 9 matrix a is made up of the known variables: small uc, the moment vector of the 2d line coordinates, r g in c, which is the extrinsic value of that camera, and vw, the unit direction of the 3d line, which is given to us; r is a 9 by 1 vectorized representation of the unknown rotation matrix that relates the generalized frame to the world frame. we can solve for this unknown nine by one vector by having eight or more 2d to 3d line correspondences, and we stack all these equations together to form a r equals to zero. in the case where we have eight or more, say n, 2d to 3d line correspondences, a has a size of n by nine and r remains a 9 by 1 vector, and this can be easily solved by the svd method as usual: we take the svd of a, which gives u sigma v transpose, and we take the column of the right orthogonal matrix v that corresponds to the smallest singular value as the solution. once we have solved for the rotation matrix, which is rg in w, the next thing that we need to solve for is the unknown translation vector t g in w. since we know that small uc and capital uc are parallel, we can further write the relation where small uc is related to capital uc by an unknown scale factor lambda, so we can now substitute this known small uc, together with the unknown scale lambda, in place of the capital uc in the equation that we obtained earlier.
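a minimal numpy sketch of this null-space step. the row construction assumes the constraint has been brought into the form u transpose times r times v w equals zero, with u the 2d-line moment already rotated by the known camera extrinsic, and with the rotation vectorized row by row; these details are illustrative rather than the exact formulation on the slides:

```python
import numpy as np

def rotation_from_line_correspondences(u_list, v_list):
    """Estimate the unknown 3x3 rotation from n >= 8 line correspondences.

    u_list : 2d-line moment vectors, pre-rotated into the generalized frame
    v_list : unit directions of the corresponding 3d world lines
    Each pair gives one row of A via u^T R v = 0, i.e. kron(u, v) . vec(R) = 0.
    """
    A = np.vstack([np.kron(u, v) for u, v in zip(u_list, v_list)])
    _, _, Vt = np.linalg.svd(A)
    R = Vt[-1].reshape(3, 3)       # right singular vector of the smallest singular value
    # project the algebraic solution onto a proper rotation (orthogonal, det = +1)
    U, _, Vt2 = np.linalg.svd(R)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt2)]) @ Vt2
```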
replacing the capital uc with lambda times the small uc in that equation gives this particular equation, where uc is known, lambda is an unknown value, and the only other unknown on the right hand side is this translation vector of w with respect to c, which we will see can be further decomposed into t of w in g and t of g in c, the latter being a known extrinsic translation vector. but first we get rid of the unknown scalar by taking the cross product with uc on both sides, which gives rise to this equation that is independent of the unknown scalar lambda. we can then rearrange the whole thing with respect to the unknown values in t of w in c, where we decompose it into these two terms over here, and we get an overdetermined linear system of equations with eight or more correspondences. note that these eight or more correspondences need not be a new set, they can be the same correspondences that we used earlier to solve for the rotation matrix. putting everything together we get an overdetermined system b t equals to zero, where b is an n by four matrix and t is a four by one translation vector, further denoted as t x, t y, t z and one. we can solve for t by simply taking the svd of b, and interestingly t can be solved with no ambiguity: after taking the svd, the solution of t is alpha multiplied by v, where v is the singular vector that corresponds to the smallest singular value of the svd of the matrix b, and alpha can be solved for because we enforce the constraint that the fourth entry of the translation vector equals 1, so with one equation and one unknown we can easily solve for alpha and back substitute it to get the exact value of t. so we have seen how to solve for the rotation and the translation, the transformation for the generalized pose estimation problem using line correspondences. there are two special cases that we need to further consider. the first case is that we only have one camera in our multi-camera system setup, meaning i have my general camera with the reference frame fg and only one particular camera fc, so all the light rays, or all the line correspondences in this case, are seen by this particular camera only. in this case the whole formulation still remains, because we can easily see that it is only this extrinsic that vanishes: in this equation over here we need not consider this extrinsic anymore, and we simply write this as w to c, where we define the general camera reference frame to be aligned with the camera frame, and all the steps remain the same. the second special case that we need to consider is parallel 3d lines, meaning all the lines that we are given are parallel to each other in 3d. here we can see that since the unit directions of all these lines are the same in our plucker representation, the rank of a will drop below eight, and what happens is that r of w in g, the rotation matrix, cannot be solved, because we are solving for it using the homogeneous linear equation where r is a 9 by 1 vector.
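once the n by 4 system b t equals zero described above has been stacked, the translation step itself is just another small svd. this is a hedged sketch, assuming b has already been built from the correspondences; the only extra step is the rescaling that pins the fourth entry to one:

```python
import numpy as np

def solve_translation(B):
    """Solve the overdetermined homogeneous system B t = 0 for t = (tx, ty, tz, 1)."""
    _, _, Vt = np.linalg.svd(B)
    v = Vt[-1]                 # singular vector of the smallest singular value
    return v / v[3]            # alpha * v, with alpha chosen so the fourth entry equals 1
```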
if the rank of a drops below eight, we get a family of solutions and we will not be able to uniquely identify this rotation matrix. this problem can fortunately be easily overcome by omitting the sets of parallel lines: given all the 2d to 3d correspondences, we can easily check that the 8 line correspondences that we pick to solve this homogeneous linear equation are not all parallel to each other. as a summary, we have looked at the plucker line coordinate representation to derive the generalized epipolar geometry of the two-view generalized camera, then we looked at how to use the linear 17-point algorithm to obtain the relative pose for this two-view generalized camera. next we considered three cases of degeneracy in the 17-point formulation, namely the locally central projection, the axial camera, and the locally central and axial camera configurations, and we also looked at an algorithm on how to uniquely identify the relative rotation and translation of the two-view geometry under these degenerate cases. finally we looked at how to compute the absolute pose of a generalized camera using 2d to 3d point or line correspondences |
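as a small numerical guard against the parallel-lines degeneracy discussed above, one could check the rank of the stacked matrix before attempting the rotation solve; the rank-8 threshold below is my reading of what is needed to recover the 9-vector r up to scale:

```python
import numpy as np

def safe_to_solve(A, tol=1e-8):
    """Guard against the parallel-3d-lines degeneracy before solving A r = 0.

    If all the 3d lines share the same unit direction, the stacked matrix A
    loses rank and the rotation can no longer be recovered uniquely, so we
    check that A has (numerical) rank at least 8.
    """
    return np.linalg.matrix_rank(A, tol=tol) >= 8
```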
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_3_Part_3_Circular_points_and_Absolute_conic.txt | now let us look at the definition of a plane at infinity and it has a canonical position of pi represented by pi infinity equals to 0 0 0 1 here so we can see that this is a direct extension of the line at infinity which is on the 2d projective space which is given by 0 0 1 transpose and this is in the p2 space that we have defined earlier and in this case here the plane at infinity is in the p3 space it's in the projective three space we can see that this is another example of a direct extension from the 2d space into 3d space and it contains the directions all the directions of the these ideal points x1 x2 and x3 and 0 here so what this means here is that at the plane of at infinity the plane at infinity denoted by pi infinity here which is given by 0 0 0 1 here transpose it contains all the points that are contained in this plane at infinity uh the ideal points in the p3 space here and this is denoted by d which is equals to x1 x2 x3 and 0 transpose here and these are the representing the directions that are pointing from the finite space to the infinite space and this enables us to identify the five properties such as parallelism particularly two planes are parallel if and only if there are lines at intersection is on the plane at infinity this means that if i have two parallel lines and this would be uh going to intersect at the infinite point which is given by d here x1 x2 x3 and 0 transpose here and this has to be lying on the plane at infinity uh since the well this point at infinity the ideal point is going to lie on the plane at infinity this means that two lines or two planes are parallel or if the line are intersecting at the plane at infinity and similarly a line is parallel to another line or to a plane if the point of intersection is at infinity so in the first case we are looking at two planes in the second case we are looking at a line and a plane uh this is the top view of the plane for example and they are going to be in parallel and they are going to intersect at this ideal point here now the plane at infinity is a fixed plane under projective transformation h even only if h here is an affinity matrix or a affinity transformation matrix and here we can do a simple proof to show this we have the plane at infinity this is going to be pre-multiplied by h a inverse transpose this is now a four by four transformation matrix and this guy here is going to be a four by one homogeneous equation here a homogeneous coordinate representation of the plane in the p3 space so now uh we can substitute this h a into this uh equation here and the pi of infinity is going to be here so we can see that the evaluation of this uh will give us the the plane at infinity again hence uh this proves that the plane at infinity remains fixed or we say that is invariant to the affinity transformation here's two remarks the planet infinity is general fixed under affinity but not fixed point wise so it's the same as the line at infinity where we actually after the the final transformation of the line at infinity line infinity still stay at line at infinity but the points x1 and x2 here we say that uh it could change order here or it can change change location the relative location but they are all going to still be collinear here and in this case here uh we have the plane at infinity pi infinity here after it undergoes a certain a fine transformation this plane at infinity 
will still remain at plane at infinity but the points inside here they might change location but they are all still going to be contained inside the plane at infinity and the second remark here is that under a particular affinity for example euclidean motion there might be planes in addition to pi infinity which can be fixed so what this means is that under a certain ha here affinity matrix in addition to pi infinity that will stay fixed under the transformation of h a that could be also other planes here for example pi i that undergoes this h a here it could also remain fixed under this affinity and however only uh pi infinity is fixed under any infinity matrix affinity matrix of h a so any affinity matrix will always uh the pi infinity will always be invariant to it but there might be some uh planes in the p3 space that can be in invariant to some affinity matrix so here's an example of the affinity matrix where there are a subset of the planes that can be invariant to this transformation here so this is a euclidean transformation which we have seen earlier and it's uh denoted by the this particular matrix here so what it means here is that uh i'm rotating around the z axis here so that's why the last column here of the rotation matrix here is going to be zero and there's no translation here hence the last three by one vector over here is going to be zero and we can see that uh this particular transformation euclidean transformation here uh if the this plane here that is being rotated it's going to stay uh it's going to stay invariant because after rotation and before rotation it's still going to be the same plane but what happens here is that we can see that uh it's not it's fixed as a plane as a set but not point wise because we can see that the points here with respect to the the original the axis the representation is going to move it's going to change we are here it's going to be transformed to another point here it's going to rotate in a circle by this euclidean action over here and algebraically the fixed planes are actually eigenvectors of the homogeneous transformation h transpose here which we can see from this uh equation here this is the proof here so uh the what we are doing here is that uh we are going to uh transform a certain vector here a certain uh point on the plane or a plane over here denoted by v this is a four by one homogeneous uh coordinates that represent either a point or a plane uh here and we are going to transform here so in this case here this is a plane a v is a plane and we are going to transform by h inverse transpose and this is going to be equivalent to lambda of v over here so what this means is that we are simply going to just scale the uh this uh plane over here but it's still going to be the same plane and we can see that this is actually our famous uh eigenvalue or eigen equation in linear algebra where we say ax equals to lambda of x over here where x here is actually our eigen vector and lambda here is our eigen value so in this case here after this projection or after this transformation by a x which is the eigenvec vector is going to be skill but it's not going to change direction and this is uh the definition of eigenvectors here so as a result what we can see here is that the plane here in this case this kind of planes here and the h transformation here would be the i would form the eigen equation here and what this means is that for any plane that is an eigenvector we need to find the uh h over here the projective transformation such that 
uh the decomposition of this or such that v here the plane is the eigenvectors of h the transformation here then uh this set of planes would be uh would remain invariant to the transformation given by h over here and we can see that we can substitute v here by pi which is our plane equation and this would exactly form the transformation where lambda and v here are eigenvalues and eigenvectors of the h transpose and h inverse transpose respectively and in this case the eigenvalues or in the example that we have seen of the to rotate around the z axis in this case here the eigenvalues and eigenvectors of this guy here which is the equation that we have seen earlier here the matrix it's given by four components here since this is a four by four matrix it would have four eigenvalues and four eigenvectors here and these are the four eigen vectors that we have uh seen from the that we can get from this particular matrix over here and in this case here these four eigenvectors uh two of them are imaginary and two of them are the real eigenvectors so we'll ignore the imaginary planes e1 and e2 and focus on e3 given by zero zero one 0 and e 4 that is given by 0 0 0 1 and we can see that e4 here is actually our pi at infinity the plane at infinity and e3 here directly corresponds to the z axis which we are rotating around so this is the normal vector of a plane uh that this stays invariant uh after the final transformation along around the z axis and interestingly we can also get a pencil of fixed planes that is spanned by e3 and e4 because we can treat these two as our basis vector to get the linear combination of it to get new planes or additional pencil fixed planes that are fixed under the transformation of h and we say that e3 and e4 are degenerate planes here so we can see that the axis of this pencil is the line of intersection of the planes that is perpendicular to the z-axis so this is the e3 given by e3 where e3 is actually equals to 0 0 1 0 and that represents the normal vector of this plane here and the intersection of this plane with the plane at infinity pi to infinity which is given by e4 uh here is given by zero zero zero one over here so the intersection between these two planes here it's going to give us the line that represents the intersection or represents the pencil of planes that is invariant to the projective transformation h that we have seen earlier and this forms the the line equation or this particular line here at the planer of infinity it's uh given by the span of e3 and e4 which is what we have seen in the previous lecture and of the definition of a line in the 3d space and what's interesting here is that the null space of this line is the null space basis of the of one zero zero zero and zero one zero zero which denotes the uh infinite point or the point infinity the ideal point on the both the line at infinity as well as the plane at infinity and linear combination of these null space spaces will give us all the points on the this particular line here at infinity that intersects the two planes of z that rotates around z as well as the plane at infinity and we'll see in lecture six that uncalibrated two view reconstruction this means that if i have two views uh two images from two different viewpoint of the same scene uh i can get the relative transformation rotation and translation here and then after that i will be able to do triangulation this when we look at what is called the fundamental uh matrix and the triangulation algorithm and we'll be able to recover the 3d 
structure up to a certain projective ambiguity and by making use of this plane at infinity we'll be able to recover or remove the projective ambiguity and recover the scene with a fine ambiguity here and then the by using and we'll see that how this can be done using the planet infinity in lecture and we will see how this can be done uh using the planar infinity in lecture six now let us look at the definition of absolute conic previously we look at the circular point which is a line at infinity any circle or any conics that intersects the line at infinity will always be at the two points of a circular point i and j and now we will look at the definition of absolute conics which is the conic denoted by omega infinity it's a poinconic on pi of infinity means that it's actually a conic that is on the plane of infinity so any conic that lies on the plane of infinity here a conic that lies on the plane of infinity is known as the absolute conic denoted by omega infinity and in metric frame we know that the homogeneous coordinates of pi infinity is given by this vector over here and uh the points on omega infinity absolute chronic would satisfy these two equations here so this means that it's also a complex number x1 squared plus x2 squared plus x3 squared is close to zero and the fourth element here is always going to be zero so we need two equations to define omega infinity or the absolute corning and for directions on the plane at infinity that is points with x4 equals to zero the defining equation can be given by this equation here so this essentially is the first part of the two equation x1 squared plus x2 squared plus x3 squared equals to zero since this guy here is identity it's a three by three identity given by one one one and zero at the off diagonal and uh so the absolute conics omega infinity cor corresponds to the conics with matrix uh c equals to uh i and this is a uh this is a conic of purely imaginary points on pi infinity what this means is that it actually doesn't exist physically we won't be able to see in the real world because these are purely imaginary points and a conics it's a geometric representation of five additional degrees of freedom required to specify metric properties in an affine coordinate so one of the properties of the absolute conics is that it must remain fixed under a projective transformation h if and only if h is a similarity transformation here's the proof of this uh invariant of the absolute chronic so since the absolute chronic slight on the plane at infinity pi infinity here a transformation on it must fix pi infinity this means that a transformation of the absolute conic since it lies on the plane at infinity this particular absolute conics here omega infinity a transformation of it is still must lie on the plane at infinity after the transformation h so uh what this means is that the we know that this guy here omega infinity after the transformation it still must lie on the plane at infinity and as well as we have seen earlier that the plane at infinity it will remain fixed under the a fine transformation hence this transformation here that transform uh omega infinity uh for it to remain the same so for pi omega prime to be equals to omega itself infinity then this particular h over here must be an a fine transformation in order for pi infinity to remain fixed under this transformation hence uh we can write this affinity transformation to be this guy over here where a here is actually our three by three matrix and t here is a three by one vector 
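for reference, the two defining equations of the absolute conic described in this paragraph, written together; d collects the first three coordinates of an ideal point:

```latex
\begin{equation}
\Omega_\infty:\;\; x_1^2 + x_2^2 + x_3^2 = 0 \;\;\text{and}\;\; x_4 = 0
\quad\Longleftrightarrow\quad
d^{\top} I_{3\times 3}\, d = 0
\;\;\text{for ideal points } (d^{\top}, 0)^{\top},\; d = (x_1, x_2, x_3)^{\top}
\end{equation}
```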
and at pi infinity, omega infinity is given by the identity matrix; we have seen this earlier, that is, x1 x2 x3 multiplied by the identity and again by x1 x2 x3 gives the first defining equation equal to 0, and hence omega infinity equals the identity on pi at infinity. since it is fixed by the affinity matrix, because we want pi infinity to also remain fixed, this means that a inverse transpose times i times a inverse, which is the transformation applied to the conic, must also be equal to the identity, since i here is our conic omega at infinity and the transformation a applied to it must bring it back to itself. taking the inverse of this expression, we get a multiplied by a transpose equals to identity, which means that a inverse must be equal to a transpose, and this is the orthogonality relation of the rotation matrix that we have seen earlier when we talked about rigid motion in 3d. hence this a must be a scaled rotation, since the overall scale does not matter here, possibly with a reflection, and as a result we have proven that this affinity matrix a must be a scaled rotation, and hence that h a must be a similarity transform. therefore the absolute conic only stays fixed under a similarity transformation, and this similarity transformation will also fulfill the invariance property of the plane at infinity. even though the absolute conic does not have any real points, it shares the properties of any other conic. the first property is that the absolute conic is fixed as a set in general, but not fixed point wise, under a similarity transformation; this is the same as the plane at infinity, which is fixed under an affine transformation but not point wise, and likewise the line at infinity. one remark that can be made here is that under a similarity transform a point on omega infinity may travel to another point, but it is not mapped off the conic: given this conic on the plane at infinity, which is omega infinity, a point here after any similarity transformation might be mapped to another point on the conic, but it will not be mapped off the conic, it will still remain on the conic. the second property is that all circles intersect omega infinity at two points, and the remark here is as follows: suppose that the support plane of the circle is pi, so any circle in 3d space has a support plane pi, and this support plane intersects the plane at infinity, which contains our absolute conic, in a line. this line intersects the absolute conic at two points, and these two points must be the circular points i and j that we have defined earlier. also, all spheres intersect the plane at infinity in the absolute conic: if i have any 3d sphere, it is going to intersect the plane at infinity exactly in the absolute conic given by omega infinity. so, similar to the circular points, we can make use of the absolute conic to define the angle between any two lines in projective 3d space. suppose that i have two lines with directions given by d1 and d2, each a three vector directional vector; in this case d1 and d2 are actually my ideal points that are
pointing towards infinity so because recall that all the points at pi of infinity are actually directional vector it's pointing towards the the point at infinity and hence i can also make use of this to represent the line directions d1 and d2 so this line here d2 is going to point to another point at infinity and the the dot product of this two directional vector will give me the angle between these two lines direction and here we can elegantly define this the under any projective transformation to be you to use the absolute conics here to represent this dot product over here where d1 d2 are ideal point or the directional vector that intersects the plane at infinity at the ideal point and omega infinity here is the matrix representing the absolute conic in that plane so these two intersections here must intersect at the absolute conics that is contained on the plane at infinity and omega infinity is that absolute chronic in that plane so the two directional vectors are octagonal if this relation here gives us 0 which means that we have a cosine of 90 degrees here and this would be equals to 0 here if these two lines here the directional vector are perpendicular to each other we'll see in lecture five that this image absolute conic can be used to recover camera intrinsics as uh that is to use it for calibration furthermore we'll also see in lecture six that the absolute conic and the plane and infinity can be used to uh remove the fine transformation distortion here and uh after we have do done the reconstruction we can remove the projective ambiguity and then uh make use of this konig's absolute conics here to remove the fine ambiguity such that we will be able to recover the 3d reconstruction up to a certain scale now after looking at the definition of the absolute conics let's also look at the absolute dual quadratic so this is the doer of the absolute koenigs the dual of the absolute conics is a degenerate dual quadratic in the three space core absolute dual quadratic and we will denote this with q infinity star so geometrically q infinity star consists of a plane's tangent to the absolute conic so recall that previously we have defined the conics the absolute conics to be point conics and the dual would be defined by planes that are tangent to the absolute conics so previously this guy here omega infinity is defined by all the points that are on the conics now we are going to look at the planes that are tangent to this conic can be used to define the absolute conic and we call this the absolute dual quadratic and this is the rim of the uh it can be thought of that this guy here the absolute konig is the rim of the absolute dual quadratic which we also call the rim quadrant so algebraically the absolute dual quadratic is represented by a four by four homogeneous matrix with rank three and the canonical form is given by this guy over here so uh as in comparison to omega infinity the co the konig is just an identity uh matrix and in this case here is given by identity with uh padded columns and rows of zeros and the dual quadric is a degenerate quadric there are eight degrees of freedom because it's a symmetrical matrix with 10 independent elements but the irrelevant skill and determinant of zero will have to take off two degrees of freedom from these ten independent elements here that gives us eight degree of freedom here and uh the absolute dual quadratic is fixed under projective transformation h if and only if h is a similarity transformation so this is the same property as the 
absolute koenigs since it's a doer of the absolute chronic it must follow the same property so now let's look at the proof of the absolute dual quadratic when it's undergoing similarity transformation and you will remain fixed so since the this guy here is a dual uh quadratic its transformation is given under this equation and applying this with an arbitrary transform which is given by this guy over here so this is a hp h a a and h s over here and we will get the this uh this identity which is our absolute dual quadratic over here q infinity star and this is also our q infinity star over here so uh what this means is that if we were to apply it with a general or arbitrary projective transformation given by the product of these two three factors over here we'll be able to get this equation over here and this must be uh in order for the absolute dual quadratic to remain fixed this means that the this equation here the matrix over here must be equal to uh the absolute dual quadrat after the transformation and the proof here is that uh in order for this guy here to be to be identity a and multiplied by a transpose here it must be a octagonal uh matrix or skilled octagonal matrix this means that it must be a rotation matrix by a followed by a scale as well as these terms here must be all zero and this we can see that this will all be zero if v is equal to zero and v it's the uh is the last three by one element of the uh of our transformation of the projective uh transformation v and the vector here and the small v over here so uh what this means is that in other words h must be a similarity transform since a here uh it must be uh it must be a skill rotation and v here must be 0 this means that h must be a similarity transform given by this guy over here t and then 0 and 1 here another property of the absolute dual quadratic is that the plane at infinity is the null vector of the absolute dual quadrat so the remark here is that this can be easily verified when uh q at infinity this q infinity here has its canonical form in the metric frame and with pi at infinity given by this guy here we can see that the uh the product of these two will give us zero which means that the the pi infinity is actually an incidence of q at infinity which is given by i zero zero zero and multiplying it by zero zero zero one here will give us zero here and this property here's uh holds after any or regardless of the transformation that is applied on the pi infinity as well as the absolute dual quadratics we can see that uh this is true if after the any projective transformation given by h over here that transform a point in the 3d in the p3 space then q is going to be transformed to q prime and pi is going to be transformed to pi prime over here you undergoing this two transformation putting it back into the incidence relation we can see that all the h essentially cancels off and remaining with one h over here but we will still get back this equation over here which is equivalent to zero and the whole thing will be equal to zero now the anchor between any two planes uh pi one and pi two is given by this uh equation over here and this is dual to the absolute conics so we can get this relation over here and the proof here is that consider two planes pi one we have the normal vector and the distance and the normal vector as well as this and in the euclidean frame the absolute dual quadrat has this particular form here and we can get uh cosine theta equals to uh the dot product of this normal vector equals to this and by 
inserting the uh this guy over here into this we can also consider d1 over here and where d1 is eliminated by the zero paddings in q infinity here to get back this particular equation so we can prove that these two are equivalent here and which is the angle between the planes expressed in terms of the scalar product of their normal vectors now a remark here is that if the planes and q infinity are projectively transformed then the angle here will still remain the same due to the transformation properties of planes and quadratic so i'll leave this as an exercise to prove for you to prove basically you just insert all the h which is representing the projective transformation into this equation here and show that it actually cancels out each other so now in summary uh we have looked at how to use the line at infinity and the circular points to remove a phi and projective distortion then we have looked at the how to describe the plane at infinity and its invariance under a fine transformation finally we describe absolute conics and its absolute dual quadratics and they are in variances under similarity transform we will defer the use of absolute conics as well as the absolute dual quadratic and the plane of in at infinity for a fine and projective distortion in 3d space until we look at camera calibration as well as the fundamental matrix in lecture 5 and 6 respectively thank you |
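for reference, the two angle formulas discussed in this part, for line directions via the absolute conic and for planes via the absolute dual quadric, together with the canonical form and the null-vector property of the plane at infinity:

```latex
\begin{gather}
\cos\theta = \frac{d_1^{\top}\,\Omega_\infty\, d_2}
                  {\sqrt{(d_1^{\top}\,\Omega_\infty\, d_1)\,(d_2^{\top}\,\Omega_\infty\, d_2)}},
\qquad
\cos\theta = \frac{\pi_1^{\top}\, Q_\infty^{*}\, \pi_2}
                  {\sqrt{(\pi_1^{\top}\, Q_\infty^{*}\, \pi_1)\,(\pi_2^{\top}\, Q_\infty^{*}\, \pi_2)}},
\\[4pt]
Q_\infty^{*} = \begin{bmatrix} I_{3\times 3} & 0 \\ 0^{\top} & 0 \end{bmatrix},
\qquad
Q_\infty^{*}\,\pi_\infty = 0
\end{gather}
```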
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_11_Part_3_Twoview_and_multiview_stereo.txt | so the joint probability between two random variables x and y can be computed using the parzen window method. in our context, where we are comparing two patches f and g, this simply means that the joint probability over the two random variables, denoted by x, which represents the vector of f, and y, which denotes the vector of g, is given by this equation over here. let me give you an illustration in a one-dimensional case. suppose that we have a kernel k, which in our context is a 2d kernel, usually a 2d gaussian; for the illustration suppose we have a 1d kernel, also denoted by k, which means we only have one random variable in the parzen window approach, and this is equivalent to estimating a kernel density function. in the 1d case this kernel is defined over values of x, where for every value in the vector of f i subtract it from all values of x, which means that for each value of f i am defining a gaussian kernel over the whole possible range of x, and this is for one particular entry in the nine by one vector of f that we have defined over a three by three window. note that the window can be of any size, it need not be just three by three, it can be five by five, ten by ten, etcetera, but in this context i am going to illustrate it with a three by three window. for every value i get a gaussian kernel, where sigma is a predefined value that we treat as a hyperparameter, and as a result we get nine of these gaussian kernels. because each entry in the vector of f corresponds to a pixel in the three by three patch of the image, and since they are nearby pixels the image intensities might be quite similar to each other, we might end up with a result where some of the kernels are overlapping or very close to each other. by taking a sum over all the nine kernels, what we are doing is essentially modeling the superposition of these nine kernels into a distribution, which might look something like this, and this would be the distribution that we have for the 1d example over x. for the 2d example we are looking at two dimensions, x and y, and what we are looking at is the joint distribution, where each pair of corresponding values of f and g corresponds to a 2d gaussian kernel, so we have many of these in total, because in the three by three example where we have nine entries per vector we have a combination of nine by nine kernels, and the joint probability is simply the superposition of all these gaussian kernels. in a discrete context this means that we have a two-dimensional table that represents the joint probability, where every entry represents a single probability; suppose that this one over here represents x equals to 1 and this one over here represents y equals to 1.
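a minimal numpy/scipy sketch of the construction described above and completed in the next paragraph (marginals and the mutual information score): the joint table is built from corresponding pixel pairs of the two patches and smoothed with a 2d gaussian, which is one common way of realizing the parzen estimate; the bin count, intensity range and sigma are illustrative hyperparameters, and the exact kernel placement on the slides may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mutual_information(f, g, bins=32, sigma=1.0):
    """Mutual information between two flattened patch vectors f and g (e.g. 9 values each)."""
    joint, _, _ = np.histogram2d(f, g, bins=bins, range=[[0, 256], [0, 256]])
    joint = gaussian_filter(joint, sigma)      # parzen smoothing with a 2d gaussian kernel
    joint /= joint.sum()                       # normalize to the joint probability p(x, y)
    px = joint.sum(axis=1)                     # marginal p(x): sum over all values of y
    py = joint.sum(axis=0)                     # marginal p(y): sum over all values of x
    nz = joint > 0                             # avoid log(0) on empty cells
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px[:, None] * py[None, :])[nz])))
```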
so that means that the entry in here would be equivalent to p x equals to 1 and y equals to 1 the joint probability of this and of course this would be y equals to 2 3 and all the way until the the specific range of the where x and y can take and we can uh essentially compute this entry by fixing x here equals to 1 for example and y here equals to 1. and then we will sum it over all the possible entries i'm putting it into the calcium kernel to get the value of p x equals to 1 and y equals to 1 over here so we'll do this repeatedly for all the combinations of x and y over the possible range of x and y and to get this particular joint probability or discrete value of the joint probability of x and y so once we have obtained the joint probability of x and y in a discrete form which was uh illustrated earlier on as a two-dimensional uh table that represents the joint probability of x and y where each axis represents the value of x and y respectively we can then obtain the marginal probability of x and y by simply summing up over the values so suppose that we have the probability of joint probability which is represented by this table over here the marginal probability of x would be simply given by the sum over all the possible values of y in this joint probability table so what this means is that uh for will for probability of x it will be a one-dimensional uh table that looks like this this would be my marginal probability of x and every entry here would be simply equivalent to the sum over every each row of the joint probability table and similarly for the marginal probability of y this would be equivalent to the submission over all possible of values of x over the joint probability and this is also a one-dimensional uh probability which we denote by p of y over here and every entry here would be the sum over every single element in a column of the joint probability table after computing this we can then put into the mutual probability function that is given by this equation over here and to get the photo consistency measure as defined earlier on in this particular slide here's an example of the different matching course over these two images that is shown here so suppose that i'm interested the value of this particular pixel over this particular scan line over here so suppose that i have already rectified this particular image so this scan line would be into a horizontal scan line and where this is a rectified image so for every patch over here in this particular scan line i'm going to compare the matching course with this particular image patch over here and then i'm going to plot out the matching cost over the inverse depth this means that i'm going to take the disparity and compute the depth which will see how to do this in the next few slides and we can see that there is a certain minimal value for the ssd the sum of square difference where the lowest value here would be the one that we take as the depth for this particular image pixel over here and what this essentially means is that we are scanning it over the whole scan line and then we are taking the pixel which is visually more similar to this particular reference patch and then this is essentially given by the value with the lowest matching cost and uh which corresponds to this particular point over here example that is uh given so the depth would be uh somewhere corresponding to 0.5 we can see that uh for sad it's uh the minima the this global minima is also pretty obvious but probably not as obvious as the sum of square 
difference. for the normalized cross correlation we will be looking at the maximum, and since in this particular example there are highly repetitive features, meaning the texture is repeated all over the place, as we have mentioned earlier this is a highly repetitive image and it is not good for the normalized cross correlation computation. we can see that the maximum point over here is not very distinctive compared to the other points, so it could easily confuse the computation of the depth if we were to use the normalized cross correlation method on a highly repetitive structure like this. the last example over here is the cost given by the mutual information, and we can see that this seems to be the perfect way to compute a cost for this highly repetitive image, because it makes use of the probabilistic formulation, and a very clear global minimum can be identified. now, once we have obtained the disparity from the matching cost: earlier we said that if we have a reference image and a second image, for each patch over here i am going to compare the matching cost over a scan line and take the one with the best matching cost in terms of the correlation, and then i am going to take the x value minus the x prime value, and this is called my disparity. i am now going to show that this disparity value, which i can obtain directly from the patch matches, gives the depth of the point, because we know the rotation and translation, and after rectification the epipoles are mapped to points at infinity and the epipolar lines become the same scan lines. we'll show in this step that once we have identified the disparity we can directly compute the 3d point, or the depth of this 3d point with respect to the reference frame, without doing triangulation at all. here's an illustration: suppose that x and x prime are the x values of the corresponding patches that we have found from the image correlation step mentioned earlier. we can see that the distance of the image plane to the camera center o is given by the focal length, and suppose that the two images have the same focal length, which we can easily achieve by taking the normalized camera coordinates using the inverse of the camera matrix, so at this step we can assume that they have the same focal length. if we draw a line joining the camera center, the image point and the 3d point that gives rise to this image point, as well as another line that joins the 3d point, its image correspondence in the other image and the camera center of the other camera, we can see by virtue of similar triangles that we have a triangle here and a bigger triangle here, so we can compare these two as similar triangles by taking the ratio x over f to be equal to b1 divided by z, where z is the depth of the 3d point. we can do a similar operation on the other camera, where we take the ratio of minus x prime, because it is defined in the opposite direction of the frame, divided by f, the focal length; this triangle over here is similar to this bigger triangle over here, so we do the same operation and take minus x prime over f equals to b2 divided by z.
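the scan-line search described above, as a minimal sum-of-squared-differences sketch for a rectified pair; the window size, disparity range and the assumption that the reference is the left image are illustrative choices:

```python
import numpy as np

def disparity_ssd(left, right, y, x, half=1, max_disp=64):
    """Disparity of pixel (x, y) in the left (reference) image of a rectified pair.

    Compares the (2*half+1)^2 patch around (x, y) in the left image against
    patches shifted along the same scan line in the right image, and returns
    the shift with the smallest sum of squared differences.
    """
    ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):                # candidate disparities d = x - x' >= 0
        xr = x - d                               # corresponding column in the right image
        if xr - half < 0:
            break
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1].astype(float)
        cost = np.sum((ref - cand) ** 2)         # ssd matching cost
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```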
here b1 and b2 add up to the baseline; recall that after rectification there is only a translation in the x direction between the two camera centres, which we denote as b, and it is simply the sum of b1 and b2. from the two relations obtained from the similar triangles we can sum both sides of the equations to get (x minus x prime) divided by f, since f is the same in the two equations, equals (b1 plus b2) divided by z, because the denominator z is also the same. we can rearrange this by bringing f over to the right hand side to get x minus x prime equals (b1 plus b2) multiplied by f divided by z, and since we defined b1 plus b2 to be b, the baseline, which is the scalar value of the translation in the x direction between o and o prime, this becomes x minus x prime equals b multiplied by f divided by z, where z is the depth of the 3d point. rearranging, we get the depth z equals b multiplied by f divided by (x minus x prime), and x minus x prime is the disparity we find from the matching cost. what this means is that since b and f are given by the calibration of the stereo setup, once we have found the disparity of every pixel, which is simply the difference between the x coordinate of that pixel in the reference image and the x coordinate of its corresponding pixel in the other image, we can directly compute z, the depth of the 3d point, from this relation. what's interesting here is that z is inversely proportional to the disparity: the larger the disparity, the smaller the depth is going to be. similarly, if x prime is seen on the other side of the camera, we can again use the similar triangle relation to define the two relations over here, but in this case we overshoot b, with this part being b2 and the whole base being b1; the first pair of similar triangles gives one equation and the second pair gives the other. combining the two, we need to subtract this time, because we want b1 minus b2 to give us b, which we know from the camera calibration, so we take (x minus x prime) divided by f equals (b1 minus b2) divided by z, and by the same operation we bring f up to get f multiplied by (b1 minus b2), where b1 minus b2 is simply the baseline, the scalar translation along the x axis. we again get the same relation x minus x prime equals b multiplied by f over z, so z is inversely proportional to x minus x prime. here we can also see the reason why, as i mentioned earlier, given a pixel on the reference image, which we denote as x, the search for its correspondence in the other image cannot exceed the value of x.
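before explaining why the correspondence search is bounded, here is a minimal sketch of this depth-from-disparity relation z = b f / (x minus x prime); it assumes rectified images and a calibrated baseline and focal length, and the function name and the handling of zero disparity are my own illustrative choices.

```python
import numpy as np

def depth_from_disparity(disparity, baseline, focal_length, eps=1e-6):
    # z = b * f / (x - x'); disparity must be positive for a point in front
    # of the camera, so zero or invalid disparities are mapped to "no depth"
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > eps
    depth[valid] = baseline * focal_length / disparity[valid]
    return depth
```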
so x prime has to be in this range: if i superimpose this, that means i take this coordinate and draw it here, then x prime must lie within this particular range and cannot fall beyond it. the reason is pretty simple: x minus x prime must always be more than zero, because the baseline is always more than zero and the focal length is always more than zero, and in order for the 3d point to appear in front of the camera the depth z must be more than zero, hence x minus x prime must be more than zero. if we were to find a correspondence x prime outside this range, it would simply mean the 3d point is behind the camera. from the relation earlier we saw that the depth z is inversely proportional to the disparity, so the smaller the disparity, the larger the distance is going to be, and this also means that at small disparities the measurement becomes more inaccurate and more sensitive to error, because a small perturbation in the disparity leads to a large difference in the distance. consequently there is a certain useful range of a stereo camera; for example for the bumblebee camera that we looked at, the useful range is usually up to about 10 to 11 metres. what we have looked at so far is a naive way of doing the matching to compute the disparity map, known as block matching, where we consider every pixel independently and scan across the respective scan line in the other image to look for the visually most similar corresponding patch. since we search for every pixel independently, we are likely to end up with a blocky depth map that looks something like this, where this is an example of the input left image and this is the ground-truth depth map; with block matching using a three-by-three kernel we end up with a lot of noisy holes. with a smaller window, of size three in this example, we end up with what i mentioned earlier, a very blocky disparity map, although there is more detail, so the boundaries of the tree trunk for example can be clearly seen in the depth map. we can resolve the blocky issue by using a larger window, but since we are still considering every pixel independently, we end up with an over-smoothing effect where the fine-grained detail of the original image can no longer be seen in our disparity map. there are also some possible failures of the correspondence search. one example would be a textureless surface: suppose i'm looking at the image patch at this location, and the corresponding scan line is this line over here; the patches along the background, which is the wall, all look very similar and there is no texture at all, so the matching cost we get over this whole range would be the same, and we cannot differentiate between the best, the second best or the subsequent matches along this scan line.
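before moving on to the other failure cases, here is a minimal sketch of the naive block matching just described, assuming a pair of rectified grayscale images as numpy arrays; the window size, the maximum disparity and the ssd cost are illustrative choices, and the loops are kept explicit for clarity rather than speed.

```python
import numpy as np

def block_matching_ssd(left, right, window=5, max_disp=64):
    # naive block matching on rectified images: for each pixel in the left
    # image, slide a window along the same scan line in the right image and
    # keep the disparity with the smallest ssd cost
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.float64)
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            # x' can only lie to the left of x (positive disparity)
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.sum((ref - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```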
another problem, which we looked at earlier, is repetitive structure and occlusion. this particular patch, when we search along this scan line, might get matched to this one, or this one, or this one over here, and the costs would be quite similar, so it's not possible to differentiate; in other words the cost function might have multiple peaks, as in this case over here, and it becomes indistinguishable between these three peaks because the structures are self-similar. another example of failure would be non-lambertian surfaces, which means specular surfaces that reflect light, for example the very shiny car surface here, or the glass that covers this photo frame; when we take a snapshot, they may reflect light, and hence the matches along the epipolar line, or scan line, might not be good enough, and we get a smudged depth map that looks something like this. to resolve this we can use what is called scanline optimization stereo. with pixel-wise computation we treat every pixel independently, but two neighbouring pixels in an image do not occur as neighbours for no reason: they are neighbours because they are indeed similar, they indeed lie close by in the 3d scene. so what we want is for the computation of the disparity, the search over a scan line, to take the depth of the neighbouring pixels into account, instead of considering every pixel independently. a good constraint for neighbouring pixels is to assume that, since they do not appear as neighbours for no reason, they should lie on the same surface of the same object; in other words, the depth difference between these two pixels should be very small in the real 3d world. what we can do in the optimization of the disparity map is to add what we call a smoothness constraint along each scan line, meaning that we want neighbouring pixels, as much as possible, to take the same disparity value. since the disparity value is given by x minus x prime, where x is the index of a pixel in the image, we are taking a discrete value of x, because an image is made up of a discrete grid of pixels where every entry is indexed by an integer. this means the set of all possible disparities x minus x prime is a finite set of integer values, which we denote as d subscript p, the disparity of a particular pixel p. since it takes a finite set of values, we can relabel this set as 1 to l, where l is the number of possible disparity values, and each label represents a discrete difference between x and x prime. we can now define the problem of finding the disparity map as a labelling problem: for every pixel in the reference image we are going to assign one label out of the l labels in the set of possible disparities. we do this for every pixel.
we also want to consider only the disparities within a bound. as i mentioned, for a certain value of x the set of labels cannot be infinite, because x prime cannot take a value outside the image, so the disparity is already a finite set, and we wish to restrict this set even further by considering only disparities within a certain range, because we also know from the earlier relation that the smaller the disparity, the larger the uncertainty. hence we bound x minus x tilde, where x tilde denotes the possible x prime in the second image, to be within a certain range alpha. the whole operation of assigning, for every pixel, one label out of the l labels in the disparity space can then be formulated by the following cost function: we have a unary cost as well as a binary cost, where the binary cost is a smoothness cost used to regularize the prior knowledge that two pixels appear as neighbours because they should have quite consistent depth values. the unary term is written as the cost c of p and d subscript p, which is the dissimilarity measure; because we want to minimize this cost, we want p to take a disparity value such that this cost is as small as possible, and this is simply our matching cost between the two images: if i'm looking at pixel p, i'm looking for the patch in the other image that gives me the best matching cost, which minimizes this value. if we only did block matching, we would simply be looking at this unary cost. but now, because we also know the prior that neighbouring pixels are encouraged to take similar depths, we add this regularization potential, which we call the pairwise or binary potential, or it could be a higher-order potential, depending on the size of the neighbourhood n; q denotes a neighbouring pixel of p. if we look at one scan line, then for p we consider all the neighbours q in n along that scan line, and we want the disparity to be as smooth as possible, meaning that d subscript p and d subscript q should be as close to each other as possible. the easiest way to do this would be simply to consider the pixel to the left and the pixel to the right, and we'll look first at the scenario where all the neighbouring pixels lie on the same scan line. we define this regularization cost as the function over here: if d p and d q, the disparities, are equal, then we give it a zero cost, and we want this cost to be as small as possible because we are minimizing it.
so what this means is that, in addition to making sure we get the best matching cost, we also want d p to take a label such that it is the same as the label d q of its neighbouring pixel. if they are not the same but differ by a magnitude of 1, then we give it a penalty with cost p1, and if the difference is larger, meaning that within the neighbouring pixels p and q the disparities d p and d q are way more than one apart, then we want to give it a heavier penalty, hence p2 must be bigger than p1. now, since the neighbouring pixels lie on the same scan line, it turns out that this optimization can be done using dynamic programming, otherwise known as the viterbi algorithm. we can think of it this way: this block over here is obtained from a single scan line in the left image, and this block over here is obtained from the same scan line in the right image. for every pixel in the left scan line, say a pixel p, i compute the matching cost of this patch with respect to all the patches along the right scan line, and these costs form one row of this similarity matrix; so the similarity matrix gives us all the disparity costs for every pixel on the left scan line against the right scan line, with a sliding window across the right scan line. in the block matching case, where we only consider the unary cost, what we do is that for every pixel on the scan line we look for the minimum cost, for example this could be the minimum for pixel p, and for the subsequent pixel q we look for another minimum, which might be here. but what happens is this: having a match here for pixel p means, since this corresponds to x prime and this corresponds to x, that there is a certain disparity value d p, and for q there is also an x and an x prime, so a certain disparity d q. if these two, which are neighbouring pixels on the same scan line, have disparities that are very far apart, then the two disparities are very dissimilar, which is not what we want, because we are enforcing the prior that two neighbouring pixels must have depths that are as similar as possible; hence we want to give such a configuration a higher penalty.
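before looking at how to optimize this, here is a small sketch of evaluating the total cost of a candidate labelling of one scan line, with a unary matching-cost table and the p1/p2 smoothness penalty just described; the function name and the default penalty values are illustrative assumptions, not values from the lecture.

```python
import numpy as np

def scanline_energy(unary, disp, p1=10.0, p2=100.0):
    # unary: (W, L) matching costs along one scan line
    # disp:  length-W integer labelling, one disparity label per pixel
    # total cost = data term + smoothness term, with penalty p1 for a
    # disparity jump of 1 between neighbours and p2 for any larger jump
    data = sum(unary[x, disp[x]] for x in range(len(disp)))
    smooth = 0.0
    for x in range(1, len(disp)):
        jump = abs(int(disp[x]) - int(disp[x - 1]))
        if jump == 1:
            smooth += p1
        elif jump > 1:
            smooth += p2
    return data + smooth
```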
what this essentially turns out to be is a shortest-path problem: starting from the first image pixel of the left scan line and ending at the last image pixel of the same scan line, we want to find the shortest path from the start of the scan line to the end. by shortest path i mean that for every entry the path passes through we sum up its value, and we also add a cost for each transition from one entry to the next; for example, in this illustration between p and q i am exaggerating by a lot, but suppose p and q are just neighbouring pixels on this scan line. if i choose the next disparity along the path to be as close as possible to the previous one, the disparity penalty will be very small, and i want to avoid the situation where one pixel takes a value here and its neighbour q takes a value that is as far away as possible. so what we are interested in is finding the lowest-cost, or shortest, path from the start to the end, considering both the values of the entries the path passes through and the edges linking every pair of consecutive entries. since we mentioned that we only consider disparities within a certain range, or threshold, alpha, this means i am going to ignore this part of the similarity matrix in my search for the path. as a result, there are a few examples of possible paths from the start to the end, and we choose the one that ends up with the lowest cost; that is the solution to this optimization. an example of the disparity map obtained from this method is given here. since we are still only considering one scan line at a time, it is at least better than the first case where we considered individual pixels: instead of individual pixels we are now considering the neighbouring relations between all the pixels within that scan line. but we are still ignoring the relations between different scan lines, so instead of the blocky artifacts we saw earlier we get this streaking artifact: along a single scan line the depth is very consistent, but when we look across the different rows we see discontinuities, a certain jump between scan lines, and what we get is something like this, which is known as the streaking effect. we can probably do better by simply allowing q to be anywhere in the image, so that we get smoothness in all directions: for a pixel p we consider neighbours q not just along the scan line but in all possible directions, and in the most extreme case we want this p to be consistent with all the other pixels, all the other q's, in the whole image, and this is known as global matching.
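going back to the single scan line case, here is a minimal sketch of this dynamic-programming, viterbi-style optimization, assuming a precomputed matching-cost matrix of shape (number of pixels, number of disparity labels); the p1 and p2 values are placeholders, not values from the lecture.

```python
import numpy as np

def scanline_dp(cost, p1=10.0, p2=100.0):
    # cost[x, d] = dissimilarity of left pixel x matched at disparity label d
    # accumulate the cheapest path over disparities, with penalty p1 for a
    # label change of 1 between neighbouring pixels and p2 for larger changes
    cost = np.asarray(cost, dtype=np.float64)
    W, L = cost.shape
    acc = np.zeros_like(cost)
    back = np.zeros((W, L), dtype=int)
    acc[0] = cost[0]
    d_idx = np.arange(L)
    for x in range(1, W):
        # transition penalty between every pair of previous/current labels
        diff = np.abs(d_idx[:, None] - d_idx[None, :])      # (prev, cur)
        penalty = np.where(diff == 0, 0.0,
                           np.where(diff == 1, p1, p2))
        total = acc[x - 1][:, None] + penalty                # (prev, cur)
        back[x] = np.argmin(total, axis=0)
        acc[x] = cost[x] + np.min(total, axis=0)
    # backtrack the optimal disparity label for every pixel on the scan line
    disp = np.zeros(W, dtype=int)
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(W - 2, -1, -1):
        disp[x] = back[x + 1, disp[x + 1]]
    return disp
```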
unfortunately, when we go to such global matching, although we tend to get a better result in this particular case, because we are considering the neighbouring relations between all possible pixels in the image we end up with an np-complete problem, which is intractable to solve most of the time. although there is a method to do this, which we call alpha expansion and which is based on graph cuts, this is beyond the scope of this module and we will not consider it further; if you are interested you should take my class next semester on modelling uncertainty in ai, where i will talk about this in more detail |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_1_Part_1_2D_and_1D_projective_geometry.txt | hello everyone, welcome to lecture 1 of 3d computer vision. today i'm going to talk about 2d and 1d projective geometry. hopefully by the end of today's lecture you'll be able to explain the difference between euclidean and projective geometry; in particular we'll look at the familiar cartesian 2d coordinates used to describe euclidean geometry, and we'll look at how to make use of what we call homogeneous coordinates to describe projective geometry. we'll use homogeneous coordinates to represent points, lines and conics in the 2d projective space. we'll also look at the duality relationship between lines and points, and between conics and dual conics, on a 2d plane, and finally we'll apply the 2d hierarchy of transformations to points, lines and conics. of course i didn't invent any of today's material; i took most of the content from the textbook by richard hartley and andrew zisserman, multiple view geometry in computer vision, in particular chapter 2, and some of the material from the textbook by yi ma and colleagues, an invitation to 3d vision, chapter 2. i strongly encourage all of you to take a look at these two chapters after today's lecture to reinforce the concepts. projective transformation is actually a very strong part of our life, because we live in a 3d world where everything follows euclidean geometry and can be described in cartesian coordinates, but the moment we open our eyes and see the world, we are actually seeing it through projective transformations: light that is cast on any object in the 3d world is reflected into our eye and converges onto our retina via a projective transformation. what's interesting is that the projective transformation causes geometric changes between the 3d world that we live in and the images that are formed in our eye and sent to our brain. for example, in this picture we can see that parallel lines in the 3d world, this pair of lines on this building which should never meet, actually meet at a finite point in the image after the projective transformation. the same thing happens when we look at a scene with our eyes, which are really the most powerful cameras available, and in this case with a digital camera, where the light is projected onto the photosensors. we can further see that rectangles in the real-world scene, for example this window, are no longer rectangles after the light rays are projected from the scene into the camera; they become parallelograms in the image. another example is the circle: this clock face in the 3d euclidean world that we live in is actually a
perfect circle, but when it is projected onto the image after the projective transformation, the circle geometry is not preserved: it becomes an ellipse in this case. so we saw from the previous slide that certain geometric properties are not preserved by projective transformation; in particular we saw three examples: a circle may not appear as a circle anymore, it becomes an ellipse; parallel lines do not stay parallel, they actually meet at a certain finite point when projected into the image; and a rectangle may appear as a parallelogram after a projective transformation. in fact, if we take a closer look at images, or at the world through our eyes, it's not difficult to realize that angles, distances and ratios of distances are all not preserved. for example, here i have two parallel lines in the scene which are supposed to form no angle at all, but when projected onto the image they meet at a certain point and form some angle; even lines that are not parallel, which already form some angle theta, change when projected onto an image: this angle theta can become theta prime, which is not the same as what really exists in the 3d world. the distance here could refer to the distance between two real points in the 3d world: suppose this distance is denoted by d; when it's projected onto an image, this d between the two points becomes d prime, and they are unlikely to be the same. the same holds for the distance between the two endpoints of a line, or the perpendicular distance between two lines: suppose i'm looking at this distance d formed by two lines, the projection onto an image causes this d to change into d prime. the same is true for ratios of distances: suppose i have two distances formed by these four points, call them d1 and d2, and the ratio between them is d2 over d1 in the 3d world; when projected onto an image, d1 becomes d1 prime and d2 becomes d2 prime, and d2 prime over d1 prime will not be equal to d2 over d1, due to the effect of the projective transformation. we shall see that all of this geometric analysis can be formalized using the homogeneous coordinates that we are going to learn in a while. so we have seen that most geometric properties change when they undergo some form of projective transformation, and the question to ask ourselves is: what is a good way to quantify, or to study, this projective transformation? fortunately, one property that is preserved under projective transformation is what we call the straightness property, and this becomes the most general requirement on a projective transformation; in other words, we say that the straightness property stays invariant under projective transformation. we can see from the earlier image that any straight line in the scene is still projected into a straight line after it
has undergone some form of projective transformation: even though the angles, the ratios of distances and the distances may change, a straight line will still remain a straight line after a projective transformation. so the thought here is that we may make use of this invariance of straightness to define a projective transformation, or to describe the whole set of projective geometry that preserves straight lines. mathematically, we can formulate a projective mapping that maps from one projective space to another projective space, for example the 2d projective space which i denote as p2, and since we observe that straightness is preserved, we want to define a projective transformation, which i will temporarily call h, that maps any straight line in one scene into another scene such that the straightness property is also preserved. more generally, in projective geometry we study the geometric properties that are invariant with respect to projective transformations; the preservation of straightness is one such property, and throughout these lectures we'll look at other geometric invariants that are invariant to certain forms of projective transformation. so far what we are familiar with is what we call euclidean geometry, which is an example of synthetic geometry that we have all learned since our primary school days; it makes use of what we call axiomatic methods and related tools to describe geometry and to solve problems in geometry. for example, if i want to solve a right-angle triangle and find the angle here, we can make use of compasses and rulers: we can measure the angle between two lines of this triangle, or we can simply measure the lengths of the edges using a ruler and compute the angle using the cosine rule or the sine rule. this is the most standard form of euclidean geometry that every one of us learned in school, and euclidean geometry has a very long history; it was developed by the greeks in very early days, as depicted in this painting by raphael from the 1500s, where we can see a group of philosophers and mathematicians trying to solve some euclidean geometry problem by making use of a compass and a straight edge. what we are going to look at in this module is what we call projective geometry, and instead of making use of axiomatic methods and tools such as compasses and straight edges, we are going to make use of mathematics: we are going to make use of coordinates and, in particular, linear algebra to solve problems in projective geometry. we will learn in the later part of the lectures how we can convert the projective geometry representation, in particular homogeneous coordinates, into euclidean geometry, or the euclidean space. euclidean space is usually
represented by cartesian coordinates, and what we call projective geometry, since it makes use of coordinates and linear algebra to describe geometry and to solve geometric problems, is also called analytical geometry. we shall see that one of the most important results from projective geometry is that the geometry at infinity can be nicely represented. in euclidean space there is no way to represent points, lines or planes that occur at infinity; the seemingly only way to represent them is to use the notion of infinity itself, which ends up being an exceptional case whenever we do mathematical operations. we will see that by making use of homogeneous coordinates in projective geometry, the infinite points, lines and planes can be represented elegantly using just a set of numbers, without treating them as a separate case. so we'll make use of homogeneous coordinates to describe the projective space. as we learned in high school mathematics, the euclidean space, which we denote as e, can be represented by a set of cartesian coordinates living in the real numbers; for example the 2d euclidean space is represented using pairs of real numbers in cartesian coordinates. in comparison, the 2d projective space is represented by a set of homogeneous coordinates, which are a set of three real numbers, so in this case we are talking about three numbers living in r3. a point in homogeneous coordinates can be represented using the three numbers kx, ky and k, and these three numbers represent the point kx divided by k and ky divided by k, where k can be cancelled off to get x and y in cartesian coordinates. in other words, given a homogeneous coordinate we can easily convert it back into the euclidean space, into cartesian coordinates, by simply dividing the first two entries of the homogeneous coordinates by the last entry. it's also interesting to note that kx, ky, k is equivalent for all values of k: we can think of k as just a scalar that multiplies the homogeneous coordinates x, y and 1, so for all k this representation corresponds to the same point x, y in the cartesian coordinate space. we can also use this homogeneous representation, these three numbers, to represent a 2d point at infinity by simply setting the third value k equal to 0, and this is an elegant way of representing 2d points at infinity because it doesn't require any exceptional representation of the concept of infinity; all we need to do is set the last value k
equal to zero. we can see this clearly when we convert this homogeneous point x, y, 0 into cartesian coordinates by dividing x over zero and y over zero, following the rule we saw in the first statement; this ends up being infinity, infinity, which indeed represents a point at infinity in the cartesian space, but it is very inconvenient to represent infinity in cartesian coordinates directly, so homogeneous coordinates offer us a more elegant way of representing points at infinity. generally, the r n euclidean space can be extended to a p n projective space using homogeneous vectors, as we will see in this lecture; in this example, for a point, the euclidean space r2 is extended to the projective space p2 simply through the extra third coordinate k, and we can easily convert from euclidean space to projective space and vice versa. we will see in the next lecture that this can also be done in the r3 space, with some differences and exceptional properties that arise from the higher-dimensional space compared to the 2d case, in particular in the hierarchy of transformations. pictorially, a point in homogeneous coordinates can be represented by a ray, as shown in this diagram. as we mentioned, any point in homogeneous coordinates can be represented by the three numbers kx, ky and k, where k is a scalar multiplying x, y and 1, and for any k these three numbers correspond to the same 2d point in cartesian space. since k is just a scale, this means that x, y, 1 and 2x, 2y, 2, all the way up to kx, ky, k, all represent the same point, and if we join all these points together they form a ray that passes through all of them, representing the same 2d point in the cartesian space. hence we can conclude that the homogeneous representation of a point is equivalent to a ray when we plot it in the cartesian x, y, z coordinates. we can look at another example: suppose there is another point given by kx prime, ky prime and k; this point is equivalent to 2x prime, 2y prime, 2 on this plane, and there is also a corresponding point x prime, y prime, 1, and if we join these three points up they also form a ray that passes through all of them. it's also worth noting that k need not take an integer value; it is not restricted to 1, 2, 3, 4 and so on, it can be any real number, for example 1.1 or 1.23. in this diagram we can also see that, based on our definition, a point at infinity is given by x, y and 0, as defined in the previous slide.
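a tiny numerical sketch of this conversion between cartesian and homogeneous coordinates is given below, assuming points are stored as numpy arrays; the tolerance used to detect points at infinity is an illustrative choice.

```python
import numpy as np

def to_homogeneous(point_2d):
    # (x, y) in cartesian coordinates -> (x, y, 1) in homogeneous coordinates
    x, y = point_2d
    return np.array([x, y, 1.0])

def to_cartesian(point_h, eps=1e-12):
    # (x1, x2, x3) -> (x1/x3, x2/x3); if x3 is (close to) zero this is a
    # point at infinity and has no finite cartesian representation
    x1, x2, x3 = point_h
    if abs(x3) < eps:
        raise ValueError("point at infinity: no finite cartesian coordinates")
    return np.array([x1 / x3, x2 / x3])

# any non-zero scaling represents the same point:
# to_cartesian([2, 4, 2]) and to_cartesian([1, 2, 1]) both give (1, 2)
```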
in that representation x and y can take any value, so what this means is that all the points, or any vectors, lying on the x-y plane of this diagram correspond to points at infinity. we will also look at what we call the line at infinity, which we denote as l infinity, later on; it is equal to 0, 0, 1, and we will see how to derive this representation in a few slides' time. what's interesting is that this l infinity corresponds to the z axis in this diagram, since it is 0, 0, 1. it's also important to note that the origin 0, 0, 0 is undefined in homogeneous coordinates: we do not use this representation at all. now let's look at the homogeneous notation for lines on a plane, which means we are looking at 2d lines, and at the incidence relations between lines and points; from here we will also derive the duality relation between lines and points, and we will realize that the line and point representations in homogeneous coordinates are actually interchangeable. in the pictorial form we saw earlier, a point in homogeneous coordinates is represented by a ray, written as the three numbers kx, ky, k, which is the same point for all k. we will see that a line on the plane, a 2d line which we denote as l, is instead represented by a plane, because it back-projects to a plane in the 2d projective space, and a plane can be represented by its normal vector; so geometrically, the line is represented by the normal vector of the plane that is back-projected from this line. now, we all know that a 2d line in a plane can be represented by the polynomial equation ax + by + c = 0, which is simply a line in the x-y plane in cartesian coordinates. if we manipulate this equation a little bit, it becomes y = -(a/b)x - (c/b), or simply y = mx + c, which is the familiar equation of a line that we all learned in high school mathematics. we know that different choices of the coefficients a, b and c give rise to different lines, because a and b control the gradient, how steep the line is on the x-y plane, and c controls where the line crosses the y axis. thus a line can be represented naturally by the vector of these three numbers a, b and c, without any need to be concerned about x and y, and hence in the homogeneous coordinate representation of lines we simply make use of the coefficients of the 2d line equation, a, b and c, to represent the line. we can see that there is a similarity with the homogeneous coordinates of points, which are also represented by three numbers, but physically, or
geometrically, it means different things. in the case of the point, kx, ky, k represents all the points on the ray formed by the projection through the origin, where the k value determines which point we get on each plane in cartesian coordinates; in the case of the line, the three numbers a, b and c simply represent the coefficients of the line equation. it's important to note that there is no one-to-one correspondence between the homogeneous representation of a line as the 3-vector a, b, c and the usual vectors that we know of in cartesian space. this is because the line is represented by the polynomial equation ax + by + c = 0, and if we multiply this equation by a scalar k it is still the same line, because the k can be factorized out into k times (ax + by + c) = 0 and cancelled off; so regardless of the value of k it always represents the same line. in contrast, for an ordinary vector, ka, kb, kc would represent different three-vectors in cartesian space for different values of k. so in the homogeneous representation we say that the 3-vector a, b, c and the 3-vector k times a, b, c represent the same line for any non-zero k, and therefore they form an equivalence class. note also that the vector 0, 0, 0, as mentioned earlier when we looked at the diagram of the projective space, does not correspond to any line, and neither does it correspond to any point; this is the singular case that does not represent anything at all in homogeneous coordinates. now we will look at the incidence relation, which describes how a point and a line coincide, that is, how a point sits on a line. let the point be denoted x and the line be denoted l. we say that a point lies on a line if and only if it fulfils the equation ax + by + c = 0 in cartesian space; this is the line equation we are all familiar with, where a, b and c are the coefficients of the line, and x, y ranges over all the points on the line when we plot it in the cartesian space. we can rewrite this polynomial equation as a dot product: it becomes (x, y, 1) multiplied by (a, b, c) transpose, and interestingly this x, y, 1 is exactly the homogeneous representation of the point (x, y) in cartesian space, while (a, b, c)
transpose coincides with what we defined earlier as the homogeneous coordinates of a line, represented by l. hence we can simply rewrite this as (x, y, 1) multiplied by l, the three-vector that represents the line, and according to the line equation this product equals zero. similarly, for any non-zero constant k this incidence relation still holds: any point represented by the homogeneous coordinates kx, ky, k multiplied by a, b, c still gives us zero, and the same is true even if we scale l by another scalar, say k prime, because when we equate the product to zero both k and k prime cancel out and we are left with the same incidence relation between the point and the line. hence the representation kx, ky, k in the projective 2d space, for varying k, is a valid representation of a point in the cartesian coordinate space, and this also shows the validity of our definition of a point in the projective space using homogeneous coordinates. formally, any point in the homogeneous 2d space is a 3-vector x1, x2, x3 in the 2d projective space, which is equivalent to the cartesian coordinates (x1 divided by x3, x2 divided by x3) transpose in the cartesian coordinate space. more formally, the point x lies on the line l if and only if the dot product of the point and the line is equal to zero, as we have seen; this dot product simply expands out to ax + by + c = 0, which we have factorized into a dot product in this formal form. this is the first example of how linear algebra, since the dot product comes from linear algebra, can be used to describe the incidence relationship between a point and a line in the projective 2d space. note that the expression x transpose l is just the inner or scalar product, simply called the dot product, and it evaluates to the same thing regardless of the order of x and l; we shall see later that because of this symmetry we end up with the same 3-vector representation for a point and a line in the 2d projective space, hence the roles of the point and the line can be interchanged, which we will look at in more detail when we talk about the duality relationship. now, the degrees of freedom of a point are 2, simply because x and y each have one degree of freedom, and a line also has 2 degrees of freedom in the 2d projective space, because there are two independent ratios among a, b and c.
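going back to the incidence relation x transpose l = 0, here is a small numerical check, assuming the point and the line are stored as homogeneous 3-vectors; the example line and points are made up for illustration.

```python
import numpy as np

def lies_on(point_h, line_h, tol=1e-9):
    # incidence relation: x^T l = 0 when the point lies on the line,
    # and this holds for any non-zero scaling of either vector
    return abs(float(np.dot(point_h, line_h))) < tol

line = np.array([1.0, -1.0, 0.0])                 # x - y = 0, i.e. y = x
print(lies_on(np.array([2.0, 2.0, 1.0]), line))   # True: (2, 2) is on y = x
print(lies_on(np.array([4.0, 4.0, 2.0]), line))   # True: same point, scaled
print(lies_on(np.array([2.0, 3.0, 1.0]), line))   # False: (2, 3) is not
```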
we can see the two degrees of freedom of the line more clearly by rewriting ax + by + c = 0 as y = -(a/b)x - (c/b), which is simply y = mx + c: only two parameters matter, m and c, the gradient and the y-intersection of the line, and these are formed by the ratios a divided by b and c divided by b, since b is the coefficient of y. hence a line also has two degrees of freedom in the projective 2d space. now, the intersection of two lines in the 2d space always gives rise to a point, because any two lines will always meet at a certain point, and this includes parallel lines, which we mathematically define to intersect at a point at infinity. formally, using the homogeneous coordinate representation, the intersection x of two lines l and l prime is given by the cross product x = l cross l prime. geometrically, as i mentioned earlier when showing the diagram, a line l in projective space is represented by a plane, because it back-projects to a plane, and all the scaled versions of the line coefficients represent the same line, because the scale cancels out: k times (ax + by + c) = 0 and k doesn't matter. if we have two lines, which we denote as l and l prime, they both back-project to planes, and the intersection of these two 2d lines on the 2d plane is a point, which in homogeneous coordinates corresponds to this ray over here; what this means is that when we take the cross product of the two lines we get this point, and in homogeneous coordinates it is this ray, the three-vector we use to represent the homogeneous point. a further way to see it is that since each line back-projects to a plane, the line can be seen as the normal vector of that plane, and the cross product of these two normal vectors, using the right hand rule, gives an orthogonal vector in this direction, which corresponds directly to the point x. hence the point is given by the cross product of the two lines l and l prime. here's a mathematical proof of how this cross product works; i have given you the geometric intuition behind why a point is represented as the cross product of two lines, and now i will give a mathematical proof. suppose that we are given two lines l, represented as a, b, c, and l prime, represented as a prime, b prime, c prime.
the triple scalar product identity simply means that the dot product of a line with the cross product of the two lines is zero: the cross product of the two vectors l and l prime gives a vector, which we denote as v = l cross l prime, and this v is orthogonal to both l and l prime. so if we take the cross product of l and l prime we get v, which is orthogonal to l and l prime, and if we take the dot product of v with either of these lines, since they are 90 degrees apart, the dot product is going to be zero; this is because the dot product of two vectors a and b is |a||b| cosine theta, and when a and b are perpendicular to each other the angle between them is 90 degrees, so the cosine of 90 degrees gives us zero. now, if x is thought of as representing a point, then these relations say that x lies on both lines l and l prime, and hence it is the intersection of the two lines. because of the triple scalar product identity, l dot (l cross l prime) is zero and l prime dot (l cross l prime) is zero, so we can replace l cross l prime by x in both expressions, and from our earlier definition of the incidence relation between a point and a line, when a point lies on a line the dot product has to be zero.
this arises from the equation of the line we saw earlier, hence we can replace the cross product of l and l prime by x, and we can conclude that the cross product of two lines is equal to the point itself; this is the algebraic derivation of the intersection of two lines giving rise to a point. conversely, we can also see that the cross product of two points x and x prime gives rise to a line l. we can observe this geometrically using the representation in the diagram: two points are represented as two rays in homogeneous coordinates, for example x and x prime, and the cross product of these two rays, since they lie on a common plane, gives rise to the normal vector of that plane, which we denote as l, and this l represents the plane that intersects the plane k at the line l. hence the cross product of x and x prime gives a vector that represents the line in homogeneous coordinate form. algebraically, the proof is as follows: suppose we are given two points x and x prime; the triple product identity, using the same trick as before, gives x dot (x cross x prime) equals 0, as well as x prime dot (x cross x prime) equals 0. since the cross product gives another three-vector, we can write it as l in both expressions, and these are then exactly the incidence relationship between a point and a line that we mentioned earlier, so we can simply write l = x cross x prime. hence we can conclude that the cross product of two points in homogeneous coordinates gives rise to a three-vector that represents the line through them in homogeneous coordinates.
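both operations just derived come down to a single cross product; here is a small sketch, assuming lines and points are given as homogeneous 3-vectors in numpy, with function names of my own choosing.

```python
import numpy as np

def intersect_lines(l1, l2):
    # intersection point of two lines: x = l1 x l2 (a homogeneous 3-vector)
    return np.cross(l1, l2)

def line_through_points(x1, x2):
    # line through two points: l = x1 x x2 (a homogeneous 3-vector)
    return np.cross(x1, x2)

# duality in action: the same operation serves both constructions, and the
# results satisfy the incidence relation, e.g. np.dot(intersect_lines(l1, l2), l1) == 0
```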
we have seen in the previous slide is given by the cross product of l and l prime and this can be evaluated in this particular form over here where we will simply get a scalar value of c prime minus c multiplied by the three vector that represents a point or homogeneous point in the projective space and in this case here the third number here is always going to be zero and ignore when we ignore the scale factor we can simply represent the point the intersecting point at b minus a and zero over here hence uh what this implies is that uh this is actually the intersection of the two parallel lines and it actually uh algebraically we can actually see that this point actually is the point at infinity which means that two parallel lines actually intersects at the point of infinity uh which can be represented very elegantly using homogeneous coordinates since the last value over here is zero when we convert this back into the cartesian coordinate we will get b divided by 0 minus a divided by 0 which is simply infinity and infinity that represents a point at infinity now example here is that consider two lines x equals to 1 and x equals to 2 over here so yeah in the geometrical way this is actually these are actually two lines uh parallel to each other at x equals to 1 and x equals to 2 or over here and these two lines are parallel and we will see that it consequently will intersect at infinity so using this cartesian representation over here there's no way of finding the intersection over here or representing it eloquently using mathematics but in homogeneous form we'll see that or we have already seen the derivation that this can be done in an elegant way so the two lines over here can be represented as l equals to minus 1 0 and 1 over here and the and as well as minus 1 0 and 2 over here which represents this particular two lines and and their intersection point can be uh given by the cross product of these two lines over here which is simply ends up to be zero one and zero we see the last coordinate is 0 this means that this point is actually a point at infinity in the y direction because 0 represents x and 1 represents y which means that the this vector over here this point here is pointing towards infinity and that's in the y direction which can be verified over here that since these two lines are parallel in the y direction they must intersect at infinity at the in the y direction so we have seen in the earlier example that any two parallel lines denoted by a b c and a b and c prime intersects at a point at infinity which is given by the ideal point of b minus a and 0 for all the c's and hence we also have seen that the ideal points will lie at the line at infinity so this is the line at infinity the ideal point is going to lie on this line at infinity and hence we can conclude that any two parallel line is always going to intersect at a ideal point at infinity which is going to be lying at the line at infinity hence any two lines parallelized is going to intersect the line at infinity at the ideal point and we have also seen that the ideal point is actually the direction of the is a it can be seen as a directional vector along the intersection of these two parallel lines in other words b minus a over here can be seen as the uh the tangent to the line which is the direction of the line that is pointing towards the point at infinity which is b and minus a over here and the octagonal to this line the tangent direction vector would be the normal of this line over here can be given by a and b 
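As a small illustration of these homogeneous-coordinate operations, here is a minimal numpy sketch, with function names of my own choosing, that joins two points into a line, intersects two lines, and reproduces the x = 1, x = 2 example above.

```python
import numpy as np

def line_through(p, q):
    """Line through two homogeneous points: l = p x q."""
    return np.cross(p, q)

def intersection(l, m):
    """Intersection of two homogeneous lines: x = l x m."""
    return np.cross(l, m)

# line through the Cartesian points (0, 0) and (1, 2)
p, q = np.array([0, 0, 1]), np.array([1, 2, 1])
print(line_through(p, q))            # a line (a, b, c) equivalent to 2x - y = 0

# the parallel lines x = 1 and x = 2 from the example
l1 = np.array([-1, 0, 1])            # -x + 1 = 0
l2 = np.array([-1, 0, 2])            # -x + 2 = 0
print(intersection(l1, l2))          # [0 1 0]: the ideal point in the y direction
```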
We can verify that the tangent direction (b, -a) and the normal (a, b) are orthogonal to each other simply by taking their dot product, which gives ab - ba = 0; as stated earlier, the dot product of two orthogonal vectors is zero, which indeed confirms that these are the tangent and the normal of the line. What is interesting is that as the direction of the parallel lines varies, i.e. as a and b vary, the ideal point at which they meet the line at infinity also varies along the line at infinity: with a change in the direction of the parallel lines, their intersection at infinity changes as well. Hence the line at infinity can be thought of as the set of directions of all lines in the plane, since everything here lives in the 2D plane. Now, through the incidence relation written as a dot product, we can see that the roles of points and lines, both being three-vectors, are interchangeable in the incidence equation, and the intersection of two lines and the line through two points are likewise interchangeable, both being given by a cross product. This leads to the duality principle: points and lines are dual to each other in terms of their homogeneous-coordinate representation and the operations of the projective plane. The duality principle states that for any theorem of two-dimensional projective geometry there exists a corresponding dual theorem, derived by interchanging the roles of points and lines in the original theorem. In the present context, once we have proved that the cross product of two lines gives a point, there is no need to prove separately that the cross product of two points gives a line: because of the duality principle, where lines and points are interchangeable, proving one of the two statements is enough for the other to hold. |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_9_Part_3_Threeview_geometry_from_points_andor_lines.txt | So now let's look at several algebraic properties of the individual Ti matrices of the trifocal tensor. First, each 3x3 matrix Ti has rank two: we saw earlier that Ti is the sum of two outer products, and in an earlier lecture we noted that a sum of two outer-product terms forms a matrix of rank at most two, in this case a rank-two matrix. In the earlier slides on epipolar lines and epipoles from the trifocal tensor, we also saw that the right null space of Ti, for i = 1, 2, 3, is the epipolar line in the third view corresponding to the point x = (1, 0, 0), (0, 1, 0) or (0, 0, 1) in the first view respectively, and that the epipolar line l'' in the third view is given by the cross product of the epipole e'' with the respective column of the camera matrix of the third view; recall that we defined the third camera matrix as P'' = [B | b4], with B written column-wise as [b1 b2 b3].

Here is the proof of why this is true. Recall from the lecture on the fundamental matrix that the epipolar line is obtained by projecting the camera centre of the first view into the other image. In the first view, with camera centre C, the projection matrix is P = [I | 0]; in the third view, with camera centre C'', the projection matrix is P'' = [b1 b2 b3 b4] as above. The epipole is the projection of the first camera centre into the third view, e'' = P'' C, and the epipolar line is the cross product of this epipole with P'' P+ x, where P+ is the pseudo-inverse of the first camera matrix, defined by P P+ = I. For the canonical first camera, P+ is simply the 4x3 matrix whose top 3x3 block is the identity and whose last row is zero. Evaluating the product, the epipole term remains, and the zero last row of P+ causes the last column of the third camera matrix to drop out, leaving P'' P+ = [b1 b2 b3]. If x is one of the three special points above, it has a single entry equal to one and the rest zero, so it simply acts as a selector of the columns of this matrix; hence the product [b1 b2 b3] x reduces to bi. We will use li'' to denote the epipolar line in the third view that corresponds to the i-th of these points.

We also learned earlier that the epipole in the third view is the common intersection of the epipolar lines li'', i = 1, 2, 3: the three special points in the first view are transferred into the third view as three epipolar lines l1'', l2'', l3'', which all intersect at the epipole e'', and this epipole can be computed as the left null space of the matrix formed by concatenating the three epipolar lines li''. Similarly, the left null space of Ti gives the epipolar line in the second view corresponding to the points (1, 0, 0), (0, 1, 0) and (0, 0, 1); I will not derive this, it simply follows the proof above and you should try it for yourself. The epipole in the second view is again the common intersection of the corresponding three epipolar lines: the three points in the first view transfer to three epipolar lines in the second view, which intersect at a point, the epipole e', and this common intersection can likewise be found as the left null space of the matrix formed by concatenating the three epipolar lines of the second view.
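To make these null-space computations concrete, here is a minimal numpy sketch of how the epipoles could be extracted from a trifocal tensor given as three 3x3 matrices T1, T2, T3; the function names are mine and this is only a sketch of the procedure just described, not code from the lecture.

```python
import numpy as np

def null_vector(A):
    """Unit vector v with A @ v close to 0 (last right-singular vector of A)."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

def epipoles_from_trifocal(T):
    """Epipoles e' (second view) and e'' (third view) from T = [T1, T2, T3].

    The right null vector of each Ti is an epipolar line in the third view and
    the left null vector is an epipolar line in the second view; each epipole
    is the common intersection (i.e. the null vector) of its three lines.
    """
    lines_v3 = np.stack([null_vector(Ti) for Ti in T])    # rows l''_1, l''_2, l''_3
    lines_v2 = np.stack([null_vector(Ti.T) for Ti in T])  # rows l'_1,  l'_2,  l'_3
    e3 = null_vector(lines_v3)   # e'': orthogonal to every l''_i
    e2 = null_vector(lines_v2)   # e' : orthogonal to every l'_i
    return e2, e3
```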
The epipoles and epipolar lines above could be read off directly from the Ti matrices because this special configuration of three points was chosen so that the points act as selectors on the trifocal tensor, but the relation also works in general. For any point x in the first image we can form the matrix M = x1 T1 + x2 T2 + x3 T3, the sum over i of the i-th coordinate of the point multiplied by the respective 3x3 matrix, and this M can be used to compute the epipolar lines and the epipoles in the same way. Note first that M also has rank two: it is a weighted sum of the three rank-two matrices T1, T2, T3 with scalar coefficients x1, x2, x3, and for a valid trifocal tensor this weighted sum remains a rank-two matrix. Once we have M, its right null space gives the epipolar line of x in the third view, and the corresponding left null space equation gives the epipolar line of x in the second view.

Now let's look at the extraction of the fundamental matrices from the trifocal tensor. We know from earlier that a line l'' in the third view induces a homography from the first view to the second view. The picture is this: I have a point x in the first view and the corresponding point x' in the second view, and in the third view I have a line l''. This line back-projects to a plane in 3D space, and the rays through the two image points meet at a 3D point X that sits on this plane, on the 3D line L that projects to the image line in the third view. Since the 3D point X lies on the back-projected plane, there is a homography, induced by that plane together with the trifocal tensor, that transfers the point x in the first view to the point x' in the second view: x' = H x, with H a function of the trifocal tensor and l''. This is equivalent to a homography between the first two views. Since the point x is transferred by this homography to x' in the second view, the corresponding epipolar line can be found by joining the transferred point x' with the epipole e' of the second image: both e' and x' sit on the epipolar line l', so their cross product gives it, l' = e' × x'. Using the relation x' = H x, where H is the homography written in terms of the trifocal tensor and the line l'', we can rewrite the epipolar line as l' = e' × (H x), a function of the epipole in the second view and the homography that transfers the first view to the second. Grouping the terms in front of x into a single 3x3 matrix, which we call F21, this becomes the familiar equation from the previous lecture: F21 is simply a fundamental matrix that maps a point in the first view to its epipolar line l' in the second view.
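Here is a short numpy sketch of this transfer, assuming the usual column-stacked form of the induced homography, with the i-th column given by Ti l''; the names are my own and the exact form on the lecture slides may be written differently.

```python
import numpy as np

def homography_from_line_v3(T, l3):
    """Homography from view 1 to view 2 induced by a line l'' (l3) in view 3;
    its i-th column is T_i @ l''."""
    return np.column_stack([Ti @ l3 for Ti in T])

def epipolar_line_v2(T, l3, e2, x1):
    """Epipolar line in view 2 of the view-1 point x1: l' = e' x (H x1)."""
    H = homography_from_line_v3(T, l3)
    return np.cross(e2, H @ x1)
```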
Hence we obtain the relation expressing the fundamental matrix F21 in terms of the trifocal tensor, the epipole e' in the second view, and the chosen line l'' in the third view. This formula for the fundamental matrix in fact holds for any vector l'', but it is important that l'' be chosen so as to avoid the degeneracy in which it lies in the null space of one of the Ti matrices of the trifocal tensor. We saw earlier, for the three special points (1, 0, 0), (0, 1, 0) and (0, 0, 1), that the product of Ti with the corresponding epipolar line is zero, which is exactly to say that this epipolar line lies in the null space of Ti. We want to avoid the configuration where Ti l'' = 0, which arises when l'' lies in the null space of some Ti, i.e. when it coincides with one of the epipolar lines transferred from these special points; when this happens, the fundamental matrix between the first two views is undefined, and we call this a degenerate case.

To avoid this, since l'' can be any line, a good choice is the epipole in the third view, e'', whose coordinates can equally well be read as the coordinates of a line in the third view. The special property of the choice l'' = e'' is that it is perpendicular to the right null space of each Ti: if a point such as (1, 0, 0) falls into the degenerate configuration and transfers to a particular epipolar line in the third view, then by choosing l'' = e'' we are guaranteed that our line is as far away from that degenerate line as possible, namely directly perpendicular to it. Note that a line chosen very close to the degenerate configuration is also no good, because after carrying out the multiplications all the entries of the resulting fundamental matrix would end up very close to zero; the optimal choice is a line as far from the degenerate configuration as possible. Let's see why choosing l'' = e'' gives this perpendicularity. Suppose l'' is one of the degenerate epipolar lines, so it lies in the right null space of some Ti; e'' lies on every epipolar line, so the dot product of e'' and l'' must always be zero. The dot product of two vectors also equals the product of their magnitudes times the cosine of the angle between them, so for it to be zero the angle between e'' and l'' must be 90 degrees. Hence e'' is perpendicular to the degenerate lines, and we arrive at the solution F21 = [e']x [T1 e'', T2 e'', T3 e'']; with this particular choice of the epipole of the third view, F21 is guaranteed never to fall into the degenerate configuration. A similar formula can be derived between the first and third views, F31 = [e'']x [T1^T e', T2^T e', T3^T e']: the roles of e' and e'' are simply swapped and the trifocal tensor matrices are transposed. I will leave it to you to derive this by following the same set of steps.
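A compact numpy sketch of these two formulas, using the non-degenerate choices just discussed; skew() and the other names are mine, and the sketch assumes the epipoles have already been recovered as above.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]_x with skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def fundamentals_from_trifocal(T, e2, e3):
    """F21 and F31 from T = [T1, T2, T3] and the epipoles e' (e2), e'' (e3)."""
    F21 = skew(e2) @ np.column_stack([Ti @ e3 for Ti in T])    # [e']x [Ti e'']
    F31 = skew(e3) @ np.column_stack([Ti.T @ e2 for Ti in T])  # [e'']x [Ti^T e']
    return F21, F31
```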
The next thing we want to retrieve, after the fundamental matrices, are the camera matrices P, P' and P'' of the three views. The trifocal tensor expresses relationships between the images only: the incidence relation involves only image correspondences, whether points or lines, namely l, l' and l'' together with T1, T2 and T3, and is completely independent of entities in 3D space. This means that the trifocal tensor is independent of any 3D projective transformation, and conversely it implies that the camera matrices P, P' and P'' can only be computed from the trifocal tensor up to a projective ambiguity. Because of this ambiguity we can fix the first camera in the canonical frame, aligning it with the world frame, P = [I | 0], where I is the 3x3 identity and 0 a 3x1 column of zeros. Since the fundamental matrix F21 relating the first two views is known from the previous slides, we can use it to define the camera projection matrix P' of the second view: as we saw in the lecture on the fundamental matrix, F can be written as [a]x A, where the epipole a forms the last column of the camera matrix and the 3x3 matrix A forms its first three columns, and both of these terms can be obtained from the trifocal tensor and the epipoles computed in the earlier slides. Putting these terms into the camera projection matrix, the relation we get for the second camera is P' = [T1 e'', T2 e'', T3 e'' | e'], and the two camera matrices P and P' are then a pair whose fundamental matrix is F21, up to the projective ambiguity we saw earlier.

Now there is a fallacy here: it might be thought that the third camera could be chosen in a similar manner. That is, since I fixed the first camera as the canonical matrix and chose P' from the decomposition of F21, and since the fundamental matrix F31 can be computed as in the earlier slides, there is a temptation to use the same trick and write P'' = [B | b4] with F31 = [b4]x B. This choice would be wrong. To see why, suppose the pair of camera matrices P and P' between the first and second views is already chosen, as above, and suppose we reconstruct the 3D points: given a correspondence x and x' in the first two views, we learned in the lecture on the fundamental matrix that the 3D point X can be recovered by the linear triangulation algorithm. What is interesting is that once this X is recovered, the third camera P'' is already fixed and there is no projective ambiguity any more: we have a third correspondence x'' of this particular 3D point obtained from triangulating the first two views, and we saw in the PnP problem in the last lecture that once the 3D points are defined there is no projective ambiguity left, so P'' must have a fixed configuration. We cannot choose it from the fundamental-matrix decomposition any more, because that decomposition is only defined up to a projective ambiguity, which no longer applies here since X has already been defined and x'' is a direct correspondence to it. So P'' should not carry this projective ambiguity. Of course, this gives one method to recover P'': triangulate the point correspondences of the first two views using P and P', and then use PnP to recover the camera projection matrix of the third view. But this is actually unnecessary, and P'' can be recovered directly from the trifocal tensor itself.

We learned in the previous lecture that the fundamental matrix can be decomposed into two pairs of camera matrices, (P, P') and (P~, P~'), and that because of the projective ambiguity the second pair can be written in terms of the first, with P~' = [A + a v^T | lambda a], where v is some vector and lambda a scalar accounting for the projective ambiguity. Since we have just seen that the second camera matrix can be written in terms of the tensor and the epipoles, with first block A = [T1 e'', T2 e'', T3 e''] and last column a = e', the general form of the second camera matrix under the projective ambiguity is P~' = [[T1, T2, T3] e'' + e' v^T | lambda e']. Because of the projective ambiguity between the two views we are free to choose P', but, as explained above, we are not free to choose P'', and we will now see a technique to determine it without reconstructing the 3D points.

Using this freedom, we choose the camera projection matrix of the second view in the simple form above, ignoring v and lambda, and write ai = Ti e'' for its i-th column. This choice of P' immediately fixes the projective world frame, so that P'' is now uniquely defined up to scale, exactly as argued in the description of the fallacy. We then substitute ai = Ti e'' into the trifocal tensor relation derived earlier, which expresses Ti in terms of the columns ai, bi of the second and third camera matrices. It now becomes apparent what we are after: the bi define the third camera projection matrix P'' = [b1 b2 b3 | b4], where b4 is easy because it is simply the epipole in the third view, and we want the first three columns expressed with respect to the P' that we have just fixed. In other words, we are no longer arbitrarily choosing a third camera matrix subject to its own projective ambiguity; we solve for P'' in terms of the chosen P', because that choice has already fixed the projective frame. Making e' bi^T the subject, we move it to the left-hand side and the remaining terms, with the common factor Ti factored out, to the right-hand side. Then, since we can choose the scale of the epipole so that its dot product with itself equals one, we multiply both sides of the equation by e'^T, the corresponding term becomes one, and we are left with an explicit expression for bi. Substituting back, P'' = [b1 b2 b3 | b4], where b4 is the epipole in the third view and b1, b2, b3 are given by the expression just derived, which depends on the choice of the second camera matrix. So once P' is fixed, itself subject to a projective ambiguity with respect to the first frame, choosing the entries of P'' according to that choice of P' means P'' is no longer subject to an independent projective ambiguity: by this choice we have carried the projective frame of the first two views over to the third view.

To summarise: we have seen how to compute the epipoles e' and e'' of the second and third views from the trifocal tensor, how to recover the fundamental matrices between the first and second views and between the first and third views from T1, T2, T3, and finally how to recover the camera matrices without falling into the fallacy regarding the projective ambiguity of the third view.
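Putting the pieces of this section together, here is a hedged numpy sketch of the camera-matrix recovery, assuming the formulas as stated above, i.e. P' = [[T1 e'', T2 e'', T3 e''] | e'] and P'' = [(e'' e''^T - I)[T1^T e', T2^T e', T3^T e'] | e''], with unit-norm epipoles; the function name is mine.

```python
import numpy as np

def cameras_from_trifocal(T, e2, e3):
    """Canonical camera triplet P = [I|0], P', P'' consistent with T = [T1,T2,T3],
    given the epipoles e' (e2) and e'' (e3)."""
    e2 = e2 / np.linalg.norm(e2)                 # choose unit scale for the epipoles
    e3 = e3 / np.linalg.norm(e3)
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    A = np.column_stack([Ti @ e3 for Ti in T])   # a_i = T_i e''
    P2 = np.hstack([A, e2[:, None]])
    B = (np.outer(e3, e3) - np.eye(3)) @ np.column_stack([Ti.T @ e2 for Ti in T])
    P3 = np.hstack([B, e3[:, None]])
    return P1, P2, P3
```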
We will now introduce the tensor notation to replace this cumbersome way of writing the trifocal tensor; what I mean by cumbersome is that so far we have had to write it in the unconventional form T1, T2, T3. As mentioned earlier, an image point is denoted as a homogeneous column (x1, x2, x3), with each entry of the column indexed by a superscript, and an image line is denoted as a homogeneous row, with its coordinates indexed by subscripts 1, 2 and 3. We use the same convention for the entries of a matrix: the entry a^i_j of a matrix A carries a subscript j that indexes the column, called the covariant index, and a superscript i that indexes the row, the contravariant index; so a^i_j is the entry in the i-th row and j-th column of A. Whenever an index is repeated in a contravariant and a covariant position, this implies a summation over the range of that index. An example is the transformation x' = A x, where x is a column vector and A a matrix: the i-th entry of x' is given by the summation over j of a^i_j x^j, that is a^i_1 x^1 + a^i_2 x^2 + a^i_3 x^3. Following this rule, once we identify a common index between a superscript and a subscript there should be a summation over that index, but for simplicity we drop the summation sign. The trifocal tensor can likewise be written in tensor notation as T_i^{jk}, where the subscript i indexes which of the three 3x3 matrices is meant, and the superscripts j and k index the row and the column within it; that is, the entry in the j-th row and k-th column of Ti is T_i^{jk}. In this notation the basic incidence relation derived earlier, which in the cumbersome form requires writing T three times, becomes l_i = l'_j l''_k T_i^{jk}: the i-th entry of the line l is the product of l'_j, l''_k and T_i^{jk}, and since j and k each appear as both a subscript and a superscript, there are two implied summations, over j and over k.
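This summation convention maps directly onto np.einsum; a tiny sketch, assuming the tensor is stored as an array T of shape (3, 3, 3) with T[i, j, k] = T_i^{jk} (the variable names are mine).

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3))    # T[i, j, k] = T_i^{jk}
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
lp, lpp = rng.standard_normal(3), rng.standard_normal(3)   # l' and l''

# x'^i = a^i_j x^j : an ordinary matrix-vector product
x_prime = np.einsum('ij,j->i', A, x)
assert np.allclose(x_prime, A @ x)

# l_i = l'_j l''_k T_i^{jk} : the trifocal incidence contraction
l = np.einsum('j,k,ijk->i', lp, lpp, T)
assert np.allclose(l, [lp @ T[i] @ lpp for i in range(3)])  # same as l'^T T_i l''
```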
Interestingly, once the summations over j and k are carried out, there is no repeated index left in this expression, only the free index i; for each i the contraction is just a scalar, corresponding to the scalar l'^T Ti l'' built from the two outer products of the original notation. Using the same tensor notation, the homography map we saw earlier, where the terms were grouped together to form a homography, becomes very compact: the homography is a 3x3 matrix whose entry indexed by i and k, with i the column index and k the row index, is obtained by contracting the tensor with the line in the second view, h^k_i = l'_j T_i^{jk}; the index j is common between a subscript and a superscript, so there is an implied summation over j whose summation sign we again drop. With this notation in place, the mapping x'' = H x is written as x''^k = h^k_i x^i: the k-th entry of x'' is the sum over the common index i of the homography element multiplied by the i-th coordinate of the point x. There is a summary of how this tensor notation expresses the trifocal tensor relation, the definitions of the camera matrices P, P' and P'' given earlier, the line incidence relation, the homography that transfers a point from the first view to the third view via a plane in the second view, and the homography that transfers a point from the first view to the second view via a plane in the third view.

There is one additional piece of tensor notation to introduce, epsilon_rst. It can be seen as an indicator that takes one of three values, 0, +1 or -1. It is 0 when r, s, t contain a repeated number, for example epsilon_112, where two of the indices are repeated; it is nonzero when all three numbers are distinct, for example epsilon_123. When r, s, t is an even permutation of 1, 2, 3, meaning either that the indices remain in the order 1, 2, 3 or that they have been swapped an even number of times, epsilon equals +1. For example epsilon_312 = +1, because 3, 1, 2 is obtained from 1, 2, 3 by two swaps, which is an even permutation: first swap 2 and 3 to get 1, 3, 2, then swap 1 and 3 to get 3, 1, 2, so altogether two swaps from the original ordering. When the permutation is odd, epsilon equals -1; for example epsilon_132 = -1, because 1, 3, 2 is a single swap of 2 and 3 from 1, 2, 3. The use of the tensor epsilon_rst is to define the cross product between two vectors: if c = a × b, with c = (c1, c2, c3), then the i-th element of the resulting vector can be written as c_i = epsilon_ist a_s b_t (summing over s and t), where epsilon simply tells you which products of the entries of a and b to select, and with which sign, as the i-th entry of c. Similarly, we can use epsilon to represent the skew-symmetric matrix [a]x of a vector a: its entry with row index i and column index k is given by the corresponding contraction of epsilon with the entries of a, and when i and k are instead placed as superscripts their roles as row and column indices are exchanged, with the expression adjusted accordingly.
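A small numpy sketch of the epsilon tensor (written here with 0-based indices and the usual right-handed sign convention, which may be laid out differently on the lecture slides):

```python
import numpy as np

# Levi-Civita tensor: +1 on even permutations of (0, 1, 2), -1 on odd ones,
# 0 whenever an index repeats.
eps = np.zeros((3, 3, 3))
for r, s, t in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[r, s, t] = 1.0
for r, s, t in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
    eps[r, s, t] = -1.0

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])

# cross product via epsilon: c_i = eps_{ist} a_s b_t
c = np.einsum('ist,s,t->i', eps, a, b)
assert np.allclose(c, np.cross(a, b))

# skew-symmetric matrix [a]_x via epsilon: ([a]_x)_{ik} = eps_{ijk} a_j
a_skew = np.einsum('ijk,j->ik', eps, a)
assert np.allclose(a_skew @ b, np.cross(a, b))
```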
By making use of epsilon, the trifocal tensor incidence relations can be rewritten into the forms shown on the slide; I will leave it to you to verify that these are correct. We still follow the same rule: whenever a subscript and a superscript share a common index, there is an implied summation over that index. For example, in one of these relations, r is a common index inside the bracket, so we sum over r within the bracket; after that summation r disappears and we are left with i and s, where i is common with an index outside, so we sum over i; j and k are also common subscript and superscript pairs, so we sum over j and k as well. Having summed over r, i, j and k we are left with the free index s, and the final result is of course zero, since this is an incidence relation; but because s is a superscript, this zero is really a column of zeros, one for each value of s, and here s runs over 1, 2 and 3 since this is the trifocal tensor. A similar pattern can be observed in all the remaining relations. |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_7_Part_4_The_fundamental_and_essential_matrices.txt | So far we have seen how, given the point correspondences between two views, we can recover the fundamental matrix F in the uncalibrated setting, and how F can be decomposed into a pair of camera projection matrices P and P'. We have also seen that this decomposition is not unique: there is a whole set of camera-matrix pairs that satisfy the decomposition of the fundamental matrix, given by P H and P' H for any 4x4 non-singular projective transformation H. In the calibrated setting we saw how to obtain the essential matrix from the image correspondences and the known camera intrinsics K and K', how to decompose it into the rotation matrix and translation vector relating the two views, and how, together with the intrinsics, this lets us recover the camera projection matrices P and P' of the two views uniquely.

Now suppose we are given P and P', or can compute them from the fundamental or essential matrix respectively, together with the point correspondences: can we recover the 3D point that gives rise to the correspondence x and x'? The answer is yes, using what we call the linear triangulation algorithm. We saw in a past lecture that, for a given 3D point X, the projections into the two images are x = P X in the first view and x' = P' X in the second. These equations are defined only up to scale: lambda x = P X for some scalar lambda, so we can remove the scale by taking the cross product of both sides with the image point, giving the constraint x × (P X) = 0, and similarly x' × (P' X) = 0 for the second view. Each of these gives three equations, but only two of them are linearly independent, so each view contributes two constraints per point correspondence. Stacking the two independent equations from each view and factorizing out the common 3D point, we obtain a homogeneous linear system A X = 0, where A is a 4x4 matrix built from x, P, x' and P', and X = (x, y, z, 1) is the unknown 3D point. For every point correspondence we get one such system, which is overdetermined, since the 3D point has only three unknowns and we have four equations, and is therefore sufficient to solve for the 3D point uniquely. We simply follow the same steps as before and take the SVD of A: with A = U Sigma V^T, the solution for X is the singular vector of V corresponding to the smallest singular value, which we denote v4. Furthermore, since the last element of X should equal 1, we divide v4 = (v4x, v4y, v4z, v4w) by its last element v4w so that the last element of X becomes 1.
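A minimal numpy sketch of this linear triangulation step, assuming the image points are given as homogeneous 3-vectors; the function name and the choice of which two rows to keep per view are mine.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: X (4-vector, last entry 1) from x1 ~ P1 X, x2 ~ P2 X."""
    # two independent rows of x cross (P X) = 0 for each view
    A = np.vstack([
        x1[0] * P1[2] - x1[2] * P1[0],
        x1[1] * P1[2] - x1[2] * P1[1],
        x2[0] * P2[2] - x2[2] * P2[0],
        x2[1] * P2[2] - x2[2] * P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[-1]          # normalise so the last coordinate equals 1
```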
Now let's look at the reconstructed points under known camera calibration, meaning that we start the reconstruction from the camera projection matrices recovered from E and the known intrinsics K and K': as a reminder, from E we recover R and t, and together with K and K' we recover P and P' uniquely. We will see that the 3D structure recovered by the linear triangulation algorithm from these P and P' is subject to a similarity ambiguity. To see why, let H_s denote any similarity transformation, a 4x4 matrix whose top-left 3x3 block is a rotation matrix, whose last column holds a translation vector, and whose last element carries a scale factor lambda. According to the camera projection equation, a 3D point X_i projects onto the image as x_i = P X_i, but the same image point is also obtained from the projection x_i = (P H_s^{-1})(H_s X_i): we can distort the 3D point by the similarity transformation and compensate in the camera projection matrix, and the distorted point still projects to the same image coordinate. The 3D structure is scaled up or down according to the similarity transformation, and the resulting camera projection matrix is still a valid one: starting from the original camera matrix K [R | t] and multiplying by the similarity transformation, the end result is still a valid camera matrix in which K is unchanged while the rotation and translation are deformed by the similarity transformation. In the case of an uncalibrated camera, we have seen that we can take the fundamental matrix computed from the eight-point algorithm and decompose it into two camera matrices P and P', and that this is not unique: there is a whole family of solutions that fulfil the decomposition, related by a 4x4 non-singular projective transformation.
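A tiny numerical illustration of this ambiguity argument, with made-up values rather than the lecture's example: distorting the structure by any non-singular 4x4 transform H and compensating the camera by H^{-1} leaves the reprojection unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.hstack([np.eye(3), np.zeros((3, 1))])      # canonical camera [I | 0]
X = np.array([0.3, -0.2, 4.0, 1.0])               # a 3D point in homogeneous form
H = rng.standard_normal((4, 4))                   # an arbitrary 4x4 transform
assert abs(np.linalg.det(H)) > 1e-6               # almost surely non-singular

x_original  = P @ X
x_distorted = (P @ np.linalg.inv(H)) @ (H @ X)    # distorted camera and structure
assert np.allclose(x_original, x_distorted)       # same image point either way
```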
Since this projective transformation H is a linear, invertible mapping, let us, in order to prove the relation, work with its inverse H^{-1}. The 3D point X reconstructed by the linear triangulation algorithm using P and P' is a valid reconstruction, with projection x = P X. If we now distort the reconstructed structure by a projective transformation H and apply the corresponding transformation to the cameras, then P H^{-1} and P' H^{-1} give rise to the same fundamental matrix as P and P', so this pair is also a valid decomposition of the fundamental matrix, and under this decomposition the distorted 3D points still project onto the same image points, simply because (P H^{-1})(H X) = P X, the same as the original projection. In this case, however, the reconstructed 3D structure is subject to a projective ambiguity. Here is an example of reconstruction under projective ambiguity: given an original image pair, we take some point correspondences, at least eight of them, and compute the fundamental matrix (this is the uncalibrated case); we decompose it into P and P', which as mentioned many times are subject to a projective ambiguity, and use these decomposed camera matrices to reconstruct the 3D structure. We can see that this structure is projectively distorted: parallel lines are no longer parallel, and the points at infinity where parallel lines should meet, for example where these two lines that are actually parallel intersect, are now at finite coordinates.

We now introduce the stratified reconstruction to remove the projective ambiguity from the reconstruction obtained from the camera matrices of the fundamental matrix. As we saw in the first few lectures, the stratified approach removes the ambiguity in two steps: we first remove the projective ambiguity, turning the reconstruction into one with only an affine ambiguity, and then remove the affine ambiguity to recover the metric reconstruction; both steps require some information from the scene. The first step of stratified reconstruction is affine reconstruction, in which the projective ambiguity is removed, and its essence is to locate the plane at infinity. The idea is simple: under a projective transformation the plane at infinity can be mapped onto a plane at a finite location, whereas under an affine ambiguity the plane at infinity is still mapped to the plane at infinity.
So the first step of affine reconstruction is to identify the plane at infinity, which under the projective distortion now lies at a finite location, and to find the homography that maps it back to the plane at infinity with coordinates (0, 0, 0, 1). This homography H is built directly from the four-vector pi that represents the projectively distorted plane at infinity; I leave it to you to verify that, after applying the transformation, the plane pi is indeed mapped back to (0, 0, 0, 1). Once we have the plane pi and the homography H, the next step is simply to map all the reconstructed points X to X' = H X, which removes the projective ambiguity and gives us the affine reconstruction. The plane at infinity under projective distortion can be identified if we know which sets of lines in the scene correspond to parallel lines; we need at least three sets of parallel lines, in three different directions. For example, in the reconstructed 3D structure with projective ambiguity, if we know that two lines along the roof are supposed to be parallel, we can intersect them to find a vanishing point v1; similarly we can identify another two sets of parallel lines to give us v2 and v3. The plane at infinity is supposed to contain v1, v2 and v3, since these are ideal points, but under the projective distortion all three points, and the plane at infinity itself, now lie at finite locations. We can nevertheless define this plane, which we denote pi, as the plane passing through v1, v2 and v3, for example by solving the three incidence constraints pi^T v_i = 0. So once we have identified the three vanishing points we have identified the distorted plane at infinity, and once we have the plane at infinity we can easily compute the homography that removes the projective distortion.
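A hedged numpy sketch of this affine-rectification step, assuming the standard textbook form of the rectifying homography, namely identity in the first three rows and the plane coordinates in the last row (this exact form is not spelled out in the transcript); all names are mine.

```python
import numpy as np

def plane_through_points(v1, v2, v3):
    """Plane pi (4-vector) through three homogeneous 3D points: pi^T v_i = 0."""
    _, _, Vt = np.linalg.svd(np.vstack([v1, v2, v3]))
    return Vt[-1]

def affine_rectifying_homography(pi):
    """H mapping the plane pi back to (0, 0, 0, 1); planes map as pi -> H^-T pi.
    Assumes pi does not pass through the world origin (last coordinate nonzero)."""
    H = np.eye(4)
    H[3] = pi / pi[3]            # last row is pi^T, scaled so its last entry is 1
    return H

# usage sketch: X_affine = H @ X_projective for every point,
#               P_affine = P_projective @ np.linalg.inv(H) for every camera
```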
the removal of the projective distortion by applying the projective transformation that we have seen earlier on which is h that is given here and this is how it would look like after the removal of the projective distortion what we will see is that parallel lines now become parallel again and the parallel lines will intersect at the plane at infinity which is located at an infinite location but what remains would be an affine distortion where the angles between lines are not recovered and they still remain distorted so after we have removed the projective distortion to recover an affine reconstruction the last step of the stratified reconstruction is to remove the affine distortion to recover the metric reconstruction and the key to achieve this is to identify the image of the absolute conic omega we will see that we make use of this image of the absolute conic omega to define or to retrieve a four by four projective transformation matrix h over here such that we can apply this particular transformation to all the reconstructed 3d points under affine distortion to recover all the 3d structures in the metric reconstruction over here and in this particular matrix h over here the first three by three entries are given by a inverse and the last element is one over here where a is simply recovered from the cholesky decomposition of this matrix over here where m here is the first three by three entries of the camera projection matrix in the second view and omega here is the image of the absolute conic that we have identified earlier on so now let's proceed to prove that this relation over here is true and we have seen earlier on that under known camera calibration with k prime over here of the second view the camera projection matrix in the second view p prime over here is simply equal to the camera intrinsics k prime multiplied by the extrinsics represented by a rotation and a translation and this is subjected to a similarity distortion which we have seen earlier on we also define in this particular case that an affinely distorted camera matrix is given by p prime equals m and small m where capital m is a three by three matrix and the last column here is represented by small m over here which is the case that we have seen earlier on here that is used to define the three by three matrix a and now we can see that this particular matrix over here p prime can be mapped under the four by four transformation of h into the metric camera matrix from this relation over here where h inverse equals this guy over here which is simply the inverse of the matrix that we have defined earlier on and what this means is that if we apply this h inverse here back into this equation and substitute all the expressions here and here into this equation over here we'll get this particular relation where we can see that k prime r is equal to m multiplied by a in the first three by three entries over here so from the previous slide we saw the relation of m a equals k prime multiplied by r and this can be rewritten into m a multiplied by the transpose of itself and that would be equal to k prime r multiplied by the transpose of k prime r where we can rewrite the transpose of m a to be equal to a transpose multiplied by m transpose and similarly we can rewrite the transpose of k prime r here to be equal to r transpose k prime transpose and since r here is an orthogonal matrix this means that r multiplied by the transpose of itself gives us identity which means that r can be cancelled out on both sides
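To make this affine-to-metric upgrade concrete, here is a minimal numpy sketch of the computation being derived here, assuming we already have the three by three block M of the affinely distorted second camera and the image of the absolute conic omega; the synthetic K, R and distortion B used for the check are illustrative assumptions, not values from the slides.

```python
import numpy as np

def metric_upgrade(M, omega):
    """Affine-to-metric upgrade sketch: A A^T = (M^T omega M)^{-1}, with A from Cholesky."""
    AAt = np.linalg.inv(M.T @ omega @ M)      # A A^T = (M^T omega M)^{-1}
    A = np.linalg.cholesky(AAt)               # lower-triangular A with A A^T = AAt
    H = np.eye(4)
    H[:3, :3] = A                              # cameras map as P_metric = P_affine @ H
    H_points = np.linalg.inv(H)                # points map as X_metric = H^{-1} X_affine (A inverse in the top-left)
    return H, H_points

# usage sketch with an assumed intrinsic matrix K' and a random affine distortion B
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
omega = np.linalg.inv(K @ K.T)                 # image of the absolute conic: omega = (K' K'^T)^{-1}
R = np.linalg.qr(np.random.randn(3, 3))[0]     # a random rotation-like orthogonal matrix
B = np.random.randn(3, 3) + 3 * np.eye(3)      # a random invertible affine distortion
M = K @ R @ B                                   # affinely distorted 3x3 camera block
H, H_points = metric_upgrade(M, omega)
MA = M @ H[:3, :3]
print(np.allclose(MA @ MA.T, K @ K.T))         # should print True: (M A)(M A)^T = K' K'^T
```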
this gives us the relation that we see over here m multiplied by a multiplied by a transpose multiplied by m transpose equals k prime k prime transpose where we can simply move m to the right side of the equation to give us this relation over here we notice that k prime k prime transpose here is what we have defined to be the inverse of the image of the absolute conic in the previous lecture which we can denote as omega star or simply the inverse of omega and this relation here will give us a multiplied by a transpose equals the inverse of m transpose omega m which is the equation that we have seen earlier on in the previous two slides and once we get this relation over here we can apply the cholesky factorization on a a transpose or simply on this quantity that we have computed to recover a which we can plug into this equation over here to obtain the 3d transformation for metric reconstruction here we know that m here is from the camera projection matrix and omega here which is the image of the absolute conic can be obtained from the various methods which i have mentioned in lecture 6. so finally after we have recovered the h matrix or the a matrix over here from the image of the absolute conic we can simply apply this h matrix onto the affine reconstruction to recover the metric reconstruction where we are still subjected to only a similarity transformation but we can now see that the angles are recovered as well as parallel lines they become parallel so in summary we have looked at the various definitions of the terminologies used in epipolar geometry that can be used to describe the two view relations and we have looked at how to compute the epipolar line from the fundamental matrix so this is essentially l prime equals f multiplied by x where we saw that a point gets transferred to a line from the first view to the second view then we have looked at how to compute the fundamental and essential matrices under uncalibrated and calibrated settings with eight-point correspondences then we looked at given a fundamental matrix that we compute from the eight point correspondences how to decompose this into p and p prime respectively and we also looked at how to decompose an essential matrix which is under a calibrated setting into the rotation and translation between two views and finally we looked at after we recover either the fundamental matrix or the essential matrix how to recover the 3d structures from the point correspondences using the linear triangulation algorithm and under the uncalibrated setting we looked at how to do stratified reconstruction to remove the affine as well as the projective ambiguity thank you |
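As a small addendum to the affine-upgrade step of this stratified pipeline described earlier (locating the distorted plane at infinity from three vanishing points and mapping it back to (0,0,0,1)), here is a minimal numpy sketch; the null-space solver and the synthetic vanishing points are assumptions for illustration only.

```python
import numpy as np

def plane_from_points(X1, X2, X3):
    """Plane through three homogeneous 3D points: the right null vector of the stacked 3x4 matrix."""
    A = np.stack([X1, X2, X3])            # 3 x 4
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]                          # pi such that A @ pi ~= 0

def affine_upgrade(pi):
    """Homography mapping the plane pi back to the canonical plane at infinity (0,0,0,1)."""
    pi = pi / pi[3]                        # normalise so the last entry is 1 (assumes pi[3] != 0)
    H = np.eye(4)
    H[3, :] = pi                           # points transform as X' = H X, planes as pi' = H^{-T} pi
    return H

# usage sketch: v1, v2, v3 are triangulated vanishing points (4-vectors) of three
# independent parallel directions in the projectively distorted reconstruction
v1 = np.array([1.0, 0.0, 0.0, 0.2])
v2 = np.array([0.0, 1.0, 0.0, 0.3])
v3 = np.array([0.0, 0.0, 1.0, 0.5])
pi = plane_from_points(v1, v2, v3)
H = affine_upgrade(pi)
print(np.linalg.inv(H).T @ pi)             # proportional to (0, 0, 0, 1): pi is sent back to infinity
```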
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_7_Part_2_The_fundamental_and_essential_matrices.txt | now so far we look at how the fundamental matrix can be used to transfer a point which we call x in one view onto a line which is the epipolar line denoted by l prime in on the other view this relation here is simply given by l prime equals to f multiplied by x and we have also seen how the fundamental matrix can be used to provide a constraint between two corresponding points x and x prime in the two views so if these two are corresponding point we saw that the relation of x prime transpose multiplied by f and x must always be equal to zero it turns out that the fundamental matrix can also be used to do a transfer between the epipolar line in the first view which we denote as l and the second view which we denote as l prime over here so it turns out that there's also a mapping relation between l and l prime using the fundamental matrix and this is given by the equation of f of l prime equals to f multiply by the cross product of any line which we denote as k and l itself so let's see why is this true suppose we have a a line which we write as a k on the first view let's say for example this is my line k and it will always intersect the epipolar line l at a certain point this point here it's given by the cross product of k with l and since we know that a point in the first view is going to get transferred to the epipola line in the second view which we denote by l prime and this equation let's say we denote this particular point as x we know that x tilde is going to be transferred to the second view as f multiplied by x tilde and this equation over here substituting this particular relation of the cross product of any line k with the epipolar line l into this equation over here we get f multiplied by the cross product of k and l which is simply the relation between l prime and l and interestingly this works for any line k so k here can any line that we arbitrarily define on the first image we can see that uh it all will always intersect the epipolar line at any single point x tilde so if k is here it will also intersect at one point over here and this particular point is also going to be transferred from the second view epipolar line via this equation now we can see that we can generalize this to a transfer of a epipolar line from the first view to the second view which you denote by l and l prime respectively the mapping over here is simply f multiplied by the cross product of the any line that we define in the first image and interestingly we can see that this is also equivalent to a homography that maps the 2d line onto a 2d line so this homography here it's simply mapping a 2d space onto a 2d space here is what i meant earlier on this can be seen as a homography and the so if we were to transfer the line from the second view back to the first view we will get a symmetrical relation that looks something like this as we have uh seen earlier on these two mapping the transfer equation over here via the homography of f uh multiplied by any line the skew symmetric matrix of any any line k over here it it it holds true for the pencil of lines the pencil of lines uh epipolar lines that we have defined earlier on between the two e majors any corresponding pair of epipolar line is going to lie on the epipolar plane its existence is defined on the first image as well as the second image respectively hence there's a valid homography that transfer l from the first 
image to l prime on the second image and similarly we can also define this relation using 1d homography let's take an example that this is a line k that we have seen earlier on we can see that the line k that we define arbitrarily it's going to intersect the pencil of epipolar line so all these are the pencil of epipolar lines the family of points over here and similarly the corresponding uh points due to this set of points in the first image which go i 1 and i prime over here so the corresponding point is also going to lie on a straight line over here because this is linear mapping and this set of correspondence line together with the its corresponding point it's going to intersect at a certain projection point that we call p over here that means that if we were to draw a line from the connecting p and x as well as x prime in the first and second image respectively across all the points here we'll see that there's a projective relation between the set of image points in the first image and its corresponding points in the second image and in fact we have seen this uh earlier on in the first lecture that since this set of points in the first image and the set of points and the second image they are projectively related that means that the cross ratio the cost ratio between these two sets of points must remain consistent and hence there is a 1d homography that relates the point in the first image and its corresponding point in the second image we'll go on to look at how the fundamental matrix behave under a pure translation so in this particular case so we assume that there's no rotation between the two view which means that the rotation in the second camera is going to be at identity we further assume that there's no change in the intrinsic parameter which means that k simply equals to k prime for the two camera projection matrices and hence we can write the two camera projection matrices in the canonical form where the first camera projection is given by p equals to k multiplied by the identity matrix which we denote as p prime it's going to be given by k multiplied by identity because rotation here is simply equals to identity and this gives rise to this identity and k over here which is supposed to be k prime it's equals to k because there's no change in the intrinsic parameter that's a translation vector that we denote as t over here if we were to put into the fundamental matrix that we have derived earlier on in the algebraic derivation that we saw that the fundamental matrix is actually equals to the cross product of the epipole in the second image with the camera projection of the second image and the rotation matrix multiplied by the inverse of the camera intrinsics of the first camera so now because the k prime here equals to k and r equals to identity so by replacing these two values over here in the original expression of the fundamental matrix we get the final fundamental matrix for the camera under pure translation as this particular form over here where we can see that k and k inverse cancels out and hence we end up to have the fundamental matrix equals to the skew symmetric matrix of the epipole in the second image and we can also verify that since this is skew symmetric what this means is that the rank of this particular three by three matrix over here is going to be equals to two and this is consistent with the rank of the fundamental matrix which is also supposed to be two now early on we get the fundamental matrix to be equals to the skew symmetric matrix of the epipole 
in the second image we can use this relation that we have derived earlier on to define the epipolar line in the second image so use the same equation that we have looked at earlier on that the transfer of the point x from the first image to the second image is going to be l prime over here is going to be given by f multiplied by x over here and since f under pure translations here is going to be given by the skew symmetric matrix of the epipole in the second image this means that we can replace the fundamental matrix over here with this expression and multiply by x over here since we also know that the second point the corresponding point x prime over here is going to lie on the epipolar line in the second image what this means is that the dot product of x prime and l prime is going to be equal to zero so if we were to substitute this or l prime with what we have derived earlier on here into this equation we will get this particular relation of x prime transpose multiplied by e prime the cross product the skew symmetric product of this and x equals to zero and what this particular relation over here implies is that x prime e prime as well as x are going to be collinear so the reason why we say that the x prime e prime and x are collinear is because we can see from this particular equation over here if we take the uh the last two terms over here the last two term over here e prime crossed with uh x over here it actually defines the epipolar line which is this line over here and since x prime is going to lie on this particular line and that's why the dot product of this of x prime with the epipolar line that is defined by e prime and x under pure translation is going to be zero what it simply means is that x prime is also going to lie on this line that is formed by e prime and x hence the three points the x prime these correspondence x as well as the epipole in the second image are going to be collinear and note that this collinearity property it doesn't generally hold in for a general motion it's only under pure translation that the fundamental matrix becomes the skew symmetric uh matrix in the second epipol and in the general form we can see that this is not true because there is an extra term over here which is k prime multiplied by r and k inverse we look at under this uh pure translation an example here so we can view the pure translation alternatively as instead of having took the camera instead of having the camera moving from for example here or on at this point here to a new point under uh pure translation so this is my camera for example and instead of looking at the pure translation uh from the perspective of the camera motion we can also look at the from the perspective of the world scene let's say that in this case my camera center remains fixed that means that my camera is a stationary but the world for example this cube in the world in the 3d world it actually undergoes a pure translation motion in the opposite direction of minus t so uh what this means is that it's moving from this location to this location where all the correspondences all the point correspondences are going to follow a straight line uh this is going to be a straight line where these sets of straight lines between all the different correspondences of the 3d entity are going to be parallel so all these lines are actually going to be parallel to each other this means that the this particular 3d entity which is the cube that we have seen that we will draw in this particular example over here it's going to move 
under pure translation of minus t and what's interesting here is that this set of parallel lines that is spanned by the corresponding 3d points undergoing pure translation is going to project onto the image as this set of lines over here and we can see that since the parallel lines are going to intersect at infinity at the plane at infinity this particular intersection point is going to be projected onto the image as the vanishing point and interestingly this vanishing point over here is going to be our epipole that we have seen earlier on we also stated earlier on that these three points x prime e prime as well as x must be collinear and vice versa so what it simply means is that on the first view e and x as well as x prime are also going to be collinear so we can see that since the intersecting point of the parallel lines at infinity is going to be reprojected onto the image as a vanishing point here we will see that this is actually the epipole and we can see that a corresponding pair of 3d points over here which we denote as capital x and x prime undergoing pure translation is going to be projected onto the image as small x and small x prime over here and since x and x prime are going to lie on the parallel lines and all these parallel lines are going to intersect at a point at infinity which reprojects to the epipole over here the vanishing point which is also the epipole over here we can see pretty clearly that the reprojected points of x and x prime are going to be on the same line as the epipole all the parallel lines that link a pair of correspondences x and x prime are going to be reprojected onto the image and intersect at the vanishing point which supports our claim that x x prime and e the epipole are collinear in the second example we can see that in this particular case we have a camera which is at c at the first time and then we are going to translate this camera we are going to move this camera forward under pure translation and the new camera center is going to be denoted by c prime over here in this particular case because this is under pure translation as we have derived earlier on the fundamental matrix is going to be given by the skew symmetric matrix of the epipole and here we can see intuitively or geometrically that let's say this is our epipole over here and if we were to move under a pure translation the epipole actually doesn't change because under pure translation the baseline that joins these two camera centers c and c prime is going to intersect the first image at the epipole as well as intersect at the epipole in the second image so they're all going to be collinear forming a straight line over here and as a result we can see that the epipole remains unchanged but any point correspondence let's say a particular point over here which we denote as x in the first view is going to move to a certain location which we call x prime over here and interestingly because the epipole doesn't change the epipolar line is also going to remain unchanged what it means is that this particular epipolar line is going to get transferred to the same epipolar line on the second view but here if we were to plot x prime on the first image over here we'll see
that x prime is actually on the same epipolar line but further out and this supports our claim that uh e e and x and x prime are going to be collinear and what's interesting here is that as the camera translates forward under pure translation we can see that the epipol remains static and all the correspondence points are going to appear to moving radially outward of the image with respect to the epipole as the center so under general motion which means that we have a camera that uh two cameras uh that moves the where the relative transformation is related by r a three by three rotation matrix as well as a three by one translation uh vector and we will see that under this general motion we can decouple the motion into two parts the first part is a pure rotation which is under the influence of r and we will see that under pure rotation this is equivalent to transforming an image from the original uh frame into the into another frame under homography and once this pure rotation uh is is completed we can see that the remaining relation between the image that has been transformed under pure rotation to the final image in the second frame it's simply a pure rotation the reason why we say that the relation of two images under pure rotation is given by homography is because of what we have seen earlier on suppose that the pure rotation is represented by r over here the homography that relates these two images under pure rotation is simply given by k prime multiplied by r multiplied by k inverse and that's the infinite homography that we have seen earlier on where r here is equals is equivalent to the rotation of the second camera matrix which we which we denote as a p prime over here and the first camera matrix over here we are going to denote it using a canonical representation of the intrinsic multiplied by the the identity uh matrix over here there should also be a prime over here because now we are not assuming any particular motion or any particular assumption on the intrinsic value so there should be two different camera intrinsics which you denote by k and k prime over here so once we express this particular transformation under pure rotation as an infinite homography we can see that there is a fundamental matrix that relates the image after transformation by the infinite homography and the final image under uh the the influence of a new fundamental matrix which we call f tudor and we saw earlier on that in the earlier example that this fundamental matrix under pure translation is simply given by the skew symmetric metric of the epipole in the final image or in the second view and if we were to put these two motions together because we say that it's going to be a pure rotation over here and then followed by a pure translation over here so this means that the final motion here from this view to this final view here is actually a composite of the two motions that we have decomposed earlier on the pure rotation followed by the pure translation and putting these two together we'll get the final fundamental matrix which is given by the skew-symmetric metric of the epipole in the second image multiplied by the infinite homography so this guy over here is actually the infinite homography that we have seen earlier here to verify that this is this expression here is exactly the same expression as what we have derived earlier on in the algebraic fundamental matrix derivation and uh we can also uh where where this particular guy here is the pure translation and this particular guy over here is the 
infinite homography under pure rotation up to this point now what we have looked at so far on the properties of fundamental matrix is how the fundamental matrix is used to relate a pair of point correspondences between two views which we denote as x and x prime over here so we saw that it's actually defined a mapping of a point to align the epipolar line on the second view and we also saw that this defined correspondence constraint which is equals to zero and now we will turn our attention to look at given a certain fundamental matrix that we denote by f over here how can we decompose this particular fundamental matrix into the camera matrices p and p prime of the respective two views and it turns out that the decomposition of a given fundamental matrix into the camera matrices of the respective two views is not unique so what this simply means is that uh given particular fundamental matrix we can have more than one set of p and p primes that fulfill the decomposition that returns us the same fundamental matrix and more specifically these sets of projective matrix camera projection matrices are related by a homography what this means is that p and p prime as well as p multiplied by a four by four projective transformation mission as well as p prime and h this pair over here and this pair over here they're all going to be mapped into the same fundamental matrix let's look at the proof of why is this so over here we know that in the case of a the projection of a 3d point into a 2d image now the equation is given by p multiplied by x then that's also equals to p after a projective transformation which is given by p of h and the 3d point after a projective transformation which is given by h inverse of x so since the h over here cancels out we can see that both of these projection matrices where p and p h are related by a four by four projective transformation of h they are equivalent hence what this simply means is that the point correspondences of x and x prime in two views they're going to be a corresponding point under particular pair of camera matrices p and p prime and they're also going to be a matched pair so what this means is that they are also going to be a pair of correspondences in the 2d image under the two camera matrices of a ph and p prime multiplied by h which means that this this is the equivalent of the first camera the first set of cameras undergoing a uh transformation of uh by a four by four projective matrix over here under this particular new set of transformation the correspondence the under the 2d image is going to be the same but the 3d points is going to be different what this simply tells us is that the decomposition of the fundamental matrix into the camera projection matrices of the two respective views is not unique they are all related via a four by four projective uh matrix uh although we have seen earlier on that given two camera projection matrices the resulting fundamental matrix is unique we can we we saw earlier on that this fundamental matrix is actually equals to the product of our p prime c cross of this this means that this is the epipol in the second image multiplied by uh p prime and p and then the pseudo inverse of the camera of p which is the camera projection matrix of the first uh image so we can see that given p and p prime we always get the unique f but the converse is not true since the decomposition of f can result in multiple sets of camera matrices for the two views that are related by a four by four projective transformation we will go on 
to prove that this four by four projective transformation always exists on the camera projection matrix after the decomposition of uh the fundamental matrix but before that let's uh first define the canonical form of the camera matrices which is required for the the proof so in this particular case here we show that the fundamental matrix corresponding to a pair of camera matrices that is canonical camera uh matrices that is given by p equals to identity let's ignore the camera projection matrix over here and hence the project camera projection matrix is simply a three by four matrix that is given by identity and zero and the second camera projection matrix which we denote by p is given by m and a three by three matrix m and also a three by one vector m over here and we can see that the resulting fundamental matrix is given by the cross product of the last column of the second projection matrix and the first three by three matrix of the second camera projection matrix now let's look at the proof on why is this true we know that the epipol in the second view e prime over here is simply given by the projection of the first camera center into the second image over here which is given by p prime multiplied by c and substituting these two matrices defined earlier on into this relation over here we'll get a second camera matrix which is m a three by three matrix of m and a vector of three by one vector of m over here multiplied by the camera center of the first camera which is defined at zero since this is a canonical representation uh we'll see that this is actually the camera center is actually given by zero zero zero and one and multiplying it we get the last column of p prime so this is the last column of p which is given by the vector the three by one vector of m over here and earlier on we also defined during the algebraic derivation of a fundamental matrix we saw that a fundamental matrix can be given by the cross product of the epipole and the product of p prime and the pseudo inverse of p so substituting the expressions of p and p prime into this equation over here we can see that this essentially resulted in the this particular expression over here where the pseudo inverse of p is simply given by a three by three identity matrix and the last row would be a three by one uh zero entries and since p multiplied by p inverse is going to be equals to identity i'll leave this to you to verify that this is true and after expanding this particular expression over here we end up to have this equation which is what we have seen earlier on hence we proved that the fundamental matrix is given by this particular equation under the canonical form of the camera matrices now uh with this definition of the the fundamental matrix under the canonical camera configuration uh let's go on to prove the theorem suppose that we are given any pair of uh camera matrices of the two views p and p prime over here and the second pair would be p theta and p tilde prime over here that uh these two paths of the camera matrices of two views if they both correspond to the same fundamental matrix that means that after decomposition of this fundamental matrix we get p p prime we can also get a p tilde and p delta prime they both fulfill this particular fundamental matrix then the relation between the two sets of cameras must be a four by four non-singular h matrix which simply transform the first camera center via this transformation over here p multiplied by h into the second camera so this is going to be p to the equals 
to p multiplied by h as well as the same h matrix over here is going to transform the second view which is p prime into p tilde prime under the same relation that we have seen here now the proof is as follows we first suppose that the relation is true that means that for a given fundamental matrix f this particular fundamental matrix f corresponds to two different pairs of camera matrices which are given by p and p prime so this is the first pair of views p and p prime where these two views give rise to the fundamental matrix as well as another two views which we denote as p tilde and p tilde prime which also give rise to the same fundamental matrix suppose that this is true and furthermore let's define the pairs of camera projection matrices in the canonical form as we have seen earlier on this simply means that we can fix the first view at identity which means that for these two pairs pair number one and pair number two over here we fix the first view which is p and p tilde over here to be equal to an identity camera projection matrix which means that the world frame is simply aligned with the camera frame and then we denote the second view of the two pairs p prime and p tilde prime as a general three by four matrix given by this guy over here as well as another general three by four matrix given by this guy over here according to the result given in slide 40 the fundamental matrix of the two pairs of camera matrices expressed in the canonical form can be written as this form over here we can see from slide 40 that this is true because in the canonical form the fundamental matrix is simply equal to the cross product of the last column of the second camera projection matrix and the first three by three matrix of the second camera projection matrix our first assumption here is that these two pairs of camera matrices correspond to the same fundamental matrix what it simply means is that the cross product that we obtain from p prime over here that defines the fundamental matrix f is going to be the same fundamental matrix as the cross product terms that we obtain from the camera matrix of the second set p tilde prime that we have defined earlier on and before we complete the proof we need the following lemma the lemma says suppose that the rank two fundamental matrix f over here can be decomposed in two different ways which we saw earlier on that f equals the cross product of this guy over here and f is also equal to the cross product of this guy where this is actually the second camera of the first set and this is actually obtained from the second camera of the second set then the three vector a tilde is going to be equal to k multiplied by a where k is a scalar value and the three by three matrix a tilde in the second camera of the second set of camera projection matrices is going to be equal to a plus a v transpose the whole thing divided by k where k here is a non-zero constant scalar value and v over here we'll define it as a three vector |
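As a coda to the canonical decomposition discussed above, here is a small numpy sketch of the standard canonical camera pair P = [I | 0], P' = [[e']x F | e'] obtained from a fundamental matrix, together with a numerical check that this pair reproduces F up to scale using the canonical-form formula F = [m]x M from the slides; the helper names and the synthetic test cameras are assumptions for illustration.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix such that skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def canonical_cameras(F):
    """One valid decomposition of F (unique only up to a 4x4 homography): P = [I|0], P' = [[e']x F | e']."""
    _, _, Vt = np.linalg.svd(F.T)
    e_prime = Vt[-1]                        # epipole in the second image: F^T e' = 0
    P = np.hstack([np.eye(3), np.zeros((3, 1))])
    P_prime = np.hstack([skew(e_prime) @ F, e_prime.reshape(3, 1)])
    return P, P_prime

# usage sketch with a synthetic rank-2 F built from a random second camera [M | m]
P2 = np.hstack([np.random.randn(3, 3), np.random.randn(3, 1)])
F = skew(P2[:, 3]) @ P2[:, :3]              # canonical-form formula F = [m]x M
P, P_prime = canonical_cameras(F)
F_rec = skew(P_prime[:, 3]) @ P_prime[:, :3]
same = (np.allclose(F_rec / np.linalg.norm(F_rec), F / np.linalg.norm(F)) or
        np.allclose(F_rec / np.linalg.norm(F_rec), -F / np.linalg.norm(F)))
print(same)                                  # True: the recovered pair gives back the same F up to scale
```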
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_11_Part_1_Twoview_and_multiview_stereo.txt | hello everyone welcome to the lecture on 3d computer vision and today we are going to talk about two-view and multi-view stereo hopefully by the end of today's lecture you will be able to do stereo rectification and correspondence search along the scan lines to get the disparity map in the two-view stereo and then we'll look at how to compute the depth values from this disparity map next we'll look at the scan line optimization algorithm and the semi-global matching algorithm to enforce global consistency on the disparity map for the two-view stereo finally we'll look at the plane sweeping algorithm to perform multiple view stereo to obtain the dense 3d reconstruction and of course i didn't invent any of today's material i took most of the content from these two lectures at uiuc and university of toronto you may follow these four links to obtain the original slides in these two lectures i followed the cvpr 2007 paper on the plane sweeping stereo algorithm for multiple view reconstruction and finally if you are interested i would strongly encourage you to look at this tutorial written by yasu furukawa in the year 2015 it's a very comprehensive tutorial on multiple view stereo you can obtain this tutorial for free online now suppose that we are given two images which we denote as the left image i l and the right image i r and we assume that these two images have a very large overlapping field of view and we know the relative transformation denoted by a rotation matrix and a translation vector between these two images we further know the camera intrinsic values which we denote as k and k prime for these two cameras which we use to obtain the left and the right images respectively the objective of two-view stereo is to recover the dense 3d point cloud from these two images as well as the information that is given to us earlier on and we'll see in this lecture that an intermediate step to get the dense 3d point cloud would be to get what we call the disparity map and this is also equivalent to the depth map it is interesting to note that stereo vision plays a very important part in most of the biological life systems in this world and in particular we can see that it's very important for the evolution of human beings because given the pair of eyes we are able to sense the depth in the 3d environment and this gives us the ability to navigate safely to prevent us from incurring any injuries within this environment and this is of very big importance for our evolution we can also see that stereo vision is also used for recreational purposes where as early as 1833 the stereogram was invented by this guy over here sir charles wheatstone so basically what he did was to mount two cameras very closely together on a rigid rig and images of these two cameras were taken at the same time and we can see that because there is a shift in the camera location this means that a point on the left image might appear to be slightly shifted on the right image for example this here is at the location of x and y and the particular pixel on the right image that corresponds to this left image point would be occurring at a location of x prime and y prime and we know that since they are both taken at the same time and of the same scene what it means here is that if we know the relative transformation between these two cameras we will be able to do
triangulation to get the exact 3d point of this but in this particular stereogram what this guy designed was a special eye glass where when you look at this particular eye glass it will give you the sense of a depth based on the principle the only two corresponding pixels over here in this pair of serial image is actually offsetted by a certain amount and we can see that interestingly this stereogram evolves to what we know as the 3d movie in today's entertainment industry so if you have ever been to watch a 3d movie you'll be given a pair of these glasses over here where you put on this particular glass and look at the big screen the big screen actually looks something like this where uh the disparity between the two pixels that's what i have mentioned earlier on because there is a slight shift between these two pixels on the left and right image if i were to put the two images or super impose the two images together i'll see that that is a slight disparity for example we can see here that uh this point over here actually uh it's from the left image and this which corresponds to this point over here that is in the right image for example and by superimposing these two images together when we color the disparity between the left and right images with the two different colors over here uh in this example over here is a red and blue we can see that by looking at a superposition of the left and right images through these particular glasses over here we will be able to obtain a sense of that more mathematically we actually have learned the basic concepts of the two view stereo or how to obtain depth from two views when we look at the mathematics of the epipolar geometry in the previous lecture so essentially what this means is that given two images which we denote as i l and i are over here which is one is the left image and one is the right image as defined earlier on and in this particular case in the toolview geometry case we are given the relative transformation which is the rotation and translation and essentially what this means is that we will be able to get the essential matrix as well as the camera projection matrix p and p prime and we know from the earlier lecture when we learned about two view geometry in particular the epipolar geometry well after we are given the two camera projection matrix which is p and p prime we'll be able to do the linear triangulation from pairs of image correspondences between these two views in order to get the 3d point in the 3d scene and this is all up to a certain scale in the case where we obtain the relative transformation from the computed essential matrix but in today's lecture we'll see that if we are given the rotation and translation where the translation vector it's with a absolute scale that we know the exact translation between the two camera centers using the same form of linear triangulation algorithm we'll be able to do triangulation and obtain the 3d point which is in matrix scale and now we can define the problem formally as suppose that we are given two cameras with a known baseline rotation and translation and they are rigidly fixed onto a rig here's an example of the stereo camera it's a bumblebee two fire wire camera which can be purchased off the shelf so here in this particular casing over here we can see that there are two cameras that are rigidly mounted onto a rig this casing over here is a rigid rig where the two cameras are seated in this particular rig so before this rig is sold to the consumer the manufacturer actually did 
some calibration we will see this actual step of the calibration which we know as the stereo rectification and calibration later in this this lecture and now suppose that after the manufacturer has manufactured this particular stereo camera we'll know exactly what is the relative transformation the rotation and translation between these two cameras mounted rigidly onto this stereo rig as well as the camera intrinsics of this two camera which we denote as k and k prime after the camera calibration process the problem of to view stereo is to find the depth map which gives us the dense 3d points of the scene so what i meant by that map here is that well the objective is that suppose that image 1 is taken by the left camera over here and image 2 is taken by the right camera over here what we are interested in is that given these two images as well as relative transformation the intrinsic values of the camera we want to recover for every pixel on the reference view so we can choose either the left image as the reference view or the right image as the reference view it doesn't matter but in most cases it's conveniently chosen left image as the reference image so suppose that we choose the left image as our reference image the dens that map simply refers to for every pixel in this reference image we want to get the depth value of this particular pixel for every one of the pixel in the reference image and this is an example of the depth map that we see the brighter the pixel it means that uh the closer the object is to the camera so here we can see that this brightness over here and what it means is that the 3d object that corresponds to this particular pixel in the reference image which is the grass patch over here it's actually much closer to the camera as compared to for example a darker pixel that is shown in the depth map over here which corresponds to the 3d object in the scene that projects to this particular pixel over here now in comparison with the two view geometry case that we have looked at when we studied the epipolar geometry in that particular case what we are interested in is just to get the sparse set of correspondences and then once we compute the essential matrix the order fundamental matrix and recover the camera projection matrixes p and p prime we are only interested in doing a triangulation of this sparse set of correspondences and in that case it's easier because we will just stick to these uh sparse sets of correspondences of the image correspondences that we use to compute the essential matrix and as well as the camera projection matrix so uh what this means is that we'll just make use all this sparse image correspondences and together with the camera projection matrixes and we'll do a linear triangulation to give us the 3d point in the scene in in the case of a two view stereo which is what we are looking at right now as i have mentioned earlier on we want to obtain a dense step map that means that for every pixel in the reference image we want to do a linear triangulation on this particular pixel to get the 3d points that corresponds to this particular pixel here and what it means here is that for every pixel in the reference image we would have to find the correspondence pixel in the right image and this is not trivial because a naive way of doing this will lead to a very computationally expensive uh result so one naive way of looking at this would be suppose that in the reference image which is the left image as shown in this figure over here we are interested to 
find the depth of just one pixel uh in the reference view because we know the relative transformation the relative rotation and translation for this particular pair of images over here and what this means is that we would be able to compute the fundamental matrix because not forgetting that we are also given the camera intrinsic value denoted by k and k prime over here together with this we can recover essential matrix as well as applying this camera in 3 6 onto the essential matrix we'll be able to recover the fundamental matrix and from here we'll be able to compute the epipolar line which is denoted as l prime over here so l prime is given by f multiplied by x which is the coordinates of this particular pixel on the reference image once we compute the epipolar line which is shown by this blue line over here on the right image the next thing for stereo matching would be to slide the window across this particular epipolar line and we'll find the correspondence of this particular pixel over here x with respect to the right image over here along the epipolar line and once we have picked the best match which means that based on the visual appearance we want to choose a pair of it where the two patches of the left and right images that corresponds to x and the other one would be on the epipolar line on the right image based on the appearance they must be the most similar and so once we have the closest image patch in terms of a visual appearance we'll do a triangulation based on the camera projection matrices that we have obtained from the camera intrinsics as well as the relative uh translate translation and rotation between the two images to get the depth value because when we do a triangulation linear triangulation of these two we'll be able to get the 3d point which we denote as x over here and as well as the that value which is z with respect to the reference camera coordinate frame and once we get this z value here this means that we have obtained the depth value for this particular pixel over here and in the simplest case of course is we know that the epipolar line it's the corresponding scan line what this means is that if i have a point here which we denote as x that we have defined earlier on and we compute the the epipolar line that corresponds to this particular x over here this particular pixel on the reference view the simplest case would be that the epipolar line which we denote as l over here it's a line with the epipolar line which we denote as l on the reference view so what this means is that uh we can simply just ignore the computation of the epipolar line and take this particular pixel here the patch over here and slide it over this particular scan line which is of the same height this means that it has the same y value suppose that we denote the image has x-axis and y-axis over here so this means that the y-value of the reference image is the same as the y-value of the right image and what we can just conveniently take this particular patch over here and slide it along the same y value and check for the patch that is visually more similar to the patch from our reference view and notice the difference over here in the general case where we know that this is not true where the the epipolar line does not correspond uh well in the two views so suppose that in this particular case here where we have generally a rotation and translation over here and we know that this particular point can project to any point on the epipolar line on the right image over here so in this 
particular case we can't do this uh we what it means is that it would be more computationally expensive in order to find the dense correspondences of every pixel in the reference view because this simply means that for every pixel on the reference view what we need to do would be to compute the epipolar line for every pixel and recall that in our lecture in epipolar line or in the to view geometry that every particular pixel on the left image would be cast as a different unique epipolar line on the second image over here in the general case this means that for every pixel we have to compute the epipolar line and start searching along this particular epipolar line which means that this is going to be more computationally expensive compared to this case over here where we know that the epipolar line corresponds to the scan line and this simply means that the two epipolar lines on the reference view and the right image they align with each other according to the y-axis of the image and this brings us to two cases of the epipolar geometry in the first case where the two epipolar lines are aligned from our knowledge in the to view geometry that we have learned in the previous lecture we know that this can happen when there is a pure translation between the two cameras that means that the relative transformation of these two cameras are given by a rotation equals to identity what this means is that the two cameras would have their principal axis paler to each other so these two parallel principle axis of the two cameras the left and the right images they must be parallel to each other in order for the rotation matrix to be identity and in order for the epipolar lines of the left and right image to be aligned with each other this means that as we have defined earlier on that we assign a axis to the image let's say we call the columns of the image x axis and the rows of the image have a y-axis what this means is that uh in order for the two points the corresponding point or the two epipolar lines to be aligned with each other the height in the of the two epipolar lines they must be equal so what this means is that the relative translation in the y direction which we can see from this axis over here will be zero because the two epipolar lines are aligned with each other over here and there's only a difference between the relative translation in the x-axis what this means here is that suppose that the x-coordinate over here in the left image which we denote as x and the x axis coordinates over here which we denote as x prime the relative translation between these two this means that x minus x prime would be equals to t this is the relative translation in the x axis given by these two coordinates over here and of course since the two image planes are aligned with each other the relative translation in the z-axis would also be zero and this would be the efficient case that we have seen earlier on because what this simply means is that for every pixel in the reference view in the left image we just simply need to slide it along the same scan line in the right image to search for the image correspondences in the right image and in the general case which we also look at later that we know that the relative transformation could be anything it could be generally be denoted by a relative rotation r and a relative translation over here so in this particular case over here the naive way of doing this would be for image pixel in the reference view which is the left image over here we need to first compute the 
fundamental matrix from the essential matrix as well as the camera intrinsic value of the two images then we need to compute the epipolar line corresponding to the one of the pixel in the left image which we denote as l prime over here this is given by f multiplied by x and the image correspondence that correspond to this particular pixel x over here on the right image we'll have to search for it by sliding a window called along the pipolar line as such for the patch where it has the closest visual appearance to to this particular patch in the reference view and this is computationally expensive we'll see that we can actually transform this particular setting into this setting by doing what we call the stereo rectification so in the simplest case as i mentioned earlier on that the epipolar lines of the two views they align with each other or they will simply say that they align uh at the same scan line and in these cases the camera center are of the same height this means that uh what i meant uh early early on it would be that this this sign x and y axis they are off the same heights where they are at the same row of the the image for example given a point over here which we denote as x the corresponding point x prime over here it would be anywhere along this particular row at the same row of the reference image and they would also have a same focal length this means that the z value of this uh relative translation would be zero and as a result uh we we have seen earlier on that uh the epipolar line will fall along the same horizontal scan line so when these two epipolar lines coincides or sits at the same row of the with respect to the reference frame we say that they fall in along the horizontal scan line of the image we can see this from the mathematic of the epipolar geometry that we have learned in the earlier lecture now we know that the epipolar constraint is given by this equation over here this is the equation that we have seen in our lecture on two view geometry where x prime transpose multiplied by the essential matrix uh and x equals to zero so we know that also from the earlier lecture that the essential matrix can be decomposed into the cross product of the translation vector and the rotation matrix hence we can replace the essential matrix in this epipolar constraint over here with the translation and rotation and since we know in this particular special case over here where the epipolar lines of the two images are aligned as well as the principal axis of the two image are parallel to each other so this means that the relative rotation matrix between the two views which is the relative rotation and translation is between the two view is given by identity and t 0 0 so t as i have mentioned earlier on would be the difference between x and x prime over here in the in the x coordinate axis so x minus x prime would be giving us t over here and we can plug this two value r and t back into the essential matrix that we have defined earlier on to get this particular three by three essential matrix over here so we can see that it has a special structure where only two entries in this particular three by three matrix have a non-zero value and uh by plug this three by three matrix back into the epipolar constraint that we have seen earlier on uh we can define this particular constraint over here which evaluates to t multiplied by v prime equals to t multiplied by v over here t because it's a scalar value it can be cancelled away so what this means is that v prime and v they are equal 
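As a quick numerical sanity check of this derivation (a sketch with made-up pixel coordinates, not values from the lecture), one can form the essential matrix for a purely horizontal translation and confirm that the epipolar constraint reduces to v' = v:

```python
import numpy as np

T = 0.2                                          # horizontal baseline (arbitrary value for the check)
E = np.array([[0.0, 0.0, 0.0],                   # E = [t]x R with R = I and t = (T, 0, 0)
              [0.0, 0.0, -T],
              [0.0, T, 0.0]])

x  = np.array([120.0, 45.0, 1.0])                # (u, v, 1) in the left image
xp = np.array([100.0, 45.0, 1.0])                # (u', v', 1) on the same row, so v' = v
print(np.isclose(xp @ E @ x, 0.0))               # True: the constraint reduces to T*v' = T*v
print(np.isclose(np.array([100.0, 46.0, 1.0]) @ E @ x, 0.0))  # False once v' differs from v
```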
this simply means that because v here represents the y axis of the image by simply working out the epipolar constraint we can show that when the relative translation and rotation follow this relation over here the y-coordinates of the two images are simply aligned and hence the epipolar lines of the two images are also aligned along the same horizontal scan line because we know that the two epipolar lines are aligned with each other what this means is that given an image pixel from the reference view it actually transfers to an epipolar line on the second view l prime which is a line at the same row as the reference image point in the left image over here and this means that if we were to back project a light ray from this particular reference point in the left image we can see that the 3d point can lie anywhere on this particular ray and hence if we were to project this light ray that is back projected from the left image onto the right image we can see that it is going to project onto the epipolar line where this epipolar line is aligned with the height of this particular reference image point over here and similarly if we were to take a back projection of the corresponding point in the right view and project the light ray over here onto this reference view we will get the epipolar line which we denote by l over here and we will see that the two epipolar lines actually align with each other on the scan line this is simply because the relative transformation between the two views is given by r equals identity as well as t equals t 0 and 0. another observation for which we will see the mathematical proof later is that as i have mentioned earlier on we can search for the match of the image correspondence of x on the scan line in this particular special case on the right image plane what this means is that we are going to slide a window along this epipolar line on the right image plane and search for the patch which has the highest similarity to the patch at this particular location that corresponds to the image point in the left image over here and one interesting observation is that we do not need to search along the whole scan line because if we know that this particular point here is the actual correspondence to this particular image point over here we will see that x minus x prime which is the difference in the x-coordinates so suppose that we denote this as the x-axis and y-axis as i mentioned earlier on and this difference is given by x and x prime we can see that we do not need to search through the whole scan line because x minus x prime should always be more than zero it cannot be the case where this is less than zero or a negative value because as we'll see later on in our derivation of how to obtain the depth value from the disparity the difference between x and x prime over here is what we call the disparity value which we will need to first obtain from the comparison of the image patches or in other words the search for the dense image correspondences of every pixel in the reference view to the right image plane so this means that for every pixel over here if we are able to find the correspondence in the right image we'll be able to compute this value and record it down in this particular data structure which we call the disparity map so essentially every value in this disparity map contains x minus x prime.
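To make the scan-line search concrete, here is a minimal numpy sketch of window-based block matching on a rectified grayscale pair, searching only non-negative disparities along the same row as argued above, followed by the usual pinhole depth-from-disparity conversion Z = f*B/d; the window size, the SSD cost and the function names are illustrative assumptions, not the specific algorithm prescribed in the lecture.

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=64, half=3):
    """SSD block matching along horizontal scan lines of a rectified grayscale pair."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch_l = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):     # disparity d = x_left - x_right is never negative
                patch_r = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.sum((patch_l - patch_r) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Usual rectified-stereo relation Z = f * B / d, valid only where d > 0."""
    return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1e-6), 0.0)
```

In practice one would use a more robust matching cost and the consistency and smoothness steps discussed later in the lecture, but the core idea of searching only non-negative disparities along the same scan line is already visible in this sketch.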
value is obtained by sliding a patch from the left image along the scan line, i.e. the epipolar line, in the right image; once we find the patch with the highest visual similarity to the patch in the reference view, we simply compute x minus x prime and store it in the table of the disparity map.
Note that we do not need to search through the whole epipolar line, because the disparity value cannot be less than zero: a negative disparity would mean a negative depth, a negative z value, and that in turn means any image correspondence in that region would triangulate to a point behind the camera, which is not what we want since we are always looking at the scene in front of the camera.
Now consider the case where the two images are non-parallel. Here we are given two cameras fixed rigidly onto a rig, and we know after calibration that the relative transformation between them is a general rotation and translation, where the rotation need not be the identity and the translation need not be (t, 0, 0). In this case we can apply a process called stereo rectification to mathematically align the two images into the parallel form, where every epipolar line falls on the same horizontal scan line. This is a two-step process.
The first step is, of course, stereo calibration. We first treat the two images from the two cameras mounted on the rigid rig as coming from two separate cameras, take a checkerboard, as we learned in the earlier lecture on camera calibration, and move it around in front of the two cameras. We then perform intrinsic calibration separately on the left and right cameras, which gives us the camera intrinsics K and K prime. Notice that even if the two cameras are the same model from the same manufacturer, it is quite unlikely that they will have exactly the same intrinsic values, so K and K prime must each be obtained from the checkerboard calibration. We also need to estimate the distortion parameters mentioned in the earlier lectures, namely the tangential and radial distortion; altogether there are five distortion parameters to obtain for each of the left and right cameras, and all of this can be done with the checkerboard calibration method we discussed in the earlier lecture.
The next thing to do, after we get the intrinsic parameters, is to undistort the images. In most cases the cameras are built with real lenses, which exhibit some amount of radial and tangential distortion, so as we saw in the earlier lecture the pixels will not appear exactly square but radially distorted. We can apply the estimated distortion parameters to undistort such an image into something that looks like this, where now
the pixel appears to be in regular squarish or rectangular shapes and once we have undistorted the image we can use the undistorted image pair between the left and right camera to compute the essential matrix so this is easy this is simply by computing a set of sparse correspondences between the two uh left and right images and then from here we get the correspondence we'll run a ransac base uh eight-point algorithm this is what we have seen earlier on in our lecture on two view geometry to compute the essential matrix and once we have computed the essential matrix note that in this case we get the essential matrix because instead of the fundamental matrix this is because we already have the camera intrinsic value from our calibration process in the first step and another reason why it's better to get the essential matrix instead of the fundamental matrix is that because uh recall that in our earlier lectures that for fundamental matrix is subjected to projective ambiguity uh this is because the intrinsic value are taken to be unknown in that kind of setting but in the context of a stereo vision we know the intrinsic values already from calibration so we can avoid the problem of the projective ambiguity completely by computing the essential matrix and once we get this essential matrix we know how to decompose it into the relative translation and rotation value we saw in the earlier lecture that we can decompose this into four solutions where only one of the solution would have a point that appears to be front of the camera so we can use one additional point to check for this another thing is that we saw earlier on from our two view geometry lecture that uh after decomposing the essential matrix into the relative translation and rotation we know that the translation is only up to scale but in this particular case over here since the correspondences here can be obtained from the checkerboard and if we know the checkerboard size this means that if we know the size of the box in this particular checkerboard over here so if we know the distance between these two uh points we can actually do a triangulation which is up to a certain scale which is up to uh scale and then from the triangulation we can do a similarity transform to just one point pair over here would be sufficient to recover the scale of the translation vector over here so this means that the rotation and translation can be recovered with full metric scale because since we know the calibration parameters of the checkerboard in this particular context over here and once we have obtained the stereo calibration this means that we know the entrance as well as the so this is the intrinsic value as well as the extrinsic value of the camera which is the relative transformation between the two cameras the next step would be to make use of these parameters that we have obtained from stereo calibration to do what we call the stereo rectification so this means that uh if we have two general views where in general the epipolar lines are not aligned with each other so they are they are not aligned in a regular horizontal scan line we want to make use of this intrinsic as well as an extrinsic of both the cameras to rectify them mathematically such that the two images have epipolar lines that are aligned in a regular horizontal scan line now here's a figure that illustrates the whole process so given the calibration checkerboard these are the raw images where we can see that it contains some distortion from the radial and tangential distortion and 
once we have done the stereo calibration this means that we have obtained the intrinsic values as well as the five uh distortion parameters that we have seen earlier on we can make use of these distortion parameters as well as the camera intrinsics to undistort the images which we have seen how to do this in the lecture on camera calibration so once we get the undistortion we'll see that the square over here looks uh it looks like a regular square and then from this undistorted image we can actually uh compute the essential matrix the regular ransac eight point algorithm that we have seen earlier on to get the extrinsic parameter which is the rotation and translation from all these parameters that we know we can do stereo ratification where now after stereo rectification all the epipolar lines of the two views they are going to line up in a regular horizontal scan line and finally we can crop these two images and now in this particular case here we have we do have a two views or two images from stereo pair of cameras such that they are aligned uh the epipolar lines are aligned and hence in order to get the depth map from the reference view say for example the left view over here the left image over here will be able to simply take any pixel from here and search along the scan line on the other image so as i have mentioned earlier on stereo rectification the goal is to mathematically align the two cameras into the same viewing plane what i meant here by mathematically a line is that the actual camera setting will still be in this setting this means that whatever pair of images that we take from the camera it will still give us a relative transformation of r and t as before it's physically impossible to tune or to adjust the camera setting such that the relative transformation here becomes identity and t equals to t 0 0 as desired in the rectified case so it's physically impossible to shift the camera or to adjust the camera such that it always gives this setting because it has to be so precise but what we meant by mathematically aligning the two images is that if we are able to obtain uh the relative transformation the extrinsic of the two views which we we are of course we are able to do that from the calibration that we have described earlier on we can make use of this information to define a transformation homography on the left and the right image such that we can transform these two uh with a homography h and h prime respectively so by taking every pixel here or by taking uh the from the but this particular image over here suppose that i'm representing one pixel here as x over here so we'll see that we can make use of this rotation and translation to define the homography over here such that we can transform this left image over here every pixel only this left image to a new pixel over here such that which i call x tilde over here such that the new pixel the new image over here is aligned with the new image over on the other side of the right image after we have applied the same process to the right image so this means that for the right image i'm defining a h prime homography from the rotation and translation and then apply every point on the right image which i denote as x prime over here to h prime to get x to the prime so this is the new image such that the image where x tilde and x tilde prime after rectification they fall onto the same epipolar line or a regular set of scan lines and this is how it looks like uh now in this particular example over here these two images are taken 
from two camera settings where uh the camera settings are related by a relative transformation of r and t so we can see that for any point here it's going to give a epipolar line which is this line over here and we can see that in the this setting in the in practice this is the kind of setting that we'll get where we'll have a relative transformation rotation and translation between these two camera views where the epipolar lines between these two views simply do not align in a horizontal setting so the objective of stereo rectification would be to make use of this to define a homography between the two views h and h prime over here such that after applying the homographies onto this pair of images respectively we'll end up with a pair of images where the scan line of corresponding views now becomes aligned to each other and as a result what we will get here is that we will get we'll be able to compute for every point on our reference view we'll be able to compute the correspondences uh of the correspondence in the right view very easily by sliding a patch along the corresponding scan line instead of computing the epipolar line and it has to be noted that this since we compute the homography over here so what this means is that we are mathematically manipulating the image such that into a state where the epipolar lines of both images are aligned with each other and what what this also means or we're not physically shifting the camera such that it ends up with an image that looks like this and all the rectify images after rectification it should satisfy the following two properties as we have noted earlier on that is the epipolar lines should be parallel to each other along the horizontal axis so this is the horizontal axis what it means is that the epipolar line the corresponding epipolar line they should become horizontal uh to each other and the corresponding points should also have a identical vertical coordinate this means that the two views where i have my image point on the reference view the left image the it must sit on the epipolar line that is aligned with the epipolar line on the other view and we'll make use of these two properties to find a projective transformation such that the epipoles in the two images are mapped to the infinite endpoint because in this case over here we can see that after we have rectified this particular epipolar line to become horizontal to each other or what this means is that for every point on my reference image i'm going to have a corresponding epipolar line on the right image such that the two epipolar lines sits on the same align location this holds true for any point for any epipolar lines so all that what this means is that all the epipolar lines after ratification would not intersect at the common point or rather they would intersect at infinity and this simply implies that the epipole of the left and right images now would have to map to infinity and hence we can define a homography h which is a three by three matrix over here to map the original image into the rectified image where the epipolar lines of this particular image would be mapped to infinity similarly we'll do the same we'll find a homography such that we can map the right image which call h prime over here such that we can map the right image into a setting where the epipole will be mapped in to infinity which we define as one zero zero over here so uh the last value over here note that the last value over here has to be zero because this is going to be a infinite point 0 here 
means that in homogeneous coordinate it means that it's going to be at infinity and 1 0 over here simply means that there's going to be a value for the x coordinates over here at 0 means that for the y value it means that the two epipoles are going to be aligned at the same location |
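For reference, the whole pipeline just described (per-camera intrinsic calibration, stereo calibration of R and t, rectification homographies, and scan-line disparity search) can be sketched with OpenCV. This is a minimal sketch under stated assumptions, not the lecture's own code: the checkerboard point lists, the image size handling, and the block-matching parameters (numDisparities=128, blockSize=7) are placeholders to be tuned in practice.

```python
import cv2
import numpy as np

def stereo_pipeline(obj_pts, img_pts_l, img_pts_r, size, img_l, img_r):
    """obj_pts: list of (N,3) float32 checkerboard points per view;
    img_pts_l / img_pts_r: matching (N,2) corner lists; size: (width, height);
    img_l, img_r: one raw grayscale stereo pair."""
    # 1) Intrinsics K and the 5 distortion coefficients for each camera separately
    _, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, size, None, None)

    # 2) Stereo calibration: relative rotation R and translation T between the cameras
    _, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, img_pts_l, img_pts_r, K1, D1, K2, D2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)

    # 3) Rectification: send the epipoles to infinity, align epipolar lines with rows
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, m1x, m1y, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, m2x, m2y, cv2.INTER_LINEAR)

    # 4) Disparity map: slide a window along each (now horizontal) scan line
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
    disparity = matcher.compute(rect_l, rect_r).astype(np.float32) / 16.0
    return disparity, Q
```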
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_7_Part_3_The_fundamental_and_essential_matrices.txt | now let's look at the proof of the lemma given two pairs of camera projection matrix which we denote as p p prime and p tilde and p theta prime they have the same fundamental matrix which is essentially given by the first cross product here a cross with capital a and this is also equals to a tilde cross product with capital a tilde and we we saw this relation earlier on we also know that if we take the cross product of a vector by itself that means that if i take the cross product of a across with itself this is always going to be equals to 0 hence taking advantage of this particular relation over here and we can do this a transpose with a cross product of a this this equation here simply is equivalent to a cross product of a by itself transpose and this is always equals to 0 and multiply by a and this will give us 0. this simply can be rewritten into a transpose f equals to 0 which means that a is in the left null space of f over here and similarly we can write the relation in the sec with the second pair of camera matrices which gives us a tilde transpose f equals to zero and this simply also means that a tilde is in the left null space of the fundamental matrix and since both a and a tilde are in the left left now space of the fundamental matrix this simply implies that a tilde is equals to a multiplied by a scalar value which we call k over here note that this is a non-zero scalar value of k and next we can see that since the fundamental matrix obtained from the two pairs of camera matrices are equal we can write this relation over here and according to what we've proven earlier on that a tilde is simply equals a scalar non-zero value k multiplied by a we can substitute this relation here into this equation here and as a result we will get a cross 3 by 3 matrix of a and then here it's simply a scalar value multiplied by a cross product and multiplied by the a 3 by 3 matrix of a tilde over here and since k here is a scalar value we can actually factorize it out so this means that by moving the left-hand side to the right-hand side this simply means that we get this particular relation over here which is a to the minus a equals to 0 and this is simply this guy over here and so once we have this particular relation this tells us the relation that the cross products of a with this particular matrix over here the three by three matrix over here it has to be equal to zero and according to the definition that we have seen earlier on uh if we take the cross product of any vector by itself it's always going to be equals to zero so we can simply rewrite this part over here k multiplied by a to the minus a as a dot product of two vectors a and an arbitrary vector of v transpose where a simply cross with itself over here which is this guy over here would give us 0. 
and hence we can conclude that k of a tilde minus a can be rewritten as a multiplied by the transpose of an arbitrary vector that we denote as b over here and by rewriting this equation into making a tilde the subject we will get this equation over here which is exactly what we have defined earlier on hence this completes the proof for the lemma so putting the lemma back into the earlier proof that we are trying to show which is two sets of camera matrices p p prime as well as p tilde and p to the prime if they both give rise to the same fundamental matrix this simply means that p is related to p tilde bias four by four non-singular projective matrix as well as p prime is also related to p tilde prime by a four by four non-singular projective matrix h over here so uh putting back the results that we have seen earlier on in this particular lemma if we were to write the second camera matrix from the respective pair as a and a tilde over here we get this particular relation that a tilde equals to k a and a the three by three matrix of a tilde equals to this relation over here putting these two relations back into the pair of camera projection matrices we will get this particular uh results over here where p and p tilde still stay at the canonical representation which is i n 0 but p prime and p tilde prime now becomes this particular function of a and then the p tilde prime will be related to the first camera matrix in this particular form which is shown by the lemma earlier on and uh it is not difficult to see that now p and p tilde prime these two camera matrices which is the second view of the respective uh camera projection matrix path it's actually related by a four by four matrix given in this particular form where k here is what we have defined earlier on to be a non-zero scalar value and v will be arbitrary three by one vector that we have defined earlier on hence they're all together four degrees of freedom inside the particular h matrix over here and it's not difficult to see that when we take p multiplied by h we will get this particular relation which equals to k inverse multiplied by p tilde and this simply supports our claim earlier on if the two sets of camera projection matrices give rise to the same fundamental matrix then p tilde which is the first camera of the second set of our camera projection matrices is going to be related to first camera matrix of the first pair by a four by four non-singular projective matrix h over here and uh indeed is true up to a certain scale since k is arbitrary non-zero scalar value that we have seen earlier on so this equation here is proven to be true and furthermore we can also show the same relation between the second camera so we have p p prime and p to the p to the prime well previously we shown this relation between the first cameras and now we can see that the same relation by the projective uh matrix h that we have defined earlier on holds true for the second cameras in these two sets of camera matrices and uh it's not difficult to plug h into the equation over here and simplify this expression we will get a to the three by three matrix with the a tilde three by one uh vector which is simply equals to p to the prime i'll leave this to you to verify that this particular relationship is true and so we can conclude that p and p prime the first set of camera matrices uh is indeed related to the second set of camera matrices uh p tilde and p tilde prime if they both result in the same fundamental matrix so this completes our proof now 
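The lemma can also be sanity-checked numerically: if we build a second camera pair with a-tilde = k a and A-tilde = (A + a v^T) / k for an arbitrary non-zero scalar k and arbitrary 3-vector v, it must produce exactly the same fundamental matrix. A small numpy sketch, where the random A, a and the chosen k, v are placeholders rather than values from the lecture:

```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0]])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # 3x3 part of P' = [A | a]
a = rng.standard_normal(3)        # last column of P'
F = skew(a) @ A                   # fundamental matrix of the pair (P, P')

k = 2.7                           # arbitrary non-zero scalar
v = rng.standard_normal(3)        # arbitrary 3-vector
A_t = (A + np.outer(a, v)) / k    # A~ as given by the lemma
a_t = k * a                       # a~ = k a
F_t = skew(a_t) @ A_t             # fundamental matrix of the second pair

print(np.allclose(a @ F, 0))      # a is in the left null space of F
print(np.allclose(F_t, F))        # both pairs give exactly the same F
```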
Having proven that the fundamental matrix decomposes into a non-unique set of camera matrices P and P prime, where any two valid decompositions are related by a four by four projective transformation H, we now specify one particular method to decompose the fundamental matrix. Before doing so, we need the following relation: a non-zero fundamental matrix F corresponds to a pair of camera matrices P and P prime if and only if P prime transpose F P is skew-symmetric.
We can see that this is true from the following argument. Starting from the correspondence equation x prime transpose F x = 0 and writing the image points as projections of the 3D point X, i.e. x = P X and x prime = P prime X, the equation becomes (P prime X) transpose F (P X) = 0, which is X transpose P prime transpose F P X = 0. For this to hold for every 3D point X, the matrix P prime transpose F P must be skew-symmetric. Hence we conclude that, for F to correspond to a pair of camera matrices P and P prime, P prime transpose F P must always be skew-symmetric.
As a result we can choose the decomposition of F into P and P prime in the following form: the first camera projection matrix is chosen in the canonical setting P = [I | 0], and the second is chosen as P prime = [[e prime]x F | e prime], where the first three by three block is the skew-symmetric matrix of the epipole e prime in the second view multiplied by the fundamental matrix, and the last column is the epipole e prime itself. We can verify that this pair gives rise to a skew-symmetric matrix: substituting into P prime transpose F P, and denoting by S the skew-symmetric matrix of the epipole e prime, the only non-zero block that remains is F transpose S transpose F, since e prime transpose F = 0 (the epipole is the left null space of the fundamental matrix). Because S is skew-symmetric, M transpose S M is skew-symmetric for any three by three matrix M, so F transpose S transpose F = -F transpose S F is skew-symmetric. Hence this choice of P and P prime is a valid decomposition of the fundamental matrix.
So far we have looked at the fundamental matrix in the uncalibrated setting, meaning we do not know the camera intrinsics K and K prime of the two views. In the case where we do know these two intrinsic matrices, we can rewrite the point correspondences x and x prime in the two views as K inverse x and K prime inverse x prime; we call these the normalized camera coordinates and denote them x hat and x hat prime. Now let us substitute the normalized camera coordinates into the correspondence equation of the fundamental matrix, x prime transpose F x = 0: since we know the camera intrinsics, we replace x prime by K prime inverse x prime and take its transpose, we replace x by its normalized coordinate K inverse x, and we replace the fundamental matrix by some other matrix, which we call E, so that the relation remains valid after introducing the camera intrinsics; the whole expression is still set equal to 0.
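Before carrying this substitution further, the canonical decomposition above can be checked numerically. A quick numpy sketch (the random matrices used to synthesize a rank-2 F are placeholders): it recovers the epipole e prime as the left null vector of F, builds P = [I | 0] and P prime = [[e prime]x F | e prime], and confirms that P prime transpose F P is skew-symmetric.

```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0]])

rng = np.random.default_rng(1)
F = skew(rng.standard_normal(3)) @ rng.standard_normal((3, 3))  # a valid rank-2 F

# Epipole in the second view: left null vector of F (e'^T F = 0)
U, S, Vt = np.linalg.svd(F)
e2 = U[:, -1]
print(np.allclose(e2 @ F, 0))     # True

# Canonical pair  P = [I | 0],  P' = [[e']x F | e']
P = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([skew(e2) @ F, e2[:, None]])

M = P2.T @ F @ P                  # 4x4 matrix; must be skew-symmetric for a valid pair
print(np.allclose(M, -M.T))       # True
```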
Carrying out the substitution, we get x prime transpose K prime inverse-transpose E K inverse x = 0, which takes the same form as the original correspondence constraint x prime transpose F x = 0, with x prime and x unchanged: the matrix K prime inverse-transpose E K inverse plays the role of the fundamental matrix. We give the matrix E a name, the essential matrix, with the relation F = K prime inverse-transpose E K inverse. We can also rewrite the equation by grouping the normalized camera coordinates together, which gives the correspondence constraint in terms of the essential matrix and the normalized camera coordinates: x hat prime transpose E x hat = 0. We will see that E can be decomposed into the cross product of the translation vector and the rotation matrix, i.e. E = [t]x R.
Here is the proof. Previously, in the algebraic derivation, we saw that the fundamental matrix can be expressed as the skew-symmetric matrix of the epipole in the second image multiplied by P prime, the camera projection matrix of the second image, and by the pseudo-inverse of the camera projection matrix of the first image. Expressing the camera projection matrices in the canonical representation, the first camera projection matrix is P = K [I | 0] and the second is P prime = K prime [R | t]. I will leave it to you to verify that the pseudo-inverse of the three by four matrix K [I | 0] is the four by three matrix whose top three by three block is K inverse and whose last row is all zeros, and note that, since the first camera is defined in the canonical frame, the first camera centre is the origin of the world frame, because the camera coordinate frame is aligned with the world frame in this case. Substituting the pseudo-inverse and the camera centre of the first view into the fundamental matrix equation and working out the expression, I will also leave it to you to show that the skew-symmetric term involving K prime can be factorized so that F = K prime inverse-transpose [t]x R K inverse. Since F = K prime inverse-transpose E K inverse, this is exactly the relation we saw earlier, and we have proven that the essential matrix is indeed E = [t]x R, the cross product of the translation vector with the rotation matrix.
Now let us look at some properties of the essential matrix. Overall the essential matrix has five degrees of freedom: three come from the rotation matrix and three more from the translation vector, which adds up to six, but because the translation vector is only defined up to scale, i.e. there is an overall scale ambiguity, we take away one degree of freedom, leaving five degrees of freedom for a calibrated pair of cameras. The essential matrix is rank two, meaning its third singular value is zero; compared with the fundamental matrix, which is also rank two with third singular value zero, the essential matrix has an additional constraint: its first two singular values must be equal. In other words, if we take the SVD of E, E = U Sigma V transpose, the singular value matrix is Sigma = diag(sigma, sigma, 0).
Next, let us see how to decompose a given essential matrix into the translation vector and the rotation matrix, starting from the relation we just proved, E = [t]x R. Given the three by three essential matrix E, we take its SVD, which gives the left orthogonal matrix U, the singular values, and V transpose, where V is the right orthogonal matrix. We then construct the two factors as follows. First, take U, multiply it by a three by three matrix that we define as Z, and by U transpose: with a suitable choice of Z, the result U Z U transpose is a skew-symmetric matrix, and we want it to be the skew-symmetric matrix of the translation vector. We play the same trick for the rotation: take the left orthogonal matrix U, multiply it by some three by three matrix that we call X, and by V transpose; since U and V are orthogonal, we only have to find an X such that U X V transpose is itself an orthonormal matrix, which will be our rotation matrix R. What remains is to figure out suitable matrices for Z and X. Note that U transpose multiplied by U is the identity because U is orthogonal, so the product (U Z U transpose)(U X V transpose) evaluates to U (Z X) V transpose; this is exactly the singular value decomposition of the essential matrix provided that Z multiplied by X equals the singular value matrix, which should be diag(sigma, sigma, 0) with zero off-diagonal entries.
Since E is known only up to scale (the overall scale ambiguity that removed one degree of freedom), we can ignore the overall scale and sign of Z and X. A suitable choice is Z = [[0, 1, 0], [-1, 0, 0], [0, 0, 0]] and W = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]: multiplying them gives Z W = diag(1, 1, 0), which matches the singular value matrix once we adopt unit scale on the diagonal; and since the sign is also unknown, the diagonal matrix could equally be diag(-1, -1, 0), which is given by Z multiplied by W transpose, so X can be taken as either W or W transpose. Having recovered the left and right orthogonal matrices from the SVD of the essential matrix, and with W and Z defined above, we can recover the translation vector: we defined the skew-symmetric matrix of the translation as [t]x = U Z U transpose, and since U is orthogonal and [t]x is skew-symmetric, the translation vector is simply plus or minus the third column of the left orthogonal matrix U. Concurrently we recover the rotation matrix as R = U W V transpose or R = U W transpose V transpose, since W and W transpose correspond to the diag(1, 1, 0) and diag(-1, -1, 0) choices of the singular value matrix; hence we get two different solutions for the rotation. Finally, we must make sure that the rotation obtained this way follows the right-hand coordinate frame, as shown in the figure: when we sweep our fingers from the x direction towards y, our thumb has to point in the z direction. This is ensured by a simple check: if the determinant of R is less than zero, we set R to minus R.
So the translation vector can be plus or minus u3 and the rotation matrix can be U W V transpose or U W transpose V transpose; altogether we get four possible solutions for the camera matrix P prime, one for each combination of the possible translations and rotations. Geometrically, what this means is that, for a given triangulated point, the point could be, first, in front of both of the
cameras second it could be behind both of the cameras and the third case is that it could be in front of one of the camera and behind one of the camera and the last case here would be that the point would be in front of the other camera and behind one of the camera and only one of the four solutions is physically correct and that is the solution where the 3d point appears in front of both the cameras and we can easily disambiguate the correct solutions from the four solutions by getting the solution with the most number of points that appear in front both of the cameras earlier on we looked the correspondence constraints of the fundamental matrix which is given by x prime transpose f x equals to 0. and we said earlier on that the fundamental matrix here this particular equation over here simply shows that the fundamental matrix can be determined from the image correspondence itself without any other knowledge so in this particular slides over here or in the subsequent slides we'll look at how to compute this using eight point correspondences so starting for constrain equation that we have seen earlier on uh x prime transpose f x equals to zero we can see that uh this particular equation over here can be rearranged into this form well this simply means that we expand out this x prime y prime and then multiply by uh x y and one and so after evaluating this guy over here we'll get this particular equation over here and this simply means that we get one constraint one constraint on the corresponding pair over the fundamental matrix and because this is one equation that is one polynomial equation that is equals to zero so we can rearrange this and and the the cool thing here is that all the entries of the fundamental matrix is linear with respect to this particular polynomial equation so what this simply means here is that we can rearrange this dot product of this vector over here that consists of only the entries from the point correspondence x and x prime and with a nine by one vector which we reshape from the three by three fundamental matrix over here so uh this simply becomes a linear con of the fundamental matrix since we know that f here is a nine by one vector by making use of at least eight constraints this means that we have eight point correspondences and each point correspondence is going to give us a row multiplied by f over here so we simply stack these correspondences together from at least a eight by nine equation over here in order for this null space equation to be valid at the end we will be able to solve for the unknown null space vector over here which is f over here in the case where we have more than eight constraints we can simply just stack them together to form an over determine homogeneous linear equation and solve using the same method so more correspondence that we have the better will be the estimate of the fundamental matrix this is because all the observations here are going to be corrupted with noise and this is essentially a least square equation that we are going to solve so essentially we are going to do the mean of this over f and more equation more constrained simply means the result of this will be much more accurate as we have seen earlier on when we talk about the homography estimation so the solution over here we get a f equals to 0 where a here is simply an m by nine equation and f over here is simply a nine by one vector we seen earlier in in the lecture on homography estimation that the solution of f here of this homogeneous linear equation simply 
lies in the null space of A: after taking the SVD of A = U Sigma V transpose, the solution for f is the last column of V, and, as in homography estimation, data normalization is needed. The last thing we need to do to obtain a valid fundamental matrix is to ensure that it has rank two, which is one of the properties of the fundamental matrix we saw earlier. The problem is that, in general, after solving A f = 0 and reshaping the 9 by 1 vector f back into a three by three matrix, the resulting matrix is not of rank two, because A is built from the point correspondences, which are corrupted by noise; the fundamental matrix estimated algebraically from this linear equation will therefore in general not fulfil the rank constraint.
Why does this matter? Recall that we use the fundamental matrix to transfer a point in the first image to an epipolar line in the second image, l prime = F x, and that if we transfer multiple points from the first image, all the resulting epipolar lines should intersect at the epipole e prime, which lies in the left null space of the fundamental matrix. For this epipole to exist as a non-trivial solution, F must be less than full rank; if F happened to be full rank, only the trivial solution would exist, which means the epipole does not exist. This is shown geometrically in the figure: if we use a fundamental matrix whose rank is not two to map every point from the first image to epipolar lines in the second image (the black lines), the epipolar lines do not intersect at a single point, so the epipole does not exist. Hence, after estimating the fundamental matrix from the linear equation, we need the last step of enforcing the rank constraint, i.e. making sure that the rank of F equals two.
The most convenient way to find a correct fundamental matrix of rank two is to replace the matrix F found from A f = 0 by another three by three matrix F prime that fulfils the rank constraint. This can be formalized as a minimization: we minimize the Frobenius norm between the matrix found from the linear equation and the final fundamental matrix F prime, subject to the rank-deficiency constraint; that is, we want the F prime that is as close to F as possible such that the rank of F prime equals two. Fortunately this optimization is easily done by taking the SVD of the F matrix found from the homogeneous linear equation: F = U D V transpose, where D is in general full rank because of the noise in A, so there are three singular values, which we denote r, s and t, with r the largest and t the smallest. To find the F prime that minimizes the cost, we simply replace the smallest singular value t by zero and recompose the matrix, pre-multiplying by U and post-multiplying by V transpose, to get F prime = U diag(r, s, 0) V transpose. This minimizes the Frobenius norm, and since the third singular value is zero the rank of the matrix is two.
In summary, the eight-point algorithm to find the fundamental matrix is given by the following pseudo-code. Objective: given at least eight correspondences x_i and x_i prime, find the fundamental matrix such that the correspondence constraint x_i prime transpose F x_i = 0 holds. The first step is normalization, a compulsory step done in the same way as in the homography computation: we find a transformation consisting of a translation and a scaling such that the points have their centroid at the origin and a mean distance from the origin of square root of two, and we transform every point by this normalizing transformation. We then compute the fundamental matrix F hat prime from the normalized image correspondences using the linear eight-point algorithm, which is simply solving A f = 0. Next we enforce the singularity constraint by finding the closest matrix in the Frobenius norm subject to the determinant being zero, to make it a valid rank-two fundamental matrix. Finally, after we have found the fundamental matrix, we must remember to denormalize, undoing the normalizing transformations of the two images, to recover the correct fundamental matrix in the original coordinates.
The same kind of algorithm can also be used to find the essential matrix: a linear eight-point algorithm for the essential matrix, given by the equation x hat prime transpose, where x hat prime is the normalized camera coordinate in the second view, multiplied by E and x
hat equals to 0 over here where x hat prime and x hat are correspondences over here so we will simply uh first compute x hat and x hat prime over here from the original image correspondences x and x uh x prime by simply taking the inverse of the camera intrinsic matrix and multiply it with this to get x hat prime and we'll do the same thing for the first view at k inverse multiplied by x equals to x hat over here so uh once we get this we simply make use of this correspondence the not in the camera normalize coordinate to compute the equation of a e which we denote at this vector over here so this note that this vector not that people we are just simply overloading the notation over here equals to zero this is a nine by one vector that is uh reshaped from the three by three essential matrix over here so notice that in this particular case uh since we have already normalized the camera coordinates the the image coordinates with the camera intrinsic value there's no need to do the step of normalization as we need in the algorithm to compute the fundamental matrix the reason is because now all the camera normalized image calling are scaled to be within a certain scale of a unit scale from the camera matrix and now we simply uh apply this to solve for the a e over here where this is actually a nine by one vector and this a is similar to the fundamental matrix this has to be a at least an m by a nine matrix equals to zero so we'll do the svd and then we'll enforce the singularity constraint over uh the essential matrix over here so uh we will look at in the next slide how to do the enforcement of the singularity constraint for the essential matrix which is similar but not exactly the same as the fundamental matrix and once this is done we can apply the decomposition on e to get the r and t and task forming forming the camera matrix since we know that this is the first camera matrix over here which is i and 0 in the canonical frame and p prime is simply equals to k and r or t over here k prime r and t since we know k and k prime in the calibrated setting after decomposition decomposing the essential matrix into r and t we simply can recover back the second camera matrix over here and now a look at how to enforce the singularity constraints for the essential matrix generally the singularity constraint of the essential matrix is not the same as a way that we do a fundamental matrix earlier on and so but the problem arises from the same setting of the fundamental matrix which we have seen earlier on so we say that by computing af equals to zero where this is actually a nine by one vector representing our fundamental matrix is uh and reshaping this into our three by three fundamental matrix here the rank of this is in general not equals to two uh it's in general equals to full rank so the same explanation will apply for our essential matrix where we take a e over here but note that this is not the epipol i'm just abusing the symbols over here e is actually a nine by one vector that is reshaped from the three by three essential matrix or over here and in general now by finding the essential matrix from this method over here in the linear algebra sense the rank of the essential matrix will also not be 2. 
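Before turning to the constraint enforcement, here is a minimal numpy sketch of the linear estimation step just described. The function name and the (N, 2) input layout are my own choices, not from the lecture, and no RANSAC or outlier handling is included.

```python
import numpy as np

def eight_point_essential(x1, x2, K1, K2):
    """Linear eight-point estimate of E from pixel correspondences.

    x1, x2 : (N, 2) matching pixel coordinates in the first / second image, N >= 8.
    K1, K2 : 3x3 intrinsic matrices of the two views.
    """
    N = x1.shape[0]
    # Normalized camera coordinates  x_hat = K^{-1} x  (homogeneous)
    xh1 = (np.linalg.inv(K1) @ np.column_stack([x1, np.ones(N)]).T).T
    xh2 = (np.linalg.inv(K2) @ np.column_stack([x2, np.ones(N)]).T).T

    # Each correspondence gives one row of A such that A e = 0,
    # where e is the 9-vector reshaped (row-major) from E.
    A = np.zeros((N, 9))
    for i in range(N):
        u, v, _ = xh1[i]
        up, vp, _ = xh2[i]
        A[i] = [up*u, up*v, up, vp*u, vp*v, vp, u, v, 1]

    # Solution = right singular vector of A with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```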
The way to enforce the singularity constraint for the essential matrix is quite similar to, but not exactly the same as, the way we enforce the rank constraint on F. We still want to find the E prime closest to the original E matrix, minimizing the norm of E minus E prime, but now subject to two constraints: the determinant of E prime must equal zero, and in addition the first singular value must be equal to the second singular value. The procedure mirrors the one for the fundamental matrix: we take the SVD of E, which gives U D V transpose, where D is in general a diagonal matrix of three distinct singular values, say a, b and c in decreasing order; since A is corrupted by noise, the rank of the E computed from the linear equation is in general three. We simply set the smallest singular value c to zero, and we take the simple approach of averaging the first and second singular values, (a + b) / 2, and setting both of them to this average; call the resulting diagonal matrix D hat. Plugging D hat back into the singular value decomposition of E, we recover E hat, the closest essential matrix to what we estimated earlier that also fulfils the constraints: the determinant is zero because the last singular value is zero, and the first two singular values are equal. |
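A minimal numpy sketch of this projection step, together with the decomposition of the corrected essential matrix described in the earlier slides (the function names are my own; returning +u3 for the translation leaves the usual sign ambiguity, which is resolved by the in-front-of-both-cameras check over the four candidate solutions):

```python
import numpy as np

def enforce_essential_constraints(E):
    """Project a noisy 3x3 matrix onto the set of valid essential matrices:
    two equal singular values and one zero singular value."""
    U, s, Vt = np.linalg.svd(E)
    sigma = (s[0] + s[1]) / 2.0           # average the two largest singular values
    return U @ np.diag([sigma, sigma, 0.0]) @ Vt

def decompose_essential(E):
    """Return the two candidate rotations and the up-to-scale translation from E = [t]x R."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    if np.linalg.det(R1) < 0:             # keep right-handed rotations
        R1 = -R1
    if np.linalg.det(R2) < 0:
        R2 = -R2
    t = U[:, 2]                           # translation direction = +/- third column of U
    return R1, R2, t
```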
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_6_Part_3_Single_view_metrology.txt | so now let's look at how the calibration can be done we assign a frame on the 3d image pattern or which is a square and we'll simply call this 0 0 and if this is the x axis we'll call this 1 0 and zero one and one one over here so we will simply assign a set of coordinates on the 3d imaging device up to scale notice that the scale here is not important this is because after all what we want to find would be the homography that relates the trans projection of the imaging device onto the image the corresponding points over here and this denotes by h and we have seen earlier on that we want to make use of this homography over here to project the circular points at the plane of infini at infinity onto the image and we have seen in the previous lecture that the circular points i and j they are invariant to similarity transformation and hence the absolute scale of this coordinate is not is not important so suppose that we are looking at only one of them and we assign this coordinate frame we can identify the corresponding corners of this particular imaging device on the image and given this set of correspondence four correspondences we can easily compute the homography in a linear algorithm let's look at just one of the imaging device that we have seen earlier on so this guy here is going to be related using uh with this using a homography that we rewrite as h1 h2 h3 where h1 h2 h3 is the respective they are the respective columns of the three by three homography matrix and we're going to apply the this onto the circular point inj which is on the plane at infinity via this particular homography and we can see that uh according to the definition in the previous lecture that uh the circular points is defined by 1 and plus minus i which is the complex number and zero since it lies on the plane at infinity and we've apply h on this pair of circular point we'll essentially get this relation here this which is the this relation over here is simply the image of the circular point onto this particular image that is related by this particular homography uh over here since we know that this particular image circular point is to lie on the image of the absolute conic because since the circular point lies on the absolute conics which is projected onto the image as the image of the absolute conics although we cannot observe this physically we also know that the circular circular point must also be lying on this image of the absolute conics and hence we can rewrite this into this equation because omega here is a conic and we know that any point that lies on the conics must fulfill this particular equation over here hence we can uh rewrite this into this relation which can be decoupled since this is h1 plus minus i of h2 so this can be decoupled into two equations the first equation here is it would be h1 transpose multiplied by h2 equals to zero and the second equation here would be simply h1 transpose omega h1 equals to h2 transpose omega or h2 once this simply means that one pair of circular point which means that there are two circular points i and j here it's going to give us two constraints essentially this is equal to zero we can also make this equals to zero by simply taking h2 transpose omega h2 cross product with h1 transpose omega h1 equals to zero so both of these equations are linear with respect to omega in the terms of omega and all together we have six unknowns in this 
particular image of the absolute conic, because it is a conic, i.e. a symmetric matrix, with six unique entries. Since it is only defined up to scale it has five degrees of freedom, so we need five or more such equations to solve for the conic. This means six of these constraints suffice, and they come from three unique homographies computed from three of these squares placed in any configuration, except that they cannot lie on parallel planes. Once we have the image of the absolute conic, obtained by stacking the linear constraints into a homogeneous linear system and solving for the unknowns, we reshape the solution into the three by three matrix omega, and a Cholesky factorization then lets us recover the intrinsic values of the camera matrix. Here is an example of the result we can get after doing this.
Another interesting property of the image of the absolute conic is that, since it is a conic, it also obeys the pole-polar relation. Suppose we have two image points that back-project to orthogonal rays; then these two points are related, with respect to omega, by the constraint x1 transpose omega x2 = 0, which comes from the pole-polar relation: if I have a point x1 lying outside the conic, I can draw the two tangent lines from it to the conic; the line joining the two tangent points is the polar line, x1 is the pole, and any other point lying on this polar line is conjugate to the pole, with the two conjugate points related by this equation. Similarly, if I have a point x that is orthogonal to the vanishing direction on which x2 lies, this directly defines the pole-polar equation l = omega x: here x could be a vanishing point orthogonal to the vanishing direction, and since x lies outside the conic it has a polar line, which we call l, with x as the pole, and the two are related by this equation. I will give more detail about the pole-polar relation in the next few slides.
We saw in lecture one that the equation l = C x relates a conic C, a point x lying on the conic, and the tangent line l at that point: if x transpose C x = 0, the point x lies on the conic. In the case where x transpose C x is not equal to zero, the point x does not lie on the conic, but the relation l = C x can still hold; we then call the line the polar line instead of the tangent line. Geometrically, this is what it means: suppose that we have a point x; the line l is the line
Another interesting property of the image of the absolute conic is that, since it is a conic, it also obeys the pole-polar relation: if two image points back-project to orthogonal rays, they are related through ω by a constraint that comes exactly from this pole-polar relation. Recall the geometry: if a point x1 lies outside a conic, I can draw the two tangent lines from x1 to the conic, and the line joining the two tangent points is the polar line while x1 is its pole; any point x2 lying on this polar line is conjugate to the pole, and equally the original point x1 lies on the polar line defined by x2, the two conjugate points being related through the conic. Similarly, if I have a point x, say a vanishing point that is orthogonal to the vanishing direction on which x2 lies, this directly defines the pole-polar equation l = ωx: x is the pole, and since it lies outside the conic it has a polar line l, and the two are related by this equation. I will give more detail about the pole-polar relation in the next few slides. We saw in lecture one that l = Cx is the relation between a tangent line l and the point x at which it touches the conic C; if x^T C x = 0 the point x lies on the conic, but when x^T C x ≠ 0, so that x is not on the conic C, the relation l = Cx can still hold, and we then call this line the polar line instead of the tangent line. Geometrically this is what it means: for a point x, the line l passes through the two tangent points, where each tangent point is where a tangent line through x touches the conic; the line spanned by these two tangent points, which we call l, is given by C x, and this line is the polar line. The remark here is that if the point x is on C, then the polar line becomes the tangent line, because in that case x^T C x = 0 holds. Here is the proof. Consider two points z1 and z2 on the conic, namely the two tangent points, and the tangent lines at them, l1 = C z1 and l2 = C z2; then z1^T C z1 = 0 and z2^T C z2 = 0 because z1 and z2 are on the conic, and the pole is the intersection of the two tangent lines, x = l1 × l2, as we saw earlier. Substituting l1 = C z1 and l2 = C z2 into this cross product and factoring out C, the cross product becomes x = det(C) C^{-T} (z1 × z2), where C^{-T} = C^{-1} since C is a symmetric matrix, and z1 × z2 is exactly the line l through the two tangent points; so x = det(C) C^{-1} l = k C^{-1} l, where the determinant is just a scalar that we call the constant k. Since the scale does not matter, we can move C^{-1} to the other side of the equation together with x, and we get l = C x, where l is our polar line. Now let's look at the definition of conjugate points in more detail: if a point y is on the line lx = Cx, meaning it lies on the polar line of the pole x, then y^T lx = y^T C x = 0, and any two points x, y satisfying y^T C x = 0 are conjugate with respect to the conic C. The conjugacy relation is symmetric: if x lies on the polar of y, then y is on the polar line of x. Geometrically, y lies on the polar line lx of the pole x, and therefore x must also lie on the polar line ly of y; by the way, the figure here is constructed to scale and everything is accurate, so you can verify this for yourself. A small numerical check of these relations is given below.
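Here is a tiny numpy check of these pole-polar and conjugacy relations on an arbitrarily chosen conic; the particular ellipse and points are only illustrative.

```python
import numpy as np

C = np.diag([0.25, 1.0, -1.0])   # the ellipse x^2/4 + y^2 = 1 as a symmetric conic matrix

x = np.array([4.0, 0.0, 1.0])    # a pole outside the conic (x^T C x != 0)
l = C @ x                        # its polar line l = C x, here the line x = 1

y = np.array([1.0, 2.0, 1.0])    # any point on the polar line of x
print(y @ l)                     # 0.0 -> y lies on the polar of x
print(y @ C @ x, x @ C @ y)      # both 0.0: conjugacy, symmetric in x and y
print(np.linalg.inv(C) @ l)      # proportional to x: the pole recovered from its polar, x ~ C^{-1} l
```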
The proof of the symmetry is straightforward. The point x is on the polar of y if x^T C y = 0, where C y is simply the polar line of y, which we denote ly, and if x is on this polar line then this incidence relation must hold, i.e. it must equal 0; similarly, the point y is on the polar of x, whose polar line is lx = C x, if the incidence relation y^T lx = y^T C x = 0 holds. Since x^T C y = y^T C x, if one form is zero then so is the other; comparing the two equations we see that y and x are interchangeable. Hence there is also a dual conjugacy relation for lines: two lines l and m are conjugate if l^T C* m = 0, where C* is the dual conic. A tiny numerical check of this line version is given below.
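And a matching check of the line version, reusing the same example conic; the specific lines are again just an illustration.

```python
import numpy as np

C = np.diag([0.25, 1.0, -1.0])    # the same ellipse x^2/4 + y^2 = 1
C_star = np.linalg.inv(C)         # dual conic (up to scale, since C has full rank)

l = np.array([1.0, 0.0, -1.0])    # the line x = 1, whose pole is (4, 0, 1)
m = np.array([0.0, 1.0, 0.0])     # the line y = 0, which passes through that pole

print(l @ C_star @ m)             # 0.0 -> l and m are conjugate lines w.r.t. C
print(m @ (C_star @ l))           # 0.0 -> equivalently, the pole of l lies on m
```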
Now let's give a more formal definition of the vanishing point. Geometrically, the vanishing point of a line is obtained by intersecting the image plane with the ray that is parallel to the world line and passes through the camera centre. What this means is the following: take a 3D line; every point on this line projects onto the image plane at some image point, and one particular image point is the projection of the direction at infinity of the line, and that is the vanishing point, which we call v. This point is defined simply by drawing the ray that starts from the camera centre, points in a direction parallel to the real 3D line, and intersecting it with the image plane. If we now look at another image plane, keeping the camera centre fixed and simply rotating, the direction at infinity of the same line projects onto another vanishing point v' in that second image. Thus a vanishing point depends only on the direction of the line and not on its position, because only the parallel direction matters here, and consequently a set of parallel lines has a common vanishing point: regardless of where each of these lines is located, the vanishing point is defined by the single ray from the camera centre parallel to all of them, so they always share the same vanishing point. As a result, when parallel scene lines are imaged, they converge and intersect at a certain point in the image, and that point is what we call the vanishing point; any two vanishing points then give us a vanishing line. Algebraically, the vanishing point can be obtained as a limiting point as follows. Suppose we have a 3D line in the world, parameterized by any point A on the line and its direction D, so the set of points on the line is X(λ) = A + λD. Now suppose we have a canonical projective camera P = K[I | 0] and we project every point on this 3D line: x(λ) = P X(λ) = P(A + λD), which we can evaluate by splitting the terms, PA being the projection of the point on the line and PD the projection of the direction of the line in 3D space. The vanishing point is the limit where λ approaches infinity, that is, the image of the point at infinity in the direction of this line. Taking this limit of x(λ), divide through by λ: the PA term tends to zero, and what remains is v = Kd, where d is the direction. So the vanishing point depends only on the camera intrinsics and the direction of the line, and not on the location of the 3D points along the line, which is exactly what we claimed geometrically. In 3D projective space, the vanishing point is simply the image of the intersection of the plane at infinity with a pencil of lines sharing the same direction: all the parallel lines converge to and intersect the plane at infinity at the same ideal point, and this point is mapped onto the image as the vanishing point. The ideal point can be expressed as X∞ = (d^T, 0)^T, just the single shared direction with a zero last entry, and substituting it into the camera projection, v = P X∞, gives the same equation v = Kd that we derived with the limiting point. Note also that scene lines parallel to the image plane are imaged as parallel lines, meaning their vanishing point is itself a point at infinity in the image; however, the converse might not be true: if we observe a set of parallel lines in the image, it does not necessarily mean that the corresponding scene lines are parallel. A small numerical check of the limiting-point computation and of v = Kd is given below.
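A small numerical check of the limiting-point argument, with an arbitrary intrinsic matrix and line (all values illustrative): the projections of points further and further along the line converge to v = Kd.

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # canonical camera P = K [I | 0]

A = np.array([1.0, 2.0, 5.0])                       # a point on the 3D line
d = np.array([0.3, -0.2, 1.0])                      # direction of the line

def pixel(X):                                       # project a Euclidean 3D point
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

for lam in (1.0, 10.0, 1e3, 1e6):                   # walk further and further along the line
    print(lam, pixel(A + lam * d))

v = K @ d                                           # v = P (d, 0)^T = K d
print("vanishing point:", v[:2] / v[2])             # the projections above converge to this
```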
The reason is the following. If this is the camera centre, there is what we call the principal plane, which is essentially the xy-plane of the camera's local coordinate frame, the plane through the centre with z = 0. Any point on this plane projects onto the image at a point at infinity, because its z coordinate is zero, and in fact any line lying on this plane is imaged onto the line at infinity. So if two scene lines intersect at a point on the principal plane, then even though they are not parallel in the scene, they are imaged as parallel lines, because their intersection point is imaged to infinity. Now let's look at an example where we can use the definition of vanishing points to estimate the relative rotation between two camera views. Denote the first camera view by P = K[I | 0], taken as the canonical frame, and the second view by P' = K[R | t], where we make the assumption that the camera intrinsics remain the same but the second view is rotated and translated by some amount. If we can observe the vanishing point v, the projection of a point at infinity in the first image, and the vanishing point v' of the same point at infinity in the second image, then we can recover the relative rotation between the two frames; we will see that we cannot recover t from this construction, and it does not matter whether the camera has undergone translation or not, it need not be a pure rotation, we can still find the rotation. Let's see how this is done. The projection in the first image is v = K[I | 0] X∞ with X∞ = (d^T, 0)^T, since this is an ideal point, so v = Kd and the direction can be written d = K^{-1} v, as we saw earlier. Applying the same operation to the second view, v' = K[R | t](d^T, 0)^T = K R d, because it is still the same point at infinity; calling the direction of the line with respect to the second frame d' = R d, we can rewrite this as d' = K^{-1} v'. So if we observe the vanishing points in both views and we know the intrinsics of the camera, we can compute the direction d in the first view and d' in the second view directly from K, v and v'. A sketch of how the rotation can then be recovered from two or more such direction pairs is given below.
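The lecture does not commit to a particular algorithm for solving d' = R d, so this sketch uses one standard choice, an SVD-based orthogonal (Procrustes/Kabsch) alignment of the direction vectors; the helper names are mine, and it assumes the sign of each direction has been fixed consistently in the two views.

```python
import numpy as np

def direction_from_vp(K, v):
    """Unit direction of the scene lines whose vanishing point is v: d ~ K^{-1} v."""
    d = np.linalg.solve(K, v)
    return d / np.linalg.norm(d)

def rotation_from_vps(K, vps1, vps2):
    """Relative rotation R with d' = R d, from two or more corresponding vanishing
    points (vps1 in view 1, vps2 in view 2), via an SVD-based orthogonal alignment."""
    D1 = np.stack([direction_from_vp(K, v) for v in vps1], axis=1)   # 3 x n
    D2 = np.stack([direction_from_vp(K, v) for v in vps2], axis=1)   # 3 x n
    U, _, Vt = np.linalg.svd(D2 @ D1.T)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])          # keep det(R) = +1
    return U @ S @ Vt
```

With exact data, two non-parallel vanishing-point correspondences are already enough, which matches the constraint counting discussed next.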
We also normalize d and d' to unit vectors by simply dividing each by its norm. Since d and d' can both be computed from K and from v and v' observed in the images, and since, by the relation defined earlier, d' is simply R multiplied by d, the direction of this point at infinity seen from the first frame, every such pair of directions constrains the rotation between the two frames. A unit direction is a three-vector with only two independent values, so the observation of one corresponding vanishing point across the two views gives us two independent constraints, while an arbitrary rotation in general has three degrees of freedom, parameterized for example by the Euler angles roll, pitch and yaw. As a result we need at least three such constraints to solve for the three unknowns of the rotation matrix, and these can be obtained from two corresponding vanishing points: v1 and v1' give one pair of directions, and a second corresponding vanishing point v2 and v2' gives another, and together they let us solve for the unknown rotation. Let's look at another application of vanishing points, one we have in fact seen many times. A vanishing point is also just a point in the image, so if I have two vanishing points v1 and v2 in an image and I know the camera intrinsics K, I can recover the directions of the two corresponding scene lines and figure out the angle θ between them from the familiar equation cos θ = v1^T ω v2 / sqrt((v1^T ω v1)(v2^T ω v2)), where ω = (K K^T)^{-1} is the image of the absolute conic. Since a vanishing point corresponds to a line going off to a point at infinity, these are real scene lines, so we can use this to figure out the angle between two real scene lines; a small sketch of this computation is given below.
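A direct transcription of this angle formula, assuming v1 and v2 are the homogeneous pixel coordinates of the two vanishing points:

```python
import numpy as np

def angle_between_scene_lines(K, v1, v2):
    """Angle (degrees) between the two scene directions whose vanishing points are
    v1 and v2, using the image of the absolute conic omega = (K K^T)^{-1}."""
    omega = np.linalg.inv(K @ K.T)
    c = (v1 @ omega @ v2) / np.sqrt((v1 @ omega @ v1) * (v2 @ omega @ v2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```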
What I have said so far assumes that the vanishing points are known: together with the known intrinsics we can use them to compute the relative rotation between two camera views, and we can use them to compute the angle between two scene lines. But the problem is that computing the vanishing points themselves is not trivial, because it is a chicken-and-egg problem. Suppose we are given a set of lines that we know are parallel in the scene; when projected onto the image they will not be parallel, and they will all converge at one single point, which is the vanishing point. If we knew where the vanishing point is, it would be easy to identify which set of lines corresponds to it, and the whole problem becomes trivial; on the other hand, if we knew which set of lines extracted by the line detector corresponds to one parallel family, we could pretty much compute the intersection of all these lines and get the vanishing point. But both of these entities, the vanishing point and the assignment of lines to it, are unknown, and what makes things worse is that the detection of the lines is often not perfect: as in this example, the detected lines might not intersect exactly at one single vanishing point, because the imaging device, the camera's photosensor, is a real device that can be corrupted with noise, so the measurements are never perfectly accurate. We therefore have to find both simultaneously. I shall not go too much into the detail of how to compute the vanishing point, but I will give you some references. Now let's look at the definition of the vanishing line in a more formal way. Suppose we have a set of parallel planes in the 3D world; these parallel planes intersect the plane at infinity in a common line, the line at infinity L∞ of that family of planes. If we capture an image, the projection of this intersection line onto the image is what we call the vanishing line. Geometrically, the vanishing line can be seen as the intersection of the image plane with the plane that contains the camera centre and is parallel to the scene planes. From this geometry we can also see that the vanishing line depends only on the normal direction of the plane, together with the camera centre and image location, and is independent of the position of the plane in the real world, because any plane sharing the same orientation, wherever it is, intersects the plane at infinity in the same single line, and this line is projected to the same vanishing line in the image. Hence a vanishing line depends only on the orientation of the scene plane; it does not depend on its position. Moreover, since a line parallel to the plane intersects the plane at infinity on that same line, the vanishing point of any line parallel to a plane must also lie on the plane's vanishing line. What this simply means is that, say, I have the plane at
infinity and i'm have a plane which i call pi over here so if i extend this to infinity this is going to essentially intersect at the line at infinity so if i have any line here if i have any parallel line that this set of parallel lines that's parallel to this particular plane or it could be in other directions as well but let's look at first look at this direction so what it simply means is that this is going to intersect at one point that sits on the line at infinity and this set of like another line is actually octagonal to the normal direction of the plane so this might intersect at another ideal point here but it's still going to lie on the line at infinity and all the parallel lines all sets of possible parallel lines are octagonal to the normal direction of the plane the the intersection and infinity must all lie on this l at infinity and hence the projection of this line at infinity will form the vanishing line so this means that all the sets of parallel lines that is octagonal to the normal direction of the plane in the ideal point that is reprojected onto somewhere on the vanishing line now let's look at three cases where a known vanishing point as well as a known camera matrix can be used to find out the information about the scene plane in the first case we will see that the plane's orientation relative to the camera can be determined directly from the vanishing line as well as the known camera intrinsic value so early on in the lecture we have seen that uh given a line on the image plane it back projects to a plane which i denote by n over here and the relation between this is given by the back projection is n transpose of the camera matrix intrinsic multiplied by l so this essentially means that i can get the expression in terms of the line which is equals to k inverse transpose now suppose that this l here is our vanishing line if i know that this l here is the vanishing line of planes that are perpendicular to l this means that i'm interested in this particular plane here which is perpendicular to l over here then we can conclude that the plane with the vanishing line has an orientation n equals to k transpose l which is given by the back projection of this line over here that we have seen earlier on in the lecture in the second case here we can see that a plane can be metrically rectified given only its vanishing line this is because since the plane normal direction is known from the vanishing line which is what we have seen earlier on so suppose that uh we have a the vanishing line over here and now we know that if this is the camera center the plane that is parallel in the world scene do this so they are all defined by this normal vector over here and we know that this is this same plane over here it's going to intersect at the vanishing line v this vanishing line which we denote by l over here okay so once we know this and the camera in 26 k we pretty much can obtain this normal vector over here which is given by the back projection of l and this is equals to k transpose of l which both are known so once we know this here what we can do here is that because this is a plane this is a mapping of a plane to another plane so or which is the image plane so we know that the two must be related by a homography h so what we can do here is that we can simply compute a homography that remaps this normal vector here into frontal polar that means that it has to point perpendicular to the image and we can actually walk the scene into a frontal panel scene of a new target image over 
here the last case that we will look at on the use of a vanishing lines is that it can be used to determine the angle between the two sim planes so suppose that we have uh two syn planes over here one is denoted by n one and then we have another uh sim plane which is denoted by another normal vector here so the back projection of l1 is going to be denoted by the normal vector of the plane which is parallel to the first sim plane and l2 here is going to be n2 the back projection which is going to be parallel to this second plane over here so what we can do here is that we can make use of the uh knowledge of this because we know n1 is equals to k transpose l1 n2 equals to k transpose l2 we can make use of this to form a dot product and hence we can get the angle between this two normal vector and this would be the final expression that can be derived i'll leave it to you to uh to prove this and uh next we'll look at how to compute the vanishing lines in the case of the vanishing line well it will be easier than computing in the vanishing points computation of the vanishing lines would also require us to have a knowledge to first find what's the location of the vanishing points at least two vanishing point because the vanishing line would be simply the uh defined by the cross product of uh these two so l here would be equals to v1 cross v2 that we have seen earlier on in the first lecture finally let's look at some relationships between the vanishing points and lines especially when they're octagonal if i have a sim plane that intersects with vanishing line that is projected onto the image at this direction and if i have another vanishing point which is uh which comes from the scene line that's octagonal to this sim plane this particular vanishing point let me call this uh l this vanishing line of this plane and the vanishing point of the parallel lines that is octagonal to this sim plane is going to define a vanishing point on the image and i know that the set of parallel lines uh it's octagonal to the sim plane which defines the vanishing point and the vanishing line respectively and we know that any vanishing point that lies on this vanishing line which is perpendicular to the first vanishing point which we call let's say we call this v1 and any vanishing point that we call v2 that is defined by the line that is parallel to the to the the sim plane over here uh it's going to be perpendicular and this is going to be related by the relation here because uh we saw that the dot product of the two directions is going to give us cosine 90 degree and that would be equals to zero and we we can derive this simply from the uh from the this this kind of equations that we have seen many times in the in in the lectures but this is the relation between two octagonal vanishing points suppose i look at the one of the vanishing point that is octagonal to the vanishing line we have also see that uh earlier on that this relation defines a po polar relation because we can see that vanishing point and the vanishing line they're going to be related by the image of the absolute conics so we can bring this guy here uh the inverse of this guy which is simply equals to the dual of the image of the absolute connects we said which you have seen earlier on and uh to to give this particular uh relation and in fact the derivation of this is similar to the the popola relation that we have seen earlier on in the lecture in the case where two vanishing lines are perpendicular to each other that means that i have two planes 
in the image the one is n one and then the other one is uh n2 for example these two planes is going to define two vanishing lines on the image which is defined by l1 and l2 and we know that since this is the projection of two lines at infinity which are octagonal to each other and the projection of this would be simply defined by this relation of the dual image of the absolute conics here which is uh simply given by this equation here uh when we because they are the two lines l1 and l2 are octagonal so we are equating this to cosine 90 which is equal to zero and hence we get this relation over here so finally we are going to uh having defined all the propola relations as well as the octagonality relation between a vanishing point and vanishing lines on vanishing point and another vanishing point we're going to look at an example where we can use the knowledge of the vanishing line as well as the vanishing point to do a fine or 3d measurement in particular we are going to uh we are going to look at a case where we can measure relative length of vertical segments of lines in the 3d world scene given the fact that we know where is the vanishing line and a vanishing point that is orthogonal to this vanishing line so what this means is that if i have plane here that is given by the normal vector n this plane is going to intersect the plane infinity at the line l infinity so this is going to intersect plane infinity and this line projects to this vanishing line over here which i denote as l and now i'm going to define another set of lines that is paler to the octagonal direction of this plane and this is going to intersect the plane at infinity at a point at infinity which i call uh capital x infinity and this is going to be projected onto the image at the vanishing point which is octagonal to this vanishing line so given to these two entities suppose that i have two segments on the ground plane starting from the ground plane i'll be able to measure the relative height that means that i'm going to be able to take the ratio of this guy over this guy and we'll see how this can be done so let's denote the vanishing line of the ground plane as l so this ground plane here that we see is going to intersect the plane at infinity and the infinite line this is going to project back onto the image as a vanishing line which is denoted by l over here and then these parallel lines they're going to converge at the point infinity which is perpendicular to the line at infinity that we have saw earlier on and this infinite point is going to reproject onto the image at the vanishing point which is denoted by v over here suppose that we know this we are given l and v and then we are going given two segments that this rested on the image plane so we are going to define these two end points the the two pairs of endpoints of these two lines l1 and l2 over here as b1 t1 b2 and t2 given the the endpoints denoted as small t 1 t 2 and small b 1 and v 2 on the image so these are the four points that we are given based on this observation v l t 1 t 2 b 1 and v 2 we want to compute the relative ratio of the l2 and l1 over here the first step to do this would be to first compute the vanishing point on the vanishing line where the vanishing point is actually given by the intersection of the line that joins b1 and b2 so i want to find this line as well as the line that the parallel line that is given by the starting from t1 intersecting at this point which we are going to call t1 tilde over here since they are both parallel to 
the ground plane, they intersect at an ideal point that rests on the line at infinity created by the intersection of the ground plane with the plane at infinity, and we call the image of this intersection point u. It lies on the vanishing line, and it follows easily from the construction: b1 × b2 is the image line joining the two base points, and crossing it with the vanishing line, which is given to us, yields the point u = (b1 × b2) × l. Now let's transfer t1 and figure out the projection of the transferred point t1~ in the image. This is easily obtained from the line defined by t1 and u, which are both already known, so that line is t1 × u, crossed with the line l2 of the second segment, where l2 is simply given by b2 × v with v the vanishing point, which is also known; so t1~ = (t1 × u) × (b2 × v). Once these are found, let's look at just the one line of interest: it now carries four known points, b2, t1~, t2 and v. We can denote them by one-dimensional coordinates 0, t1~, t2 and v, the positions of these points measured along the line, and the mapping between these image coordinates and the true scene coordinates is a 1D projective transformation, because everything is defined on a single line. So we want to figure out the 2x2 homography that maps the image line to the scene line: since the first point b2 is the origin, it should stay fixed at the origin, that is, map to itself, and since the last point is the vanishing point, the homography should map v to an ideal point, which we denote (1, 0), the zero meaning a point at infinity. If you want a 2x2 matrix that maps the origin to itself and maps the vanishing point v to (1, 0), then H = [[1, 0], [1, -v]] is a perfectly viable solution. Once we have found this 2x2 homography between the image line and the scene line, we can use the same homography to map t1~ and t2, which also rest on the line, into the scene frame, up to a certain scale, by applying [[1, 0], [1, -v]] to (t1~, 1) and to (t2, 1). A small numerical sketch of the whole construction, from u and t1~ down to the final length ratio, is given below.
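Here is a compact numpy sketch of the whole construction; the explicit signed 1-D coordinate along the image line and the function name are my own choices for the sketch, and it assumes the vanishing point v is a finite image point and that all inputs are homogeneous image coordinates.

```python
import numpy as np

def length_ratio(l, v, b1, t1, b2, t2):
    """Ratio d1/d2 of the scene lengths of the segments (b1,t1) and (b2,t2),
    both standing on the ground plane, given the ground-plane vanishing line l
    and the vertical vanishing point v (all homogeneous image coordinates)."""
    u = np.cross(np.cross(b1, b2), l)                  # u = (b1 x b2) x l, on the vanishing line
    t1_t = np.cross(np.cross(t1, u), np.cross(b2, v))  # t1 transferred onto the line through b2 and v

    o = b2[:2] / b2[2]                                 # 1-D coordinates along that line, origin at b2
    e = v[:2] / v[2] - o
    e = e / np.linalg.norm(e)

    def s(p):                                          # signed coordinate of a point on the line
        return np.dot(p[:2] / p[2] - o, e)

    s1, s2, sv = s(t1_t), s(t2), s(v)
    # the 2x2 homography [[1, 0], [1, -sv]] fixes b2 and maps v to the ideal point (1, 0);
    # the resulting affine coordinates are proportional to the true scene distances
    return (s1 / (s1 - sv)) / (s2 / (s2 - sv))
```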
Applying the homography to the first point, (t1~, 1), gives (t1~, t1~ - v) up to scale, and applying it to the second point, (t2, 1), gives (t2, t2 - v). Since the base point is the origin, the resulting affine coordinate t1~/(t1~ - v) is proportional to the scene distance d1 from the base to the transferred top point, and t2/(t2 - v) is proportional to d2, so the ratio d1/d2 is simply the ratio of these two expressions in the coordinates we have just found. Now we can make use of this knowledge to measure the height of people in the scene. Suppose we know a fixed vanishing line of the ground plane, which we denote as l, and we know the endpoints of some fixed reference segment; we just need one such segment, say b1 and t1 as before, for example the height of a cabinet, which we can measure once and keep fixed because it will not change. If we then observe a person in the scene, we can draw a line from where the person touches the ground to the head of the person, observe its image endpoints b2 and t2, and use the same technique to find the ratio of the lengths: if the reference length is d1 in the real scene and the person's height is d2 in the real scene, we can compute the ratio d2/d1 from the known vanishing line as well as the vanishing point; we do need the vanishing point, which can easily be computed as the intersection of, say, these two lines, giving a v that probably lies outside the image. Knowing the reference height and this ratio computed from the known vanishing point and vanishing line, we can then determine the actual height of the person in the scene, as the two examples shown here illustrate. We have now seen several constraints given by vanishing points and vanishing lines, and this table gives a good summary of all these relationships and of how they can be used to calibrate an unknown camera. In comparison with the previous examples, where we made use of a known calibration matrix together with vanishing points and vanishing lines to determine scene geometric properties, let's now say K is unknown; we can figure out K through the conic. We saw earlier that two orthogonal vanishing points are constrained by v1^T ω v2 = 0; since this is a one-by-one equation it gives one constraint, and we can use at least five of these constraints to compute ω, which can then be decomposed into the K matrix. Similarly, we can also make use of the orthogonality relation between a vanishing line and a vanishing point to define another equation, which is essentially obtained from the pole-polar equation shown over
here and which means that the cross product of this l crossed with omega v will be equals to zero and this is linear so each one of these will give us two constraints and we can make use at least three of this line and point relation to define uh the set of constraints which we can stack into the relation of a w equals to 0 and solve for w where a here is derived from l the known l and v so similarly we have seen from the calibration example that with the three squares we can define we can compute the homography and then each square gives us two constraints so all together we need to image three square concurrently in order to give us six constraints that solve for the five unknowns in omega so further knowledge can also be obtained if we know that the camera intrinsics contains zero skew this means that the s here is zero so in this particular case here we if you uh take this and multiply it into omega we can see that w one two and w two one equals to zero so this will give us a further one more constraint on the equations over here and if we further make the assumption that this is a square matrix which means that the focal length in both directions are equal and there's a zero skew then we can further make another set of equations over here another constraint so by doing this we have all together two constraints which can use at least together with other combinations of the vanishing points or lines the octagonality constraint or the homography we can make use of this to calibrate find all the unknowns in the in the camera intrinsics and as i have explained earlier on so the we can make use of any combinations to get the internal constraints rearrange this omega into a six by one vector since only the first six elements here are unique because this should be a symmetric metric and since all these constraints here gives us a linear constraint so any of the constraint can be rearranged into a dot product of a transpose multiplied by w equals to zero as long as we have a as long as we have more or equals to five constraint because there are five unknowns here in this con image of the absolute conic and as long as we have more than five we can restack this into a w equals to zero and use it as vd to solve for uh the unknown w where once we solve for the unknown w we can use uh kolovsky factorization to factorize this guy here into the k matrix over here and hence we'll be able to determine uh since the kolovsky decomposition will give us a lower or upper triangular matrix which corresponds to the k matrix we can determine exactly what are the entries in the camera intrinsic value so in summary what we have looked at in today's lecture is that we have a look at the actions of the camera projections on plane lines conics as well as a quadric the forward and backward projection and then we have explained the respective effect on the images on the images for a fixed camera center increased focal length as well as pure rotation then next we have look at the definition of the image of absolute conics and then we look at how to make use of this to do camera intrinsic calibration finally we define vanishing point and vanishing line and then use them to find the geometric properties of the scene as well as the camera thank you |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_10_Part_1_StructurefromMotion_SfM_and_bundle_adjustment.txt | hello everyone, welcome to the lecture on 3D computer vision. Today we are going to talk about structure from motion and bundle adjustment. Hopefully by the end of today's lecture you will be able to describe the pipeline of large-scale 3D reconstruction, which consists of three parts, namely data association, structure from motion, and the dense stereo algorithm; in today's lecture we will look at data association and structure from motion in more detail, and we will look at the dense stereo algorithm in next week's lecture. You should be able to explain the use of robust two-view geometry and the bag-of-words algorithm for data association; next, we will look at how to use two-view geometry, the PnP algorithm and the linear triangulation algorithm, which we learned in previous lectures, to initialize the 3D reconstruction; and finally we will look at how to apply iterative methods such as Newton, Gauss-Newton and the Levenberg-Marquardt algorithm for bundle adjustment. Of course I did not invent any of today's material: I took most of it from the tutorial on large-scale 3D modelling conducted at CVPR 2017, and here is the link to the slides and the content of that tutorial; I strongly encourage every one of you to look at those slides after today's lecture to reinforce your understanding of large-scale 3D reconstruction. I also took a lot of the material on bundle adjustment from Richard Hartley and Andrew Zisserman's textbook Multiple View Geometry in Computer Vision, in particular Appendix 6. More information on bundle adjustment can be found in the paper by Bill Triggs and colleagues, Bundle Adjustment: A Modern Synthesis, from 1999, and if you are interested you should look at Chapter 11 of the textbook An Invitation to 3-D Vision by Yi Ma and colleagues to find out more about the general pipeline of 3D reconstruction as well as bundle adjustment. Now, the problem of large-scale 3D reconstruction can be cast as follows. Suppose we are given a set of images denoted I1 to In. The objective is to first find the motions of the cameras that took these images: given a world reference frame Fw, we first want to figure out the camera projection matrices P1, P2, ..., Pn, and we have seen in earlier lectures that we can do this using two techniques, relative pose estimation and absolute pose estimation. Once the motion of the cameras, that is, their camera projection matrices, has been found, the next thing we want to do is recover the 3D structure from the image correspondences and the camera projection matrices, by triangulation: for example, if I have a correspondence between two views and I know P1 and P2, we saw in an earlier lecture that linear triangulation gives the corresponding 3D point. Today we are going to put everything together and see how to do this for n views, that is, estimate the motion of all the cameras and run the linear triangulation algorithm to get all the 3D points that are seen across every single image in this
collection of images in the case where we are just interested in recovering the 3d points that corresponds to this image correspondences then this would be known as the structure for motion which we call sfm and in fact the name implies that the steps over here because we are finding the motion of the cameras first before we find the 3d structure hence the name structure from motion and in like next week's lecture we also look at the case where we are interested in recovering the 3d point of every single pixel in the collection of all the images as a result we will get a dense 3d reconstruction and that would be called the dense stereo algorithm which we will look at in more detail in next week's lecture as i mentioned earlier on there are three key parts of the pipeline for large-scale 3d reconstruction the first would be to do data association because we are given a set of images which are in unordered uh sequence that means that i give you i 1 i 2 all the way to i n over here but i do not know how are all these images related to each other so the first step would be to figure out whether they are related to each other geometrically in the sense that we want to suppose that i take any pair of the images from this collection of images over here let's say i denote this as i j i and i j over here i want to find out whether this these two images are seeing the same 3d scene or not so the first the the obvious thing to do here would be to first extract image correspondences and then compute the relative transformation or you are using the fundamental matrix or the essential matrix to see whether these two images are related if they are related this means that the number of image correspondences here after the robust estimation of the fundamental or essential matrix would be more than a certain threshold and the next thing that we want once we have established the image correspondences and the data association the next thing that we want to do would be to do the structure for motion as i mentioned earlier on is that given a pair of image over here we will first compute the pose which we have done uh computing the fundamental or the essential matrix now we can simply decompose the fundamental or the essential matrix into the camera projection matrix over here so we can decompose this into a camera projection matrix which means that we have obtained the motion of the camera with respect to a single world frame and this can be done using what i have mentioned earlier on here using the relative post estimation algorithm or once we have some the first two poses for example and after this we can do triangulation to get the 3d points so in in the future subsequent views where we add in more views we can actually simplify the 2d 3d correspondences then apply a pnp algorithm to find the pose of this or to find the camera projection matrix of this guy over here and that would be the absolute post-estimation algorithm and of course we'll use the linear triangulation algorithm that we have seen earlier on to do triangulation of the 3d points from the image correspondences and the posts of the images that we have found from the relative and absolute post-estimation algorithm so once we have completed this procedure the motion computing it from the relative for absolute post estimation algorithm and then followed by the linear triangulation algorithm to get all the 3d points in n views final thing that we want to do would be the bundle adjustment algorithm where we simply optimize over all the parameters in 
the camera projection matrices over all the views as well as the 3d points such that the reprojected 3d points onto the image the reprojection area of this is minimized and as a result from this structure from motion we will be able to get a sparse 3d reconstruction of the scene and next week we'll look into more detail that the final step to this large scale 3d reconstruction pipeline would be to do a plane sweeping algorithm or then stereo this is actually a dense stereo algorithm a multiview then stereo algorithm where we simply given the pulse of the camera we simply try to get to that map or the depth of every single pixel in every image so as a result because we are doing this for every single pixel in every single image so as a result what we will get would be a dense 3d model of the scene we'll look at this in more detail in the next lecture here's an illustration of the large-scale 3d reconstruction pipeline so as mentioned before we start off with a collection of unstructured image which we denote as i one all the way to i n so in the beginning here this is a collection of unstructured image what it means is that we do not know the relation between any of these images they are simply treated as independent collection of images in this in this unstructured collection of images over here then the first step that we need to do would be to do what we call the data association and as i mentioned earlier on is that uh that we will take every pair of images in this collection of unstructured images and apply the image correspondences extract the image image key points and establish the correspondences from this pair of images then we'll do a relative post estimation or a robust two view estimation using the ransec algorithm that we have seen earlier on and from here we will be able to establish how many uh in layer correspondences are there between the two image and if this set of image correspondences is more than a certain threshold then we conclude that there is an overlapping field of view between this pair of images and as a result what we will do would be to build what we call the scene graph over here where if there if the number of in-layer correspondences from the robust two-view geometry exceeds a certain threshold we'll add an edge between the two images and the notes of this graph the same graph are simply all the respective images from the collection of unstructured images here that we are given once we have established the same graph the next thing to do would be to do uh structure for motion and of course the first step that we want to do would be to estimate the camera poses the camera projection matrix p1 p2 p3 of every single uh camera view from the the collection of the images that are related in this scene graph we do that using the relative post estimation algorithm as well as the perspective endpoint algorithm that we have seen earlier on in the lectures and the and then once we get the poses of all the cameras from the scene graph from the post-estimation algorithm the next thing that we can do would be to do linear triangulation apply the linear triangulation algorithm on the image correspondences to obtain what we call the sparse 3d model over here it's sparse because we are extracting key points sparse sets of key points from the image to do linear triangulation so here we are in contrast to the dense modeling we are not doing it for every single pixel hence the number of 3d points that we will obtain would be a sparse set of a 3d point so in next week's 
lecture we'll look at how to recover a dense 3d model an example is shown here from the structure from motion pipeline so just very briefly now is that uh given a set of multiple images where we know how they are related to each other so denoted by p1 p2 the camera matrices p3 and p4 for example what we want to do here in this dance modeling would be to do to obtain the 3d depth of every single pixel in every one of these images so as a result we will get a dense 3d model from from here so an illustration of the structure from motion algorithm is shown in this figure over here we suppose that we are given i1 i2 and i3 over here three images so the first step that we need to do would be to establish the data association that these are these three views are related to each other and then the next thing would be to recover p1 p2 and p3 by doing the relative and the absolute pose estimation algorithm and finally we are from the set of sparse image correspondences we'll do apply the linear triangulation algorithm with the known camera projection matrices to get the sparse set of 3d points as illustrated in this particular figure and one thing to note is that all this camera poses the camera projection matrices as well as the 3d points there they are all expressed with respect to a consistent world reference frame which we call fw over here here are some examples of the structure from motion the first example here is taken from open source pipeline which is called core map this is a open source structure for motion pipeline where you can actually input all the images and it will output something like this where each red point over here denotes a camera post and the colored point clock over here denotes the structure that is being reconstructed from the structure from motion pipeline and here's an example of the city hall in san francisco which is in done in an earlier work that i was involved in it was published in cvpr 2016 and here's an earlier work by samir aragaw and noah's naval where they reconstruct the the colosseum of italy you can see that every single camera image uh and the post is respective pose with respect to a 3d world frame a fixed world frame which called frb over here it's it's illustrated in this figure over here and uh the three reconstructed 3d point is simply the 3d model that we want of the colosseum now the first step in logical 3d reconstruction as we have mentioned earlier on would be to do data association so starting from a collection of unstructured images as illustrated in this figure over here the final thing that we want to get would be the scene graph where the note represents every single image in the collection of images that we are given and all the edges over here simply represents that the pair of image that is being connected by this particular edge over here has overlapping field of view and the first step over here would be to establish what we call the connected components so a connected component simply means that uh all this the whole set of uh images that's in the in a connected component has overlapping field of view and we'll make use of what we have learned earlier on in the earlier lectures the two view geometry to establish the connected component so uh we'll choose any pair of images either from the collection of unstructured images and we'll first extract key points uh corresponds the sift or key point correspondences and and then we'll establish the correspondences between this set of key points and followed by computing the fundamental 
matrix or the essential matrix with the RANSAC algorithm, and then checking the number of inliers to establish whether this pair should be linked or not. Here is the slide where I illustrate the three steps of data association. As mentioned earlier, suppose we are given the pair of images shown in this diagram. We first extract image keypoints; unfortunately I will not go into the details of keypoint extraction algorithms such as SIFT or ORB, and if you are interested you should refer to these two papers (the SIFT feature is a very famous image feature that has been used very extensively in computer vision). So the first step is to extract the image keypoints as well as their descriptors: every keypoint represents a single pixel location within the image, and every keypoint comes with a descriptor, which is usually 128-dimensional for SIFT, for example; you can think of this 128-dimensional vector as a signature or thumbprint for that particular location in the image, and we will use this descriptor to match across the two images. Suppose I extract the descriptor of a keypoint in the first image, which I call I1, and the descriptor of a keypoint in the second image I2, also 128-dimensional if I am using SIFT; I compare them, for example with a dot product or descriptor distance, which tells me whether they are similar to each other, and if they are similar I declare a match, and in this way I establish the correspondences illustrated in this figure, where a line simply means that the two keypoints it joins are declared a putative image correspondence in my pair of images. We all know that, since these image correspondences are obtained purely based on appearance, without using any geometric information up to this step, relying on visual appearance alone will give us outlier matches: some of the matches, like the one in this example, are not correct; visually we can tell the correct match should be somewhere else, but the SIFT keypoints and descriptors gave us a wrong match here. So the next thing we want to do is apply robust two-view geometry estimation to perform what we call geometric verification: given the putative set of correspondences between I1 and I2, which I denote as xi and xi', some of which might be wrong, I run RANSAC with the fundamental matrix algorithm or the essential matrix algorithm, depending on whether the camera is calibrated, meaning I know the intrinsics K, or uncalibrated, meaning I do not know them. A minimal sketch of this putative matching followed by RANSAC verification is given below.
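This sketch assumes OpenCV (cv2) with SIFT available; the 0.8 ratio test, the 3-pixel RANSAC threshold and the inlier cut-off of 30 are illustrative values, not numbers from the lecture.

```python
import cv2
import numpy as np

def verify_pair(img1, img2, min_inliers=30):
    """Putative SIFT matching followed by RANSAC fundamental-matrix verification;
    returns True if this image pair deserves an edge in the scene graph."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # putative correspondences from descriptor distance only (appearance, no geometry)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.8 * p[1].distance]
    if len(good) < 8:                                   # need at least 8 points for F
        return False

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # geometric verification: robust fundamental matrix + inlier count
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    return F is not None and mask is not None and int(mask.sum()) >= min_inliers
```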
We can choose either of these two-view geometry algorithms and apply RANSAC on the set of correspondences, and at the end we obtain the set of inliers with respect to either the fundamental matrix or the essential matrix model. From here we can see that, after applying the algorithm, we are able to distinguish the inliers, illustrated here as the green lines, from the outliers, colored as the red lines. Once we have this, we count the total number of inliers, and if this number exceeds a certain threshold, it means there are many correct image correspondences supporting the two-view geometry, i.e. the fundamental matrix or essential matrix that we computed. In other words, in addition to establishing the putative correspondences purely based on appearance, we are now applying geometry, since this is two-view geometry, to further determine whether the image correspondences are inliers or outliers. If there are more than a certain number of inliers, we say that the fundamental matrix or essential matrix is correct, which also implies that the two views are actually seeing the same common scene, where the same points are projected onto the two views as image correspondences. Once this is determined, i.e. the total number of inliers exceeds a certain threshold, we add an edge linking the two images in the scene graph. One problem with doing this is the exhaustive search through the pairs of images in the set of n images I1 to In: we have to take every pair out of the n given images to do the geometric verification, and this can become intractable when n is too large. The complexity of querying just one image, i.e. picking one image and querying it against all the other images in the unstructured collection, is of the order of n k^2, where n is the number of images in the collection and k is the number of keypoints, because for every pair of images we have to check through on the order of k^2 keypoint correspondences. So altogether, taking one query image out and querying it against the collection means n pairs, each with a cost on the order of k^2, hence a complexity of n k^2 per query image. Now let us look at an example of how computationally expensive this would be. Suppose we have 1000 SIFT features per image, so k = 1000, and suppose we have 100 million images in the database that we want to query against, so n = 10^8. Given one particular query image, querying it against every image in the database simply means that we have to compare against 100,000 million (10^11) database features, and if we further assume that it takes 0.1 millisecond to compare each pair of features between the query image and a database image, then one query image would take about 317 years to compute. Imagine if we were to exhaustively compare all the pairs of images in the database; it would take far longer still. The solution is to use what we call the bag-of-words image retrieval approach. The goal is that, instead of the brute-force way of comparing every single image correspondence between the query image and each database image, which incurs k^2 complexity per pair, we build an efficient tree-based search structure for matching the query image features against the image features of the whole database. I will now briefly go through the algorithm. The first step is that we are given a set of training images, which we call the training data; note that this set of training images can be any collection of images you have, for example images randomly crawled from the web. So suppose we have this set of images; the first step of the image retrieval algorithm is to extract keypoints and their respective descriptors from these training images and store them in the computer. Now, for illustration purposes, I will draw each keypoint descriptor as a two-dimensional point; of course, in reality, in the case of SIFT we have 128 dimensions, so each SIFT feature lies somewhere in a 128-dimensional space x1, x2, x3, and so on up to x128, but for the sake of simplicity I will plot the descriptors in a two-dimensional space x1 and x2. So for every training image we extract the keypoints and the descriptor associated with every single keypoint, consolidate all of them together, and obtain the final set of keypoint descriptors plotted out in the two-dimensional space x1 and x2. The next thing I will do is to perform hierarchical clustering, in particular a hierarchical k-means algorithm; I will not go into the details of the k-means algorithm, since you should have learned it in your undergraduate classes. Very simply, given the set of all the descriptors in this space, what I want to do is to hierarchically cluster them into k different clusters per level. We start from a root node; one choice is simply to compute the centroid of all the descriptors and assign it as the root node.
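The hierarchical k-means (vocabulary tree) construction described here can be sketched as a short recursion; this version uses scikit-learn's KMeans, and the Node class, branch factor and depth are illustrative choices, with the inverted_index field reserved for the database indexing step described further below.

```python
import numpy as np
from sklearn.cluster import KMeans

class Node:
    def __init__(self, center):
        self.center = center      # cluster centroid in descriptor space
        self.children = []        # empty for leaf nodes (the "visual words")
        self.inverted_index = {}  # image_id -> count, filled when indexing the database

def build_vocab_tree(descriptors, branch_factor=10, depth=6):
    """Recursively cluster training descriptors into a vocabulary tree."""
    root = Node(descriptors.mean(axis=0))   # root = centroid of all descriptors
    _split(root, descriptors, branch_factor, depth)
    return root

def _split(node, descriptors, b, levels_left):
    if levels_left == 0 or len(descriptors) < b:
        return  # this node stays a leaf
    km = KMeans(n_clusters=b, n_init=4).fit(descriptors)
    for c in range(b):
        child = Node(km.cluster_centers_[c])
        node.children.append(child)
        _split(child, descriptors[km.labels_ == c], b, levels_left - 1)
```

Calling build_vocab_tree on the stacked training descriptors with branch_factor=3 and depth=3 would give exactly the three-level, three-way tree used in the lecture's illustration.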
Then, in the second level: suppose that in this particular hierarchical k-means tree I define three levels, and for each level I perform a k-means with k equal to three. This means that, starting from the root node, which I can choose as the centroid of all the features, at the next level I look at all the features and perform a k-means, i.e. I split them into three sets, each assigned a centroid; in this example the second level is therefore represented by these three nodes, which are the means of the three clusters that were computed. The final level consists of the leaf nodes: for each cluster in the second level, represented by its centroid, I further split it into three clusters using the k-means algorithm, so I get one, two, three clusters, each represented by a centroid node in the leaf level of this three-level tree; similarly this cluster is split into three, and this one is split into three as well. As a result, what I get is a three-level, three-way k-means tree, and each descriptor in our original collection is referenced to the closest leaf node, which is its representative. Once we have built the k-means tree from the training data, we can discard the training data, which is no longer useful; what we keep for the image retrieval algorithm is simply the k-means tree, which you can think of as a representation of all the possible image descriptors that can be extracted from the training data. The next thing we need to do is to build what we call the inverted file index over the set of images from the database that we wish to query upon. For every one of these database images, we first extract the image keypoints, i.e. we apply the SIFT (or another feature) algorithm to extract all the keypoints from every single image in the database together with their descriptors. Then, for each image keypoint, we make use of its descriptor and search for the closest leaf node in the tree. Starting from a keypoint descriptor, which is 128-dimensional if we are using SIFT, we descend the tree: each node represents a centroid location in the actual 128-dimensional space, so at the first branch we check which of the children is closest to the query descriptor; if it is this one, we discard the subtrees of the other branches and all their descendants, and we further query which of this node's children is closest, and so on. If this particular node turns out to be closest to the query feature, we ignore the others and say that this feature has its closest match at this particular leaf node. Once we have found the closest leaf node for each image feature extracted from the database, we store the image ID together with the frequency count of the image features on each leaf node. What this means is that, since each descriptor comes from one of the images in the database, say database image number one, once this feature is passed through the k-means tree and this leaf node is found to be closest to it, we store the image ID, in this case image ID one, in that leaf node, and we also store the number of times that image ID one arrives at this particular leaf node. Suppose another keypoint extracted from database image number one, after querying, also ends up at the same leaf node as the first keypoint; then the stored image ID remains one, the first image in my database, and the count for this entry becomes two, because two of the keypoints from image ID one are closest to this particular centroid among the leaf nodes of the k-means tree. Here is an example to illustrate the building of the database. Suppose this is image number one in my database: I first extract all the keypoints and descriptors, and then pass every single keypoint descriptor through the k-means tree that I built earlier from my training data. Suppose this particular keypoint finds this leaf node to be the closest; I store the image ID there as well as the count of how many times keypoints from image number one in the database ended up in this particular leaf node. In this case the keypoints land on this node, this node and this node, so here the count equals one with ID one, and here the ID is one and the count is also one. But we can see in this example that two of the keypoints come to the same node; this can happen because each leaf node actually represents a cell, a cluster of space, in the 128-dimensional descriptor space, so some of the keypoints extracted from the image could lie inside the same cluster, in which case we say they all belong to that particular centroid, which is the case illustrated here. We have two keypoints extracted from this particular image, most probably from the two eyes, since they look visually similar, so they end up in the same leaf node of the k-means tree, and in this particular node we store an image ID of one and a count equal to two; in the other cases it is image one with a count of one. We do this for every single image in our database, and what we build up is an inverted file: for every single leaf node it contains the list of IDs of the database images that have features corresponding to that leaf node, and for each such image we also store the frequency, i.e. the number of its image keypoints that correspond to that leaf node. Once this is built, the final step is that, given a query image, we first extract the keypoints from this query image, and then for each of them we query for the visually most similar correspondences using the k-means tree and the inverted index file that we have built. The exact steps are as follows: for every single keypoint extracted from the query image, we pass it through the tree and find the closest leaf node; then we keep a counter per database image and increase it, because, as in the previous step, every leaf node is associated with a list of images together with a count of how many times each of those images contains an image keypoint that is visually similar to this leaf node. So we make use of the inverted index file: once a query keypoint reaches a leaf node of our k-means tree, we retrieve the whole list of database image IDs stored at that particular leaf node and increase the counters of those images, since they contain the image correspondences closest to that leaf node. For example, suppose this particular keypoint reaches its closest leaf node, and this leaf node contains image number one and image number three; given that our database consists of images number one, two, three and four, this means this particular keypoint is visually similar to features in images one and three, so I increase the counters for one and three respectively for this feature. Finally, once we have done this query for every single keypoint in the query image over the k-means tree built from the database, we can conclude that the database image with the highest counter is the visually most similar image to this particular query image, because it has the highest number of query features falling into the same leaf nodes as its own features. Here is an example: suppose we are given this query image; we extract the keypoints from it, and we make use of the existing k-means tree built from the database together with the affiliated inverted index file, as illustrated in this particular figure.
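Continuing the vocabulary-tree sketch above (same Node class), the indexing and voting just described could look roughly like this; the per-leaf inverted_index dictionary plays the role of the inverted file, and the 1/count normalization anticipates the frequency weighting discussed below.

```python
import numpy as np

def quantize(root, descriptor):
    """Descend the tree, always following the closest child centroid, and return the leaf."""
    node = root
    while node.children:
        dists = [np.linalg.norm(descriptor - c.center) for c in node.children]
        node = node.children[int(np.argmin(dists))]
    return node

def index_database_image(root, image_id, descriptors):
    """Build the inverted file: each leaf stores database image ids and frequency counts."""
    for d in descriptors:
        leaf = quantize(root, d)
        leaf.inverted_index[image_id] = leaf.inverted_index.get(image_id, 0) + 1

def query(root, descriptors):
    """Vote for database images that share leaf nodes (visual words) with the query."""
    votes = {}
    for d in descriptors:
        leaf = quantize(root, d)
        for image_id, count in leaf.inverted_index.items():
            # normalize by the stored frequency so a repeated, non-discriminative
            # feature in a database image is not over-counted
            votes[image_id] = votes.get(image_id, 0.0) + 1.0 / count
    # highest score = visually most similar database image
    return sorted(votes.items(), key=lambda kv: -kv[1])
```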
For every single keypoint that we extract from the query image, we pass it through the k-means tree. Suppose the first keypoint finds this particular leaf node to be the closest; we then look at the associated IDs in the inverted index file for that leaf node. Say the database consists of image number one, image number two, image number three and image number four; the first query keypoint is closest to this leaf node, and we see that this leaf node contains image number two and image number four from the database, hence we increase their counts by one. Then for the second keypoint we do the same thing, pass it through the hierarchical k-means tree, and suppose it comes to this particular leaf; we check the image IDs associated with this leaf node, and in this case image number one corresponds to it, so we increase its count by one, and image number two also receives a count of one. Then the third keypoint, suppose it comes to this particular leaf node; we do the same thing, and image number two receives a count of one and image number three receives a count of one. Recall that when we build the inverted index file, in addition to storing the image ID in each leaf node, we also store the frequency, the count of the image feature; for example the eye feature here is seen twice. So in the case of this leaf node, if I find a query keypoint that is closest to it, I normalize by the count: the count here equals two, so instead of adding one to the counter I add one over two, because this is a commonly seen feature in that particular database image and I do not want it to be double counted. In some implementations of the image retrieval algorithm, one simply divides by the count, the frequency with which this image feature is seen in the database image, because a feature seen too many times is not discriminative enough for that particular image. Now let us see an example of how this helps in mitigating the computational complexity of establishing the scene graph. Suppose we are querying an image against a database of 100 million images, the same case as the example we saw earlier, and suppose each query image has 1000 SIFT features, and similarly every image in the database also has 1000 SIFT keypoints, which we denote as k = 1000. In this example we further assume that we built a k-means tree with 10 branches and depth 6, which we denote as b and L: the number of branches is 10 and the number of levels is 6, so in total we have 10^6, i.e. 1 million, visual words. The number of comparisons we need is then simply k multiplied by b multiplied by L, which is equivalent to 60,000 comparisons for one single query image against the whole database. The reason is that for each query image I have k features, and each feature is passed through a tree with L levels and b branches, so at every level I have to compare with b branch centroids and I repeat this for L different levels, i.e. b multiplied by L comparisons per feature, which gives 1000 multiplied by 10 multiplied by 6. If we again assume that it takes 0.1 millisecond for a pair of feature comparisons, this means that each image query takes only six seconds to compute, which is so much faster than the 317 years we needed in the earlier example. In summary, image retrieval has a much reduced complexity compared to the brute-force, naive way of pairwise matching that we saw earlier: for one query image, instead of a complexity of n k^2, we now reduce it to roughly k multiplied by b and L. As a result, what we do is to first apply the image retrieval algorithm to eliminate the infeasible edges in the scene graph. More specifically, given the set of unordered, unsequenced images, which we denote as I1 to In, we first apply the image retrieval algorithm to figure out whether these images are linked to each other, i.e. whether they are visually similar to each other or not; as a result, we get an initial candidate scene graph computed from the image retrieval algorithm. If two images produce very few matches in the image retrieval step, i.e. the scores are all very low, then I conclude that there is no correspondence between them. The next step is that, because image retrieval is still based on visual appearance, for those links that were added to the scene graph after the image retrieval step we also have to apply geometric verification: for each such pair we use RANSAC to compute the inlier count, and if it is more than a certain threshold the edge remains in the scene graph, otherwise I will remove the edge from the scene graph |
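The two cost estimates quoted in this example can be checked with a few lines of arithmetic; the numbers below are exactly the ones assumed in the lecture's example (10^8 images, 1000 features per image, 0.1 ms per comparison, b = 10, L = 6).

```python
# Back-of-envelope check of the two query costs discussed above.
n = 100_000_000      # database images
k = 1_000            # SIFT features per image
t = 1e-4             # 0.1 ms per feature-pair comparison, in seconds

brute_force = n * k * k * t             # every query feature vs every database feature
tree_based  = k * 10 * 6 * t            # b = 10 branches, L = 6 levels per query feature

print(brute_force / (3600 * 24 * 365))  # ~317 years
print(tree_based)                       # ~6 seconds
```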
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_10_Part_3_StructurefromMotion_SfM_and_bundle_adjustment.txt | So after obtaining the 3D structures as well as the camera poses from the initial reconstruction, as mentioned in the earlier slide, the next thing to do is to apply bundle adjustment on these estimates to obtain a better 3D reconstruction. More specifically, we achieve this by minimizing the total reprojection error, which we denote as delta z_ij, and this can be illustrated using this particular figure. Suppose we are looking at one particular image in the multiview reconstruction seen in the earlier slides, and we denote the camera matrix of this image as P_i; suppose there is a reconstructed 3D point seen by this image, which we denote as X_j, and its corresponding 2D point x_ij is the point that was used to reconstruct X_j in the initialization step. Since X_j and P_i are estimated by algorithms such as the two-view geometry algorithm, the PnP algorithm and the linear triangulation algorithm, they contain uncertainty. This means that if we reproject the 3D point back onto the image using the estimated 3D point and the estimated camera pose, the reprojected point will not end up coinciding with the observed point x_ij: there will be an error, denoted by delta z_ij, which is the Euclidean distance between the observed point x_ij and the reprojected point, call it x_hat_ij, obtained from the reprojection function. The task in bundle adjustment is to minimize this reprojection error over all the views and all the 3D points reconstructed in the initial reconstruction, and we do this optimization over the camera poses as well as the 3D points. Mathematically this can be expressed as the cost function: minimize, over the camera projection matrices of all views and all the reconstructed 3D points, the sum of the squared norms of the reprojection errors, each given by x_ij, the 2D point observed in the image, minus the corresponding 3D point reprojected onto the image. Here pi represents the reprojection, which we compute by simply multiplying P_i and X_j, as we have seen in the earlier lecture, but we should not forget that this has to be normalized by the third element of the product in order to make it a valid homogeneous coordinate (x, y, 1). W represents a measurement error covariance that is used to weight this error function, and the cost function can be rewritten as the product of the error transposed, multiplied by the weight, multiplied by the error itself; this is simply a rewriting of the squared-norm expression, since everything inside the norm is a vector. Here we can rewrite the error function as g(p), where p is the variable to be optimized, as written in the argument; in the case of bundle adjustment, p is a (12n + 3m)-dimensional vector, because each camera projection matrix is a 3-by-4 matrix, as we saw in the earlier lecture, which means there are altogether 12 elements in it, so with n cameras we have 12n camera parameters, plus 3m point coordinates, where m is the number of points and each point has x, y and z. We use iterative estimation methods to optimize this cost function. Now let us rewrite the problem in a more general way. Suppose we are given a hypothesized nonlinear functional relation f(p), where p is the parameter vector to be optimized, lying in an M-dimensional Euclidean space, and x is our measurement vector, an N-dimensional vector. A measured value of x approximating the true value x_bar is provided, i.e. we can measure a value of x that approximates the true value x_bar, and we wish to find the vector p_hat that best satisfies the functional relation x = f(p_hat) - epsilon, where epsilon is the error between x and f(p_hat). In the case of bundle adjustment, the measured value of x consists of the image coordinates that we have observed, and the vector p that we want to find consists of all the camera poses as well as all the 3D points from the initial reconstruction; we want the particular vector that best satisfies this relation. We can rewrite the relation as epsilon = f(p_hat) - x, and since f(p_hat) is what we predict from the parameters we wish to optimize over and x is the measurement vector, this means we want to minimize the error between what we have predicted and what we have observed, denoted epsilon. We do this by defining another function g(p_hat) equal to half of the squared norm of this error function, which can be rewritten in vector form as simply half of epsilon transpose epsilon, where epsilon is the vector obtained from f(p_hat) - x. We will now look at four different methods to minimize the error function: in particular, Newton's method, the Gauss-Newton method, gradient descent, and the Levenberg-Marquardt algorithm.
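As a concrete picture of what epsilon(p) looks like in bundle adjustment, here is a minimal sketch that stacks all the reprojection errors into one residual vector; the data layout (lists of P_i, X_j and (i, j, x_ij) observations) is an assumption made for illustration, and the optimizer itself comes later.

```python
import numpy as np

def project(P, X):
    """Reproject a 3D point X (3-vector) with a 3x4 camera matrix P,
    normalizing by the third homogeneous coordinate."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def reprojection_residuals(cameras, points, observations):
    """Stack all reprojection errors x_ij - pi(P_i, X_j) into one residual vector.

    cameras:      list of 3x4 projection matrices P_i
    points:       list of 3D points X_j
    observations: list of (i, j, x_ij) with x_ij the observed 2D point
    """
    res = []
    for i, j, x_obs in observations:
        res.append(x_obs - project(cameras[i], points[j]))
    return np.concatenate(res)   # this is the epsilon(p) that bundle adjustment minimizes
```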
We start with Newton's method, and this requires us to define what is known as the Hessian matrix. Let p0 be the initial estimated value; recall that earlier we denoted p as the parameter vector we wish to optimize over, so p0 gives an error of g(p0). Since g is a nonlinear function of p, we can expand it around this operating point using the Taylor series: g(p0 + delta), where delta is a small change of p0, equals g(p0) plus the first-order derivative of g with respect to p multiplied by delta, plus delta transposed multiplied by the second-order derivative of g multiplied by delta, divided by two, plus higher-order terms that we ignore, i.e. g(p0 + delta) ≈ g(p0) + g_p delta + (1/2) delta^T g_pp delta. More specifically, the first-order derivative g_p is ∂g/∂p evaluated at p = p0, and similarly the second-order derivative g_pp is ∂²g/∂p² evaluated at p = p0; the first-order and second-order derivatives are called the Jacobian and the Hessian matrix, respectively. Now suppose we have a function f(x) that maps an n-dimensional vector into an m-dimensional vector, and we further assume that the first and second-order derivatives exist on the n-dimensional space; then the Jacobian matrix is the m-by-n matrix whose columns are the partial derivatives of the m-dimensional output f with respect to every single entry of the input: the first column is ∂f/∂x1, and so on up to ∂f/∂xn, with m rows because the output of f is an m-dimensional vector. In short, we can denote the Jacobian entries as J_ij = ∂f_i/∂x_j. Similarly, the Hessian matrix (for a scalar-valued function) is the n-by-n matrix of second-order derivatives with respect to every pair of entries of the vector x: ∂²f/∂x1², ∂²f/∂x1∂x2, and so on, or in short, every entry of the Hessian is H_ij = ∂²f/∂x_i∂x_j. After the Taylor series expansion, we seek the new point p1 = p0 + delta that minimizes g(p1). Expanding as before, g(p1) = g0 + g_p delta + (1/2) delta^T g_pp delta, where g0 is evaluated at p0, g_p is the first-order derivative, the Jacobian of g, and g_pp is the second-order derivative, the Hessian of g, and we seek the minimum of this expression with respect to delta. To achieve this, we differentiate the Taylor expansion with respect to delta and set it to zero: the terms that are independent of delta go to zero, the term linear in delta gives g_p, and the term quadratic in delta gives g_pp delta, so the whole thing becomes g_p + g_pp delta = 0. Shifting g_p to the other side gives g_pp delta = -g_p, which is an inhomogeneous linear equation of the form A x = b, so we can solve for delta using the inverse or pseudo-inverse of g_pp. Once we have obtained delta, we update the vector p in the next step by simply adding the delta found in this step, and we do this successively until convergence. A remark about Newton's method and the use of the Hessian: intuitively, what this method is doing can be visualized as follows. Suppose the variable we wish to optimize is a one-dimensional scalar p and this curve is our cost function g(p). At every operating point, say the first point p0, the method approximates the cost function locally by a quadratic curve and then moves towards the minimum of that quadratic, which gives the next point p1; there it approximates another quadratic at the new operating point, moves to its minimum to get p2, and so on, successively, until we finally reach the local minimum shown on this curve. What this means is that, since the Newton iteration is based on approximating the cost with a quadratic near the operating point, it works best when the estimate p is already close to the optimal point; it is further based on the assumption that the cost near the optimal point is quadratic, and once it is quadratic the method converges very fast towards the local minimum. But this assumption might be invalid, especially when p0 is very far away from the local minimum: far away, the cost might not be well approximated by a quadratic function, and hence the convergence can be very slow at the beginning. Another disadvantage of the Newton iteration is that the computation of the Hessian matrix is difficult, because it involves the second-order derivatives of the nonlinear function we are interested in. A way to mitigate the computational complexity arising from the Hessian matrix is to use the Gauss-Newton method.
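As a toy illustration of the Newton step delta = -g_pp^{-1} g_p, here is a one-dimensional sketch; the quadratic example is chosen so that a single step reaches the minimum, which is exactly the situation in which Newton's method works best.

```python
def newton_minimize_1d(g, dg, d2g, p0, n_iters=20):
    """Newton's method in one dimension: at each step fit a quadratic at the
    operating point and jump to its minimum, i.e. p <- p - g'(p) / g''(p)."""
    p = p0
    for _ in range(n_iters):
        p = p - dg(p) / d2g(p)
    return p

# Example: g(p) = (p - 2)^2 + 1 has its minimum at p = 2 and is exactly
# quadratic, so a single Newton step from any starting point lands on it.
p_star = newton_minimize_1d(lambda p: (p - 2) ** 2 + 1,
                            lambda p: 2 * (p - 2),
                            lambda p: 2.0,
                            p0=10.0)
```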
We have seen that the function g(p) is the squared-norm error of the cost, which we now write as epsilon(p) to make explicit that the residual is a function of p, where epsilon(p) is nothing but f(p) - x, with f the nonlinear function we are interested in. We have also seen that this squared norm can be rewritten as a dot product, and we add a factor of one half for convenience when differentiating the square, giving g(p) = (1/2) epsilon(p)^T epsilon(p). The first-order derivative ∂g/∂p is then g_p = epsilon_p^T epsilon; I will not go through the exact derivation, since it simply follows the chain rule of differentiation, which you should know by now. Here epsilon_p is the first-order derivative of epsilon: ∂epsilon/∂p = ∂(f(p) - x)/∂p, and since x is independent of p its derivative is zero, so all we are left with is ∂f/∂p, which is nothing but the Jacobian J of the nonlinear function f. In the bundle adjustment case, do not forget that this nonlinear function is simply the reprojection function that we denoted as pi earlier. We do the same for the second-order derivative: g_pp = ∂²g/∂p², and applying the chain rule gives g_pp = epsilon_p^T epsilon_p + epsilon_pp^T epsilon, where epsilon_pp = ∂²epsilon/∂p² is the second-order derivative of epsilon. Since this last term involves second-order derivatives, it is inconvenient and computationally expensive to obtain, bearing in mind that this is only one part of one iteration of the whole iterative optimization technique: if we had to compute the second-order differentiation, we would have to do it many, many times over all the iterations before convergence, which can incur a large amount of computation time. To mitigate this computational complexity, we simply ignore this higher-order term, and then we can write the second-order derivative, the Hessian that we saw earlier, as g_pp ≈ epsilon_p^T epsilon_p, which is simply J^T J, since epsilon_p, as we saw, equals the Jacobian of the nonlinear function f. What this means is that we can now approximate the Hessian matrix, which was computationally expensive to compute, by the product of the Jacobian with itself, which is much easier to compute since we only need the Jacobian. After substituting the second-order derivative of g, i.e. the Hessian of g, by J^T J, we obtain the equation J^T J delta = -J^T epsilon, where J^T epsilon equals the first-order derivative of g; this equation is known as the normal equation. We can also derive the weighted iteration of the normal equation by bringing in the covariance matrix: if we weight the error function with the inverse of the measurement error covariance, i.e. minimize (1/2) epsilon^T Sigma^{-1} epsilon (you should work this out by yourself), then taking the first and second-order derivatives and ignoring the higher-order term in the same way leads to the weighted normal equation J^T Sigma^{-1} J delta = -J^T Sigma^{-1} epsilon.
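Here is a minimal Gauss-Newton sketch built around the (unweighted) normal equation J^T J delta = -J^T epsilon; the Jacobian is evaluated by finite differences for simplicity, whereas a real bundle adjuster would exploit the analytic, sparse Jacobian of the reprojection function.

```python
import numpy as np

def numerical_jacobian(f, p, h=1e-6):
    """Finite-difference Jacobian of f at p (column k is d f / d p_k)."""
    f0 = f(p)
    J = np.zeros((f0.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = h
        J[:, k] = (f(p + dp) - f0) / h
    return J

def gauss_newton(f, x, p0, n_iters=10):
    """Minimize ||x - f(p)||^2 by repeatedly solving the normal equations
    J^T J delta = -J^T epsilon, with the Hessian approximated by J^T J."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iters):
        eps = f(p) - x                        # residual epsilon(p)
        J = numerical_jacobian(f, p)
        delta = np.linalg.solve(J.T @ J, -J.T @ eps)
        p = p + delta
    return p
```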
Here Sigma is the error covariance of the measurement x, and it is a symmetric and positive semi-definite matrix. J^T J is symmetric and positive semi-definite, since it is a matrix multiplied by its own transpose (and positive definite when J has full column rank), and if the covariance term is symmetric and positive definite as well, the product on the left-hand side is also a symmetric positive definite matrix. This guarantees that we can invert it: viewing the equation as A x = b, solving for x requires the inverse of A, and since the matrix playing the role of A is symmetric and positive definite, we can always compute its inverse to get x, which is delta here. Once we get delta, we bring it back to update the parameter vector p, so p_{i+1} = p_i + delta_i in every iteration, and we continue doing this until convergence. Another way to carry out the optimization is what we call the gradient descent method. Gradient descent is a very straightforward method: instead of computing the Hessian matrix together with the Jacobian, we simply compute the gradient of the error function g(p), which is given by epsilon_p^T epsilon as we saw earlier, and we use this gradient to descend towards the local minimum. You can think of it this way: we have an error function g(p), and consider a one-dimensional problem where the variable to be optimized is simply a scalar value p. As we saw earlier, in the Newton optimization technique we start from an initial operating point p0, approximate the cost there with a quadratic curve, move towards the minimum of that quadratic to get the next point p1, take p1 as the operating point, approximate again, descend to that minimum to get p2, and so on until we reach the minimal point. Gradient descent does the same thing but without the quadratic approximation; instead, it directly computes the gradient. In this one-dimensional picture, if this is our cost function g(p), then starting from p0 we compute the gradient of the curve at that point, scale it by a hyperparameter lambda that we define, and take a step in the negative gradient direction, i.e. towards the minimum; we take that point as the next p1, and successively we compute the gradient, scale it by this step-size hyperparameter, take a step in the negative direction, and move to new points, successively computing the gradient until we eventually reach a local minimum.
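A minimal gradient descent loop looks like this; the fixed step size plays the role of the scale factor lambda mentioned above and is purely illustrative.

```python
import numpy as np

def gradient_descent(grad_g, p0, step=1e-2, n_iters=1000):
    """Plain gradient descent: repeatedly move against the gradient of the cost.
    'step' is the lambda scale factor mentioned above; too large and the
    iteration diverges, too small and it converges slowly."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iters):
        p = p - step * grad_g(p)
    return p

# Example on g(p) = ||p - [1, 2]||^2, whose gradient is 2 (p - [1, 2]).
p_min = gradient_descent(lambda p: 2 * (p - np.array([1.0, 2.0])), p0=[10.0, -5.0])
```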
In comparison with Gauss-Newton and Newton, gradient descent is definitely much easier to compute, because we only compute the first-order derivative and move in its negative direction, the negative gradient of the error function. However, gradient descent itself is actually not a very good minimization technique. The reason is that we compute the direction of movement from the gradient alone, and the cost function might not be locally smooth, so by following the gradient we may move in a zigzag direction for many steps before reaching the optimal point, which leads to slow convergence. On the other hand, in comparison to Gauss-Newton or Newton, gradient descent can still work reasonably well when we are far away from the local minimum. We will now see how the Levenberg-Marquardt algorithm combines the two techniques, the Gauss-Newton technique and the gradient descent technique, so that it gets the best of both worlds. More specifically, the Levenberg-Marquardt algorithm is a slight variation of the Gauss-Newton iteration. We start from the normal equation derived when we looked at the Gauss-Newton iteration method, and in Levenberg-Marquardt we simply augment J^T J with lambda multiplied by I, where I is the identity matrix of the same size as J^T J, so we are adding lambda along the diagonal of this matrix, giving the augmented normal equations (J^T J + lambda I) delta = -J^T epsilon, and we iteratively refine, i.e. iteratively update, the lambda parameter over the successive iterations of the Levenberg-Marquardt algorithm. A typical value of lambda is initialized to 10^-3 times the average value of the diagonal elements of J^T J: after computing J^T J, which is a square matrix, we take the sum of its diagonal values divided by the number of diagonal entries, and multiply this average of the diagonal of the approximate Hessian by 10^-3 to get the initial value of lambda. After we have computed delta from the augmented normal equations, we use this delta to update the value of p, where the new p equals the previous value of p plus delta. Finally, once we have the updated p, we use it to compute the new error, since epsilon is a function of p given by the measurement minus the nonlinear function evaluated at the newly updated value of p, and we compare whether the error is reduced or increased relative to the previous step. In the case where the error epsilon is reduced, we divide lambda by a factor, typically 10, before the next iteration, and we also accept the new value of p estimated with the current lambda. On the other hand, if this lambda leads to an increase of the error, the increment is rejected: the newly computed p value is discarded, the p value at this iteration is taken to be the same as the p value in the previous iteration, lambda is multiplied by the same factor, and the augmented normal equations are solved again.
Now let us see why we make these adaptive changes to the parameter lambda. When lambda is very small, the method is essentially equivalent to the Gauss-Newton iteration: we can see this from the normal equation in the Levenberg-Marquardt algorithm, where, with lambda very small, the lambda I term becomes negligible and we simply get back the normal equation we saw earlier in the Gauss-Newton algorithm. Since we make lambda small when there is a reduction of the error (we divide lambda by a factor whenever the error decreases), this means that Levenberg-Marquardt behaves like the Gauss-Newton algorithm when the error is decreasing. This makes sense, because a reduction of the error in the cost function suggests that we are probably already getting closer to the local minimum, and, as mentioned earlier, when we are close to the local minimum Gauss-Newton works very well, under the assumption that the cost near the local minimum is increasingly well approximated by a quadratic, and it converges fast in that situation. Hence, when there is a decrease in the error, we make lambda small and turn the Levenberg-Marquardt step into a Gauss-Newton step, which ensures fast convergence. On the other hand, when lambda is large, which happens when there is an increase in the error (we multiply lambda by a factor, typically 10), the lambda I term outweighs J^T J in the augmented normal equation, and the equation essentially becomes lambda delta = -J^T epsilon, which is what we saw earlier as the gradient step of the gradient descent technique. So when there is an increase in error, we push the normal equation towards the gradient descent update, which is a sensible thing to do because, as mentioned earlier, when the estimate is very far away from the local minimum the Newton-type technique is not going to work well, since far away the cost is likely not quadratic at all, so Gauss-Newton is not going to work well there, but gradient descent will still work, except that it might move in a zigzag direction; in this case it is better for the algorithm to move in a zigzag direction than not to converge at all, so we simply increase lambda and turn it into a gradient descent step. In other words, the Levenberg-Marquardt algorithm switches between the Gauss-Newton iteration and the gradient descent approach depending on the error computed at each iteration. Indeed, we see that as lambda becomes increasingly large, the length of the increment step decreases, and eventually this will lead to a decrease of the cost function |
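Putting the pieces together, a minimal Levenberg-Marquardt loop could look like the sketch below; it reuses the numerical_jacobian helper from the Gauss-Newton sketch, the factor of 10 and the 10^-3 initialization follow the typical values quoted in the lecture, and everything else (names, iteration count) is illustrative.

```python
import numpy as np

def levenberg_marquardt(f, x, p0, n_iters=50, factor=10.0):
    """Minimize ||x - f(p)||^2, switching between Gauss-Newton behaviour
    (small lambda) and gradient-descent-like steps (large lambda).
    Uses the numerical_jacobian helper from the Gauss-Newton sketch above."""
    p = np.asarray(p0, dtype=float)
    eps = f(p) - x
    J = numerical_jacobian(f, p)
    lam = 1e-3 * np.mean(np.diag(J.T @ J))     # typical initialization of lambda
    for _ in range(n_iters):
        # augmented normal equations (J^T J + lambda I) delta = -J^T epsilon
        A = J.T @ J + lam * np.eye(p.size)
        delta = np.linalg.solve(A, -J.T @ eps)
        p_new = p + delta
        eps_new = f(p_new) - x
        if eps_new @ eps_new < eps @ eps:      # error decreased: accept, act like Gauss-Newton
            p, eps = p_new, eps_new
            J = numerical_jacobian(f, p)
            lam /= factor
        else:                                  # error increased: reject, act like gradient descent
            lam *= factor
    return p
```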
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_4_Part_2_Robust_homography_estimation.txt | Now, we have seen that the direct linear transformation algorithm minimizes the norm of Ah, where we can write Ah = epsilon and call epsilon the residual vector; hence the direct linear transformation algorithm aims to minimize the residual vector. Each point correspondence x_i and x_i' contributes a part of this error vector, a 2-by-1 vector, whose norm is called the algebraic error. Hence, by minimizing Ah we are minimizing what is known as the algebraic error or algebraic distance, which is the total residual error caused by all the point correspondences that form A. Let us denote this algebraic distance by d_alg(x_i', H x_i): after transferring the point x_i from one image to the other, we compute the algebraic distance between the transferred point and x_i', given by the corresponding part of the residual vector formed from the norm of Ah. Hence, given a set of correspondences, the total algebraic error for the complete set is simply the sum of the algebraic distances incurred by each point correspondence, which we can compute directly from the squared norm of Ah. The disadvantage of minimizing the algebraic distance is that the quantity is not geometrically or statistically meaningful at all: we are simply minimizing an error defined by this linear-algebraic form, and there is no geometric interpretation, i.e. we are not minimizing any physical distance in the image or in the 3D world. Nonetheless, it is a linear solution and thus unique: by minimizing the norm of Ah with respect to h we are always guaranteed a unique solution, and it is computationally inexpensive. Hence the solution based on the algebraic error is usually used as an initialization for a nonlinear minimization: the usual steps are that we first initialize the solution of H using the algebraic cost function, and then, since this initial solution obtained from the SVD and from this minimization is not going to be as accurate as what we will see next, i.e. minimizing a geometric cost, we use it as the starting point for a nonlinear minimization of a geometric cost function, which leads to a better, refined result; the nonlinear minimization gives the final polish to the solution of the problem we want to solve. The geometric distance in the two images refers to the difference between the measured and the estimated image coordinates. Let us first look at the transfer error in one image. Suppose I have the image that contains my observed point x_i', and I transfer x_i from the other image to this image via H. Because the estimated H is subject to noisy observations, the transferred point is not going to coincide exactly with x_i', since H was computed from noisy measurements; suppose the transfer lands on this point, which I call H x_i because it is the transfer of x_i via H. There is a certain discrepancy between the observed point x_i' and the transferred point, so we compute the Euclidean distance between x_i' and H x_i and use it as the transfer error: it is the Euclidean image distance, in the second image, between the measured point x_i' and the point H x_i transferred from the first image. We then use this error to estimate the homography, i.e. we want to estimate a homography such that the error between the observed point x_i' and the transferred point H x_i is minimized. In the previous slide we looked at the error incurred within one image when the point x_i from image 1 is transferred to image 2 via H; this is a one-sided, single-image transfer error. It makes more sense to minimize the error from both sides of the transfer, hence we call this the symmetric transfer error: instead of only transferring x_i from image 1 to image 2 via H and computing the distance, we also do it the other way around, transferring x_i' via H inverse onto the image of x_i, computing that distance, and using it as an error measure as well. This is the symmetric transfer error because we do it in both directions, and we can see there are two terms: the first term is the same as in the previous slide, where we transfer x_i to the image of x_i' and compute the distance d(x_i', H x_i)^2 as an error measure, and in the second term we transfer x_i' to the image of x_i via H inverse and compute d(x_i, H^-1 x_i')^2 as the error measure. The first term is the transfer error measured in the second image and the second term is the transfer error measured in the first image, and again the error is minimized over the estimated homography: we minimize the total symmetric transfer error between the two images, summed over all the points, with respect to H. There is a third way of defining a geometric distance as a cost function, which we call the reprojection error. Here we seek a homography that perfectly aligns a pair of subsidiary points, which we denote as x_i_hat and x_i_hat'. What this means is that, given an observed point x_i and its observed correspondence x_i', we seek a homography H_hat that transfers another point, x_i_hat, exactly to the point x_i_hat' that we desire, i.e. H_hat gives a perfect transfer: under this H_hat, the correspondence x_i_hat and x_i_hat' maps exactly, in contrast to x_i and x_i', which are corrupted with noise.
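For concreteness, here is a small sketch of the transfer and symmetric transfer errors for a given homography; the point representation (inhomogeneous 2D numpy arrays) is an assumption made for illustration.

```python
import numpy as np

def transfer(H, x):
    """Map an inhomogeneous 2D point x through a 3x3 homography H."""
    p = H @ np.array([x[0], x[1], 1.0])
    return p[:2] / p[2]

def symmetric_transfer_error(H, pts1, pts2):
    """Sum over correspondences of d(x', Hx)^2 + d(x, H^-1 x')^2."""
    H_inv = np.linalg.inv(H)
    err = 0.0
    for x, x_prime in zip(pts1, pts2):
        err += np.sum((x_prime - transfer(H, x)) ** 2)        # transfer error in image 2
        err += np.sum((x - transfer(H_inv, x_prime)) ** 2)    # transfer error in image 1
    return err
```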
unlike x_i and x'_i, which are corrupted with noise. The reprojection error is then computed as the distance between x̂_i and x_i as the first term and between x̂'_i and x'_i as the second term, subject to the constraint that Ĥ transfers x̂_i to x̂'_i exactly; minimizing this cost gives the homography that brings x̂_i and x̂'_i as close as possible to the measurements x_i and x'_i. What is interesting about the reprojection error is that it is more accurate than the symmetric transfer error and the algebraic error, because we optimize not only over the homography but also over x̂_i and x̂'_i, so that the final mapping between the two images is exact under the estimated homography. The problem is that, although it leads to a more accurate solution, it is also much more computationally expensive, because there are additional variables to optimize over; its complexity contrasts drastically with the simplicity of minimizing the algebraic residual norm ||Ah|| that we looked at earlier. There is an alternative that lies between the two errors, trading off computational complexity against accuracy: the Sampson error. It sits between the algebraic and the geometric cost functions in terms of complexity, but gives a close approximation to the reprojection error in terms of accuracy. Let us derive it. Let C_H(X) = 0 denote the cost A(h) = 0 viewed as a function of the point X, where X = (x, y, x', y')^T is the 4-vector of a correspondence; since A is built from X, we can rewrite the residual as a function of X and call it C_H. Further denote by X̂ the desired point, so that C_H(X̂) = 0, and let δ_X = X̂ - X be the error between the optimal point X̂ and the noisy observation X; this δ_X plays the same role as the geometric or reprojection error above. The cost function C_H can now be approximated by a first-order Taylor expansion: C_H(X + δ_X) = C_H(X) + (∂C_H/∂X) δ_X = 0, ignoring all higher-order terms, where X + δ_X is simply our X̂. The approximate cost function can therefore be rewritten as J δ_X = -ε, where J denotes the Jacobian ∂C_H/∂X and ε = C_H(X) is
the residual error, which we write as ε. Moving it to the right-hand side of the equation gives J δ_X = -ε, where J is the matrix of partial derivatives and ε is the cost associated with X. The minimization problem now becomes: find the vector δ_X that minimizes ||δ_X|| subject to the constraint J δ_X = -ε. This constraint has the form Ax = b and can be solved with the right pseudo-inverse of J, giving δ_X = -J^T (J J^T)^{-1} ε. The Sampson error is defined as the squared norm of this solution, δ_X^T δ_X = ε^T (J J^T)^{-1} ε, obtained by multiplying the expression by its own transpose; this is the definition of the Sampson error. For the 2D homography estimation problem, X is simply the 4-vector (x, y, x', y')^T of a correspondence, with (x, y) measured in the first image and (x', y') in the second, and the algebraic error vector A_i(h) is a 2-vector, so ε here is a 2-by-1 vector. Given this 2-by-1 ε, we compute the Jacobian by differentiating it with respect to every element of X, that is ∂ε/∂x, ∂ε/∂y, ∂ε/∂x' and ∂ε/∂y', which gives a 2-by-4 matrix J, since each of the four columns is a 2-by-1 derivative. The first entry J_11 is given by the expression on the slide; I leave it as an exercise to derive the full Jacobian, so that you can plug it back into ε^T (J J^T)^{-1} ε and get the Sampson error. So the Sampson error can still be computed in closed form. Next we look at iterative minimization. The geometric and Sampson errors are usually minimized as a squared Mahalanobis distance, a statistical distance that uses the error covariance of the measurement X when it is known; when it is not known we simply set the covariance to the identity. The cost is ||X - f(P)||^2, where X is the measurement (in our case the correspondences x_i and x'_i), P is the set of parameters to be optimized (in our case the homography), and f is a mapping function from dimension m to dimension n in general. This is an unconstrained continuous optimization, and f(P) here is the mapping by H applied to x_i,
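Below is a small sketch of the Sampson error for a single correspondence. The 2-vector residual uses the standard DLT-style algebraic error, and the 2-by-4 Jacobian, which the lecture leaves as an exercise, is approximated numerically here instead of analytically; both choices are mine, not from the lecture.

```python
# Minimal sketch of the Sampson error eps^T (J J^T)^-1 eps, assuming NumPy.
import numpy as np

def algebraic_residual(H, X):
    """X = (x, y, x', y'); returns the 2-vector algebraic residual A_i(h)."""
    x, y, xp, yp = X
    p = H @ np.array([x, y, 1.0])
    return np.array([yp * p[2] - p[1], p[0] - xp * p[2]])

def sampson_error(H, X, h=1e-6):
    eps = algebraic_residual(H, X)
    J = np.zeros((2, 4))
    for k in range(4):                           # numerical Jacobian d(eps)/d(x, y, x', y')
        dX = np.array(X, dtype=float)
        dX[k] += h
        J[:, k] = (algebraic_residual(H, dX) - eps) / h
    return eps @ np.linalg.solve(J @ J.T, eps)   # eps^T (J J^T)^-1 eps
```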
which maps x_i into an estimate of the observation x'_i, and we can rewrite the problem in this general form. This unconstrained continuous optimization can be solved with the Gauss-Newton or Levenberg-Marquardt algorithm, but we defer the details to lecture 10, when we talk about bundle adjustment and structure from motion. For the error in one image, the iterative minimization is formulated as follows: the measurement vector is made up of the 2n inhomogeneous coordinates of the points x'_i, since the error lives in one image only; the set of parameters to be optimized is h, the homography; and the mapping function is the linear transformation x_i to H x_i, where the coordinates of x_i in the first image are taken as fixed inputs. The objective is to minimize over H the squared distances between the measured and the transferred points, the geometric transfer error we saw earlier. For the symmetric transfer error it is almost the same as the single-image case, except for one extra term: the measurement vector is now the 4-vector formed by the corresponding points x_i and x'_i, the parameter set is again h, and the mapping function is defined using both H^{-1} and H, because we map in both directions, one term transferring into the first image and the other into the second, so the squared norm of the difference yields one term per image. For the reprojection error, the measurement vector consists of the inhomogeneous coordinates of the correspondences x_i and x'_i, the set of parameters to be optimized is h together with all the x̂_i, and the mapping function is the linear mapping x̂'_i = H x̂_i from one image to the other; the squared norm of the difference between the observations and the mapping gives the reprojection error from the previous slides, with two terms per correspondence. The x̂_i are the points that transform perfectly under the estimated homography and are themselves quantities to be estimated, so in this case X is a 4n-vector. Finally, for the Sampson error, the measurement vector is X = (x, y, x', y')^T for each correspondence, the parameter set P is h, and we directly set X - f(h) to δ_X, so the squared norm ||X - f(h)||^2 gives the Sampson error. Up to this point, when estimating the homography with the least squares and iterative methods from the earlier slides, we only assumed that the measurements are corrupted by measurement noise, and those algorithms are robust to such small perturbations in the measured
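As a hedged illustration of the iterative minimization just described, here is one way to refine a homography by minimizing the symmetric transfer error with Levenberg-Marquardt via SciPy; it reuses the transfer() helper from the earlier sketch, and the parameterization (the raw 9 entries of H, fixed afterwards by scaling) is a simplification, not the lecture's formulation.

```python
# Minimal sketch: refine H0 with Levenberg-Marquardt on the symmetric transfer error.
import numpy as np
from scipy.optimize import least_squares

def refine_homography(H0, x, xp):
    def residuals(h):
        H = h.reshape(3, 3)
        Hinv = np.linalg.inv(H)
        return np.concatenate([(xp - transfer(H, x)).ravel(),
                               (x - transfer(Hinv, xp)).ravel()])
    res = least_squares(residuals, H0.ravel(), method='lm')  # Levenberg-Marquardt
    H = res.x.reshape(3, 3)
    return H / H[2, 2]                                       # fix the overall scale
```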
correspondences. In reality, however, keypoint matching gives us many outliers: if I take the keypoints of these two images and match them across the two images, I end up with many spurious, outlier matches, shown in red here. These outliers can severely disturb the least squares or iterative least squares algorithms we have seen and lead to a severely wrong solution, so the outliers should be identified and removed before the data goes into the least squares estimation. Before I talk about RANSAC, there is actually a song on YouTube that describes the whole algorithm, what RANSAC is all about and how it eliminates the outliers so that we can estimate better parameters in their presence; if you understand what is going on in the lyrics, you have perfectly understood the RANSAC algorithm, which is what I am going to explain next. I will use the same line-fitting example as the song to illustrate the problem of outliers and how RANSAC can be used to eliminate them for a better estimate of the line. Suppose we are given n data points (x_i, y_i), for i = 1 to n, and the objective is to find the best-fit line, that is, the two parameters m and c of the line equation y = mx + c. Pictorially, I have the x and y axes and n points, x_1, y_1 up to x_n, y_n, and given these n points I want the best-fit line. One solution we have seen is least squares: for every x_i, we use the model m x_i + c to produce an estimate of y_i, and we estimate m and c so that the error between this predicted value and the observed y_i is minimized. We can write this as the cost function min over m and c of the sum over i of (y_i - (m x_i + c))^2, minimizing the total error between the predicted and the observed y. This is easy to optimize if all the points are inliers, that is, if the points lie close to the optimal line and are only subject to minor perturbations from noise. But if we have outlier measurements, points (x_i, y_i) that are far off from the optimal line we want to estimate, then because we are minimizing a squared norm, each such point gives a very large residual, and the minimization skews the estimate, pulling the optimal solution away from the true one. This illustrates how least squares fails under outliers: the outliers cause a large error in the squared norm and cause
the optimization to fail completely. We can resolve this with the RANSAC algorithm, which I will illustrate on the line-fitting example to walk through the RANSAC steps. In the first step we randomly select a minimal subset of points: the equation y_i = m x_i + c has two unknowns, m and c, so we only need two points (x_i, y_i) to solve for them, and the minimal subset is two points chosen at random. Suppose these are the two points selected; we fit a line through them by substituting (x_1, y_1) and (x_2, y_2) into y = mx + c, which gives two equations in the two unknowns m and c, and solving gives the hypothesized line from the two selected points. After computing this line from the hypothesis points, we compute, for every other point, the error as the shortest point-to-line distance, the perpendicular distance to the hypothesized line, shown as the green segments, and from these we obtain the error of every point other than the two selected as the hypothesis. Next we select the points that are consistent with the model: define a distance threshold, call it sigma_d. A point whose distance d_2 is less than sigma_d is selected as an inlier, and a point whose distance d_1 is greater than sigma_d is classified as an outlier. So from the randomly selected pair I collect all the points that fall within the threshold, shown as the green points. I then repeat this and keep a record of all the support points: I select multiple random pairs of points, and for each pair I compute the hypothesized line and the supporting points within the threshold. For every hypothesis I keep a record of the green points, which I call the support set, and compute its cardinality. Finally I select the hypothesis with the highest number of supporting, consistent points; those are my inliers, and I use them to compute a final least squares solution, ignoring all the outliers.
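Here is a minimal sketch of these line-fitting RANSAC steps in NumPy; the fixed trial count and threshold are placeholder choices for illustration.

```python
# Minimal sketch of RANSAC line fitting (y = m x + c), assuming NumPy.
import numpy as np

def ransac_line(x, y, n_trials=100, thresh=1.0):
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_trials):
        i, j = np.random.choice(len(x), 2, replace=False)   # minimal set: 2 points
        if x[i] == x[j]:
            continue
        m = (y[j] - y[i]) / (x[j] - x[i])                    # hypothesized line
        c = y[i] - m * x[i]
        d = np.abs(m * x - y + c) / np.sqrt(m * m + 1.0)     # point-to-line distances
        inliers = d < thresh
        if inliers.sum() > best_inliers.sum():               # keep the largest support set
            best_inliers = inliers
    # final least squares fit using the inliers only
    m, c = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return m, c, best_inliers
```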
Here is the summary of the RANSAC algorithm. The objective is to robustly fit a model, in the line-fitting case y_i = m x_i + c, to a data set S which contains outliers; for line fitting, S is the set of observations (x_i, y_i), i = 1 to n, corrupted with outliers. The algorithm is as follows. First, randomly select a sample of s data points from S to instantiate the hypothesis; in the line-fitting case s = 2, the minimal number of points needed to fit the line and compute m and c. Then determine the set of data points, also known as the support set, that lie within a certain distance threshold t of the model: in the line-fitting case, given the two sampled points we determine all the points within the threshold t, and those are classified as inliers while the points outside are outliers. Finally, after N trials, that is, after repeating this N times, select the trial with the largest consensus set, and use all the inlier points to re-estimate the model. Altogether there are three parameters to set for the RANSAC algorithm. The first is the number of points s needed to instantiate the model hypothesis: two random points in the case of the line. The second is the distance threshold used to classify, for each hypothesis, whether a point is an inlier or an outlier. The third is the number of samples N, since we need to terminate the algorithm after N trials: N cannot be too small, otherwise the samples are unlikely to contain the true or best solution, but it also cannot be too big, otherwise the algorithm becomes computationally intractable. Typically we choose s as the minimal number of points needed to fit the model, two for lines and four for a homography, the distance threshold is set empirically or at the three-sigma distance, and exhaustively searching over all possible samples is usually unnecessary and infeasible. There is a statistical result, given in the original paper that described the RANSAC algorithm, for choosing N: the probability that the algorithm never selects a set of s points which are all inliers is 1 - p = (1 - w^s)^N. Here p is the probability that at some point within the N trials we select a set of s points that are all inliers to the optimal model, and w is the probability that any single selected point is an inlier.
We take w to the power of s because we want the probability that all s selected points are inliers; 1 - w^s is then the probability that at least one of the s points is an outlier, and raising it to the power N means that in all N trials the selected set of s points contains at least one outlier, which is exactly the probability that the algorithm never selects an all-inlier set of s points. Rearranging the terms to make N the subject gives N = log(1 - p) / log(1 - w^s), the total number of trials needed. The table shows the number of trials N required for different sample sizes s and different proportions of outliers, epsilon = 1 - w, for a fixed probability p that at least one sample among the N trials has all s points as inliers. N increases roughly exponentially with the sample size: at an outlier proportion of 50 percent, going from s = 2 to s = 8 increases N from 17 to 1177. As a result it is better to keep the sample size as small as possible, which is why we use the minimal number of points for the model. For w, the probability that any selected point is an inlier, we can assume a worst case of 50 percent, meaning that half of the given points are outliers and half are inliers. Alternatively, w can be decided adaptively within each trial: we choose the sample set, form the hypothesis model, count the number of inliers within the threshold, set w to the number of inliers divided by the total number of observed points, recompute N from this w, and increment the sample count by one at the end of the trial; whenever the sample count in the loop exceeds the current N, we stop the RANSAC algorithm.
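A tiny sketch of this trial-count formula is below; p = 0.99 is an assumed typical confidence, and the s = 2 and s = 8 results reproduce the 17 and 1177 values quoted from the table.

```python
# Minimal sketch of N = log(1 - p) / log(1 - w^s), assuming NumPy.
import numpy as np

def num_trials(s, outlier_ratio, p=0.99):
    w = 1.0 - outlier_ratio                     # probability a selected point is an inlier
    return int(np.ceil(np.log(1.0 - p) / np.log(1.0 - w ** s)))

print(num_trials(s=2, outlier_ratio=0.5))   # 17   (line fitting, 50% outliers)
print(num_trials(s=4, outlier_ratio=0.5))   # 72   (homography, 50% outliers)
print(num_trials(s=8, outlier_ratio=0.5))   # 1177
```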
Here is a summary of robust 2D homography computation with the RANSAC algorithm. The objective is to compute the 2D homography between two images whose correspondences may be corrupted with outliers. Interest points: compute keypoints in each image. Putative correspondences: match the keypoints using some descriptor, for example SIFT or SURF; this set will contain some, possibly many, outliers. Then we use RANSAC for robust estimation: repeat for N samples, where N is determined adaptively as in the previous slide. In each trial, first select four random correspondences, the minimal set needed to compute the model, the homography, and compute H from them; then calculate the distance for each remaining putative correspondence and count the number of inliers consistent with this H, namely the correspondences whose distance is less than the threshold. Concretely, after randomly selecting the four correspondences and computing H, for each of the other correspondences we transfer the point using the computed H, compute the geometric error using either the Sampson error or the symmetric transfer error, and compare this distance with the threshold to decide whether the correspondence is an inlier or an outlier. Finally, after the N trials, we choose the H with the largest number of inliers and use all of those inliers to re-estimate the homography; this gives the estimate of the homography that is free from outliers. In summary, today we looked at two cases that show the existence of a homography, we explained the difference between the algebraic, geometric and Sampson errors and applied them to homography estimation, and finally we looked at the RANSAC algorithm for robust estimation. thank |
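For reference, the whole robust pipeline can be sketched with OpenCV as below; the image paths, the brute-force matcher and the 3-pixel threshold are placeholder choices, and OpenCV's internal RANSAC variant stands in for the steps described above rather than being the lecture's exact implementation.

```python
# Minimal sketch of robust homography estimation (keypoints, putative matches, RANSAC).
import cv2
import numpy as np

img1 = cv2.imread('left.jpg', cv2.IMREAD_GRAYSCALE)    # placeholder image paths
img2 = cv2.imread('right.jpg', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# putative correspondences by descriptor matching (will contain outliers)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
src = np.float32([k1[m.queryIdx].pt for m in matches])
dst = np.float32([k2[m.trainIdx].pt for m in matches])

# RANSAC with a 3-pixel threshold; mask marks the inlier correspondences
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print(H, int(mask.sum()), 'inliers')
```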
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_3D_Point_Cloud_Processing.txt | okay, shall we start? how is everybody doing? Today, as promised, I am going to talk about 3D point cloud processing. We are almost reaching the end of the semester, so next week will be the last lecture, where I will talk about neural field representations; today is 3D point cloud processing. From the learning outcomes, I will start by defining implicit and explicit surfaces. Today we focus on only one type of representation, the 3D point cloud, which is an explicit representation; point clouds and meshes all belong to the explicit representations. I will also talk about implicit representations next week: the signed distance field and the well-known neural radiance fields (NeRF), which is one type of implicit representation. I will define them mathematically today, and next week we will look at the different examples and how to learn them with neural networks. I will also briefly talk about the pros and cons of the point cloud representation, and then, the part you are probably looking forward to, how deep learning can be used for 3D point cloud processing. I will introduce PointNet, the landmark work on 3D point cloud processing from Stanford University in 2017, and then some tasks in 3D point cloud processing, in particular place recognition, keypoint detection and description, 3D object detection, 3D semantic segmentation, point cloud registration, and image-to-point-cloud registration. It is not limited to just these tasks; it is just that I already have more than a hundred slides and no more time and energy to go on. There are many more tasks in 3D point cloud processing that we are also researching; I have several publications on other tasks such as point cloud completion and point-cloud-to-dense-surface completion, which we are still actively working on. So I will talk about just these six or seven tasks, and even so I am already covering quite a number of papers. None of today's content is from a textbook; it is all from publications at computer vision and machine learning conferences over the past five or six years, mostly the last three or four years. It all started from PointNet, a landmark paper based on a very simple concept; we will look at how it works, and I strongly encourage all of you to read the papers after today's lecture, you will probably be able to appreciate them more. One disclaimer: I am only going to touch on the surface of most of them because there is really too much; I will not go into too much detail, and I will not prove the theorems in some of these works, also because this is not a deep learning class.
So I will just assume that you already know how to train a deep network; I will just talk about the concepts and how they relate to 3D. As I mentioned, there are two major classes of 3D surface representation. The first is what we call the explicit representation, otherwise known as the parametric representation; the second is the implicit representation, otherwise known as the volumetric or volume representation. Examples of explicit representations in discrete form are point clouds, which is what we cover today, and meshes, which are essentially triangular structures that form a graph and define a surface; there are also continuous explicit representations. For implicit representations, the discrete form would be boxes or grids, most commonly the occupancy grid: we split the world into a tessellation of regular grid cells and assign each cell a value that is either zero or one, empty or occupied, although usually we relax this to a number between zero and one indicating the occupancy of that cell in the 3D world. The continuous form is called a neural field, which we will definitely talk about next week; the two types of neural fields that are popular right now are the signed distance function and the radiance field. Hopefully I will also have some time to talk about the most recent papers that link these two together. A radiance field only supports volumetric rendering and a signed distance field only supports surface rendering; most of the earlier works on signed distance fields require 3D surfaces for supervision, whereas a radiance field allows you to render from one view into a novel view, so the images themselves can be used as self-supervision. Some works tie the two together: there are formulations that allow you to convert an SDF into a radiance field, so that the signed distance field can be rendered into a novel view for supervision. There are basically three papers that dominate this conversion right now, and I hope to have time to cover at least one or two of them. Mathematically, parametric surfaces are defined by a vector-valued parameterization function that maps a 2D parametric domain, denoted Omega, into a 3D surface, denoted S. This 2D-to-3D mapping is possible since 3D surfaces are 2-manifolds: whatever surface you have, when you walk on it, it is always two-dimensional locally, just like the Earth, which lives in a 3D world but is locally a two-dimensional surface.
Standing anywhere on the Earth, you are on a 2-manifold and can move around in two dimensions. There are some problems with this representation, even though intuitively it seems to work; we will look at the problems next week in more detail. The intuition is that we assume we always have a 2D domain Omega living in R^2, with some coordinates u and v, and that we can always map every point of it onto the surface, a 2-manifold, and vice versa. In the projective setting, a P^2-to-P^2 mapping is a homography; for topology, a mapping like this between surfaces is a homeomorphism, and we will look at it next week. This assumption is not always true: for example, it is impossible to map a plane onto a sphere such as the Earth, which is why some parts of the world map are distorted when you unroll it. Examples of explicit representations are point clouds, because locally the 3D surface underlying a point cloud is a 2-manifold, so in theory it can be mapped this way, and spline surfaces, where the surface is represented by control points and linear combinations of them; I will most likely talk about those next week. The other major type is the implicit or volumetric surface representation, which is defined as the zero set of a scalar-valued function. In 2D, for example, I can represent a curve either by an occupancy value or by a signed distance value: the signed distance is zero on the surface, the zero level set, negative as you move inside the solid and positive outside, and at any point you sample from the volume its magnitude is the distance to the closest point on the surface; the surface is defined as the set of points where this implicit value equals zero. Examples are occupancy grids, signed distance fields, radiance fields, or adaptive data structures such as sparse octrees, so that you do not waste space on empty regions. In 2D this is a quadtree: there is a root, and at every level each cell is divided into four children, subdividing further only where there is content, so large empty regions can be represented by a single node. In 3D you have a cube that is partitioned into eight parts at every level, and that is the octree.
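As a tiny illustration of the sign convention just described, here is the signed distance function of a sphere; the sphere and the sample points are my own example, not from the slides.

```python
# Minimal sketch of an implicit (signed distance) representation: a sphere of radius r.
import numpy as np

def sdf_sphere(p, r=1.0):
    """Negative inside, zero on the surface (zero level set), positive outside."""
    return np.linalg.norm(p, axis=-1) - r

pts = np.array([[0.0, 0.0, 0.0],   # inside  -> -1.0
                [1.0, 0.0, 0.0],   # on the zero level set -> 0.0
                [2.0, 0.0, 0.0]])  # outside -> 1.0
print(sdf_sphere(pts))
```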
These two definitions also make sense mathematically, because they are really the implicit and explicit functions from mathematics. For example, take a circle in 2D, which I did not include in the slides. I can use polar coordinates to represent it, (r cos t, r sin t), where t is the angle and r the radius; that is the explicit, parametric representation, because the function directly maps the parameter to points on the surface itself. As the implicit representation, I can instead write x^2 + y^2 = r^2, an implicit function where you need to solve for the roots in order to get the surface; that is the zero level set. So the names implicit and explicit representation are not chosen for no reason; mathematically it is exactly this. Any questions on 3D representation? I am giving only a very brief introduction here. Today we look at just the 3D point cloud in detail, which is a type of explicit representation, as I mentioned. You can see why it is explicit: take this strip surface here; when you walk on it, it is locally two-dimensional, a 2-manifold, and we are mapping a 2D domain onto the green surface, sampling from that surface and representing every sample as a point of a 3D point cloud. If you think of it in a more computer science way, this is nothing but a data structure for 3D representation: an N-by-3 matrix where every row is (x, y, z) and the N rows together form the whole 3D point cloud. Nothing stops you from having a higher-dimensional representation: the first three dimensions are the spatial coordinates of each point, and the next D dimensions can be any other features, for example the surface normal computed at each point and appended to the x, y, z, or RGB information for a textured point cloud, or both together; this D is up to your imagination. The usual data source for 3D point clouds is lidar, a type of laser sensor: it emits laser beams and measures the time of flight; in practice the sensor cannot measure the reflection time directly, so it measures the phase shift of the reflected beam, and from this phase shift it measures the distance to the surface. There are also RGB-D cameras; as I mentioned in one of the lectures, the Kinect is one example.
The Kinect is an RGB-D camera: it projects out certain patterns of infrared light and, from the distortion of those patterns, measures the shape, that is, the depth of the surface from the camera. In fact, one of the very early topics in 3D computer vision, which unfortunately I do not have time to cover in this lecture either, is shape-from-X, for example shape from silhouette, which studies how to infer 3D shape from the distortion of some projected or observed pattern; if you are interested you can look it up. Point clouds can also come from CAD models, created with tools like SolidWorks, Pro/E or AutoCAD, usually in the .obj or .ply format; you can sample points from these surfaces to generate a 3D point cloud. And of course we can also acquire point clouds by structure from motion, which is what we have already seen previously. Some important applications of 3D point clouds are land surveying and photogrammetry (a lot of what we do in 3D computer vision is actually borrowed from photogrammetry), urban planning, where for city planning you can send a drone to map the terrain, and architecture, where you can survey buildings and even monitor their structural integrity. The image-based 3D computer vision we learned earlier can also be used for all of these applications. Another one I did not put on the slide, which I am personally very interested in, is forensic science: you can use this kind of technology to preserve a crime scene, since it picks up all the structural detail, and with cameras that capture wavelengths the eye cannot see you can even reveal things beyond what you can see; this is something I have always been interested in but have not had the chance to pursue in detail. There is also archaeology preservation: this is the statue of David carved by Michelangelo, and it was digitized in a computer graphics project by Stanford University around 2002 or 2003, where they scanned the whole statue; nowadays you might just take images and apply NeRF and perhaps already get that kind of detail. There is also metrology, for structural and manufacturing work, and robotics, where you map the world and then do planning and self-driving on that map. There are also pros and cons of the 3D point cloud; it is not always the ideal case, and it is not the only choice. The pros: a point cloud carries true 3D information with no depth ambiguity, meaning you know where every point is in 3D space.
That is unlike images, where structure from motion loses the absolute scale: you do not know how big the object is unless you know something about the scene. In terms of coding and mathematics the point cloud is also simple and concise: all you need is (x, y, z) for every point, and you can represent any surface regardless of how complicated or sophisticated it is. But there is also a bunch of weaknesses. First, it is sparse: even a very dense point cloud is still sparse compared to a mesh, which is watertight. Second, it is irregular: there is no structured format, so you do not immediately know where a point's nearest neighbours are, and finding them is computationally much more expensive than finding the nearest pixels in an image, where the pixels are arranged in a regular tessellation and you know the eight neighbours directly. To do this efficiently on a point cloud you would typically use something like a kd-tree (I hope all of you know what a kd-tree is; it can be used for fast search, and there are also libraries that do fast approximate nearest-neighbour search), but nonetheless, compared to the image counterpart, it is much more difficult to search, as illustrated in the sketch after this paragraph. Third, there is a lack of texture information: in its raw form every point is just a coordinate, with no RGB information attached. Fourth, point clouds are unordered: if I take points X1, X2, X3 and permute them, with three points there are 3! = 6 combinations, they still represent the same point cloud; we will see that this has a detrimental impact on deep learning, and it is also what makes PointNet, which I talk about next, interesting. The last problem, which we also did some work on (we have a paper on this, but I am not going to go through it), is rotational equivariance and invariance. The insight is this: say I have this chair, upright, and I look at different kinds of chairs, all upright, and train a network on them; that network will not work well if I rotate the chair upside down, because deep networks are not rotationally invariant. A CNN by itself is translationally invariant or equivariant, but not rotationally invariant. The difference between invariance and equivariance is that invariance means you take the upright input, rotate it, and the network still gives the same result. Equivariance has a fairly deep mathematical formulation, which we also wrote up in the paper, but the quick insight is this: if you take the chair and flip it upside down, the output is also flipped upside down, and you have to undo the rotation.
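The sketch referred to above: nearest-neighbour search on a point cloud with a kd-tree, using SciPy; the cloud size and the number of neighbours are arbitrary example values.

```python
# Minimal sketch of kd-tree nearest-neighbour search on an irregular point cloud.
import numpy as np
from scipy.spatial import cKDTree

cloud = np.random.rand(100_000, 3)          # (N, 3) point cloud
tree = cKDTree(cloud)                       # build the tree once

query = np.random.rand(10, 3)
dist, idx = tree.query(query, k=8)          # 8 nearest neighbours per query point
print(idx.shape)                            # (10, 8)
```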
After undoing the rotation, the output will still match up with the original upright chair; that is equivariance. Invariance means that you flip it upside down and the output is still the same as before, with no undoing of the rotation needed. This is a very hard problem in deep learning. We did something in this direction, and most of the works, including ours, start from group theory, because SO(3) is a group, and we make use of some group theory to study this equivariance; but I will not say we have solved the problem, and I do not think it has been fully solved yet. I will not go too much into this. You can also see that lidar directly gives much more accurate results than image-based reconstruction: this city, somewhere in Germany, was mapped using lidar, and you can see that the structure-from-motion reconstruction is usually less precise; also, you do not get absolute scale from structure from motion if you know nothing about the scene, whereas a 3D point cloud gives you the absolute, one-to-one scale. That is the advantage of the 3D point cloud. So, having defined the 3D point cloud and its pros and cons, let us look at how deep learning can be used for it. The pioneering work is PointNet, by Qi et al. from CVPR 2017. It takes an N-by-3 point cloud as input and does classification, so given the point cloud of an object it can tell you the semantic class of the object; it can also do semantic segmentation of a scene, where each colour represents a semantic class, and part segmentation, where even for a model of an airplane it can tell you which points are the wings, the body and the tail. But the main problem PointNet has to resolve when taking in a point cloud is that all the neural networks we know, MLP or CNN, are not permutation invariant. I will illustrate with an MLP, just one neuron, although a CNN works the same way. A neuron is a function, a linear mapping followed by a non-linear activation, which I will write as h(x) = f(w^T x). Take the simplest case where the activation f is the identity, so we only need to evaluate the inner linear term. I have three points here, and if I reorder them from X1, X2, X3 into X3, X1, X2, they are still the same points; regardless of how you order them, the point cloud remains the same.
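As a toy illustration of the invariance and equivariance definitions above, the following NumPy check uses two hand-crafted functions rather than a neural network: the centroid of a point cloud is rotation-equivariant, while the sorted pairwise-distance signature is rotation-invariant; both examples are mine, not from the lecture.

```python
# Toy check of rotation equivariance (f(RX) == R f(X)) and invariance (f(RX) == f(X)).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.spatial.transform import Rotation

pts = np.random.rand(200, 3)
R = Rotation.random().as_matrix()

centroid = lambda X: X.mean(axis=0)
dist_sig = lambda X: np.sort(pdist(X))          # pairwise distances, order-independent

print(np.allclose(centroid(pts @ R.T), R @ centroid(pts)))   # equivariant: True
print(np.allclose(dist_sig(pts @ R.T), dist_sig(pts)))       # invariant: True
```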
Nonetheless, you can see that if I train the MLP this way, the weights are trained for the order X1, X2, X3, and at inference time, if I permute the points, even on the same training data, the output is totally different. This means the network is not permutation invariant. Contrast this with an image: the convolution is always applied over a regular, ordered arrangement, so there is no such permutation problem for images. A point cloud, however, is orderless: with points X1, X2, up to Xn, there are n! combinations, and each combination still represents the same set of points. One way to make the network permutation invariant is to augment the training data with all these n! combinations of every sample, but that means factorial complexity, which is not good and not a sensible way to do it. What PointNet proposes is super simple: since a point cloud is orderless and we want the network to be permutation invariant, treat every point separately and independently. If you have n points, each point is passed through the same MLP, with shared weights, independently, until the final feature, so each point in R^3 is mapped into R^1024 and we obtain an n-by-1024 feature matrix; essentially there is only one network that every point goes through. Then they cleverly use a max-pooling operation, which is permutation invariant, to obtain the global feature, and after that they do the semantic segmentation and classification. There is a proof in the paper, but I will simplify it into a picture: the max-pool operation goes along each feature dimension and picks the maximum over all the input points, and the results are concatenated into the final output. So whether I input X1, X2, X3 or X2, X1, X3, the output is still the same, because the max is taken regardless of the order. Because every point's feature is maintained independently at the beginning, and the aggregation is a permutation-invariant max pool, the whole network is permutation invariant: whatever order the inputs are given in, it still works the same way. And that is PointNet; it is very simple, but they came up with this idea.
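The following is a minimal PyTorch sketch of that idea, a shared per-point MLP followed by a max pool; the layer widths are illustrative, not the exact PointNet architecture, and the final print verifies permutation invariance numerically.

```python
# Minimal sketch of the PointNet idea: shared per-point MLP + permutation-invariant max pool.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Shared MLP: the same weights are applied to every point independently.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 1024), nn.ReLU(),
        )
        self.classifier = nn.Linear(1024, num_classes)

    def forward(self, pts):                        # pts: (B, N, 3)
        per_point = self.point_mlp(pts)            # (B, N, 1024), one feature per point
        global_feat, _ = per_point.max(dim=1)      # max pool over the points
        return self.classifier(global_feat)        # (B, num_classes)

pts = torch.randn(2, 500, 3)
perm = torch.randperm(500)
net = TinyPointNet()
net.eval()
# Permuting the input points does not change the output.
print(torch.allclose(net(pts), net(pts[:, perm, :]), atol=1e-5))
```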
I think the paper has now been cited something like 10,000 times; it is a truly nice piece of work, and after it a lot of people started working on point clouds. We were one of them: when we first saw this paper we were so excited that we did a lot of work in this area. But PointNet has a limitation. Compare it with a CNN: a CNN has large receptive fields because it convolves kernels over many layers, so one element at the end maps back to a large patch of the image; the larger the receptive field, the more information you gather and the more powerful the network. PointNet, in contrast, treats every single point independently and only does a max pool at the end, so essentially there is only one receptive field, all the points at once. It has global knowledge of the overall shape, but many 3D structures and shapes also depend on the local structure, and PointNet completely ignores this because it never takes the neighbours of a point into account. Many papers followed up on this, and one of them, PointNet++, is by the same group (we also have a paper in this direction, called SO-Net). PointNet++ resolves the limitation by doing local clustering: from the input, it first does local sampling using FPS, furthest point sampling, then groups points locally with k-nearest neighbours, and passes each cluster through a PointNet. So the points are still treated independently, but now independently within each cluster, which gives a local feature because each cluster is local enough; you end up with one output per cluster, and then you do sampling and grouping again, FPS again and k-nearest neighbours again, so the result becomes very similar to a hierarchical, multi-layer CNN. Furthest point sampling means that, given the set of points, we choose a subset of n points such that each chosen point is as far as possible from the points already chosen; it is actually an algorithm from computer graphics. Next, each sampled point is treated as a local centroid: we find the k nearest neighbours around it and cluster them together, then put the cluster into the PointNet, but because we know the cluster, we first centre it locally with respect to the centroid, subtracting the centroid from every point in the cluster. We also do multi-resolution grouping.
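A minimal NumPy sketch of the furthest point sampling step mentioned above is given here; it is the simple greedy version, with a random starting point, and the cloud size and number of samples are arbitrary.

```python
# Minimal sketch of furthest point sampling (FPS), greedy version.
import numpy as np

def furthest_point_sampling(points, m):
    """points: (N, 3) array; returns indices of m well-spread sampled points."""
    n = points.shape[0]
    selected = np.zeros(m, dtype=int)
    selected[0] = np.random.randint(n)                       # start from a random point
    dist = np.linalg.norm(points - points[selected[0]], axis=1)
    for i in range(1, m):
        selected[i] = np.argmax(dist)                        # furthest from the chosen set
        new_d = np.linalg.norm(points - points[selected[i]], axis=1)
        dist = np.minimum(dist, new_d)                       # distance to nearest selected point
    return selected

pts = np.random.rand(2048, 3)
centroids = pts[furthest_point_sampling(pts, 64)]            # 64 local centroids
```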
Then, for each of these farthest points, we treat it as a local centre, find the k nearest neighbours around it, cluster them together, and feed the cluster into the PointNet. Because we know the cluster, we centre it locally with respect to its centroid, meaning we subtract the respective centroid from every point in the cluster before putting it into the PointNet. We also do multi-resolution grouping: at every layer we group points together, put them through a local PointNet, and get an output. Say I have three such outputs; for the final output I take the output of the last layer and concatenate it with the outputs of the earlier layers, so the features from the first layers get concatenated in as well. You can see this is something like a skip-connection or residual scheme, as in ResNet; it is a very similar concept. So this is the upgrade from PointNet to PointNet++. Of course there are many other variants, and nowadays if you do this kind of work you would need to cite and benchmark against a whole bunch of papers. This has been very popular for the last five years or so. Of course the hype has now shifted to NeRF, radiance fields, as the 3D representation, but point clouds were very popular and I believe they still are; right now people are looking at how to do self-supervised representation learning on them, using contrastive losses and so on. Okay, after introducing the two networks that pioneered point cloud processing, I am next going to tell you about various point-cloud-based tasks. Essentially I will talk about six different parts: place recognition, keypoint detection and description, 3D object detection, 3D semantic segmentation, point cloud registration, and image-to-point-cloud registration, and I will introduce some of the pioneering work in each of these areas. Let me start with the first task, point-cloud-based place recognition. This is the "where am I" problem; it is the same as the vocabulary tree that I talked about last time in the structure-from-motion lecture: you have a bunch of images and, given one query image, you want to find the most similar one. But now I give you the whole map of the terrain; this is actually the Oxford city dataset from Oxford University. Say I give you the whole Oxford city map as a 3D point cloud, and then I give you a local 3D point cloud: maybe a self-driving car is operating in this city, and the lidar sensor mounted on the car can only see a local view of the scene. When the map was built, every part of the point cloud happened to be tagged with GPS/INS coordinates. The question is this: if I can find where this query point cloud is most similar inside the map, then I have successfully localized myself. Say I find that this part of the query corresponds to this piece over here; because this is the reference map, I know its GPS/INS coordinates, so in the end I know my own coordinates, where I am in the world. This "where am I" problem is one of the most important problems in robotics and navigation. The idea in this paper, PointNetVLAD, is that you can create a database of submaps
that are geotagged. That means that instead of looking at the whole terrain, the whole 3D point cloud map, I split it into many chunks of submaps, and each chunk is geo-referenced, meaning I know its GPS/INS coordinates. I can also downsample each submap for the sake of computational and storage efficiency. This is what it looks like after splitting the map into chunks and geotagging them with GPS/INS. Now the question becomes a retrieval problem: the query submap is just like a query image, and the reference map becomes tons of submaps, which is analogous to a database of images over which you want to do retrieval, exactly what we mentioned in the structure-from-motion lecture. So the problem simply becomes retrieval. This paper was the first work on point cloud retrieval; it is called PointNetVLAD because we took PointNet and combined it with another component called NetVLAD. Let me describe it briefly. The first part is just PointNet: we take the point cloud and put it through PointNet, but we remove the max pool at the end. Instead of max pooling, we replace it with a more powerful aggregation, the VLAD architecture. The VLAD architecture is very similar to visual words: it learns a VLAD codebook, and these codewords are essentially the visual words, the cluster centres. The equation looks something like this (the version shown is part of the proof that it is permutation invariant): every point feature p' is assigned to a centroid, which is exactly how visual words work. If you recall, with visual words we learn the words and then at inference time count the frequency of each word; here, instead of a hard assignment by frequency counting, we do a soft assignment with a softmax. The soft-assignment weights w, b and the centroids c, which play the role of the visual words, are all parameters of the network, so we learn all of them end to end. In the end the output is similar to the frequency-count histogram we had when we talked about visual words and image retrieval. The VLAD aggregation comes originally from another paper called NetVLAD, which was published at CVPR, I think in 2016 or 2017.
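As a rough sketch of this VLAD-style aggregation (a simplification in PyTorch, not the actual PointNetVLAD layer, which also includes extra normalization steps), each per-point feature is softly assigned to a set of learned centroids and the residuals are summed:

import torch
import torch.nn as nn

class TinyVLAD(nn.Module):
    # Soft-assign every per-point feature to K learned "visual word" centroids
    # and accumulate the residuals; the sum over points makes the output
    # independent of the point ordering.
    def __init__(self, feat_dim=1024, num_clusters=64):
        super().__init__()
        self.assign = nn.Linear(feat_dim, num_clusters)             # soft-assignment logits
        self.centroids = nn.Parameter(torch.randn(num_clusters, feat_dim))

    def forward(self, feats):                                       # feats: (B, N, D)
        a = torch.softmax(self.assign(feats), dim=-1)               # (B, N, K) soft assignment
        resid = feats.unsqueeze(2) - self.centroids                 # (B, N, K, D) residuals
        vlad = (a.unsqueeze(-1) * resid).sum(dim=1)                 # sum over points: (B, K, D)
        return nn.functional.normalize(vlad.flatten(1), dim=1)      # global descriptor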
That NetVLAD paper works on images and proposed the VLAD architecture; we were the first to combine it with PointNet, and that is PointNetVLAD. Today this is still a popular task; we stopped working on it, but at almost every conference you can still see papers coming up with new techniques to solve it. What is also interesting is that we provide a proof: PointNet uses the max pool to make the aggregation permutation invariant, but we are replacing it with NetVLAD, so NetVLAD has to be permutation invariant as well for this to work on 3D point clouds; otherwise it would not, since NetVLAD was first proposed for images. The proof is very simple. If the points are ordered p'_1 to p'_N and the output is V_k, every point goes through the same soft assignment with shared parameters, and then everything is summed together. Because of that summation, even if I permute the points, the sum does not change, so you still end up with the same feature at the end; the network is permutation invariant and has no problem working on 3D point clouds. The way to train it is metric learning, distance learning, because we are going to compare the similarity of two features. We take the same PointNetVLAD network repeated three times, a triplet network where all three copies share weights: an anchor point cloud, a positive point cloud, and a negative point cloud. The positive is taken at the same, or roughly the same, place as the anchor. So we create tuples of training data with a positive pair and a bunch of negative pairs: say the anchor is taken from here; then this would be my positive pair, and the negative pairs are all the other point clouds that were not taken at the same location, and they had better not be identical to the anchor. In this illustration they look like exactly the same thing, but in practice we need to traverse the route a few times. I still remember when we were doing this paper: we had the car set up with all the lidars, and we were on the road for over 30 hours, driving around non-stop to collect the datasets within NUS and also in West Coast. You need to traverse the same route several times so that the positive pair is not exactly the same point cloud: it has to be taken at the same location but at a different time, so that there are some subtle changes in the point cloud. Otherwise you are just overfitting: if the positive is exactly identical to the anchor, that loss is always zero and you are not training the network at all. So the positive pair comes from the same location at a different time, and the negative pair has to be from a different location; it
can be taken at the same time, meaning within the same run, but at a different location. So we have these tuples of triplets, we feed them through to get the features, and we propose what we call the lazy triplet loss. The idea is this: when you feed in the positive pair, the positive feature and the anchor feature should be as similar as possible, so the distance between f_pos and f_anchor needs to be minimized, and the distance between f_neg and f_anchor needs to be maximized, because they are the negative pair. You can put both into the same objective, minimizing the positive distance while maximizing the negative one, but if you do only that there can be a trivial solution: since both are outputs of the network, the network can simply learn to map everything to zero. So you put it into a hinge loss, which simply means taking the max of the margin term and zero. I am not going into full detail, since I am not teaching deep learning here. Ideally, when you use a triplet loss, the negative should be the hardest negative. What do I mean by the hardest negative? There is only one positive, but for the negative you should ideally select, from all the other samples that are not from the anchor's location, the one that is closest to the anchor: it is the most similar to the anchor yet is not the same place, so you want to push it as far apart as possible. That is the hardest negative. But the network is continuously updated at every training iteration, which means the hardest negative keeps moving around; strictly, after every iteration you would need to search through the whole database of training data to find the current hardest negative. That is the most effective, but it is very slow. So we call ours the lazy triplet: we just take the hardest negative within the current batch, and that turns out to be efficient. There is also a quadruplet loss: when you push the negative apart and pull the positive closer, you could unknowingly push the anchor towards a third sample that is also supposed to be a negative. The suggestion there is to pick the two hardest negatives and push both apart concurrently, making sure they all stay separated; that is the quadruplet loss, which is of course computationally more expensive. In our case we likewise search for the hardest and the second-hardest negatives within the batch.
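Here is a minimal sketch of the lazy triplet idea, assuming each point cloud has already been encoded into a global descriptor (the margin value and names are illustrative, not the exact values used in the paper):

import torch

def lazy_triplet_loss(f_anchor, f_pos, f_negs, margin=0.5):
    # Only the hardest negative in the sampled set (e.g. the current batch)
    # contributes to the hinge loss.
    # f_anchor, f_pos: (D,) descriptors; f_negs: (M, D) negative descriptors.
    d_pos = torch.norm(f_anchor - f_pos)                  # distance to the positive
    d_negs = torch.norm(f_negs - f_anchor, dim=1)         # distances to all negatives
    hardest = d_negs.min()                                # closest (hardest) negative
    return torch.clamp(margin + d_pos - hardest, min=0.0) # hinge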
Here are some results. We use the Oxford dataset, and the other datasets are in-house ones collected around NUS: a university sector, a residential area, the West Coast area, and one more that we always refer to by its abbreviation in the dataset, although people never quite remember where it is or what the abbreviation stands for. At that time we only needed to compare against PointNet, because we were the first to define this task: before this paper there was only image retrieval, there was no point cloud retrieval, and we were the first to do it. By now, of course, we have been heavily beaten; the state of the art has pretty much saturated the benchmark. What is interesting is that we unknowingly also solved the problem of recognizing the same scene under severe appearance change. If you do image retrieval on the same scene in day versus night, or winter versus spring, then even with your own eyes, if I did not tell you these two images are the same place, you probably would not be able to tell. A vocabulary tree, or any form of NetVLAD, the original image-based NetVLAD itself, will not work in that kind of situation, but we unknowingly solved it with point cloud retrieval, because the 3D structure remains the same. Of course there are also failure cases: for example this query has a wrong retrieval, whereas the true match is this one. What happened is that after some time the car that was parked here drove away, so there is no car anymore; it is actually the same location, but the network wrongly retrieved something else that is structurally similar. This tells us the network really is picking up structure: it is learning the structure of the 3D point cloud to do the similarity comparison. Any questions? Okay, I guess this one is kind of straightforward. Let me go into the next task, which is keypoint detection and description. For image-based keypoint detectors and descriptors, I think in your assignment you might have used SIFT features and SIFT descriptors. Images are convenient because you have texture, you have RGB information that you can use to form the descriptor. Keypoints are corner points, or more generally stable points in the image, such that under a viewpoint change you are still able to detect the same stable set of keypoints; for each keypoint we assign a thumbprint, a signature, and that is the descriptor (in SIFT it is 128-dimensional), so that from a different viewpoint we can detect the same set of keypoints with the same descriptors and match them across the image pair. That is the image counterpart. For a 3D point cloud this is difficult because there is no texture at all; the point cloud consists only of XYZ coordinates. There were works on learning descriptors for keypoints, but there were no learned keypoint detectors before ours: we were the pioneering work that proposed a keypoint detector for 3D point clouds using deep learning. Before that there were some traditional methods, but nobody had done it with learning,
and the primary reason is that you cannot supervise it: there is no full supervision. If I give you this point cloud, can you tell me where the keypoints are? No; and even if you could, you are probably not going to label every keypoint for full supervision. So before us nobody really knew how to do this, and the question is how to train a network so that it is able to detect keypoints. We came up with what we call 3DFeatNet, which is a weakly supervised triplet network that looks like this. Why is it weakly supervised? Because we only need to know chunks of submaps, the same as PointNetVLAD: we need triplets consisting of an anchor and a positive pair, meaning point clouds taken at the same place but at a different time, and, together with the anchor, we need to know the negative pairs. That is the only information required to train the network to detect keypoints; we do not require any keypoint labelling at all. Before our 3DFeatNet, all previous work used the traditional approach of computing surface normals and so on to get stable keypoints; we were the first to propose training with this weak supervision. The idea is as follows. We do FPS, farthest point sampling, to sample points on the point cloud (FPS is also used by PointNet++, but the algorithm was originally proposed in computer graphics, because on a surface the farthest points are actually the most representative points of the structure). For each sampled point we cluster its k nearest neighbours, and each cluster is fed into a detector. The detector is basically a small PointNet that outputs two things at the end: an attention weight and an orientation. The attention weight will be used to weigh the triplet loss later, and it also tells us whether this randomly selected cluster, this n-by-3 set of points around the sampled point, is a keypoint or not; that is essentially how we get an output that says keypoint versus not a keypoint. The orientation is there because we have to make the descriptor invariant to rotation, at least locally: the orientation tells us the upright direction, and we learn it with the network as well. That is the detector part. We also have the descriptor part, which is the thumbprint for each keypoint that we are learning: we use the predicted orientation to rotate the cluster upright, then pass it through something that again looks like a PointNet, with a max pooling and a concatenation, and finally it outputs a descriptor, one descriptor per
cluster, for the anchor, the positive, and the negative point clouds. Once we have these, we put them through a similarity matrix that looks like this. The anchor's descriptors index the rows of these two matrices; the positive point cloud's descriptors come in as the columns, so each entry is the descriptor distance between an anchor cluster and a positive cluster. For each row we take the minimum distance: for each potential keypoint descriptor in the anchor, I take the closest one from the positive side, and summing these gives me the positive distance, the distance of the closest matches. Similarly, between the anchor and the negative I also take the closest match, again the minimum. The positive and the anchor are supposed to match, they come from the same area, so their closest-match distance had better be minimized, as close as possible; the minimum between the anchor and the negative forms my hardest negative, and I want to maximize it, so it enters with a minus sign. Everything then goes into a triplet margin loss, a hinge form. Finally, I weigh all of this with the attention weights produced by the detector, normalized into w': the attention tells me whether a given keypoint is significant, whether it should be a keypoint at all, and if it is not really a keypoint, its contribution to the loss is reduced. So in the end we learn the detector and the descriptor together from the same data. Before us there was no learned keypoint detector; the existing detectors, such as ISS and the other standard geometric approaches, are all handcrafted, and we were the first to learn this with deep learning. You can see that we actually outperform all of them. In the visualizations, the attention we learn (the brighter or more yellow, the higher the attention) focuses on the structurally meaningful places; on the road, for example, there is nothing because it is flat, so the attention is not there, and the detected keypoints all lie on structurally interesting places. Compare with ISS, the standard conventional approach: it has many false positives on the road, which are not meaningful at all. We published this at ECCV 2018.
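A rough sketch of how such an attention-weighted triplet loss over cluster descriptors could look (my own simplification, not the released 3DFeatNet code; the margin and names are illustrative):

import torch

def weighted_descriptor_triplet(desc_a, desc_p, desc_n, attn, margin=1.0):
    # desc_a, desc_p, desc_n: (K, D) cluster descriptors of the anchor,
    # positive, and negative point clouds; attn: (K,) detector attention.
    d_ap = torch.cdist(desc_a, desc_p)          # (K, K) anchor-positive distances
    d_an = torch.cdist(desc_a, desc_n)          # (K, K) anchor-negative distances
    pos = d_ap.min(dim=1).values                # closest positive match per anchor descriptor
    neg = d_an.min(dim=1).values                # hardest negative per anchor descriptor
    w = attn / attn.sum()                       # normalized attention weights
    return (w * torch.clamp(margin + pos - neg, min=0.0)).sum()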
At that time we did not need to compare the detector against other deep learning methods, of course, because we were the first; but for descriptors there were already many methods, such as 3DMatch, CGF, and FPFH (I think one or two of these are also learning-based, I do not remember exactly). We were the first to couple the two and learn both the detector and the descriptor simultaneously. But we were not satisfied with this. Since 3DFeatNet is only weakly supervised, we thought we might as well do away with the INS and GPS altogether and make it fully self-supervised. So in the next work, the following year at ICCV 2019, we did unsupervised learning. This one is just for keypoint detection: unsupervised, stable keypoint detection. We design a network that trains without any labels of the keypoints and without any labels of where the point cloud was taken; compared to the previous work, we just take any set of point clouds and do self-supervised learning on them. Nowadays this has become quite the norm: people do contrastive learning where they take the same data, rotate it or apply some other augmentation, and learn from the pairs; but at that time we were among the first to use this kind of idea. Given a point cloud X, we randomly generate a transformation and transform X into an X-tilde. Both go into the feature proposal network, which is this part over here; it is kind of complicated, and it was inspired by SO-Net. The output is that from the original point cloud I get a set of keypoints, together with a sigma that I will talk about later (it acts like the attention, the uncertainty, that we saw earlier), and from the rotated or transformed copy I get another set of keypoints. When you undo the transformation, the two sets of keypoints should match up. This means it can be trained fully self-supervised. But, and I do not show this in the slides, there is a formal proof in the paper: if you just do this naively, there is a degenerate case, because the network can simply learn the centroid, or any point along the principal axes, and no matter how you transform the cloud you will still get the same point. We found a simple remedy: limit the receptive field. What we found is that if you do not limit the receptive field, the network has global knowledge of the shape of the cloud, and the centroid and the principal axes are very good representations of that global information, so the solution can degenerate to them; we also show experimentally that it does degenerate if we widen
the receptive field. So you have to select a receptive field that is narrow enough that it is not sufficient to degenerate into the principal axes or the centroid, but still able to give meaningful outputs, and it does work quite well. A straightforward way to compare the two keypoint sets is to take the Chamfer loss, but if we just do that it is a hard assignment, and that is not ideal: we know the M proposals from the feature proposal network are not going to be equally salient, since some receptive fields can be featureless surfaces. So we propose to also output sigma, which you can see as an uncertainty. We put the residual into an exponential distribution, so that when we minimize the loss it is characterized by a distribution instead of a hard assignment. Sigma is also learned, but there is no need for any supervision; the supervision is simply to minimize this negative log-likelihood, with sigma as part of the output. If you do this, it turns out that sigma measures the uncertainty, telling us how salient that particular point is as a feature. One more thing: in 3DFeatNet we proposed the keypoint as one of the points in the point cloud, but a keypoint does not necessarily have to be one of the input points. You are representing a continuous surface, and the points are just samples on that surface, so the true keypoint could actually lie somewhere in between. If you force the keypoint to be one of the samples of the point cloud, you subject the whole thing to quantization error, because drawing point samples from a surface is a form of quantization. Therefore we do not constrain the feature proposal network: we let it output any point in 3D space. But if you just do that, the proposed 3D point might be very far off, floating away from the surface, so we enforce a point-to-point loss: whatever we output, its distance to the closest input point should be minimized. We do not force the output to be one of the points in the cloud, but we force it to be close to the cloud itself. With the point-to-point loss the keypoints all stay close to the surface; without it you can see they float around. That is the whole idea, and it turns out this works far better, in terms of repeatability, than our own 3DFeatNet: 3DFeatNet was published at ECCV 2018, and this work at ICCV 2019.
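As a sketch of the two losses just described, assuming the residuals are modelled with an exponential distribution of scale sigma (this is my simplified reading, not the exact USIP formulation):

import torch

def probabilistic_chamfer(kp_a, kp_b, sigma):
    # kp_a: (M, 3) keypoints from one copy; kp_b: (M, 3) keypoints from the
    # transformed copy brought back into the same frame; sigma: (M,) scales.
    # Negative log-likelihood of an exponential distribution: d/sigma + log(sigma).
    # The log(sigma) term stops the network from inflating sigma arbitrarily.
    d = torch.cdist(kp_a, kp_b).min(dim=1).values
    return (d / sigma + torch.log(sigma)).mean()

def point_to_point_loss(kp, cloud):
    # Keep free-floating proposals close to the surface: penalize the distance
    # from each proposed keypoint to its nearest input point.
    return torch.cdist(kp, cloud).min(dim=1).values.mean()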
We were only the second learned detector, so we mainly compare with ourselves, with 3DFeatNet, and you can see that in terms of repeatability it beats everything by a huge margin even though it is self-supervised. Here are some other results. We call this method USIP, which stands for Unsupervised Stable Interest Point detection, and you can see that after registration all of these are very stable. Any questions up to this point? If not, let me talk about the next task, which is 3D object detection. The task is this: given an input 3D point cloud that looks like this, I want to detect where the objects are and put a tight 3D bounding box around each object. The bounding box is parameterized by seven parameters: x, y, z, the centre of the bounding box; h, w, l, the height, width, and length of the box; and, since we always assume the box is upright, a single yaw angle theta, the heading. This is what we want to estimate around each object, together with the semantic class of each bounding box. That is the task of 3D object detection, and it is much more difficult than its 2D counterpart. There are already many methods in 2D object detection, like Faster R-CNN and so on; I think the latest state of the art is DETR, using a transformer, the vision transformer with its 16-by-16 patches. But 3D object detection is much more difficult, because the point cloud is sparse and because of what we call amodal perception: whether you use a laser sensor or an RGB-D camera, you are subject to amodal perception, meaning that when I look at you I cannot see what is behind you, I only see what is in front of you; when I look at this chair from this viewpoint, I only see the front part of the chair that is in my view, the back has no points at all, and everything behind it is basically in shadow. That is the main difficulty when you do 3D object detection. The same author who proposed PointNet came up with VoteNet, which was one of the first works in deep-learning-based 3D object detection, and it is pretty cool. The idea is very simple: given this 3D point cloud, generate votes towards the object centres. It first generates a set of seeds; each seed casts a vote that points towards the closest object centre; then we aggregate those votes, where each vote also carries a feature, and finally the aggregated features are used to regress the bounding box. 3D object detection is challenging because it is both a regression and a classification problem: the box parameters are a regression, the semantic class is a classification, and classification is easier than regression. I will not go through the details of the losses, but they are heavily inspired by the Fast R-CNN and Faster R-CNN line of work,
where you need anchor boxes: basically you have a proposal, and you regress relative to the anchor or proposal, because a deep network has a problem when you ask it to predict unbounded numbers. In this 3D scene, if I just tell the network, out of nowhere, to regress x, y, z and all seven box parameters for that chair, parameters which are not bounded at all and basically range from minus infinity to infinity, the network will be very poor at it. Hence the idea of anchor boxes, as in Faster R-CNN: you discretize the space into anchors, say nine anchor boxes per location, and then you predict the deviation from the anchor, so the target becomes relative instead of absolute. Networks are much better at this, because relative targets live in a bounded space and the margin for error is smaller. You would probably have learned all of this in a deep learning class; it is classic. VoteNet uses the same idea, but 3D detection has the additional challenge of amodality: you only see parts of each object, whereas in 2D the image is dense; amodal perception exists there too, but it is not as severe, because you only fit a 2D bounding box with far fewer parameters. So here is how VoteNet looks. The first part is the voting: from the input point cloud you generate a set of seeds, and for each seed you cast a vote, meaning you output the delta offset from that seed to the nearest object centre. These figures illustrate the votes pointing towards the object centres. Then we cluster the votes that fall within a certain neighbourhood of each other and use each cluster to regress the output; these clusters are your proposals, and once you have proposals, the regression is more or less anchored. Some details: for the input points we just use PointNet++ to generate M seeds, each consisting of its xyz location plus a C-dimensional learned feature. After that we do Hough-style voting with the deep network. For each of the M seeds I have its location x and its feature f learned from PointNet++; an MLP takes f and outputs a delta x such that y = x + delta x, where this y should be the object centre, so the vote points towards the object centre; it also outputs a delta f, giving a new feature which I call g = f + delta f. These are the votes, and we supervise delta x, whose ground truth we can compute, with a vote regression loss.
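A minimal sketch of this voting step (dimensions and layer sizes are illustrative, not the actual VoteNet code):

import torch
import torch.nn as nn

class VotingModule(nn.Module):
    # Each seed (xyz plus a learned feature) predicts an offset towards the
    # nearest object centre and a feature residual. Predicting the relative
    # offset keeps the regression target bounded, in the spirit of anchor-based
    # 2D detectors.
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 3 + feat_dim),     # [delta_xyz, delta_feature]
        )

    def forward(self, seed_xyz, seed_feat):        # (B, M, 3), (B, M, C)
        out = self.mlp(seed_feat)
        delta_xyz, delta_feat = out[..., :3], out[..., 3:]
        vote_xyz = seed_xyz + delta_xyz            # y = x + delta_x, should land on an object centre
        vote_feat = seed_feat + delta_feat         # g = f + delta_f
        return vote_xyz, vote_feat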
Next, for the set of votes, since I have N votes, I sample K of them using FPS, farthest point sampling, and then I form K clusters: each of the K selected votes is used as a centroid, and I find all the other votes that are within a radius r of it; that forms one cluster. So now I have K clusters, each containing all the votes inside it. Suppose I am looking at one cluster whose votes I call w_i, for i = 1 to n, where each w_i consists of the vote's spatial location and its vote feature. I centre them with respect to the cluster centroid: since I have all the spatial locations inside the cluster I can compute the centroid, which I call z_j, then subtract it and divide by the radius r (since the cluster was selected based on that radius) to get z', making everything local. Finally I put these votes through an MLP, two layers of MLP, and the output is a vector p per proposal. This p contains the objectness score, a value between zero and one where closer to one means there is an object there (if it is zero there is no object and I ignore all the other parameters, they do not matter anymore); if there is an object, p also contains my bounding box parameters and the semantic classification scores. Note how they parameterize this: since I already have the votes, I roughly know where the objects are, so the bounding box parameters are all parameterized with respect to the votes, which means you never predict absolute coordinates; otherwise it would be unbounded. I did not write down the loss functions, but the objectness and the semantic classification are just cross entropy, and the bounding box is a regression loss anchored around the ground truth, so you are really predicting, and supervising, just the delta away from the ground truth. Any questions? That is all for VoteNet. It is a very useful framework; I think up till today it is still one of the best. If I am not wrong there is another one, a 3D transformer, 3DETR if I remember correctly, which is based on DETR for 2D object detection, and it seems to perform better than VoteNet, but not by that much, so VoteNet still remains one of the most powerful 3D object detectors up to now. [Question from the audience about combining losses.] Oh, of course you need to weight them: if you have different objectives at different scales, you need to weight them, and that is very important. The thing is, suppose you have two objective functions.
Say I have L1 and L2, and to make it simple each is a function of just one variable, so I can draw it as a cost curve. If L1 is at this scale and L2 is at that much smaller scale, then you had better scale L2 up to the same scale as L1; otherwise you are never going to meet both objectives, you will forever be optimizing only the dominant one. This is important. It matters not just in deep learning but in general optimization, and especially in 3D computer vision, because you can be optimizing in 3D space: if you have a point cloud you might optimize in 3D, and then project back onto the image and minimize errors in pixel space. Pixel space has no units, you just know how many pixels away you are, but in 3D space the errors might be metric distances. If you optimize both together, one in 3D space and one in pixel space, they obviously do not live in the same world, so you cannot just add them; you have to scale them. Any other questions? If not, let us take a five-minute break. Okay, shall we continue and try to finish everything. The next task I will talk about is 3D semantic segmentation. The idea is that for each point in the point cloud I want to assign one of K class labels. This is the most naive form of semantic segmentation, semantic labelling; nowadays there are more sophisticated definitions, like panoptic segmentation and instance segmentation. In the naive version we do not differentiate between instances: if I have a chair class, or books for example, I just label every book with that class and do not differentiate between the different instances of books. Instance segmentation does that, and panoptic segmentation is basically instance segmentation where the background stuff that cannot be divided into instances simply remains as semantic segmentation. There were many works on 3D segmentation in the early days after point cloud networks were introduced, so many that I forgot which one was the most significant (I only put these slides together yesterday, so it was a last-minute thing; I should have done it better). So instead of finding the most significant fully supervised work, I will tell you about the most significant weakly supervised one, which was actually the first work on weakly supervised learning for 3D semantic segmentation. The reason this matters is that segmentation labels in 3D are a lot more tedious to produce than in 2D. Suppose I give you a point cloud that looks like this and ask you to go and label every point; that is really tedious compared to an image. Labelling an image is laborious too, but at least it is just an array on a plane; you can ask almost anyone to label it, even with an untrained eye.
A point cloud is different: if you do not look at point clouds very frequently, your eyes are simply not accustomed to them. In terms of deep learning, when you look at the different layers of a network, people discovered that early layers look at the lowest-level features and subsequent layers at higher-level features; at some level it is lines, curves, and contours, and the human eye really functions at that level. We are much better at looking at contours than at raw points, so it is genuinely difficult and tedious to label every point in a cloud. For 3D shape segmentation you need on the order of 1K to 10K labelled points, and it is even worse for indoor scene segmentation, where it grows drastically to millions of points per scene. That is quite hideous to do, and this is why we proposed, in 2020, the first work on weak supervision for semantic point cloud labelling, in contrast to full supervision with 100 percent of the points labelled. To side-track a little on why we decided to do this: besides the fact that it is genuinely very tedious to label large amounts of training data, it simply makes more sense to reduce the labelling requirement so that you can also make use of unlabelled data, because there is a lot of unlabelled data in the world and it is easy to collect. Nowadays people are also looking at synthetic data, synthetic-to-real with its domain shift. So instead of fully labelled training data we only have a few labelled points, and we discovered that, to the extreme, even one labelled point per class is sufficient in our framework. The framework consists of four components. First there is the feature extraction component: we take the point cloud, augment it (just by random flipping, rotation, and so on), and pass both versions through two copies of the same encoder network, which gives us features for the point cloud and for its augmented copy. Second, we have the incomplete supervision branch: because we are weakly supervised we have incomplete, very sparsely labelled data, so we simply add a mask, where points with labels get a mask of one and points without labels get a mask of zero, and we only apply the supervised loss on the parts that have labels. This by itself is not sufficient, so we also have a consistency branch between the two copies: in today's words it resembles contrastive learning, although not quite to that extent, because contrastive learning also divides by all the negatives to normalize and push them apart; here we just use a naive L2 divergence between the two outputs (you could also use a KL divergence or some other divergence measure).
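As a small sketch of these first two branches, with hypothetical function and variable names of my own (the paper's exact losses may differ in detail):

import torch
import torch.nn.functional as F

def weak_supervision_losses(logits, logits_aug, labels, mask):
    # logits, logits_aug: (N, K) predictions for the point cloud and its
    # augmented copy; labels: (N,) class ids (arbitrary where unlabelled);
    # mask: (N,) with 1 on the few labelled points and 0 elsewhere.
    ce = F.cross_entropy(logits, labels, reduction='none')         # per-point cross entropy
    incomplete_loss = (ce * mask).sum() / mask.sum().clamp(min=1)  # supervise only labelled points
    consistency_loss = F.mse_loss(logits, logits_aug)              # naive L2 agreement between branches
    return incomplete_loss, consistency_loss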
We found this actually works pretty well, even though it is such a simple choice. Then we also have a third branch, the inexact supervision branch, which is inspired by multiple-instance learning; it is essentially a bag-of-labels idea. Because the labels are very sparse, you can propagate them to the neighbours: you group points together and supervise the group by saying that if a whole bunch of points are similar in their features, they should take the same label. That is what we call the multiple-instance labelling here. The fourth component, which is probably the most important one, is a form of propagation. Semantic segmentation is essentially a classification task, and borrowing from semi-supervised learning we introduce a smoothness loss. The term looks like this: we form weights w_ij over the edges of a kNN graph. Since it is a point cloud with a lot of points, for every point we take its k nearest neighbours and add edges to them, so (i, j) ranges over every edge of the kNN graph. The edges are chosen by spatial proximity (and colour smoothness): if two points are spatially close, k-nearest-neighbour close, we add an edge between them. You could of course use a fully connected graph, but that is expensive and the computational complexity would blow up. The weights make it a weighted graph: between the features f(x_i) and f(x_j), the outputs extracted by the encoder, we minimize this smoothness loss, so if two points are very close spatially, their features should ideally be the same. What we are really saying is that neighbouring points are likely to take the same semantic label; they are neighbours not by chance but because they are most likely semantically related. We also introduce colour similarity, so if two points have similar colour and a close spatial relation, the weight w is close to one. If you look at the paper in detail, we compute one such weight for space and another for colour, and we take the average of the two, the sum divided by two; we can do that because the exponential squashes each of them to between zero and one. We then plug this weight into the loss. In the paper we also show that you can evaluate this further, and it turns out to be equivalent to label propagation in semi-supervised learning.
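Here is a rough sketch of such a smoothness term over a kNN graph (a simplified illustration; the exact weighting in the paper differs in its details):

import torch

def smoothness_loss(feats, xyz, rgb, knn_idx, sigma_x=1.0, sigma_c=1.0):
    # feats: (N, D) per-point features; xyz: (N, 3); rgb: (N, 3);
    # knn_idx: (N, k) indices of each point's k nearest neighbours.
    i = torch.arange(xyz.shape[0]).unsqueeze(1).expand_as(knn_idx)      # (N, k) source indices
    j = knn_idx                                                         # (N, k) neighbour indices
    w_x = torch.exp(-((xyz[i] - xyz[j]) ** 2).sum(-1) / sigma_x ** 2)   # spatial affinity
    w_c = torch.exp(-((rgb[i] - rgb[j]) ** 2).sum(-1) / sigma_c ** 2)   # colour affinity
    w = 0.5 * (w_x + w_c)                                               # average of the two weights
    # Neighbouring points that are close in space and colour are pushed to
    # have similar features, and hence similar labels.
    return (w * ((feats[i] - feats[j]) ** 2).sum(-1)).mean()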
In semi-supervised learning, suppose I form a graph where the edges tell me that two points are either spatially or semantically related, and I have a label on some node but no labels on the other points. Label propagation then means I can start from any unlabelled point and take random walks: the weights W can be turned into a Markov matrix, and I do random walks along the edges, walking like a drunken man according to that Markov matrix; I start at an unlabelled point, and when I hit the first labelled node I take that label. That is label propagation, and it ends up being an eigenvalue, eigenvector problem; conceptually it is the same thing as the PageRank algorithm. It turns out that this smoothness function can be evaluated into that form: you can form a Laplacian matrix and do the random walk on it. The L0 there simply means that we only count the entries that are one, because the weights W are forced to become zero or one, and the L0 norm counts the number of non-zero entries. With this smoothness constraint you propagate the labels from the labelled data to the unlabelled points, and therefore, in the end, all four branches work together to turn this into weakly supervised 3D semantic segmentation. Here are some results: you can see that with 10 percent of the points labelled we are almost equivalent to fully supervised learning, and in some cases even better. The reason is largely this propagation, and also the branch that does something similar to contrastive learning, which is essentially a self-training branch. Of course you have to apply a three-year discount: that was in 2020, and now contrastive learning is much more powerful than this; at that time we were still quite unaware of it, and I do not know exactly when contrastive learning was introduced, probably also around that time. In today's terms you probably would not do it this way anymore; you would replace this branch with contrastive learning, which is much more powerful because you have the positive pair and then all the negatives to push apart. Anyway, this is quite effective, and here are some qualitative results: even with one-point supervision we are almost as good as full supervision, and in fact, if you look at the comparison, we might even be better than full supervision with just one labelled point per class, which shows that the self-training is really excellent. I think there are now also many follow-up works that benchmark against us; we also left some room for others when we did this, we did not push it to the maximum. No, I am just kidding; that was the best we could do at that time, that is the reality. And I think it has now gotten a lot better with all the more sophisticated techniques.
That was scene-level semantic segmentation; here is part-based semantic segmentation, and you can see that even with one labelled point the results look visually quite good compared to the ground truth. Any questions on segmentation? If not, I will move on to the next task, which is point cloud registration. Point cloud registration is another interesting task for 3D point clouds. The setup is: given a pair of point clouds, each defined in its own respective local frame, find the relative transformation, the rotation and translation, that registers one to the other. If I find the registrations among all of these pieces, I can put the registered pieces together to form, for example, the full scan of a room. An extension of this problem is rotation averaging, as we mentioned in the global structure-from-motion case, except that here there is no translation scale ambiguity because the point clouds come with absolute scale. There are robust ways to do that where you can enforce low-rank constraints, since the stacked rotations have rank three, but what we will focus on in today's lecture is just a pair of point clouds. Here are some applications, for example localization for autonomous vehicles. There are actually two steps to localization. First, given a very big map, say the whole of Singapore, and a local query, you narrow down the search before estimating the precise pose. I already talked about that: it is place recognition, treated as retrieval. But the retrieval only gives a very coarse localization; you know you are roughly there, but not the exact pose with respect to the point cloud. Point cloud registration is the second step that gives the precise location. Contrast this with image-based retrieval and pose estimation: we know the PnP algorithm, we learned it already in the lecture, and we saw the vocabulary tree for retrieval. Say I give you a whole database of NUS, a map where every image of NUS is geotagged; there might be millions of images in that database. I am standing in LT15 right now, I take a picture, and I want to find my exact location, assuming structure from motion has already been run on the whole database. The first step is obviously not to run PnP directly, because to get the 2D-3D correspondences you would have to search over the whole database; with millions of images you could have hundreds of millions of keypoints, so that would be basically intractable. The obvious choice is to first do place recognition, a retrieval, with the vocabulary tree that we learned, or with a network that does it with deep learning, and only after narrowing down to a coarse location do you do the 2D-3D correspondence search and PnP. It is exactly the same idea here: you do the point cloud retrieval first, and then you do the
So localization for autonomous vehicles can use this, and you can also use it for change detection. If you traverse the same location at two different times, you get what are essentially two separate copies of the map, but what you really want is to align them. By the way, this is one-north; we actually collected some datasets from there. You can see that if you traverse the same area twice over a very long period, some parts of the scene have changed, which makes it hard to register the scans together; what you want is to do the registration first so that the scans are aligned, and once they are registered you can compare the two and detect all the changes in the scene. We actually wrote a paper on change detection, and registration is the first step. Now, when we talk about point cloud registration we cannot avoid mentioning the iterative closest point algorithm, ICP for short, which is the classic of the classics in point cloud registration. The idea is that, given a source point cloud and a reference point cloud, we want to find the rotation and translation between them; this is a rigid registration, meaning the point cloud does not deform internally, and it is an iterative kind of registration. Starting from some initial configuration, say the red points here and the blue points there, for every point in the reference we find the closest point in the source; this can be done efficiently with a k-d tree, which you use to search for the nearest neighbour. In each iteration, once you have the closest-neighbour pairs, you minimize the error between them: you rotate the source points, add the translation, subtract the corresponding reference points, and minimize the total residual, in other words you transform the source into the reference frame and minimize the alignment error. Solving for the rotation and translation in one step is just an SVD problem; we talked about this in the PnP lecture, where we did the same thing after recovering the depths. But that is only one step, because the correspondences are just nearest neighbours, so you iterate: move a little, recompute the nearest neighbours, minimize again, and repeat until the error no longer changes or stabilizes, at which point you have reached a local minimum and the clouds look aligned. The main limitation is that ICP is susceptible to local minima, which means it has a very narrow basin of convergence.
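To make this loop concrete, here is a minimal point-to-point ICP sketch in Python, assuming numpy and scipy are available; the function names and stopping rule are my own illustrative choices, not the lecture's exact implementation, and this variant queries the nearest reference neighbour for each transformed source point (either direction of the search is a valid flavour of ICP).

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_fit(A, B):
    """Least-squares R, t aligning Nx3 points A onto paired points B, via SVD."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, ref, max_iters=50, tol=1e-6):
    """Iterate: find nearest neighbours with a k-d tree, refit R, t by SVD, repeat."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(ref)
    prev_err = np.inf
    for _ in range(max_iters):
        moved = src @ R.T + t
        dist, idx = tree.query(moved)            # closest reference point for every source point
        R, t = best_rigid_fit(src, ref[idx])     # one alignment step on the current pairs
        err = dist.mean()
        if abs(prev_err - err) < tol:            # stop when the error stabilises (local minimum)
            break
        prev_err = err
    return R, t
```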
Being susceptible to local minima means, if you think of one dimension, that this is my global minimum here and I have a local minimum there; if I start here, I end up there, because ICP is a form of descent and it gets stuck in the local minimum. So the initialization needs to be close to the correct solution, in other words the basin of convergence is very narrow, which is not good, and that is where deep learning, or learning from data, can potentially help. What we did is propose RPM-Net, the robust point matching network; by the way, another problem with ICP is that a single outlier can throw everything off. RPM-Net is based on an older work called RPM, robust point matching. We have a source and a reference point cloud; we first do some form of feature extraction, so for the source we get features F_x and for the reference we get F_y. The next step is to compute a match matrix, which can be seen as an assignment matrix; in the hard case the assignment matrix is just a permutation matrix. I believe most of you know the job assignment problem: you have n workers and n jobs, and each worker can take exactly one job, so each row and each column must sum to one and every entry is either zero or one; that is a permutation matrix, the hard assignment case. Soft assignment means the rows and columns still need to sum to one, but each entry can be any non-negative number; the only constraint is the sums. This is what we call a doubly stochastic matrix, in contrast to a permutation matrix; a doubly stochastic matrix is related to a Markov (stochastic) matrix, which sums to one and can be left stochastic or right stochastic, but here it has to be doubly stochastic. What is interesting is that a hard assignment is not differentiable, but fortunately there is something called the Sinkhorn layer, or Sinkhorn normalization of a matrix. Briefly, the Sinkhorn result says that if I start with a matrix with positive entries and keep normalizing the rows and columns, iteratively, it eventually settles down to a doubly stochastic matrix, and this process is differentiable, so the Sinkhorn layer can serve as my matching matrix, a relaxed form of the permutation matrix. There are two main components. First we use the PPF features as local features: we use farthest point sampling to select the sample points, take each sampled point as a centre and compute the offsets to its neighbours, and then compute the angles between the normals and each of these offsets; this forms the point pair feature (PPF), which comes from the paper shown here.
This is good as a local feature because it takes the local geometric structure into account, so it captures the essence of the local shape. Once we have the features, the next thing is to compute the match matrix. As I mentioned, what we want is a doubly stochastic matrix, meaning the rows and columns each sum to one. You can start by computing the feature distances and putting them into a Sinkhorn layer; since distances are always positive, the matrix eventually evolves into one whose rows and columns sum to one. A good assignment means there is no conflict between rows and columns: it is a doubly stochastic matrix such that if you take the maximum entry in each row, set it to one and all the rest to zero, a permutation matrix emerges; that is our objective. This can be done with a Sinkhorn layer, where the score inside can be the exponential of the negative distance, so it is always positive, and there is a weight that we set as a learnable parameter in the deep network so that, after evolving through the Sinkhorn layer, it gives the desired doubly stochastic matching matrix. However, some points might not have correspondences, because the two sides do not always have equal numbers of points, and there might also be outliers. The idea is to add a dustbin column and a dustbin row, also called a slack column and slack row, so that anything that cannot be assigned within the main part of the matrix can be dumped into the slack entries; the sum-to-one constraint can still be enforced by the Sinkhorn layer, but the main portion no longer has to sum to one on its own, so outliers can be pushed into the slack column or row. There is a degeneracy where everything gets dumped into the dustbin, so there is a threshold to prevent that, and we do not set that threshold as a hyperparameter either: we put it into the network, define a loss on it, and let it be learned. This is how we make the method robust, allowing outliers while still producing an assignment. Here is how it looks, for example on a toilet bowl and a table: compared with ICP and the other baselines our error is much lower. We are not the first to do this, RPM here is the original, but we beat it by quite a fair margin because our method is data-driven and actually learns the features. These are deep learning methods; this is also not our latest work in registration, there is another one I did not include, which was James's work shortly before he defended his thesis.
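As a rough illustration of the Sinkhorn idea described above, here is a small numpy sketch; the function name and the toy score matrix are mine, and real RPM-Net additionally appends the slack row and column and learns the temperature, which is only noted in the comments here.

```python
import numpy as np

def sinkhorn(score, n_iters=20):
    """Alternately normalise rows and columns of a matrix with positive entries.
    For a square matrix this converges towards a doubly stochastic matrix, and
    every step is differentiable, so it can sit inside a deep network as a layer."""
    P = np.exp(score)                          # e.g. score = -distance, so entries are positive
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)   # every row sums to one
        P = P / P.sum(axis=0, keepdims=True)   # every column sums to one
    return P

# toy usage: feature distances between 4 source and 4 reference points
dist = np.random.rand(4, 4)
M = sinkhorn(-dist)
print(M.sum(axis=0), M.sum(axis=1))            # both close to vectors of ones
# RPM-Net appends an extra slack ("dustbin") row and column before normalisation so
# that outliers with no true match can be absorbed there instead of being forced into M.
```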
It was published at CVPR last year and is called RegTR; as you might guess, there is a Transformer inside it. It is actually a very simple idea: we thought that the Transformer, with its self-attention and cross-attention, captures both the internal structure and the relation across clouds. You have a source and a reference point cloud; the self-attention captures, for every point, its relation to every other point within its own cloud, and because we are doing registration the cross-attention between the two clouds matters a lot. Instead of doing explicit matching as in RPM-Net, where we have a match matrix acting as an assignment matrix, here, for the source point cloud, where each point comes with its own reference frame, we directly predict how the source point cloud transforms into the reference and how the reference transforms into the source, using just the Transformer, and it works very well. That was the last work I saw from him before he graduated, published last year at CVPR. So these are the works we have done in registration. You can also see that the predicted weights on the reference tell you something: when we visualize the weights on the reference and the source, the parts where the two clouds overlap get the highest weights, which should be true, because when f_x is assigned to f_y as a match there must be an overlap, so it is no surprise that the weights show up at the overlapping regions; it is quite intuitive. Any questions on registration? This is another big task in 3D point clouds. The last thing I want to talk about is also a form of registration, and you might find this slide familiar: it is taken from our lecture slides on PnP. The idea of perspective-n-point is that, given 2D-3D correspondences, where the 3D points are defined with respect to a world frame and the image is defined with respect to a camera frame, we want to find the rotation and translation that registers the camera pose with respect to the world frame, and this involves two different modalities. I recall that before one of the lectures someone asked me: when you have the 3D point cloud, how do you actually establish, or verify, that a 2D-3D pair is really a correspondence? Usually, in structure from motion, we store descriptors such as SIFT with the 3D points, then extract the same kind of keypoints in the image and match the descriptors; but this is actually not that robust. If there is a drastic change in appearance it becomes difficult, and this comes back to the same problem as the place recognition I mentioned.
Day-night place recognition is difficult, and this 2D-3D keypoint correspondence search is even more difficult. Say my map is built in the daytime: I go out at 3 p.m. in the afternoon to map the place, do structure from motion, and store the keypoints with their descriptors from the afternoon. At night I drive the same route again, or in heavy rain, no rain, sunny or cloudy conditions, and the 2D keypoint features I extract might no longer match because the appearance has changed. That is the key challenge, and it is very difficult to learn this, again because of data limitations: you do not know how to label it, and at the same time you do not have enough data to capture the variation. So this is difficult. This is the PnP problem, and it triggered us to think about something I actually wrote in the future-work section of my PhD thesis ten years ago; it took quite a while before we came up with this. The question is whether we can establish the 2D-3D correspondences directly against a point cloud built from a laser sensor, where inherently there are no SIFT-style descriptors any more because the point cloud is no longer built from images. As I motivated earlier, the digital models constructed from lidar are much more accurate, so why not use such a map as the reference map? But then we are living in two different modalities. The advantage of lidar is that it is very accurate; the disadvantage is high maintenance, because it has rotating parts, and anything rotating is hard to maintain and breaks down easily. Contrast this with a camera: no matter how I drop this camera over here, it still works, and it is also much cheaper. A Velodyne 64 lidar costs about 70,000 USD; a camera like this costs maybe 100 Singapore dollars, probably less than 100 USD. So it makes more sense to use lidar for acquiring the large-scale, accurate digital model, but you only need the lidar once to build that very accurate reference map; for deployment, say a whole fleet of robo-taxis or self-driving buses, you do not mount every vehicle with a 60,000 or 70,000 USD lidar that is so hard to maintain, the obvious choice is to mount cameras on them. Then comes the cross-modality challenge: how do we establish the 2D-3D correspondences? If we cannot do this, the next step that we learned, PnP, will fail badly, because wrong correspondences mean nothing can be computed any more. We actually did the first work on this.
At that time we used a triplet loss to train a network. The dataset we used has lidar and cameras calibrated and rigidly mounted on the car, so from the calibration we know how to project the 3D lidar points onto the image. We extract image keypoints and define the corresponding points on the 3D point cloud: the 3D side uses a 3D keypoint detector on the map, the 2D side uses an image keypoint detector, and we use a triplet loss to learn a descriptor that works across the two modalities. The problem is that, although this initial work was the first to show it could be done, the matching performance is quite low, because we are inherently living in two modalities. So we changed the idea a little. Instead of doing 2D-3D descriptor matching across modalities, why force the two together? For each camera we have the camera's frustum, so we know which points are in the view of the camera, that is, which points project into the image. We twist the problem into this: given a set of points in the point cloud, we do a binary one/zero labelling, where one means the point is in the view of the camera and zero means it is not; as simple as that. It becomes a classification problem in the first step: I give the point cloud and the camera image at the same time, and we design a network that fuses the two modalities at the feature level but outputs, for every point, a label of one or zero, inside or outside the view of the camera. Once we have this, we design an optimization, essentially a Gauss-Newton solver; but in contrast to bundle adjustment, which minimizes the reprojection error, we formulate it as follows: given a camera, you know which points should be inside its view, but you do not know the pose of the camera with respect to the point cloud, so we optimize the pose such that the points labelled one reproject inside the image as much as possible, and that becomes our cost function. These are the two steps. Of course it is not so trivial; we needed the exponential and logarithmic maps, which we have covered in class. The point classification itself is simple: for every point the output is either zero or one, inside or outside. The network has a point cloud encoder and an image encoder, and then we do attention-based fusion; nowadays, if you wanted to do this, I guess you would use a Transformer, so you would have a point cloud encoder and, for the image, something like a ViT.
ViT works on 16-by-16 image patches, for example, and you can do something similar for the point cloud: cluster it, for instance with farthest point sampling, into a set of clusters, and dump the two different sets of tokens into a Transformer, which then outputs whatever features you want; in today's context that is probably what we should be doing, but this work was published in 2019. We then put the features into a decoder, which simply performs a binary classification, one or zero, for every point. Once we know the one/zero label for every point in the cloud, we design an optimization algorithm to find the pose; this is the cost function for the optimal pose. There are two terms: for a candidate pose G, if a point reprojects inside the image we score it as one, and if it is outside we score it as zero, then subtract 0.5 from that indicator; the other term takes the label predicted by the network for that point and also subtracts 0.5. If either term is wrong its factor becomes negative, so the product of the two is positive only when both agree; therefore we maximize, over the pose G, the sum of these products. We turn this into an unconstrained optimization; the derivation is somewhat involved, so I omit the details, and if you are interested you can look at the paper, which you should be able to follow now. You can solve it with a Gauss-Newton solver. Here is an illustration: the points labelled as in-view are shown in green, and the network predicts these labels quite well; then, as Gauss-Newton optimizes, we show three checkpoints at 0, 40 and 80 iterations, and you can see the solution gradually moving towards the correct pose, where all the labelled points fall inside the field of view of the camera, at which point we know we have found the correct pose. We cleverly use deep learning only to give us the one/zero labels, while everything else is still done with proper optimization and mathematics. This is the way I think we should look at these problems: it does not make sense to dump everything into a deep network and treat it as a black box; even in RPM-Net, you saw that the Sinkhorn layer is the part that is really doing the assignment matrix. Any questions so far? If not, I am near the end. Since we were the first method to do this, here are some comparisons; it is not perfect and there is still a lot of room for improvement, but nonetheless this is a cool idea that we unlocked.
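To make that cost function concrete, here is a hedged numpy sketch of the scoring term only; the names (inside_view, pose_score) and the simple pinhole model are my own simplifications, and the actual method optimizes this over SE(3) with a Gauss-Newton solver using exponential and logarithmic maps, which is omitted here.

```python
import numpy as np

def inside_view(pts, R, t, K, width, height):
    """1 if a point projects in front of the camera and inside the image, else 0."""
    cam = pts @ R.T + t                                  # world -> camera frame
    z = cam[:, 2]
    uv = cam @ K.T                                       # homogeneous pixel coordinates
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-9, None)
    ok = (z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < width) \
         & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    return ok.astype(float)

def pose_score(pts, labels, R, t, K, width, height):
    """Large when points the network labelled 'in view' really project inside the
    image under pose (R, t), and the 'out of view' ones fall outside."""
    ind = inside_view(pts, R, t, K, width, height)
    return np.sum((ind - 0.5) * (labels - 0.5))
```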
Here are some results after the optimization has converged. Any questions? If there are no questions, that is it for today's lecture. In summary, I talked about implicit and explicit surfaces, which I will say more about next week; I talked about the representations of point clouds and how deep learning can be applied to each representation; and finally I talked about six different point cloud tasks, although it is not limited to these six, as there are other tasks such as point cloud completion and scene flow that I did not cover. That is it for today; I will see everyone again next week for the last lecture, where I will talk about neural field representations. Thank you. |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_7_Part_1_The_fundamental_and_essential_matrices.txt | hello everyone, welcome to the lecture on 3D computer vision. Today we are going to look at the fundamental and essential matrices, and hopefully by the end of today's lecture you will be able to describe the epipolar geometry between two views, estimate the fundamental or essential matrix from eight point correspondences, decompose the fundamental matrix into the camera matrices of the two views, recover the rotation and translation between the two views from the essential matrix, and finally recover 3D structure with linear triangulation and perform stratified reconstruction starting from an uncalibrated (projective) reconstruction. Of course I did not invent any of today's material; most of it comes from the textbook by Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, in particular chapters 9, 10 and 11, and some of it from the textbook by Yi Ma and colleagues, An Invitation to 3-D Vision, in particular chapter 6. I hope every one of you will take a look at these chapters after today's lecture. In two-view geometry we are given two images, which we denote as I and I'. Suppose we know the image coordinate x, in the first image, of a common 3D point seen by both I and I' concurrently. We know from the last two lectures that this image point back-projects to a light ray, and the 3D point can lie anywhere along that ray. If we project this 3D ray onto the second image, we see geometrically that it projects onto a line in the second image, which we define as the epipolar line and denote l'. Since the 3D point can lie anywhere on the ray, the corresponding point of the first image coordinate can lie anywhere on this epipolar line. We further define the epipolar plane: it is simply the plane spanned by the back-projected light ray and the line joining the two camera centres, which we denote c and c'; we call this joining line the baseline, and it is obvious from the diagram that the back-projected ray and the baseline together span the epipolar plane. We also define the epipoles: an epipole is the intersection of the baseline with an image plane. In two-view geometry there are two epipoles, one in each camera view, denoted e and e' respectively. Equivalently, each epipole is the projection of one camera centre onto the other view: the epipole e in the first view is the projection of c', the second camera centre, onto the first image, and vice versa, e' is the projection of the first camera centre onto the second image. Interestingly, the epipoles can also be interpreted as vanishing points.
Each epipole is the vanishing point of the baseline direction with respect to the corresponding camera centre: in the first view, e is the vanishing point of the baseline seen from the first camera centre, and similarly e' is the vanishing point of the baseline direction with respect to the second camera centre. We saw earlier the definition of the epipolar plane: it is the plane defined by the back-projected light ray and the baseline. In the example we saw, a single image correspondence corresponds to a back-projected ray that projects into the second camera view as an epipolar line, and this relation holds for any point we observe in the image; every pair of corresponding points defines an epipolar plane, so we get a one-parameter family of epipolar planes. Interestingly, this family is pinned down by the epipoles, or equivalently by the baseline: because the camera centres c and c' are uniquely defined, two-view geometry assumes that c and c' stay fixed, so there is only one baseline, which cuts the two image planes at the epipoles e and e'. Since e and e' are therefore fixed, all the epipolar planes arising from the possible point correspondences must contain the same baseline, and hence this family of planes is parameterized by a single parameter that rotates about the baseline. We also looked at the definition of the epipolar line: it is the projection, onto the second view, of the light ray back-projected from an image correspondence in the first view. That definition was for one particular pair of corresponding points, but there can be many pairs, each defining a ray that projects into the other view, so there is also a whole family of epipolar lines in the two views. What is interesting is that this family of epipolar lines is related to the family of epipolar planes: a pair of corresponding epipolar lines, denoted l and l', is simply the intersection of an epipolar plane with the two image planes. This family of lines is also parameterized by one parameter: since the camera centres stay fixed, the epipoles e and e' stay fixed, and every epipolar line must contain its epipole, so the family of epipolar lines is a pencil of lines through the respective epipole, described by one parameter. Now that we have defined the terminology of epipolar geometry, and seen that a point in one view maps to an epipolar line in the second view, we are going to find the projective mapping that maps a point in one view to the epipolar line in the other view.
We define the fundamental matrix, denoted F, as this linear (projective) mapping that takes a point in one view to the corresponding epipolar line in the second view. We will first show the existence of the fundamental matrix and formalize the transfer of a point to a line between two views. The map from a point in one view to the epipolar line in the second view can be decomposed into two steps. In the first step, given a point x in the first view, we define a projective transformation that maps it to a point in the second view; in fact we have seen this transfer before, it is simply a homography that maps a point to a point. In the second step, having transferred x in the first view to a point x' in the second view via this homography, and knowing that the epipolar line must contain the epipole e', which is the projection of the first camera centre c onto the second view, we can define the epipolar line l' as the line through e' and x'. For the first step, suppose we know that the 3D point, denoted capital X, lies on a 3D plane denoted pi. This tells us that the image point x in the first image can be transferred to the second image as x' via the plane-induced homography, and we write this relation as x' = H_pi x; we saw this definition in the earlier lectures. Once this is known, the corresponding point x' in the second image is related to the first image point as H_pi x. The next thing we need is the epipole e' in the second view: the epipolar line must contain the epipole as well as the transferred point, so we can define it as their cross product, l' = e' x x'. Substituting the transferred point expressed by the homography, we get l' = e' x (H_pi x). We can collectively write the cross product with e' followed by the homography as a single matrix F. Note that a cross product can be written as a skew-symmetric matrix acting on a vector: if we take two 3-vectors a and b, then a x b can be rewritten as a matrix-vector product.
Specifically, we can rewrite the first vector a as a 3-by-3 skew-symmetric matrix [a]_x, and I leave it to you to verify that multiplying this skew-symmetric matrix of a with the vector b is indeed equivalent to the cross product a x b. So the skew-symmetric matrix of the epipole is a 3-by-3 matrix, and the homography responsible for the projective transfer of the point from the first view to the second view is also a 3-by-3 matrix; as a result we get a linear mapping from x, the image point in the first image, to the epipolar line in the second view via the 3-by-3 matrix F = [e']_x H_pi, which we call the fundamental matrix, and hence the relation l' = F x: a point in the first view is transferred to a line in the second view by the fundamental matrix. We can view this relation as analogous to the homography we saw in the first few lectures: with a homography, a pair of corresponding 2D points x and x' is related by x' = H x, a transfer from a point to a point; here a point x is transferred to a line, l' = F x. The difference is that for a homography the transfer must strictly follow a plane in the scene, whereas for the fundamental matrix, although our two-step derivation used a homography, that homography can be induced by any plane: it is a virtual plane introduced only to help derive the relation, and the 3D point does not have to lie on a physical plane in the scene. In the homography case it is a strict requirement that the 3D point lies on the plane for x to be transferred to x'; here the plane is just a device for the derivation. Furthermore, the skew-symmetric matrix of the second epipole has rank 2.
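A quick numerical sanity check of these identities, assuming numpy; e_prime and H below are just random stand-ins for the second epipole and the plane-induced homography, not values from the lecture.

```python
import numpy as np

def skew(a):
    """3x3 skew-symmetric matrix [a]_x such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0.0]])

a, b = np.random.rand(3), np.random.rand(3)
assert np.allclose(skew(a) @ b, np.cross(a, b))   # the cross-product identity

e_prime = np.random.rand(3)              # epipole in the second view (homogeneous)
H = np.random.rand(3, 3)                 # some full-rank (virtual-plane) homography
F = skew(e_prime) @ H                    # fundamental matrix, rank 2 by construction
x = np.array([10.0, 20.0, 1.0])          # a point in the first image
l_prime = F @ x                          # its epipolar line in the second image
print(np.linalg.matrix_rank(F), e_prime @ l_prime)   # 2, and ~0 since e' lies on l'
```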
This is obvious because the rows of the skew-symmetric matrix are linearly dependent (the third row is a combination of the first two, all being built from the entries a1, a2, a3), so the matrix cannot be full rank and its rank is 2. We also saw in previous lectures that for the mapping to be valid the homography H_pi must be full rank, that is rank 3; the product of a rank-2 matrix and a rank-3 matrix is a rank-2 matrix, which means the fundamental matrix has rank 2. Geometrically, the fundamental matrix represents a mapping from the two-dimensional projective plane of the first image to the pencil of epipolar lines in the second image: we map a point x, which lives in the 2D projective space P2 of the first image plane, to an epipolar line, and the corresponding point x' is constrained to lie anywhere on that line, which is a one-dimensional projective space, so the fundamental matrix maps a 2D projective space onto a 1D projective space. We also saw that the plane used to define the transfer H_pi is a virtual plane and is not required for the fundamental matrix to exist; indeed, the fundamental matrix can also be obtained by an algebraic derivation. Let us look at this closely. Suppose the two images I and I' have camera projection matrices P and P'. For the image point x in the first view, the back-projected ray spanned by the camera centre and x can be written as X(lambda) = P+ x + lambda C, where P+ is the pseudo-inverse of P, so that P+ x is one point on the ray, and C is the camera centre, which every back-projected ray must pass through; the ray is the span of these two points, parameterized by a scalar lambda. Now let us consider two particular points on this ray: the first, at lambda = 0, is simply P+ x, one point on the ray.
The second point is obtained in the limit as lambda tends to infinity, which gives the camera centre C. Now project these two points into the second view, whose projection matrix is P': the camera centre C projects to the point P'C, and the point P+ x projects to P'P+ x, so we get two image points in the second view, P'C and P'P+ x. From the earlier definition, the epipolar line must contain the reprojected point as well as the epipole, and the point P'C is exactly the epipole, because by definition the epipole is the projection of one camera centre into the other view. Hence the epipolar line in the second image is l' = (P'C) x (P'P+ x). Writing e' = P'C for the epipole in the second image and rewriting the cross product as a skew-symmetric matrix, we get l' = [e']_x P'P+ x, and grouping the matrices together we call F = [e']_x P'P+ the fundamental matrix, so that l' = F x. We can check the dimensions: F should be 3-by-3; the skew-symmetric matrix of the epipole is 3-by-3, P' is 3-by-4, and P+, being the pseudo-inverse, is 4-by-3, so the product of the three is indeed a 3-by-3 matrix. This algebraically derived form of the fundamental matrix matches the one we derived geometrically, with the homography simply given by H = P'P+.
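The algebraic formula F = [P'C]_x P' P+ can be turned into a few lines of numpy; the sketch below is mine and uses a toy canonical camera pair, with a check that the projection of a 3D point into the second view lies on the epipolar line F x.

```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0.0]])

def fundamental_from_cameras(P, P_prime):
    C = np.linalg.svd(P)[2][-1]                 # camera centre: right null vector of P (P C = 0)
    e_prime = P_prime @ C                       # epipole in the second view
    return skew(e_prime) @ P_prime @ np.linalg.pinv(P)

# toy example: canonical first camera, second camera translated along x
P = np.hstack([np.eye(3), np.zeros((3, 1))])
P_prime = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])
F = fundamental_from_cameras(P, P_prime)

X = np.array([0.3, -0.2, 4.0, 1.0])             # any 3D point (homogeneous)
x, x_prime = P @ X, P_prime @ X                 # its projections into the two views
print(x_prime @ (F @ x))                        # ~0: x' lies on the epipolar line F x
```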
From this we can also see that the homography we used earlier to help transfer a point x in the first image to a point x' in the second image is just a virtual homography: the 3D point X need not lie on a real plane for this homography to exist, because the homography is simply a function of the camera matrices P and P' of the two views. One remark on the algebraic derivation: it breaks down when the two camera centres are the same, that is, it degenerates and cannot be used to derive the fundamental matrix in that case. The proof is simple: the epipole in the second view is e' = P'C, the projection of the first camera centre into the second view; if C = C', then the projection of C into the second view is the zero vector, and plugging this into the expression for the fundamental matrix, whose skew-symmetric factor is built from e' = P'C, the whole expression vanishes. This proves the algebraic derivation breaks down when the two camera centres coincide. Let us look at one example of the algebraic derivation. Suppose we have the two camera matrices of a calibrated stereo rig; a stereo rig simply means two cameras rigidly attached to a common rigid body, and calibrated means we know the intrinsics K of the first camera and K' of the second. With this setup we can assign the first camera a canonical camera matrix P = K[I | 0], where [I | 0] is a 3-by-4 matrix; this simply means the first camera's coordinate frame is aligned with the world frame. The second view is rotated and translated by R and t, so its camera matrix is P' = K'[R | t], defined with respect to the first camera's frame. We can then find the pseudo-inverse of the first camera matrix: the pseudo-inverse satisfies P P+ = I, and solving this with the known P gives the 4-by-3 matrix P+ = [K^-1; 0^T], that is K^-1 stacked on a zero row. As for the camera centre C, recall that we defined the first camera projection matrix in canonical form.
This canonical form simply means that the camera centre of the first camera is at the origin, since the world frame is aligned with the first camera's coordinate frame. Plugging P and P' into the equation for the fundamental matrix that we saw earlier and simplifying, we find that the fundamental matrix is an expression in the camera intrinsics K and K' and the camera extrinsics R and t between the two views. We can further use the two camera matrices to express the epipoles in the two views. We have P = K[I | 0] and P' = K'[R | t], which can be rewritten as K'[R | -R C~'], where C~' is the second camera centre; this tells us that -R C~' = t, so the second camera centre is C~' = -R^T t. The epipole e in the first view is the projection of the second camera centre into the first view, e = P (C~'; 1) = P (-R^T t; 1), and plugging in the first camera matrix gives e = K R^T t (up to scale). Similarly, the epipole e' in the second image is e' = P' (0; 1), the projection of the first camera centre (the origin) into the second view, and plugging in P' gives e' = K' t. Now, looking back at the expression for F, the factor K R^T t is exactly the first epipole, and K' t, the second epipole, also appears, so we can rewrite the fundamental matrix in terms of the epipoles, for example as F = [e']_x K' R K^-1. So far we have looked at the transfer of a point from one view to the other: a point x in the first image is transferred, via the fundamental matrix, to the epipolar line l' = F x in the second image, and vice versa. Now suppose we know that x and x' are corresponding points in the two views: if we know x in the first view, then x' must lie somewhere on the epipolar line in the second view.
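Here is a small numpy sketch of the calibrated-rig case, using the standard result that for P = K[I | 0] and P' = K'[R | t] the fundamental matrix is F = [e']_x K' R K^-1 with e' = K' t; the intrinsics and pose values are made up purely for illustration.

```python
import numpy as np

def skew(a):
    return np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0.0]])

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])
K_prime = K.copy()
theta = 0.1
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])      # small rotation about the y-axis
t = np.array([0.5, 0.0, 0.0])                           # baseline along x

e = K @ R.T @ t                                         # epipole in the first image (up to scale)
e_prime = K_prime @ t                                   # epipole in the second image
F = skew(e_prime) @ K_prime @ R @ np.linalg.inv(K)      # F = [e']_x K' R K^-1

X = np.array([0.2, -0.1, 5.0])                          # a 3D point in the first camera frame
x = K @ X                                               # projection into view 1 (homogeneous)
x_prime = K_prime @ (R @ X + t)                         # projection into view 2
cos_res = x_prime @ (F @ x) / (np.linalg.norm(x_prime) * np.linalg.norm(F @ x))
print(cos_res)                                          # ~0: x' lies on the epipolar line F x
```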
But now suppose we start from the point correspondences themselves: given two images, we observe x in one image and its corresponding point x' in the other, and we know the two are correspondences. We can ask whether there is a relation involving just these two points, and the answer is yes: we can derive a correspondence condition given by this famous equation of 3D computer vision, x'^T F x = 0, which must hold whenever x and x' are corresponding points in the two images. The proof is very simple: x' must lie on the epipolar line l' = F x in the second view, so there is an incidence relation between x' and l', which means their dot product is zero; substituting l' = F x gives x'^T F x = 0. This equation is significant because it means we can compute the fundamental matrix F purely from observed point correspondences between two views. This is analogous to the homography case, where we saw that four point correspondences suffice to compute the 3-by-3 homography that maps points in one view to corresponding points in the other. The fundamental matrix, unlike the homography, transfers a point in the first view to an epipolar line containing the corresponding point in the second view, and the important consequence of this equation is that we can compute the fundamental matrix directly from point correspondences, without any knowledge of the 3D scene or the camera poses. We will discuss later how many point correspondences are needed to compute the fundamental matrix from this correspondence equation. Now let us look at some properties of the fundamental matrix. We saw that it can be expressed in terms of the two camera matrices P and P', in that order: with the first image I having camera matrix P and the second image I' having camera matrix P', our derivation gave F = [P'C]_x P' P+. This fundamental matrix is therefore defined with respect to the ordering in which P is the first camera view and P' is the second.
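Since x'^T F x = 0 is linear in the entries of F, each correspondence gives one linear equation, which is the basis of the eight-point estimation mentioned at the start of the lecture; below is a sketch of the standard normalized eight-point recipe from Hartley and Zisserman, in numpy, with function names of my own.

```python
import numpy as np

def normalise(pts):
    """Translate to zero mean and scale so the mean distance from the origin is sqrt(2)."""
    mean = pts.mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
    T = np.array([[scale, 0, -scale * mean[0]],
                  [0, scale, -scale * mean[1]],
                  [0, 0, 1.0]])
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ homog.T).T, T

def eight_point(pts1, pts2):
    """Estimate F from N >= 8 corresponding pixel points (Nx2 arrays pts1, pts2)."""
    x1, T1 = normalise(pts1)
    x2, T2 = normalise(pts2)
    # each correspondence gives one row of A in A f = 0, from x2^T F x1 = 0
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)     # least-squares null vector of A
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt       # enforce rank 2 (det F = 0)
    F = T2.T @ F @ T1                             # undo the normalisation
    return F / np.linalg.norm(F)
```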
This fundamental matrix is defined based on the ordering in which P is the first camera view and P' is the second. Suppose we swap the order, so that the first image is now I' with camera matrix P' and the second image is I with camera matrix P; then F^T is the corresponding fundamental matrix. We can see this very clearly from the correspondence equation: the relation x'^T F x = 0 still holds if we transpose the whole expression, which gives x^T F^T x' = 0, and this is simply a swap of the order. Previously the first point in the product belonged to the second image, x', and the second point was its correspondence in the first image, x; after the transpose, the first point corresponds to the first image, x, and the second to the second image, x', and hence the corresponding fundamental matrix becomes F^T, as claimed earlier. Here are some properties of the epipolar lines. For a point x in the first image, the corresponding epipolar line is l' = F x; looking at it the other way around, l = F^T x' is the epipolar line in the first image corresponding to a point x' in the second image. What this means is that, with two views I and I', a point x in the first view is transferred to the second view as the line l' = F x, while a point x' in the second view is transferred to the first view as the line l = F^T x'. Again we can see the relation from the correspondence equation: in the first form, x'^T (F x) expresses the incidence relation between the point x' in the second image and the epipolar line l' = F x in the second view, and this must equal zero. The second form, which is simply the transpose of the first, expresses the same incidence relation but in the first view: x^T is the point in the first view and F^T x' is the epipolar line in the first view, and since the point in the first image must also lie on the epipolar line in the first image, the dot product of the point and the line is again zero, fulfilling the same equation. Now let us also look at some properties of the epipoles. For any point x other than the epipole in the first image, the epipolar line l' = F x transferred from x must contain the epipole e' in the second image. Because there is an incidence relation between the epipole and the epipolar line l', the dot product of the epipole with the epipolar line must always be zero.
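A minimal sketch of these properties, using an arbitrary matrix projected to rank 2 (as is done when estimating F from correspondences); the entries are assumed, illustrative values.

```python
import numpy as np

# An arbitrary 3x3 matrix (assumed values), projected to rank 2 by zeroing its
# smallest singular value, which gives a valid fundamental matrix with det F = 0.
A = np.array([[ 0.2, -0.7,  3.1],
              [ 0.9,  0.1, -5.2],
              [-2.8,  4.6,  1.0]])
U, S, Vt = np.linalg.svd(A)
F = U @ np.diag([S[0], S[1], 0.0]) @ Vt

# The epipoles are the null spaces of F: F e = 0 and e'^T F = 0.
e, ep = Vt[-1], U[:, -1]
print(np.allclose(F @ e, 0), np.allclose(ep @ F, 0))

# Every epipolar line l' = F x passes through the epipole e' (incidence e'^T l' = 0).
for _ in range(3):
    x = np.append(np.random.rand(2) * 500, 1.0)   # a random homogeneous image point
    l_prime = F @ x
    print(ep @ l_prime)                            # ~0 for every x
```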
We can rewrite this incidence relation, e'^T l' = 0, which says that the dot product of the epipole in the second view with the epipolar line in the second view is always zero, as e'^T F x = 0, since F x equals the epipolar line in the second view. We know that x is not the zero vector if it is a valid image point, so since this holds for all valid x, the product e'^T F must itself be zero in order for the relation to hold. Hence we can rewrite it as a null-space equation, e'^T F = 0, which means that e' is the left null space of F. Similarly we can derive F e = 0, which means that the epipole in the first image is the right null space of the fundamental matrix. We can also see that the fundamental matrix has seven degrees of freedom: F is a 3x3 matrix with nine elements, and we have to subtract two degrees of freedom. One, because there are only eight independent ratios, the overall scale not being important; and two, because we have seen that the fundamental matrix is of rank two, so its determinant must be zero, which removes one more degree of freedom. As a result we have 9 - 2 = 7 degrees of freedom for the fundamental matrix. Another property of the fundamental matrix is that it is not a proper correlation, which means it is not invertible. The fundamental matrix is a linear mapping that maps a point x onto the epipolar line in the second view, l' = F x, and since det F = 0 it is not full rank; although it is a 3x3 square matrix, the relation is only one-directional, meaning the inverse of F simply does not exist. Here is a table summarising the properties of the fundamental matrix we have seen: it has seven degrees of freedom; the correspondence equation is x'^T F x = 0; the epipolar lines in the two views are given by l' = F x and l = F^T x'; the epipoles are the left and right null spaces of F; and we have also seen the algebraic derivations expressing the fundamental matrix in terms of the epipoles and the camera projection matrices. There is a significance to this: if the two camera matrices P and P' are uniquely defined, then the fundamental matrix is also unique. This means that a given pair of camera matrices P and P' gives rise to exactly one fundamental matrix that defines the mapping of a point to an epipolar line between the two views. |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Neural_Field_Representations.txt | hello everyone, how's everybody doing? okay, so today we will talk about neural field representations. It's going to be our last lecture of the semester, so hopefully by the end of today's lecture you will be able to appreciate what a neural radiance field is and the concept of volume rendering. I will also talk about the limitations of the vanilla NeRF, the original NeRF that was published at ECCV 2020. Basically it has two limitations: one is that it is not able to generalize to new scenes, and the other is efficiency; it is very slow to train the original version of NeRF and it is also very slow to render at inference time. Then I'll talk about the different methods that followed, because it has been two-plus years since the original NeRF was proposed at ECCV 2020; the paper was actually put on arXiv in 2020, and at that time it was already attracting a lot of attention, because this was the first work to show that a neural network can encode a whole 3D scene, and that was quite a big thing at the time. It is still a big thing now, because a lot of people, including myself, are actively looking into this area of research. One thing I will also talk about is the difference between volume rendering and surface rendering, because a neural radiance field is just one type of neural field for representing a 3D scene, and in fact we will see why a neural radiance field is not so effective at recovering 3D surfaces or 3D geometry: it is very good for novel view synthesis, but not that good for 3D reconstruction. On the other hand, in 2019, one year before NeRF was presented, there was a work called DeepSDF; the signed distance field is also a type of neural field, and it can represent 3D geometry with higher fidelity. We will see why later, but the problem is that you need 3D ground truth, meaning the real 3D surfaces of the object, for supervision, and that limits its real-world usage. Around the end of 2021 there were several works, and I will mention two of them in particular, VolSDF and NeuS, which proposed ways to convert a signed distance function into a volume density for volume rendering. When you have a signed distance function, the best you can do is surface rendering, and surface rendering requires precise knowledge of where the surfaces are; therefore most techniques before this required some form of mask to tell you where the objects are in the 2D image in order to work. These two works, VolSDF and NeuS, show that without masks you can convert the signed distance function directly into the volume density used in radiance fields, and then do volume rendering instead of surface rendering from the signed distance function to the images. This changes the whole game altogether: by using either of these two techniques there is no need for 3D ground truth anymore; you just need
multi-view images, and the training is self-supervised. Then finally at the end I will talk about how we use NeRF and signed distance fields to learn animatable and generalizable dynamic humans. This also changes the game, because the works I talk about first all cover static scenes; here we want to handle humans in the scene, using NeRF or an SDF to capture the human. Animatable means that we can then use it as a digital avatar, so in the metaverse or in augmented and virtual reality, after capturing the human we can drive the human into different poses. Similarly, we want it to be generalizable: if I train the network on a specific human, for example you, I want it to generalize to me without further training, and we will see how this can be done. Today we have an unprecedented number of papers to talk about, so many that I have to use two slides to list all of them. Just a reminder, I actually showed this slide last week: there are two types of 3D surface representations, the parametric or explicit representation and the implicit representation. Unfortunately, although I also wanted to talk about explicit representations, I am already at around 110 slides, so I won't be able to cover them, but if there is time I might briefly mention them. I will focus on implicit representations today. We talked about an explicit representation last week, which was the point cloud, and today I am going to talk about neural fields, which are implicit representations that are continuous in space. In particular I am going to talk about two types of implicit representation: the neural radiance field and the signed distance field. Just a quick recap of what a parametric surface is: it is a mapping from 2D to 3D. Any planar patch can be mapped onto a 3D manifold, which is a 2-manifold, because any surface is a 2-manifold: if you stand locally on the surface, you are looking at a locally two-dimensional space. What we are going to focus on today is the implicit or volumetric surface representation. The definition written here is actually the signed distance field, or more generally a distance field, where for any point x in R^3 that you sample in 3D space, you define the surface as a level set: you have a function that takes in the point and outputs the distance value at that point. A signed distance field simply means that, given a surface and a query point x, the function tells you the signed closest distance to the surface, and once the value is zero, the zero level set, you know that you are on the surface itself.
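To make the level-set definition concrete, here is a minimal analytic signed distance field for a sphere; in works like DeepSDF, this analytic function is replaced by an MLP trained to regress the signed distance, but the level-set idea is the same. The radius and query points are assumed toy values.

```python
import numpy as np

# A signed distance field for a sphere of radius r centred at c:
# f(x) returns the signed distance to the surface; the surface is the zero level set f(x) = 0.
def sphere_sdf(x, c=np.zeros(3), r=1.0):
    return np.linalg.norm(x - c) - r

print(sphere_sdf(np.array([2.0, 0.0, 0.0])))   #  1.0  -> outside the surface
print(sphere_sdf(np.array([0.5, 0.0, 0.0])))   # -0.5  -> inside the surface
print(sphere_sdf(np.array([1.0, 0.0, 0.0])))   #  0.0  -> on the surface (zero level set)
```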
Today we are going to look at two of them in particular, the signed distance field and the radiance field, which are currently the two most popular implicit representations that people in the computer vision and graphics communities are actively working on. The idea of a neural radiance field is to do novel view synthesis, and this is not a new concept; it has existed for a very long time. The task is this: you are given input images of a scene, each looking at the scene from one viewpoint, and you want to design a deep network such that you can look at the scene from other viewpoints; these are the novel views that you want to synthesize. Take note that although we are synthesizing novel views, in contrast to generative models, which you are probably familiar with, such as GANs, diffusion models, normalizing flows and VAEs, there is a big difference: in novel view synthesis the synthesized view has to be correct for the scene you are looking at, whereas a generative model can generate anything fanciful or imaginary, that is, it hallucinates something. In novel view synthesis we are not hallucinating anything; we are trying to figure out what is hidden from that view, or you can think of it as interpolating or extrapolating between views. Another example: you have probably seen this very old movie, The Matrix. There is a classic scene where the guy jumps up and is frozen in mid-air while the camera pans around him in a surround view. You cannot smoothly pan a physical camera from one view to another while he is frozen in mid-air, because you would need all the camera views at the same instant; so instead you set up an array of synchronized cameras, but these cameras are discrete, meaning you are sampling the scene at discrete views. The way to render the scene so that you can pan smoothly from one view to another is novel view synthesis, because you are interpolating between the views, and this is by no means hallucinating the scene; you are interpolating or extrapolating the scene, and that is novel view synthesis. This is not new; it has been around for as long as 3D computer vision has existed. It is only recently that people realized it can be done with neural networks, and of course the most significant paper in this area is NeRF.
NeRF, the neural radiance field, was published at ECCV 2020. It proposes to encode the entire 3D scene in the weights of an MLP. It is a very simple architecture: you train just an MLP, not even a CNN, that takes in five parameters. The input is a five-dimensional vector consisting of the 3D position x and a 2D viewing direction. Say you are at a particular view; you have a viewing direction, two angles in spherical coordinates, that tells you where the ray is pointing, and then you sample a point x in R^3 on that ray. Altogether that is five dimensions, which you feed into the MLP, and the output is four-dimensional: the first three dimensions are the RGB value, the colour, and the last value is what we call the volume density. What is interesting is that the colour changes slightly when you look at the same 3D point from different views: if I look at a point from my viewpoint and you look at the same point from your viewpoint, the way light is reflected into our respective eyes differs because of the different angles, so the observed radiance, the colour, is slightly different. I am not going into the details; if you are interested you can look up the BRDF and the rendering function in computer graphics, which is why this area is an interesting combination of 3D computer vision and computer graphics. So the outputs are the colour and the volume density. The density sigma at a point tells you the rate at which light is blocked at that point; you can think of it as how occupied that point is. If it is fully occupied, light is fully blocked, so the volume density is very high, and you can normalize it to lie between zero and one, so that when the ray hits an opaque object the density is at its highest at that point. That is essentially all that the neural radiance field proposes. What happens is that once you do this, you are able to render: if I train this function f_theta from multiple views by sampling points along rays from the different views, with the function predicting the RGB and sigma values of those samples, I can do self-supervision by rendering the colour back onto the image.
I will talk about the rendering equation later, because the colour c here is specific to a certain point and a certain direction, but the colour that you actually see at a pixel is accumulated: you have to sample many of these c and sigma values along the ray and integrate them with the transmittance, and that forms the colour at the pixel; that final rendered value is what you compare against the ground truth. Before I talk about the rendering equation, let's look at the network. As I mentioned, the colour differs with the viewing direction, but the volume density remains the same: if a point is occupied from your viewpoint, it is still occupied from my viewpoint. So the whole network architecture of NeRF is just this MLP. It takes in x and d; x is the 3D point, and since sigma, the volume density, is independent of the ray direction while only the colour depends on it, the network outputs sigma directly from x alone, and for the RGB value you have to append the viewing direction, because the colour differs from different viewpoints. It was also realized in the NeRF paper that if you simply input x and d into the network and train it like this, the network cannot capture high-frequency components, because a five-dimensional input is not very expressive at all. What they realized is that, instead of directly inputting x and d, it is better to first pass them through a positional encoding, which is just a Fourier basis: the sin(2^0 pi p), cos(2^0 pi p), up to sin(2^{L-1} pi p), cos(2^{L-1} pi p) terms are basis functions like those in a Fourier transform. By doing this, given an input p (each element of x or d), you expand it into a much higher dimension, covering the low-frequency and the high-frequency components of p, and thus making the input more expressive. So you take each element of x, plug it into this encoding to get a vector, concatenate these vectors together and feed them into the network, and they found that by training the network with this you are able to capture the high-frequency components as well, so the rendering becomes very sharp; otherwise it would be blurry because you lose the high-frequency content. So, by training this MLP, you can render any target view by ray tracing and volume rendering.
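Here is a toy sketch of the two pieces just described: the frequency encoding and an MLP whose density head depends only on the encoded position while the colour head additionally takes the encoded viewing direction. The layer widths and the encoding orders Lx and Ld are illustrative choices, not the original architecture.

```python
import torch
import torch.nn as nn

def positional_encoding(p, L):
    # gamma(p) = (sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^{L-1} pi p), cos(2^{L-1} pi p))
    freqs = (2.0 ** torch.arange(L, dtype=torch.float32)) * torch.pi
    ang = p[..., None] * freqs                       # (..., dims, L)
    return torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)

class TinyNeRF(nn.Module):
    """Toy version of the NeRF MLP: sigma from the encoded position only,
    colour from the position feature plus the encoded viewing direction."""
    def __init__(self, Lx=10, Ld=4, width=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3 * 2 * Lx, width), nn.ReLU(),
                                   nn.Linear(width, width), nn.ReLU())
        self.sigma_head = nn.Linear(width, 1)                         # view-independent
        self.rgb_head = nn.Sequential(nn.Linear(width + 3 * 2 * Ld, width // 2), nn.ReLU(),
                                      nn.Linear(width // 2, 3))       # view-dependent
        self.Lx, self.Ld = Lx, Ld

    def forward(self, x, d):
        h = self.trunk(positional_encoding(x, self.Lx))
        sigma = torch.relu(self.sigma_head(h))
        rgb = torch.sigmoid(self.rgb_head(torch.cat([h, positional_encoding(d, self.Ld)], -1)))
        return rgb, sigma

net = TinyNeRF()
rgb, sigma = net(torch.rand(1024, 3), torch.rand(1024, 3))   # 1024 sample points and directions
print(rgb.shape, sigma.shape)
```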
This is the rendering equation. Imagine you have already encoded everything into f_theta, which takes in x and d and outputs c and sigma. To render a new view, you know the camera position, you specify the ray direction, and you sample many points along that ray to get all the c and sigma values. The expected colour of the pixel is C(r), where the ray is parameterized as r(t) = o + t d, with o the camera centre and d the direction, so sampling values of t means sampling points along that ray. The rendering equation integrates from t_near to t_far, that is, you sample all the points between the near and far bounds and integrate them together, and there are three terms inside it. The first term is the colour c, which is directly the output of the neural network; but note that this c and the final C are different: c is how the colour looks at that 3D point when viewed along this direction, whereas the final rendered pixel is not obtained by back-projecting a single point onto the image, because then you would not get a high-fidelity rendering, for example for a translucent object. A good example is when you look up at the sky and see clouds: clouds do not appear flat, there is a layering effect, and that is exactly this phenomenon, because a cloud is not fully opaque, you can see through it to some degree, and what comes into your eye is the accumulation of colours sampled along the line of sight through the cloud. When you accumulate them you see the layering effect, and that is what this equation is doing. So the first term is the colour at one point; the second term, sigma, is the volume density, which is the probability of hitting a particle at that particular point r(t), and can also be thought of as how occupied that point is in 3D space. Then there is the transmittance term T(t): this is the probability that light travels from t_near to the point of interest along the ray without hitting any particle. That is why it is called the transmittance: if you pass a light ray from the source along the ray, it is the probability of how much of the light is transmitted up to that point, and it is cumulative, meaning you accumulate it along the ray until you hit an object.
The problem is that since we are sampling in 3D space, we cannot evaluate the integral exactly; although everything is formulated in a continuous manner, when you implement it on a computer it is not continuous, you have to take samples along the ray before you can approximate the integral. This is also the reason why NeRF is so slow: for every pixel you need many samples along the ray just to get that one pixel, and if you have a megapixel image there are millions of pixels, so during rendering you need to sample an enormous number of points on an enormous number of rays just to render the whole image. Since exact integration is not possible, one way of resolving this is to numerically approximate the continuous integral using samples along the ray, which means we can convert the integral into a summation. I will skip the derivation, but if you are interested you can look at the supplementary material of the paper, where they show how to derive it with a quadrature rule, approximating the area under the curve piecewise. The final equation is: the transmittance is T_i = exp(-sum_{j<i} sigma_j delta_j), and the colour is C approximately equal to sum_i T_i (1 - exp(-sigma_i delta_i)) c_i, where c_i and sigma_i are the outputs of the MLP we talked about just now; once you get N samples along the ray you plug them into this equation and you obtain the rendered colour. This is why rendering from a neural radiance field looks realistic and not flat at all: you are accumulating the colour samples weighted by the volume density, that is, by how transparent or opaque each point along the ray is, and stacking everything together, which gives the 3D rendering effect. delta_i here is the distance between adjacent samples along the ray, so the distance between sample i and sample i+1 is delta_i.
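A minimal sketch of this discrete quadrature for one ray, with assumed toy sample values; the densities, colours and spacings would normally come from the MLP and the sampling strategy.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    # Numerical quadrature of the volume rendering integral along one ray:
    #   alpha_i = 1 - exp(-sigma_i * delta_i)
    #   T_i     = exp(-sum_{j<i} sigma_j * delta_j)   (accumulated transmittance)
    #   C       = sum_i T_i * alpha_i * c_i
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.exp(-np.cumsum(np.concatenate([[0.0], sigmas[:-1] * deltas[:-1]])))
    weights = trans * alphas                       # per-sample contribution to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Assumed toy samples along one ray: N points with densities, RGB colours and spacings.
N = 64
sigmas = np.random.rand(N) * 2.0
colors = np.random.rand(N, 3)
deltas = np.full(N, 0.05)
print(volume_render(sigmas, colors, deltas))       # rendered RGB for this pixel
```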
That is essentially everything about NeRF; it is very simple, but what is striking is that the rendering quality is so good, and that amazed the whole computer vision community at the time. It is very simple but the result is so good that it really revolutionized the whole community. I would say that in the past ten years or so, it was first AlexNet that took everybody by surprise and started the revolution in deep learning, then later people were obsessed with GANs and generative models for quite some time, then 3D point clouds attracted quite a bit of interest in the community, and now it is neural 3D, I would say. A lot of people are now rushing into 3D computer vision, but it is not easy to start, because you need the background foundation to comprehend it, and I would say that even though the results are good now, there are still many concepts in 3D computer vision and computer graphics that have not been applied yet, because those fields are far more than just this. Many people can take NeRF and play around with it, add some tricks to it, but to make significant changes to it you really need deep knowledge of 3D computer vision. Still, it is so simple, and that is why it took everyone by surprise; it is quite amazing. Despite the very cool results from neural radiance fields, there are limitations. Basically, it does not generalize to unseen scenes: you are optimizing the network over one scene and overfitting to that scene. The reason it does not generalize is that the network takes in just a 5D input, and that 5D input tells you nothing about the scene; it contains nothing from the images, only xyz and a direction, so giving it images of a new scene would not help, it cannot tell what is in the scene, it is just memorizing. So one significant problem is that it does not generalize to unseen scenes. Around the same time there was another line of work called multi-plane images (MPI), which essentially represents the scene with a discretized set of planes. This is very closely related to, and in fact inspired by, the plane sweeping algorithm, and multi-plane images are actually generalizable; I'll tell you why, but first let's look at how it represents an entire scene from a single image. In short, in the MPI representation, given an image, you learn a deep network that produces planes at depths d = 1, d = 2, all the way to d = N, say N planes, where each plane represents a slice of the scene. You can think of it this way: the image is looking at the scene, and if you take a step forward in depth you slice the scene at that depth to form one image, take another step forward and you form another image slice, so given a single image the network outputs all these image planes from d = 1 to d = N. Once you have this, you can do novel view synthesis, and you know how to do this using homographies.
When you have a novel view, you simply project all these planes back onto it. The reason you need the depth planes, and then composite them when rendering onto a novel view, is again the transmittance and the volume density: otherwise, if you just take one plane and project it onto the novel view you get a very flat image, without the realistic shading and layering. So this is basically the idea of multi-plane images: given a single view image the network produces this multi-plane representation, and when you have a novel view you can render it back into that view. The warping back is just the homography warping you have already seen, which we derived in the plane sweeping algorithm, and the network, given one source image, outputs for every plane the colours as well as an alpha value, which is something like the volume density and tells you the alpha composition of that plane. Once you know how to warp from the source view to the target view, you can essentially render the image at the target view. If you look at the compositing used here, it is loosely similar to the rendering equation; there are many different versions, and the original version is the BRDF formulation, which is complicated because it takes everything into account. Coming back to your question: if you take into account the surface normal, the reflection, and stay very close to the original BRDF function, you can capture everything, but it is computationally expensive and it is difficult to design a network to do that; in fact Ref-NeRF, and even our paper, all start from that equation and then simplify it, and there are different ways of simplification; the version in NeRF that we just saw is also a form of simplification of the BRDF function. The MPI compositing here is just a weighted sum: you take the alpha composition and weight the sum by alpha, which acts as a weight, and it renders back to the image; you can do the same thing for the disparity or depth map as well, because you have the depths. Now, the reason multi-plane images are able to generalize is simply this: f here, the deep network, takes in the image itself, so after training, if you give it a new scene, a new image, it will still work, because it takes the image directly as an input; therefore the MPI network is able to generalize to unseen scenes.
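A minimal sketch of the per-pixel alpha compositing of the warped planes described above, with assumed toy values; the homography warping of each plane into the target view is omitted and only the compositing step is shown.

```python
import numpy as np

def over_composite(colors, alphas):
    # Front-to-back "over" compositing of D plane samples at one target pixel:
    #   C = sum_d c_d * alpha_d * prod_{d' < d} (1 - alpha_d')
    # Each plane contributes its colour weighted by its alpha and by how much light
    # is not blocked by the planes in front of it.
    visibility = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * visibility
    return (weights[:, None] * colors).sum(axis=0), weights

# Assumed toy values: D warped planes, ordered nearest to farthest, at one target pixel.
D = 32
colors = np.random.rand(D, 3)      # RGB of each warped plane at this pixel
alphas = np.random.rand(D) * 0.3   # alpha of each warped plane at this pixel
pixel, w = over_composite(colors, alphas)
print(pixel)                        # composited RGB; the weights w can composite a depth map too
```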
But the MPI has a limitation: the network directly outputs a fixed, discrete set of image planes that sweep across fronto-parallel depths of the reference camera, and when you render from another view, because of the quantization error, the gaps between the discrete planes, the quality of the rendered view might not be that good. Therefore, in our paper MINE, what we did is simply combine the two, to get the best of both worlds at that time. I still remember that this was the second version that we worked on, because when NeRF first came out and was put on arXiv in 2020, I told my student about the paper and said this was going to become the next big thing and we should do something about it. But I also told him that it is not generalizable; when I read the paper I said this cannot generalize to new scenes. The first version we thought of was actually very simple: instead of just having an MLP that takes in x and d and gives out the colour and sigma, since you have a view, you put the image through a CNN first to get a feature map, and when a ray passes through a pixel you take the feature corresponding to that pixel and feed it into the MLP as well; this can potentially make it generalizable. After we discussed this he went back and tried it out, and it works, it can generalize very well. We were about to write the paper, targeting CVPR, but as we were beginning to write it we realized that just a few days earlier someone had put a paper on arXiv, and that paper is pixelNeRF, which is exactly the same idea.
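As an aside, here is roughly what that pixelNeRF-like idea looks like: condition the NeRF MLP on an image feature sampled at the pixel the ray passes through. The class name, layer sizes and encoding dimensions are illustrative assumptions, not the actual pixelNeRF architecture.

```python
import torch
import torch.nn as nn

class ConditionedNeRF(nn.Module):
    """Sketch: a NeRF-style MLP conditioned on a per-pixel image feature from a CNN."""
    def __init__(self, feat_dim=32, pe_dim=63 + 27):   # pe_dim: encoded position + direction (assumed sizes)
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(feat_dim, feat_dim, 3, padding=1))
        self.mlp = nn.Sequential(nn.Linear(pe_dim + feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 4))                 # RGB + sigma

    def forward(self, image, encoded_xd, pixel_uv):
        fmap = self.cnn(image)                         # (1, F, H, W) feature map of the source view
        u, v = pixel_uv
        feat = fmap[0, :, v, u]                        # feature at the pixel the ray passes through
        out = self.mlp(torch.cat([encoded_xd, feat], dim=-1))
        return torch.sigmoid(out[:3]), torch.relu(out[3:])   # rgb, sigma

net = ConditionedNeRF()
rgb, sigma = net(torch.rand(1, 3, 64, 64), torch.rand(63 + 27), (10, 20))
print(rgb, sigma)
```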
So we had to throw that idea away and think of another one, and finally we came up with this: I remembered that I had seen the MPI work back in 2019; one of its authors actually talked about it at CVPR or ICCV 2019, I think. So I remembered that paper, we discussed it, and finally we came up with this version. The limitation of MPI is that the planes are discrete, and the limitation of NeRF is that it is not generalizable, so why not do this: the concept is very simple. NeRF takes a direction and samples points along a ray, so it is continuous; so let's take MPI, but instead of sampling along a ray we sample a depth. That means we keep the fronto-parallel setup, we know the direction, but instead of sampling every point on a ray we sample a whole depth, and at any continuous depth we output the whole image plane. Our network therefore behaves like NeRF, but it is plane sampling instead of ray sampling. At the same time we are inspired by MPI, because we take the whole source image: we design a network that takes in the source image as well as a continuous depth value, and outputs the whole image plane at that depth, and that's it. As a result we are generalizable, which mitigates the problem of NeRF, and we also solve the quantization error of MPI, because now the depth is continuous. That's the whole idea; I probably won't go through all the details, I just want to give you the general idea. You can think of it as a plane-wise neural radiance field, because we are no longer sampling along a ray, we are sampling at a plane. So this is the idea: we have an encoder, a very simple encoder that takes the source image, and the decoder is conditioned on the inverse depth; notice that we use the inverse depth, the disparity, because if you use the depth itself you will not be able to represent very far-away points well. Conditioned on any depth value, the decoder outputs the whole image plane at that depth, which is exactly the same representation as MPI, and the warping to the target view is the same as in MPI as well, so we overcome the two problems of NeRF and MPI respectively. This is why we call it MINE: MI stands for multi-plane images and NE stands for NeRF. Some details about the architecture I won't go through; if you are interested you should read the paper or play around with it. We also apply the frequency encoding to capture the high-frequency components: the inputs here, because we no longer have x and the viewing direction but a depth d instead, include this depth, and we apply the frequency encoding to the depth to make it more expressive, since otherwise a single scalar value is not very expressive.
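A toy sketch of the depth-conditioned generator just described: an image encoder plus a decoder conditioned on a continuous disparity, producing an (RGB, sigma) plane at that depth. Layer sizes are illustrative, not the paper's architecture, and the frequency encoding of the disparity is omitted here for brevity.

```python
import torch
import torch.nn as nn

class PlaneDecoder(nn.Module):
    """Sketch of a MINE-style generator: encode the source image once, then decode an
    (RGB, sigma) plane for any continuous disparity value."""
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        # +1 channel: the (here raw, normally frequency-encoded) disparity broadcast over the plane
        self.decoder = nn.Sequential(nn.Conv2d(feat + 1, feat, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(feat, 4, 3, padding=1))   # RGB + sigma

    def forward(self, src_img, disparity):
        f = self.encoder(src_img)                              # (B, feat, H, W)
        d = torch.full_like(f[:, :1], float(disparity))        # broadcast the scalar disparity
        out = self.decoder(torch.cat([f, d], dim=1))           # (B, 4, H, W)
        return torch.sigmoid(out[:, :3]), torch.relu(out[:, 3:])   # rgb plane, sigma plane

net = PlaneDecoder()
rgb, sigma = net(torch.rand(1, 3, 64, 64), disparity=1.0 / 5.0)   # plane at depth 5
print(rgb.shape, sigma.shape)   # full image planes; any continuous depth can be queried
```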
We tested this, and without the encoding you lose the high-frequency components and the output images become blurry. This is related to the spectral bias of neural networks, and there is another paper on that; unfortunately I won't have time to talk about it, but if you are interested you should look it up, it is a very interesting topic. As for the rendering, once you have the planes, note that what we output is not alpha; we output sigma, so we are closer to NeRF in that we output the volume density directly. We output the whole 2D image at that plane, that is, for every pixel on the plane a corresponding colour and a volume density, instead of just one colour and one density at a point on a ray. Our rendering then directly uses the rendering equation from NeRF instead of the MPI compositing, because the MPI compositing, as I said, makes a lot of assumptions and is an oversimplification: it does not properly account for the transmittance, it is just a weighted sum over the stack of images, so the rendering there might not be as good. Here we adopt the NeRF rendering, and we do the same thing as NeRF: since we cannot evaluate the integral we sample between near and far bounds. Oh, I forgot to mention something about NeRF: when you have an image and you want to sample, it would be silly to sample very densely along every ray, because that is intractable; it would take a very long time to train the network, since for a megapixel image, rendering it with very dense sampling along every ray would require an astronomical number of samples in the scene. So what they do is split the training into two stages, from coarse to fine. The obvious choice of where to sample is near the surface: if a ray passes through this monitor, it does not make sense to sample the empty space, you should sample close to the surface where the ray intersects it. But the problem is that when you first initialize the network, it cannot tell you anything; you do not know where the surfaces are, you only have the images. This is why they split training into two stages: in the first stage you sample coarsely, which saves computation; you will not get a very good rendering, but it helps to initialize the network so that it knows roughly where the surfaces are. Then in the second stage you do fine sampling, densely near those surfaces, and that helps to improve both the speed of training and the quality of the final result of NeRF.
We do the same two-stage sampling here as well. I will skip the maths, because you already know all of it: when you have a new view, you warp the planes and render them using the rendering equation. Here are the losses that we use, between a rendered view and the ground-truth view. We train on videos of static scenes, so you take a source view from one frame and render it onto one or many target views in the scene to train the network with these loss functions. There are several of them: a reconstruction loss, which is the L1 norm on the photo-consistency between the two views; the SSIM, the structural similarity loss; and also an edge-aware smoothness loss, which I mentioned before when we talked about deep stereo with left-right consistency, where we want the colour gradients and the depth gradients to be consistent, so we minimize this gradient term. In addition, we first run structure from motion on the scene and supervise the depth with the sparse depths we get from structure from motion, and we also take the scale from structure from motion to fix a unique scale, normalizing with a scale calibration, so this is the scale-calibrated loss. Here are some results: compared to MPI, which has the quantization error I mentioned, with gaps between the planes, ours is continuous, meaning you can sample densely along the depth and render into the target view, so we are able to render better quality images; compared with MPI, ours is actually better. Here are some results of novel view synthesis on the KITTI street-view dataset, which is actually very difficult to train on, because the car is always traversing in a straight line, and when you move in a straight line you do not see much parallax of the scene; since you are always moving forward, the reconstruction can be highly subject to degeneracy, but we still managed to get pretty nice results. So that is MINE, which makes the representation generalizable while also mitigating the quantization error of MPI.
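Before moving on, here is a compact sketch of the training objective just described. The SSIM term is omitted for brevity, and the function name, weights and tensor shapes are illustrative assumptions, not the paper's values.

```python
import torch

def mine_style_loss(pred_rgb, gt_rgb, pred_disp, sparse_disp, sparse_mask,
                    w_smooth=0.1, w_depth=1.0):
    """Sketch of the objective: L1 photometric + edge-aware smoothness + sparse SfM depth."""
    # 1) photometric reconstruction: L1 between rendered and ground-truth target view
    loss_rgb = (pred_rgb - gt_rgb).abs().mean()

    # 2) edge-aware smoothness: penalise disparity gradients where the image is smooth
    dx_d = (pred_disp[..., :, 1:] - pred_disp[..., :, :-1]).abs()
    dy_d = (pred_disp[..., 1:, :] - pred_disp[..., :-1, :]).abs()
    dx_i = (gt_rgb[..., :, 1:] - gt_rgb[..., :, :-1]).abs().mean(1, keepdim=True)
    dy_i = (gt_rgb[..., 1:, :] - gt_rgb[..., :-1, :]).abs().mean(1, keepdim=True)
    loss_smooth = (dx_d * torch.exp(-dx_i)).mean() + (dy_d * torch.exp(-dy_i)).mean()

    # 3) sparse depth supervision: L1 against SfM disparities, only where points exist
    loss_depth = ((pred_disp - sparse_disp).abs() * sparse_mask).sum() / sparse_mask.sum().clamp(min=1)

    return loss_rgb + w_smooth * loss_smooth + w_depth * loss_depth

B, H, W = 1, 64, 64
loss = mine_style_loss(torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                       torch.rand(B, 1, H, W), torch.rand(B, 1, H, W),
                       (torch.rand(B, 1, H, W) > 0.95).float())
print(loss)
```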
Next is a very neat work called IBRNet, which is also generalizable. It follows the same concept: to make NeRF generalizable, the inputs need to include the image or image features. What they do is this: you have a target view and multiple source views that are close by; each source image is put through a 2D network to produce a feature map, and if you are rendering a ray through a pixel of the target view, you sample many points along the ray as well. Say you are sampling a particular point; note that NeRF needs known camera poses, and this is also one of its limitations, everything I have talked about so far needs known poses. There are methods that do bundle adjustment together with NeRF to relax this, but those are also not generalizable, and I won't go into them here. What IBRNet does is, for a sample point on the ray, collect the features from the multiple source views and aggregate them; it does this for multiple samples along the ray in the same way, and then feeds the per-sample aggregated features into a ray transformer, which aggregates along the ray using self-attention. In the paper where we made bundle-adjusting NeRF generalizable we actually leveraged IBRNet, and I also have a third paper on synthetic-to-real transfer; there, in addition to the ray transformer, we do contrastive learning on the ray features, which is why we call it ContraNeRF. Anyway, coming back to this: the ray transformer outputs sigma at the different locations along the ray. The reason they put it through a ray transformer is that the opacity, the volume density, at points along the same ray should influence each other: when you have an object at some point you need to know where the free space is and where the objects are along that ray, so to produce a consistent volume density across the whole ray it has to be processed by the ray transformer, because this is about occupancy. The colour, on the other hand, depends on the direction and the point in space, so the colour is predicted without going through the ray transformer. I have already given you the overall view; this is the architecture, and it is very simple. What is interesting, and the reason it is generalizable, as I mentioned, is that it takes in image features, so it is no longer based purely on the direction and the 3D point, and IBRNet also leverages the multiple views: you can see that at the end it aggregates the colours from the multiple source views. Let's look at the details of every step. First, given a target view, you look at a point on the ray and check all the nearby source views; you collect all the image features, which I will call f_1 to f_N across the N source views; mu is the mean of all these features and v is the variance, the spread, of the features, which you can easily compute from the per-view d-dimensional feature vectors.
What they do is concatenate these together: f gives the local information of that pixel in each view, while mu and v give the global information across the multiple views, because f is local to each image view but mu and v are aggregated across all the views; by concatenating them you have both global and local information, so the feature is multi-view aware. You then input this into an MLP, somewhat like a PointNet, which maps the input vector into a new feature f' of the same dimension, and it also outputs a set of corresponding weights. You concatenate the new features with the weights and do weighted pooling, meaning that instead of max pooling or mean pooling you take a weighted sum to pool them together, before inputting it into another MLP which gives f_sigma, the density feature, which will then be used to predict the volume density. Now you have this density feature from one single point; you do the same for multiple points along the ray, so you have many f_sigma values, and you put them into the ray transformer to get the density for every sample along the ray; since it is a transformer, whatever you input, you output the same number of tokens, and the positional encoding and multi-head self-attention in the ray transformer capture the relations among all the features along the ray. Finally, for the colour prediction, you concatenate the feature with delta d_i, where delta d_i is simply the difference in viewing direction: you have the direction from the target view to the 3D point, and the direction d_i from source view i to that same point, and you take the difference between the two; you concatenate this with the feature and put it into an MLP, and this MLP outputs a new set of weights. These weights are what is used to aggregate the colour: the network does not predict the colour directly; instead, for every source view it collects the colours c_1, c_2, and so on at the pixels that correspond to this 3D point, predicts a weight for each of them, and the colour at this point is the weighted sum of all the colours collected from the multiple views. But this weighted-sum colour is not yet the colour that is rendered at the pixel: it is only the colour at that point, viewed along that direction.
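A simplified sketch of this multi-view aggregation for one sample point, with the learned MLPs replaced by fixed toy weights derived from the direction differences; names, shapes and the softmax stand-in are illustrative assumptions, not the actual IBRNet modules.

```python
import numpy as np

def aggregate_multiview(features, colors, dirs_delta):
    """features: (N_views, D) image features of the same 3D point; colors: (N_views, 3);
    dirs_delta: (N_views, 3) viewing-direction differences to the target ray."""
    mu = features.mean(axis=0)                     # global statistics across the views
    var = features.var(axis=0)
    per_view = np.concatenate([features,                           # local information
                               np.repeat(mu[None], len(features), 0),
                               np.repeat(var[None], len(features), 0)], axis=1)

    # Stand-in for the learned weight MLPs: favour views whose direction is close to
    # the target ray (small |delta d|), via a softmax over the views.
    scores = -np.linalg.norm(dirs_delta, axis=1)
    w = np.exp(scores) / np.exp(scores).sum()

    density_feature = (w[:, None] * per_view).sum(axis=0)          # weighted pooling
    blended_color = (w[:, None] * colors).sum(axis=0)              # colour at this sample
    return density_feature, blended_color

feat = np.random.rand(4, 16)        # 4 nearby source views, 16-D features (toy values)
cols = np.random.rand(4, 3)
dd = np.random.rand(4, 3) * 0.2
f_sigma, c = aggregate_multiview(feat, cols, dd)
print(f_sigma.shape, c)             # f_sigma then goes through the ray transformer for sigma
```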
To do this integration, the C here is the weighted-sum color at each sample point: in order to compute the final pixel color I need to sample many points along the ray, aggregate them, and integrate them together with the volume density and the transmittance, and then I get the final rendered color. So that is how IBRNet works; it leverages the multiple views. But IBRNet also has a disadvantage: it does not work with a single view, because if you only have one view you cannot compute the mean mu and variance v robustly. It needs several nearby views in order to work well; when we tested it, you need at least around three nearby source views to make it work nicely. The loss is just the same training loss as NeRF, with a coarse stage and a fine stage. You can see that it actually renders everything very nicely, even compared to the original NeRF, and the cool thing is that it is generalizable. One of the biggest reasons why it is able to produce higher-quality rendering is that it is not predicting the color, it is aggregating the color, so the result cannot deviate too much from the nearby views, and that makes it quite robust. So we saw that the first disadvantage of NeRF was that it is not generalizable, and we saw that using MPI you can generalize it, using MINE you can generalize it, and with IBRNet you can also generalize it. The other disadvantage of NeRF is that it is very time-consuming to train and to render; both are very expensive, because during training you have to sample so many points, and rendering the whole image is also very costly. In fact the original NeRF does not render the whole image during training; it only samples on the order of a thousand rays per iteration, which mitigates the computational complexity, but it is still very expensive: it takes at least about three days to train a NeRF. Then in 2021 many papers came out to speed this up. PlenOctrees is one of them, but that is for rendering; the training is equally slow, and I will tell you why. The follow-up to it, Plenoxels, makes both the training and the testing fast because there is no network at all. The reason why rendering is slow in NeRF is that it requires dense sampling, where every point needs to be queried, and the query depends on the viewing direction as well as the spatial position. As I mentioned, the color is dependent on the viewing direction, therefore you cannot cache the color: you cannot store it somewhere, take one view, use the deep NeRF network to compute the colors C1, C2 and so on, and then reuse those colors when you are trying to render another view. This is not possible in NeRF because the color is view-dependent, it depends on the direction.
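Since we keep coming back to this rendering step, here is a minimal sketch of the discrete volume-rendering quadrature that turns per-sample densities and colors into a pixel color; variable names are illustrative.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Discrete volume rendering along one ray (the standard NeRF-style quadrature).

    sigmas: (S,)   volume densities at the S samples along the ray
    colors: (S, 3) RGB colors at those samples
    deltas: (S,)   distances between consecutive samples
    """
    alpha = 1.0 - np.exp(-sigmas * deltas)                            # per-segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))     # accumulated transmittance
    weights = trans * alpha                                           # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                    # final pixel color
```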
So for every image that you render from a new viewing direction, you have to resample everything again and evaluate the rendering equation. It is different for the volume density: the volume density is direction-independent, which means that you can take just one view, sample the volume density at every point on the ray, and store it, because you know the spatial location x; you cache it somewhere, and given another view, since the density is view-independent, you can reuse that same volume density. The color you cannot do this with. So the idea is that if you can find a way to decouple the color from the viewing direction, i.e. make the color independent of the viewing direction, then you can cache it and make rendering fast: you just need to compute it once, store it, and then render from any other view by reusing all the cached colors and volume densities. This is exactly what PlenOctrees does. What they realized is this: just as I talked about the Fourier series earlier, the positional encoding is a linear combination of Fourier basis functions; in 3D there is something called spherical harmonics, which is the analogue of the Fourier series when you work on the sphere rather than in ordinary signal processing. When you render the color over directions on a sphere, you can decompose the color into a linear combination of coefficients k and the spherical harmonics basis functions. The interesting thing is that the spherical harmonics basis is the part that depends on the viewing direction, but it can be computed in closed form; there is no need to learn it, because it is a predefined set of bases. So if you are viewing from some direction, you sample a point along the ray, so you have a direction and a 3D point x, but what they do is redesign the network to output k, the coefficients of the spherical harmonics, and k is independent of the viewing direction, because they shifted the viewing-direction dependence into the spherical harmonics basis, which can be computed in closed form. Now whatever the network outputs is totally independent of the viewing direction, and the part that does depend on the viewing direction is taken out and computed in closed form. What happens is that if you train it from one view, you obtain the coefficients k, and if you then decide to look from another view, the k that you computed still remains valid, because the spherical harmonics take in the new viewing direction. You use the same k, and the color becomes a linear combination, summing k_i times the spherical harmonic basis SH_i, with the number of terms depending on the order of the basis you use.
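As a rough illustration of the idea, here is what evaluating a view-dependent color from cached spherical-harmonic coefficients looks like for a degree-1 basis; the actual method uses a higher-degree basis, and the constants below follow one common real-SH sign convention, so treat this as a sketch rather than the paper's implementation.

```python
import numpy as np

def sh_color(k, d):
    """Evaluate a view-dependent color from degree-1 spherical-harmonic coefficients.

    k: (3, 4) SH coefficients per color channel (view-independent, cacheable)
    d: (3,)   unit viewing direction
    The SH basis itself is closed form, so changing the viewpoint only means
    re-evaluating this cheap expression, not re-running any network.
    """
    x, y, z = d
    basis = np.array([0.2820948,             # l = 0
                      -0.4886025 * y,        # l = 1, m = -1
                       0.4886025 * z,        # l = 1, m =  0
                      -0.4886025 * x])       # l = 1, m = +1
    return k @ basis                          # (3,) RGB before any activation
```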
The color then becomes exactly this linear combination. But now the k here is view-independent, which means that you can cache it together with your volume density: given one view you can cache everything in the 3D space, and when you change to a different viewpoint all you need to do is recompute the spherical harmonics; there is no need to put anything through a network anymore. So rendering becomes very fast, because you have decoupled the viewing direction. What is even more interesting is that they make it faster still: you sample densely over all the points, compute all the densities and the coefficients k, which are view-independent, and cache them, but you do not cache everything, because not all the points in the 3D volume are occupied. What they did is store it as an octree, keeping the k and the volume density only at the points that correspond to a surface, where the object actually is. When you decide to query a novel view, you can then search through the octree very efficiently, so not only can this representation be cached, it can also be retrieved very efficiently. That is how they manage to deal with this function: k is the set of coefficients, and the spherical harmonics can be computed in closed form, so they can render very quickly. But the problem with PlenOctrees is that although the rendering is fast, the training is still the same, because there is no real change to the network: you still have the 5D input, and the output is still the same kind of head, now producing the SH coefficients for RGB plus the density. The only subtle change is that the output becomes view-independent, which helps to reduce the training complexity a little, because the network no longer has to learn something as complex; in the paper they claim a speedup of roughly ten times from this, but the overall training is still not fast. So in the follow-up paper by the same group, which appeared at CVPR 2022,
they address the fact that this still requires training time. In Plenoxels they say: forget about these networks, there is no deep network at all, so there is no training time; instead they do pure optimization. Rather than a network to produce the coefficients and the volume density, they discretize the world into a regular grid of voxels, and at every voxel vertex they attach a set of spherical harmonics coefficients and a density. The spherical harmonics evaluation is still dependent on the viewpoint, and given a training image (although we say training image, there is really no training, it is just optimization, you do not train any network at all), you still have that linear combination of coefficients and basis functions from that view. Because of the discretization, a sample point on a ray generally falls inside a voxel, so its value is a trilinear interpolation of the values stored at the eight corners of that voxel, and they optimize the volume density as well as the spherical harmonics coefficients at those corners through this interpolation. They minimize the reconstruction error plus a total-variation term; the total-variation regularizer just makes sure that nearby voxels do not deviate too much from each other in color. Once you have done this optimization you have the coefficients for every vertex, so given a new view you just render everything according to the coefficients you computed, and the spherical harmonics are recomputed based on the new direction. Since there is no training, everything becomes super fast: it only takes 11 minutes to do this optimization, because it is just interpolation on a sparse grid across the scene, and it renders at about 15 frames per second. Those of us doing robotics usually say something runs in real time when it reaches roughly that kind of frame rate, so this paper was quite impressive when it first came out: 11 minutes of optimization and real-time rendering. Here are the total-variation term and the reconstruction error; the reconstruction loss is basically the same photometric loss as before. And here are some results: even though there is no network training at all, you can see that it actually gives better results than NeRF. The reason is very simple: the spherical harmonics are able to capture the variation of the color very well, and that is why it gives such good results. But despite PlenOctrees and Plenoxels being so fast, there is still a downside.
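A minimal sketch of the two ingredients just described, trilinear interpolation of per-voxel quantities and the total-variation regularizer, might look like this; shapes and names are illustrative, not the actual Plenoxels code.

```python
import numpy as np

def trilinear(grid, p):
    """Trilinearly interpolate a voxel grid of stored quantities at point p.

    grid: (X, Y, Z, C) array of per-vertex values (e.g. density + SH coefficients)
    p:    (3,) continuous position in voxel coordinates
    """
    i0 = np.floor(p).astype(int)
    t = p - i0
    out = np.zeros(grid.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                out += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out

def total_variation(grid):
    """Total-variation regularizer: neighbouring voxels should not differ too much."""
    return (np.abs(np.diff(grid, axis=0)).mean() +
            np.abs(np.diff(grid, axis=1)).mean() +
            np.abs(np.diff(grid, axis=2)).mean())
```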
Because in Plenoxels there is no training, what you are basically doing is per-scene optimization. It is impressive, but you still need some form of hand-designed encoding of the scene, and for every new scene you need to redo this optimization; the result you get will also not be as good as if you used data to let an MLP or some other neural network learn something and then render the novel view. So it somewhat defeats the purpose of having a data-driven, deep-network approach: there is still something lacking, in that you are not utilizing the strength of a deep network. Then, very recently, at SIGGRAPH last year, 2022, there was this paper called Instant NGP, and they managed to speed up the training as well: not only does it speed things up tremendously, it speeds up both the training and the testing. You still train a network, but now instead of a few days it takes a few minutes, or even a few seconds, to train the whole thing. What is interesting is this: in the original NeRF paper they observed that if you do no encoding, meaning you drop the frequency encoding I mentioned earlier (the Fourier-series-style encoding), then you only have the bare MLP. They train this network for 11,000 steps and compare the results with and without frequency encoding: this is the original and this is with the frequency encoding simply removed, and you can see that the quality is very different, because all the high-frequency components are lost. The reason is that with a purely five-dimensional input the network is just not expressive enough, but if you encode it, you are effectively lifting the five dimensions to a higher dimension using the frequency encoding, using the Fourier basis, and that makes it more expressive, so you recover more of the detail, the high-frequency components. In the Instant NGP paper they observe exactly this, and they thought: the encoding makes the input more expressive, but the frequency encoding itself involves no learning, so why not shift the learning into the encoding? What this means is that you keep an MLP, the original NeRF MLP, which takes an input x, applies an encoding, and then maps it to color and sigma; a fixed encoding alone already improves the result by that much, so in Instant NGP they say: let us also learn the encoding. They did some experiments, and the result was striking: with no encoding there are zero encoding parameters, while with the frequency encoding it is the MLP that carries all the parameters to be learned. What they then observe is that, up to this point, the inputs are all spatial.
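For reference, the fixed frequency (positional) encoding being discussed is roughly the following; exact scaling conventions vary between implementations, so this is only a sketch.

```python
import numpy as np

def frequency_encode(x, num_freqs=10):
    """NeRF-style positional (frequency) encoding of a coordinate vector.

    Lifts each component of x to [sin(2^k * pi * x), cos(2^k * pi * x)] for
    k = 0..num_freqs-1, which makes the otherwise low-dimensional input
    expressive enough for the MLP to represent high-frequency detail.
    """
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    scaled = x[..., None] * freqs                                   # (..., d, num_freqs)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1) # (..., d, 2*num_freqs)
    return enc.reshape(*x.shape[:-1], -1)
```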
What you have is that all the query points lie spatially along the rays in 3D. So they say: take the volume of 3D space you are interested in, sample very densely along every ray in this volume, and feed all these points through a spatial encoding, another learnable module, before the NeRF MLP; then you can shrink the MLP. The original NeRF network drops to about 10K parameters, while this spatial encoding carries, say, 33.6 million parameters, and what they show is that the result is almost the same quality as with the frequency encoding, but now you are shifting the burden towards the spatial encoding rather than the actual MLP; I will tell you in a moment why this is so important. They also try a dense multi-resolution grid, which you can think of as forming a pyramid of grids and sampling along the ray at each resolution; again the original NeRF MLP drops to 10K parameters, with 16.3 million parameters in the spatial encoding, and you get almost the same performance while the training time, shown by this value here, drops significantly. The reason it drops so much is this: if all your parameters sit in the original NeRF MLP, everything lives in the feature space of the deep network, so when you do back-propagation you have to back-propagate through every single parameter inside the MLP. But the encoding is spatially arranged, so when a ray passes through, you only need to update the encoding parameters along that ray. By shifting the parameters from the MLP, now only 10K, into the spatial encoding, every back-propagation step no longer updates all 33.6 million parameters; you only update those that lie spatially along the ray. The same holds for inference: a forward pass also only touches the parameters along the ray, whereas if all the parameters were in the MLP, every forward pass would involve all of them. And instead of a dense grid they make it even faster: with a plain dense grid, storing and retrieving all the parameters along a ray is not so efficient, so they propose a hash table. With hashing, along a given ray you just retrieve the spatial parameters that lie along that ray via the hash table, and when you update, you also go through the hash table to update only those parameters; therefore you can
actually have a significant number of parameters here and the training time still drops. They also make it hierarchical: there is a multi-resolution set of grids. Imagine in 2D, as illustrated in the paper, a point x falling inside the grid. Because all the features are stored on multi-resolution grids, at each level, say a fine level and a coarser level, you retrieve through the hash table the corner vertices that enclose x, for example vertices 0, 2, 3, 6 at one level and 0, 4, 1, 7 at the other; you interpolate within each level to get a per-level feature, concatenate the features across the levels, and there is also an auxiliary learned term concatenated in. This whole part is the encoding. The parameters you need to update for this query point are only the few entries close to that point; all the other features of the encoding network are not affected by this point. So retrieval is very efficient and the update is very efficient, because with the hash table everything only affects that point locally, and everything is arranged in a spatial manner. At the same time, you are shifting the burden onto this encoder, which makes the whole thing more expressive: you are learning the encoding to be more expressive, yet it is extremely efficient to update. What you can then do is reduce the size of the original NeRF MLP, because that MLP is fully connected: if you only used the frequency encoding you would need a big MLP, and in a big fully connected MLP one forward pass and one backward pass touch every neuron inside it, with no way to decouple them. In this way the whole pipeline becomes very effective and very fast, and this paper is so impressive that it sped things up by so many times that now, I think, a NeRF is done training in a few seconds. Here are the other parameters they report, the memory and the complexity, and you can see things like a gigapixel image, where this is what you get after training for one second and this after training for 60 seconds, and you can zoom into very fine resolution; gigapixel means you have to do all the sampling, all the rays, so many times, which NeRF would simply not be able to do. But there is a disadvantage to this.
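Here is a compact, illustrative sketch of a multi-resolution hash encoding lookup in the spirit of what was just described; the hash primes are the ones commonly quoted for this kind of spatial hashing, and everything else (shapes, names) is my own simplification rather than the actual Instant NGP code.

```python
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)  # common spatial-hash primes

def hash_index(corner, table_size):
    """Spatial hash of an integer grid corner into a fixed-size table."""
    return int(np.bitwise_xor.reduce(corner.astype(np.uint64) * PRIMES) % table_size)

def hash_encode(x, tables, resolutions):
    """Multi-resolution hash encoding of a 3D point (illustrative sketch).

    tables:      list of (T, F) learnable feature tables, one per level
    resolutions: list of grid resolutions, one per level
    At each level the 8 enclosing corners are looked up through the hash table,
    trilinearly blended, and the per-level features are concatenated.  Only the
    few table rows touched here receive gradient when this point is updated.
    """
    feats = []
    for table, res in zip(tables, resolutions):
        p = x * res
        i0 = np.floor(p).astype(int)
        t = p - i0
        acc = np.zeros(table.shape[1])
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((t[0] if dx else 1 - t[0]) *
                         (t[1] if dy else 1 - t[1]) *
                         (t[2] if dz else 1 - t[2]))
                    corner = i0 + np.array([dx, dy, dz])
                    acc += w * table[hash_index(corner, len(table))]
        feats.append(acc)
    return np.concatenate(feats)
```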
Can anyone tell me what the disadvantage is before we go forward? Yes, it is not generalizable, that is the disadvantage. But why is it so difficult to generalize? Any thoughts? Yes, you are correct: the reason is that, as I mentioned, NeRF is not generalizable because it takes in x and the direction, and the way to generalize is to make the model depend on input images instead. But here the learned encoding is entangled with the spatial point x and the direction, so all the learned parameters are tied to the 3D points and viewing directions of that particular scene, and it is no longer easy to disentangle this. That is why it is not generalizable and not so easy to make generalizable. Any questions? Then I will talk about the remaining parts; we come back at 20:35. So there is another category of neural fields, and that is the signed distance function, or signed distance field. What it is, is the following: take any point in 3D space around some shape (this shape actually has a name, it is called the Stanford bunny). Look at just one two-dimensional slice of it, and query any point, which I will call x. The signed distance field gives me the closest distance from this point to the surface boundary: it is positive when the point is outside the surface and negative when it is inside, so you can figure out whether you are inside or outside the surface, although there will be a problem if your surface is an open surface. When you are exactly on the surface, you are at the zero level set. There are also other representations, like the occupancy representation, where the value ranges from zero to one and the level set is typically at 0.5; but with an occupancy probability there is no distance, you do not know exactly how far that point is from the nearest surface, whereas the signed distance field gives you this. In some other representations you might also have an unsigned distance, meaning you do not care whether you are inside or outside the surface, you only care about the distance; and there is also the truncated signed distance, meaning you only care about the distance up to a certain value, because the distance grows arbitrarily large as you go far away from the surface, so you truncate it at a maximum value. In 2019 the DeepSDF paper pioneered the learning of the signed distance function with a deep network, because essentially the SDF is just a function that takes in x, a 3D point in the volume, and outputs the signed distance value. What you can do is parameterize this function f with a deep network and train it to directly regress a continuous signed distance function for any query point.
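To make the definitions concrete, here is a tiny sketch of an analytic SDF (a sphere) and the truncation just mentioned; purely illustrative.

```python
import numpy as np

def sphere_sdf(x, center=np.zeros(3), radius=1.0):
    """Signed distance of point x to a sphere: positive outside, negative inside,
    zero exactly on the surface (the zero level set)."""
    return np.linalg.norm(x - center) - radius

def truncated_sdf(d, delta=0.1):
    """Truncated SDF: only distances within +/- delta of the surface matter,
    which keeps the regression target bounded."""
    return np.clip(d, -delta, delta)
```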
But there are challenges if you do it blindly like this, because a deep network is essentially quite bad at this kind of regression: mapping x to an unbounded SDF value is genuinely difficult to learn. If you can regress it directly, then the zero level set is your surface, the object surface, but the challenge, as I said, is that it is difficult to learn. So in the DeepSDF paper they present a few variants of the network. The most direct, most naive one is to learn a single network that takes a 3D point (x, y, z) and regresses the SDF value, where the zero level set is the surface; you parameterize the network with its weights and train it to output the regressed value. The training is usually done with ground-truth SDF values, meaning you need the 3D surface, the 3D object itself, as supervision. This is also the disadvantage of SDFs: you need the 3D ground truth, because there is no way to render this representation back to an image; what you have is just the surface, the level set, and there is no volume density or transmittance (the transmittance is a function of the volume density, so without a density you cannot do volume rendering). You can only do surface rendering, which will not give you a realistic image: what is commonly used is sphere tracing or ray tracing, where you trace along the ray, check whether it hits the surface, the zero level set, and render that point back, but it gives a rather flat image because there is no volume density. Training is usually done by clamping the target, which is effectively a form of truncated signed distance: any value beyond delta you do not care about, because otherwise the target is unbounded, and when it is unbounded it is basically not possible for the network to learn well. If you do it naively like this, the shape information is contained entirely inside the network, which means you are overfitting the network to just one shape; there is no chance of reusing it, and after training you cannot use it for other shapes. Therefore they also propose a second variant where, instead of inputting only (x, y, z), they add a latent code that captures the shape information; this transfers the encoding of the shape from the network weights into the code, and if you want to adapt the network to a new shape you do not have to retrain everything: you keep the network fixed and refine the latent code, optimizing over the code itself. That is the second variant. The third variant they introduce is the auto-decoder: this is really close to an auto-decoder, where the network is basically a decoder that maps the latent code, concatenated with (x, y, z), to the SDF value.
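A minimal PyTorch-style sketch of the auto-decoder idea, with illustrative layer sizes, might look like this; the routine at the bottom shows the "freeze the decoder, optimise only the code" adaptation described in a moment.

```python
import torch
import torch.nn as nn

class DeepSDFDecoder(nn.Module):
    """Auto-decoder sketch: the network maps [latent code, xyz] -> SDF value.
    The shape identity lives in the latent code, not in the network weights."""
    def __init__(self, latent_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, xyz):
        # z: (latent_dim,) shape code, xyz: (N, 3) query points -> (N, 1) SDF values
        return self.net(torch.cat([z.expand(xyz.shape[0], -1), xyz], dim=-1))

def fit_latent_code(decoder, xyz, sdf_gt, latent_dim=256, steps=200, lr=1e-2):
    """Inference-time adaptation: keep the decoder frozen, optimise only the code."""
    for p in decoder.parameters():
        p.requires_grad_(False)
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # clamped L1 regression to (possibly partial) ground-truth SDF samples,
        # plus a small norm penalty on the code
        loss = (decoder(z, xyz) - sdf_gt).abs().mean() + 1e-4 * z.pow(2).sum()
        loss.backward()
        opt.step()
    return z.detach()
```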
You condition the decoder on the point x, with the latent code as the input. If you do this, it means you can supervise using the output, the SDF value, against the ground-truth values from your surfaces, and for every point that you feed through the decoder you also optimize over the latent code. Then at inference time, given a new shape, you can keep the network parameters fixed and simply refine the latent code by optimization, which is much more efficient to do. That is the auto-decoder-based DeepSDF. The training is just this: you optimize over the network parameters, and you also optimize over the latent codes, but in this case you still need the ground truth, since the loss compares the SDF value output by the network with the ground-truth SDF value. The advantage of doing it this way is the following: if the whole shape has to be captured inside the network, as in the naive variant, then you need a complete shape in order to train it well. But in the auto-decoder version, because the shape information is shifted into the latent code, you can work with incomplete observations: you can have many pieces of incomplete shapes, each with only up to K points, across your N training samples, and it will still work, because you are optimizing over the latent codes rather than forcing the network weights to capture everything. There is also a regularization term, which basically keeps the latent codes small, constraining the variation within the latent code itself. That is during training; during inference, given a new shape, if you want to adapt the network you do not optimize the network parameters anymore, you just optimize the latent code, and that is much easier because it is D-dimensional, compared with the whole network, which contains many parameters; you just run a small refinement. This means it can handle any form of partial observation, as I mentioned, and it can also be easily adapted to new, unseen shapes. Here are some results of DeepSDF, where you can see it does this as well. Now, in contrast, think about volume rendering: in NeRF there is no explicit concept of a 3D surface, basically everything is rendered onto the image. Still, after you train a NeRF you can look at the volume density, because as I said the volume density peaks when you are on the surface and then drops, so you can treat it as a level set: if you extract the level set of the trained NeRF's density you will still get a 3D reconstruction, you will get something like this, but it is very noisy. The reason is that NeRF does not explicitly model the surface at all; nothing in the NeRF formulation I described captures it. The SDF, on the other hand, explicitly captures the surface and is trained on the surface, and that is why it is
much better for surface representation. But there is also a problem with SDFs: DeepSDF and all its variants require the ground-truth 3D shape for supervision, whereas NeRF does not need this. Therefore, in the last year or so, people have married the SDF and the radiance field. The basic idea is this: an SDF cannot do volume rendering on its own because there is no volume density, so if, given an SDF value, I somehow have a function that takes the SDF value and converts it into a volume density, then I will be able to do volume rendering, and that is exactly what VolSDF does. Here x is a point in 3D and the SDF gives the minimum distance of x to the surface: negative when you are inside the object, positive when you are outside. In neural volume rendering what you have is the density sigma(x), a positive scalar, the rate at which light is occluded at the point x, so we want a function that takes in the signed distance and maps it to sigma; if we can do this, we have successfully combined the two, and that is exactly what they do. They propose a learnable function Psi with two learnable parameters, alpha and beta, and in the paper (there is a long proof, which I will skip; look at the paper if you are interested) they show that this can be the cumulative distribution function (CDF) of the Laplace distribution. So if you feed the signed distance into this CDF of the Laplace distribution, you convert it into a volume density. Let us see intuitively why this makes sense. For any point far away from the object, on the outside, the distance is large and positive, and because of the minus sign in the argument you are looking at the far left of the axis, so the density is close to zero; as you move closer to the zero level set, the density increases, so it behaves like a distribution that gradually rises. What is important is that this whole mapping is differentiable, which means you can plug it into the network and let it learn what alpha and beta should be. And when the point is inside the object, the volume density should peak, because it is hidden inside the solid surface, which means it will block the light from being transmitted at all; so inside the object the volume density becomes even higher. That is how they propose to convert the SDF: this is the SDF, and they convert it into a volume density. Once you have this sigma, you plug it into the transmittance, and they basically derive the rendering equation in the paper.
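Here is a small sketch of that SDF-to-density conversion using the CDF of a Laplace distribution; alpha and beta are learnable in the actual method, and the values below are just for illustration.

```python
import numpy as np

def laplace_cdf(s, beta):
    """CDF of a zero-mean Laplace distribution with scale beta."""
    return np.where(s <= 0, 0.5 * np.exp(s / beta), 1.0 - 0.5 * np.exp(-s / beta))

def sdf_to_density(sdf, alpha=10.0, beta=0.1):
    """VolSDF-style conversion of a signed distance into a volume density.

    Far outside the surface (sdf >> 0) the density is ~0; it rises as the point
    approaches the zero level set and saturates near alpha deep inside the object.
    """
    return alpha * laplace_cdf(-sdf, beta)
```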
The transmittance tells you how much light can pass through, can be transmitted along the ray; the opacity is its complement, one minus the transmittance. You can think of the opacity O as a CDF, so when you differentiate it you get the probability density, and that gives you the weights you put inside the rendering equation of your neural radiance field. The result is the color, but now the sigma inside the rendering equation is replaced by this Psi function, which is itself a function of your SDF, so everything becomes a function of the SDF and everything stays differentiable. If you look at that term, sigma is a function of the SDF at the point, and you plug it in; your volume density becomes a function of your SDF, and everything can be learned through the SDF. What this means is that you have transformed the SDF into a volume density, so you can do volume rendering to render it back onto the image for supervision; in other words, you can learn an SDF without needing the 3D ground truth anymore, you just render it back. And you can see that, compared to NeRF, which as I mentioned ends up very noisy because it does not explicitly model the surface and has no surface regularization, here you can add one. One regularization that I did not mention, but which is well known in all the SDF papers, is that the norm of the gradient of the SDF must equal one, because when you move one unit of distance towards or away from the surface, the signed distance changes by exactly one. If you have a network that outputs the SDF, you need to compute gradients during back-propagation anyway, so you can compute the gradient of the SDF at sampled points and constrain its norm to one, and in this way you can explicitly regularize the surface so that it comes out smooth. With a plain volume density you do not know where the surface is, it is just transmittance and opacity, there is no concept of a level set, and that is why it ends up so noisy; with an SDF you can explicitly regularize it, and that is why in the end it is smooth. In contrast, before this you could also do things like IDR, which renders directly from the surface: it does not accumulate colors along the ray according to a volume density, it renders directly from the surface point (and there was an earlier paper in the same spirit before that). These methods need to know where the surfaces are, which means that on the 2D image they need a mask to do the training; that is why the results of IDR look different from the others, with the background all missing, because it knows exactly where the object is in each pixel. And that is not so practical, because you need an exact mask. VolSDF circumvents this whole problem by simply converting the SDF into a volume density.
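The eikonal regularizer mentioned a moment ago can be sketched like this, assuming a PyTorch-style network that maps points to SDF values.

```python
import torch

def eikonal_loss(sdf_net, points):
    """Eikonal regularizer: the gradient of a true signed distance field has unit
    norm, so we penalise (||grad f(x)|| - 1)^2 at sampled points."""
    points = points.clone().requires_grad_(True)
    sdf = sdf_net(points)
    grad = torch.autograd.grad(sdf.sum(), points, create_graph=True)[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```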
With that conversion we can do volume rendering, train it the way NeRF does, and at the same time regularize the SDF values. Any questions up to this point? Good. Then there is also another recent paper called NeuS, and when I looked at both carefully I realized they are actually proposing almost the same thing; I will tell you why. They start from the same thing, the rendering equation: the rendered color is basically a weighted sum of the colors along the ray, and the weight is the transmittance multiplied by the volume density; this is nothing new, it is just the NeRF rendering equation. Then, given the level set of the SDF f, they claim that two conditions must be satisfied by these weights in order for the SDF to be plugged into this volume rendering equation. First, the weights must be unbiased: where the weight peaks along the ray must correspond to your signed distance being zero, i.e. the depth t* at which the SDF is zero, on the surface, is where the weight must peak. Second, the weights must be occlusion-aware: if you have two depths t0 and t1, with t1 farther away than t0, and the SDF values there are the same, then the weight at t0 must be greater than the weight at t1, because the ray passes t0 first. What this means is that if t0 is here and t1 is there, you have two surfaces, two objects along the ray; if the ray passes the first one, then the weight at the first surface had better be higher than at the other one, because the second surface is already occluded by the first object, so there is no point in accumulating much of whatever is behind it. These two conditions must be satisfied. In the paper they then look at the choice of conversion: f here takes x and outputs the SDF value, and a transfer function transforms the SDF value into the volume density; the transmittance is a function of sigma, so the weight becomes a function of sigma as well as of the signed distance value. They claim that if you choose the weights in the usual NeRF way, meaning the distribution of sigma with respect to the depth values is unimodal, for example a logistic density distribution, then you get an offset between where the weight peaks and the level set, where t* corresponds to the surface. And this is not correct, because it means that the accumulation of your color in the radiance field does not correspond to the surface: the color you are looking at should come from the surface, so if you choose the weights this way, with a unimodal density, the weight does not peak at the surface, and the color that you render back is actually offset from
the surface under volume rendering. So what they propose is this: instead of using that sigma, they propose a quantity rho, and rho should be computed from a particular function. If you replace sigma with this rho, then instead of the green curve for the weight, which has an offset, the peak of the weight now corresponds to the surface; but what happens is that your density is no longer unimodal like the logistic density, it becomes an S-shaped curve. That is how they explain it in the NeuS paper, but what I observed when I looked at this carefully is: isn't this curve the same as the other one? It is the same thing; VolSDF already tells you to choose the density directly as an S-shaped CDF, the Laplace CDF, which is essentially the same as what is chosen here, so the two really coincide. With this choice, condition one is satisfied, meaning the weight peaks at the surface; there is also a long proof in the paper, which I will not show, only an illustration. If you choose rho this way, then suppose there are two surfaces in one dimension along the ray, with t0 here and t1 there: looking along this ray you hit the first surface, and, imagining that the first object were not there, you would hit the other surface, so both SDF values would be the same, which is exactly the situation in the occlusion-awareness condition. Then, if you use rho to compute the weights, you get weights that are greater than zero everywhere, and because you are looking from here towards there along this direction, and remembering that the weight is the accumulated contribution to the rendered color, the first surface should contribute the most to the rendered color, because the other one is already occluded; so the weight at t0 must be greater than the weight at t1. They show that this is true, they prove it in the paper; here it is just one case, one illustration. Therefore the second condition is also satisfied. Let me quickly go through the remaining parts. So this is how they marry the SDF and volume rendering together. But I still have not addressed the other disadvantage of NeRF, which is that it requires known camera poses. In BARF, what they do is assume imperfect camera poses and then do bundle adjustment together with the radiance field, but BARF still requires some initialization; the poses cannot be totally random. To see how it works, look first at 2D image alignment: you have two views, I1 and I2, and a warping function that takes the warp parameters and any point on the image; you warp one patch into the other view and then minimize this error, the photo-consistency error, between the two views. This can be solved iteratively, because you are optimizing for the warp parameters: if you know them you can do the warping, and you solve by just minimizing the photo-consistency. So if you
know Gauss-Newton, then A here would simply be J transpose J; by now all of you should know this already, this is what we learned during the bundle adjustment lecture. The update rule then gives Delta p, and the Jacobian of the warping function can be obtained numerically: you take a point, perturb the parameters by a small delta, put both through the function, subtract the two and divide by the delta; that is the Jacobian. So that is the classical case: you have two images and you optimize for photo-consistency. Now let us replace one image with a neural network, meaning that image is rendered by a neural network; you can do it for the other image as well, so both images can be treated this way. Once one of the images comes from a neural network, you render it onto the other view, take the difference, and you can optimize over that neural network as well while minimizing the photo-consistency over the pose. If you consider both images, the summation just runs over m equal to 1 and 2, and the Jacobian is obtained by taking the partial derivatives; but now the Jacobian is no longer computed numerically, it is simply your back-propagation, because this is a network, so you back-propagate, compute the Jacobian, and update. What is interesting is that so far this is a generic network, but if you replace it with NeRF, then you render the image from one view into another view, and if you have multiple views you can take any view and render it across all the other views and optimize over the poses. The rendering now becomes a function of the network parameters, which is what you are learning with NeRF, as well as the camera poses. The camera poses are what let you go from a pixel and a depth to a 3D point: when you know the pixel and the depth, you know the 3D point, and you can render it back. In NeRF terms, you need to know the viewing direction and the point x, and both d and x depend on the camera pose p, so you can express x as a function of p; if you do not know the pose, you optimize over it jointly inside the same objective. You need the correct p to know this x, and you need the pose of the other view to render it back there; that is what this warping function amounts to. The integral becomes a summation, as we saw earlier, and you can consider the whole thing as a function G: the network outputs c and sigma, which is four-dimensional, you have n of these samples, and they are mapped into the RGB value at the pixel of your image. So you can put that inside, compute the Jacobian, and optimize; you are optimizing over your NeRF as well as the camera poses of all the multiple views. This is what allows bundle adjustment to be incorporated into NeRF: instead of minimizing the reprojection error,
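To recap the classical update being described, here is a small numerical Gauss-Newton step on a generic photometric residual; with a network in the loop the Jacobian would come from back-propagation instead of finite differences, and the small damping term is my own addition for stability.

```python
import numpy as np

def gauss_newton_step(params, residual_fn, eps=1e-4, damping=1e-3):
    """One (damped) Gauss-Newton step on a photometric residual.

    residual_fn(params) returns the stacked photometric errors; the Jacobian is
    obtained numerically by finite differences, as in the classical 2D
    image-alignment case described above."""
    r = residual_fn(params)
    J = np.zeros((r.size, params.size))
    for i in range(params.size):
        p = params.copy()
        p[i] += eps
        J[:, i] = (residual_fn(p) - r) / eps      # numerical Jacobian column
    A = J.T @ J + damping * np.eye(params.size)   # J^T J (plus damping)
    delta = np.linalg.solve(A, -J.T @ r)          # Gauss-Newton update
    return params + delta
```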
you are minimizing the photo-consistency between the rendered image and the source image from that view, and you can do this over multiple views. But there is one problem: for NeRF to give high fidelity you need the frequency encoding, and if you take the partial derivative, the Jacobian, through the frequency encoding, you can see that the Jacobian is scaled by the order of the Fourier basis. That means the very high-frequency part of the encoding dominates the Jacobian, and the Jacobian is what computes the update: Delta p equals (J transpose J) inverse times J transpose times the error. If this is dominated by the high-frequency components, which are always present because of how you encode, then the initial stage of the training will not be good, because the low-frequency part never gets a chance to drive the alignment. Therefore they propose the coarse-to-fine frequency schedule: a controllable parameter going from 0 to L is used to weigh down the high-frequency components during the initial phase of training, so that at the beginning only the low frequencies are used, and then the higher frequencies are gradually added back; by the time they come back in, the network is already well trained, so adding them does not disturb the training very much. That is BARF. But BARF also has a disadvantage: it is not generalizable, because the encoding still takes in x, the 3D point, and the direction. In our work we use IBRNet to substitute for this part, combining the two. Also, BARF updates the absolute poses: all the poses P are expressed with respect to a global frame, and that is why it still needs a reasonably good initialization to work. Inspired by global structure from motion, which I talked about in the bundle adjustment lecture, we instead update the relative poses; relative poses are easier, they are bounded, and the network can learn them better, which frees us from needing a good initialization. We show in the paper that we do not need any careful initialization at all, just random initialization, and it works: the camera poses converge and the rendering works.
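A sketch of the coarse-to-fine frequency weighting just described might look like the following, where alpha ramps from 0 to the number of frequency bands over training; the exact easing function is my recollection of the BARF-style schedule, so treat it as illustrative.

```python
import numpy as np

def coarse_to_fine_weights(alpha, num_freqs):
    """Per-frequency weights for the positional encoding during training.

    Frequency band k is fully off while alpha < k, fades in smoothly while
    k <= alpha < k + 1, and is fully on afterwards, so early pose optimisation
    is driven by the low frequencies only."""
    k = np.arange(num_freqs)
    t = np.clip(alpha - k, 0.0, 1.0)
    return (1.0 - np.cos(t * np.pi)) / 2.0

# example: halfway through the schedule, only the lower bands are active
print(coarse_to_fine_weights(alpha=5.0, num_freqs=10))
```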
The last part I want to talk about, and I might go over time a little, is dynamic scenes, specifically for humans. This is interesting because if I can model a human and manipulate that model, I would have a digital avatar: if I take a scan of you, build a digital NeRF or SDF model of you, and make it animatable, then I can put you into a metaverse and control how you look inside it. But the problem is that the scene is dynamic: this person, for example, is dancing, and consider a point on the hand. You know that in NeRF you need a static scene and known camera poses, because every point needs to render back consistently; but if this point moves in the 3D scene, how do you know the correct point correspondence? At time t = 1 it is here, at time t = 2 it might be somewhere else, even though the camera is fixed, so how do you model this correspondence? The way to overcome it is to construct what we call a canonical space. You have an observation space, which can be any pose, and the canonical pose, which is taken to be a T-pose. Given a ray and any point in the observation space, you want to map it into the canonical space; in the canonical space you learn all the features, the radiance field, and map them back onto the image. In other words, you store all the corresponding features in the canonical space, and once you know that this point in the observation space corresponds to that point in the canonical space, you retrieve the feature from the canonical space, use NeRF to obtain the color and volume density there, assign them to the corresponding point in the observation space, and render. So this is how they do it, but the question is: how do you know which observation-space point corresponds to which canonical-space point? That can be done using what we call linear blend skinning. Suppose I want to map a point from the canonical space to the observation space: the transformation is a weighted linear combination of the bone transformations applied to that point, where the blending weights are themselves a function of the point and need to be learned. If I want to do the opposite, I simply take this blended transformation and invert it, which brings me from the observation space back to the canonical space. (I think there is a typo on this slide: this one should be the inverse, the exponent should be here, because it is the opposite mapping.) What they propose is to design a network that outputs the blending weights. The G terms here are nothing but the bone transformations between the joints, and they stay fixed; you can actually use SMPL, a parametric human body model, to estimate all these parameters from an image or from a bunch of multi-view images. But SMPL has the problem that it is a model of the human with minimal clothing, whereas if you want to model a person you also want to model the clothing, and the clothing is a deviation from the minimal-clothing human model; this deviation is what we want to learn here.
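Here is a minimal sketch of linear blend skinning as described, including the inverse mapping done by inverting the blended transform, which is how it was just explained; shapes and names are illustrative.

```python
import numpy as np

def lbs_canonical_to_observation(v_c, bone_transforms, weights):
    """Linear blend skinning: map a canonical-space point to observation space.

    v_c:             (3,) point in the canonical (e.g. T-pose) space
    bone_transforms: (K, 4, 4) per-bone rigid transforms (e.g. from an SMPL fit)
    weights:         (K,) blending weights for this point, summing to 1
    """
    v_h = np.append(v_c, 1.0)                                    # homogeneous coordinates
    blended = np.einsum('k,kij->ij', weights, bone_transforms)   # weighted sum of transforms
    return (blended @ v_h)[:3]

def lbs_observation_to_canonical(v_o, bone_transforms, weights):
    """The opposite mapping: invert the blended transform to go back to canonical space."""
    v_h = np.append(v_o, 1.0)
    blended = np.einsum('k,kij->ij', weights, bone_transforms)
    return (np.linalg.inv(blended) @ v_h)[:3]
```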
Coming back to the clothing: the deviation from the SMPL model is what the Delta W term encodes. The SMPL blending weights can be computed directly from the image, and the network predicts the deviation away from the SMPL weights to give the final blending weights; this gives us the mapping from the observation space to the canonical space. The next part is the canonical radiance field itself: once we are in the canonical space, we learn how to map every canonical point into colour and volume density, and that is just the NeRF equation. There is also another branch that learns the weights for going from the canonical space back to the observation space, so there is a w defined on the observation space and a w defined on the canonical space; if we take x from the observation space, transform it into the canonical space and feed it in, the outputs of the two blending-weight networks should be the same, and this consistency acts as a regularization on the network. But there is a problem with this design: it is not generalizable, because once trained you have a network that outputs one canonical space, and that canonical space is tied to the identity of this particular person, so you cannot generalize to another identity. It is, however, animatable: if you change the observation pose, say I ask for a different pose x, the model picks the corresponding content up from the canonical space and maps it into the pose I desire, so I can control the rendering and produce a different human pose. You can see this in the results: it can do novel view synthesis and also novel pose synthesis, meaning you control the pose and it renders the person in that new pose. Again, these things are actively being researched; you can still see that the quality of the rendering is not perfect yet. There are two limitations. The first is that it requires per-pose optimization: if you want to render a new pose, you need to optimize over a latent code for that new pose. The new pose itself is computed from SMPL (I am leaving out quite a lot of details in the slide), and only after refining the new latent code can you infer the person in the new pose. The second is that it cannot generalize to a new identity, so it is not generalizable per se. For the first problem there is a solution. The reason the method is pose-dependent is that the blending weights are defined on the observation space: they take x in the observation space as input in order to produce x', so they cannot generalize to different poses. What you can do instead is make the blending weights pose-independent by defining them on the canonical space: instead of w being a function of x in the observation space, make it a function of x' in the canonical space, so it becomes independent of the input pose and you can generalize. This is actually the more correct formulation, but if you do this it involves a form of root finding, because the weights you use to find x' are now a function of x', which is your output. In other words x*, the canonical point you are after, appears on both sides: w is a function of x*, which is the output of the very mapping you are evaluating, so there is no closed-form solution and you have to solve for the root of this equation. The way to solve it is the Newton-Raphson method, or a quasi-Newton method as proposed in the paper. You can also handle multiple correspondences: initialize with a vector of candidate points and solve for several roots at the same time inside the same Newton-type iterations. This is SNARF. Once the blending weights are independent of the input pose, the method generalizes across poses: whatever pose you give as input, you solve for the root to get x*, retrieve the features stored at that canonical point, and produce the output at x', which in SNARF's case is the occupancy used to extract the surface. By the way, SNARF was designed for surface reconstruction, so there is no novel view synthesis; but you can now control the pose of the human by feeding in different pose parameters and infer the surface output for that new pose. The drawback is that this is fairly computationally expensive, as we will see next; a small sketch of the root-finding step follows below.
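Below is a minimal sketch of the root-finding step: given an observed point and pose-dependent bone transforms, find the canonical point x* whose forward skinning lands on the observation, where the blend weights are defined on the canonical point itself. A plain Newton iteration with a numerical Jacobian is used here for clarity (the paper proposes Broyden's quasi-Newton method), and the weight function below is a toy stand-in for the learned canonical weight network.

```python
import numpy as np

def canonical_weights(x_c):
    """Toy stand-in for the learned canonical blend-weight network w(x')."""
    a = 1.0 / (1.0 + np.exp(-x_c[0]))            # weight depends on the canonical point
    return np.array([a, 1.0 - a])

def skin(x_c, bone_transforms):
    """Forward LBS with canonical-space weights: f(x') = (sum_k w_k(x') G_k) x'."""
    w = canonical_weights(x_c)
    T = np.tensordot(w, bone_transforms, axes=(0, 0))
    return (T @ np.append(x_c, 1.0))[:3]

def find_canonical(x_obs, bone_transforms, n_iters=20, eps=1e-6):
    """Solve skin(x') - x_obs = 0 for x' with Newton iterations (numerical Jacobian)."""
    x = x_obs.copy()                              # initialise in the observation space
    for _ in range(n_iters):
        r = skin(x, bone_transforms) - x_obs
        if np.linalg.norm(r) < 1e-10:
            break
        J = np.zeros((3, 3))
        for j in range(3):                        # forward-difference Jacobian
            dx = np.zeros(3); dx[j] = eps
            J[:, j] = (skin(x + dx, bone_transforms) - skin(x, bone_transforms)) / eps
        x = x - np.linalg.solve(J, r)
    return x

G = np.stack([np.eye(4), np.eye(4)]); G[1, 0, 3] = 0.5
x_obs = np.array([0.45, 0.2, 0.3])
x_star = find_canonical(x_obs, G)
print(x_star, skin(x_star, G))                    # skin(x*) matches the observation
```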
The expense shows up in the backward pass. When you differentiate the loss you have two sets of parameters: the ones that produce the features for inferring the occupancy, sigma_f, and the weight-network parameters, sigma_w. Differentiating with respect to sigma_w requires the partial derivative of x* with respect to sigma_w, and x* is only defined implicitly: it is the root of the skinning equation, and it is also the input to the weight network, so there is no closed-form expression; the network takes its own output back as input to produce the weights that define x*. Every time you back-propagate you need this partial derivative, and evaluating it requires implicit differentiation, because x* is defined by an implicit function; moreover, to evaluate that Jacobian at the correct operating point you first have to run the root finding to obtain x* before you can back-propagate and update. This is the more correct method, and the reason comes back to the same equation: you should not simply assume the blended transform can be inverted and then treat the weights as a function of x'. If you bring everything to one side without inverting, x appears on both sides, so you genuinely need to solve for x in order to get the correct blending weights; simply inverting and substituting is a very big approximation, and mathematically it is not quite correct for the linear blending weights. The root-finding formulation is the more correct way; it is extremely slow because of the root finding, but the results are better, and it works even for extreme poses: you feed in an extreme pose and it still produces the correct output.
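For completeness, here is a minimal sketch of the implicit-differentiation step mentioned above: once the root x*(theta) of f(x, theta) = 0 has been found, its sensitivity to the parameters follows from the implicit function theorem, dx*/dtheta = -(df/dx)^-1 df/dtheta, evaluated at that operating point. Numerical Jacobians and a toy residual are used here purely for illustration; in a real network these would come from automatic differentiation.

```python
import numpy as np

def implicit_gradient(f, x_star, theta, eps=1e-6):
    """dx*/dtheta = -(df/dx)^-1 (df/dtheta), evaluated at the root x* of f(x, theta) = 0."""
    n, m = x_star.size, theta.size
    dfdx = np.zeros((n, n))
    dfdt = np.zeros((n, m))
    for j in range(n):                            # forward-difference Jacobian in x
        dx = np.zeros(n); dx[j] = eps
        dfdx[:, j] = (f(x_star + dx, theta) - f(x_star, theta)) / eps
    for j in range(m):                            # forward-difference Jacobian in theta
        dt = np.zeros(m); dt[j] = eps
        dfdt[:, j] = (f(x_star, theta + dt) - f(x_star, theta)) / eps
    return -np.linalg.solve(dfdx, dfdt)

# toy residual: f(x, theta) = x - theta**2, so x*(theta) = theta**2 and dx*/dtheta = 2*theta
f = lambda x, th: x - th**2
theta = np.array([1.5, -0.5, 2.0])
x_star = theta**2
print(implicit_gradient(f, x_star, theta))        # ~ diag(3.0, -1.0, 4.0)
```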
The second limitation is that it cannot generalize to a new identity. The reason, as I mentioned, is that it relies on the T-pose canonical space: what you learn is a canonical T-pose that stores all the features of one particular person, and that is not generalizable. So instead of relying on the stored T-pose, let us relax it to the multi-view images themselves. The problem with doing this is that you now have to find all the correspondences. That is still doable if you have multi-view capture over multiple time steps: from the multiple views you can fit the SMPL model at every single time step even though the person is moving, and because the SMPL vertex indices are consistent, you know which vertex corresponds to which vertex across the different time steps, so you have a rough idea of where each point goes. Then you can do away with the T-pose storage: you link the time steps up and pick the features directly out of the images instead of storing them on a canonical T-pose. This is what Neural Human Performer does, published at NeurIPS last year, I think, or the year before. Because the input is multi-view and multi-time-step, the features have to be fused, and they introduce two transformers: a temporal transformer and a multi-view transformer. The temporal transformer takes the same point seen over multiple time steps and fuses the corresponding features into a single output feature; it is a standard transformer doing fusion over time. Once you have that feature, the multi-view transformer fuses the features across the multiple camera views: for each view you reproject the point onto the image, pick up the pixel-aligned feature, and perform cross-attention over the temporal features and the multi-view features. Finally you obtain a fused view-wise feature z, and an MLP maps this final feature to the volume density and the colour. As a result this is probably one of the best sets of results at the moment, and I think it is quite interesting, but it still requires many views in order to produce good-quality results. Any questions up to this point? If not, this is the last thing I wanted to cover for the entire course. Today we looked at neural radiance fields and volume rendering; I talked about the issues with neural radiance fields, namely generalizability and efficiency, and some methods to mitigate them; then we looked at signed distance functions and compared the two; then we saw how to marry the two together so that you can use self-supervision, which is volume rendering, while maintaining surface accuracy by using the signed distance function; and finally I talked about how to use NeRF and SDF for animatable and generalizable dynamic humans. With this I conclude the whole semester, and thank you |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_5_Part_2_Camera_models_and_calibration.txt | now let's look at the action of the projective camera on points. We first consider forward projection, which is just the simple mapping we have seen earlier: given a point in 3D space represented by capital X, the action of the projective camera is simply the multiplication of the projection matrix P with X, i.e. x = PX, which maps the 3D point onto a 2D image coordinate denoted by small x. Effectively we are transforming a point in P3 into a point in P2, and this works for any general point X. In the case where X is a point at infinity, which we denote by D = (d^T, 0)^T, the first three elements d give the direction of the point at infinity and the last element is zero, because this is an ideal point lying on the plane at infinity. Applying the same camera projection to this point at infinity, we see that PD = Md: interestingly, only the first 3x3 block M of the projection matrix acts on d, because the last element of D is zero and so the last column of P has no effect. In other words, the first 3x3 block of the camera projection matrix is all that acts on a point at infinity, and the resulting image point is the vanishing point. Now let's look at the effect of the camera projection in the opposite direction. Suppose we are given an image with an observed point x and we know the camera projection matrix P; we ask what we can say about the 3D point that produced this observation. From a single projected point it is never possible to recover the exact location of the 3D point, because the camera projection matrix is a 3x4 matrix and is therefore not invertible: the forward projection x = PX is fine, but the inverse of P does not exist, so the 3D point X can never be recovered exactly this way. What we get instead is a one-parameter family of solutions, namely the light ray that passes through the image point x and the 3D point X; and of course this ray must also pass through the camera centre, because this is a projective camera with central projection. So the 3D point obtained by back-projection is only constrained to lie on a line, and geometrically it is clear from the illustration that any point on this line is projected onto the exact same image point by the camera projection matrix P. This line is spanned by two points: the first is of course the camera centre, and the second is any other point that lies on the line.
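Before continuing with the back-projected ray, here is a small numpy sketch of the forward projection x = PX and of the projection of a point at infinity, which depends only on the left 3x3 block M. The intrinsics, rotation and camera centre below are made-up values for illustration.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
C = np.array([0.0, 0.0, -2.0])                     # camera centre (inhomogeneous)
P = K @ R @ np.hstack([np.eye(3), -C[:, None]])    # P = K R [I | -C]

def project(P, X_h):
    """Forward projection of a homogeneous 3D point; returns the inhomogeneous pixel."""
    x = P @ X_h
    return x[:2] / x[2]

X = np.array([0.5, 0.2, 3.0, 1.0])                 # a finite scene point
D = np.array([1.0, 0.0, 1.0, 0.0])                 # a point at infinity (direction d)
print(project(P, X))                               # ordinary image point
print(project(P, D))                               # vanishing point = M d (last column unused)
```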
Returning to the ray: conveniently, for the second point we can take the ideal point in the direction that points from the camera centre towards the observed image point. For a finite camera this direction is given by ((M^-1 x)^T, 0)^T, where M is the left 3x3 block of P, so it is an ideal point whose last element is zero; more generally, a point on the ray is obtained by multiplying the pseudo-inverse of P with the image point, P+ x, where the pseudo-inverse is P+ = P^T (P P^T)^-1. We can verify that these really lie on the back-projected ray by projecting them forward again: pre-multiplying the ideal point by P gives P ((M^-1 x)^T, 0)^T = M M^-1 x + p4 * 0 = x, so we get back exactly the observed point x, and likewise P P+ x = x. This means that any point along this direction from the camera centre projects onto the same image point x. The second part is the camera centre itself, and we have seen earlier that the camera centre is obtained from the null space of P, i.e. PC = 0, so we can find the null space of P to get the camera centre. We can also verify this: substituting C = ((-M^-1 p4)^T, 1)^T into P = [M | p4] gives M(-M^-1 p4) + p4 = 0, which satisfies the null-space equation. Hence, taking the camera centre together with the ray direction, the span of the two gives all the points that fall on this particular light ray, and all of these points are reprojected onto x, which is consistent with the camera projection x = PX.
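Here is a small sketch of back-projecting an image point to a ray: the camera centre from the null space of P via the SVD, the ray direction M^-1 x, and a check that a point chosen along the ray reprojects to the same pixel. The camera below is again a made-up example.

```python
import numpy as np

K = np.diag([800.0, 800.0, 1.0]); K[0, 2], K[1, 2] = 320.0, 240.0
R = np.eye(3)
P = K @ np.hstack([R, np.array([[0.1], [0.0], [2.0]])])   # P = K [R | t]

def camera_centre(P):
    """Right null space of P: the homogeneous camera centre C with P C = 0."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C / C[3]

def backproject_ray(P, x_pixel):
    """Return (centre, direction): any point centre + lam * (d, 0) projects onto x_pixel."""
    M = P[:, :3]
    x_h = np.array([x_pixel[0], x_pixel[1], 1.0])
    d = np.linalg.solve(M, x_h)            # ray direction, the ideal point M^-1 x
    return camera_centre(P), d

C = camera_centre(P)
centre, d = backproject_ray(P, (400.0, 272.0))
X = centre + 2.0 * np.append(d, 0.0)       # pick some point along the ray
x = P @ X
print(x[:2] / x[2])                        # reprojects to (400, 272)
print(np.allclose(P @ C, 0.0))             # the centre lies in the null space of P
```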
Now suppose that we have the location of the 3D point, given by X = (X, Y, Z, T)^T, and we are also given the camera projection matrix P; the question is to find the depth of this 3D point, that is, the distance, measured along the principal axis z from the origin of the camera coordinate frame (which is the camera centre), of the projection of X onto the principal axis. This depth can be computed as depth(X; P) = sign(det M) * w / (T * ||m3||), where M is the first 3x3 block of P, w is the scale of the projected image point, i.e. PX = w (x, y, 1)^T, T is the last element of X, and m3 is the last row of M. Let's see the proof of why this is true, with the figure illustrating the projection of X onto the image: the projected point is PX, C is the camera centre, and what we are after is the distance between C and the projection of X onto the principal axis z. First, w can be computed as the dot product of the third row of P with X: writing P in terms of its rows p1^T, p2^T, p3^T, the projection is PX = (p1^T X, p2^T X, p3^T X)^T = (wx, wy, w)^T, so w = p3^T X. We can also subtract the camera centre, w = p3^T (X - C), because p3^T C = 0, since C lives in the null space, so this does not affect the result at all. What is interesting is that the third row can then be rewritten in inhomogeneous form: taking X = (X~^T, 1)^T and C = (C~^T, 1)^T (i.e. T = 1), the last entry of the row drops out and the end result is the dot product w = m3^T (X~ - C~). From the dot-product identity a . b = ||a|| ||b|| cos(theta), this equals ||m3|| ||X~ - C~|| cos(theta), where theta is the angle between m3 (the principal-axis direction) and the vector from C~ to X~; and ||X~ - C~|| cos(theta) is exactly the projection of the vector from the camera centre to the point onto the principal axis, which is the depth we want. Making the depth the subject gives depth = w / ||m3|| (and we divide by T in the general homogeneous case). Notice that we also include sign(det M): the reason is that we do not know exactly which way the principal axis is pointing, it could point towards the scene or in the opposite direction, and taking the sign of the determinant of M resolves this ambiguity, effectively telling us the positive direction of the principal axis.
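A short numpy sketch of this signed-depth formula is given below; the camera is a made-up example, and a point behind the camera comes out with a negative sign.

```python
import numpy as np

def point_depth(P, X_h):
    """Signed depth of a homogeneous point X = (X, Y, Z, T) along the principal axis:
    depth = sign(det M) * w / (T * ||m3||), where w is the third coordinate of P X."""
    M = P[:, :3]
    w = P[2] @ X_h
    T = X_h[3]
    return np.sign(np.linalg.det(M)) * w / (T * np.linalg.norm(M[2]))

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [1.0]])])   # camera centre at Z = -1

print(point_depth(P, np.array([0.2, 0.1, 3.0, 1.0])))    # 4.0: the point is in front
print(point_depth(P, np.array([0.2, 0.1, -3.0, 1.0])))   # -2.0: the point is behind
```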
So multiplying by the sign of the principal-axis direction simply means that we can find out whether the point X is lying in front of or behind the camera. Suppose the principal axis points towards the scene, so its sign is positive, and the scale w of the reprojected point is also positive: a positive times a positive gives a positive depth, which means X lies in the same direction as the camera axis, in front of the camera. On the contrary, if the camera axis points the other way (negative sign) while w is positive, multiplying the two gives a negative sign, meaning the 3D point lies in the direction opposite to the camera's principal axis; and if both are negative, negative times negative is again positive, so the point is in front. Hence the signed depth we have computed lets us check whether the 3D point lies in the same direction as the principal axis. Now let's look at the decomposition of the camera matrix. Given a camera matrix P = [M | p4], we know it is equal to K R [I | -C~], so what we want to do is decompose the given 3x4 matrix into K, R and C, where K is further decomposed into the focal length f, the skew s, and the principal point (px, py): in other words, given the 3x4 projection matrix, we want to find all 11 degrees of freedom in the intrinsics and extrinsics. The first thing to find is the camera centre, denoted C, which lies in the null space of P, so we use the equation PC = 0 to retrieve it. This is easy to see if we rewrite P in terms of its rows p1^T, p2^T, p3^T: we know that the third row is the principal plane and the first two are the axis planes, and these three planes all intersect at the camera centre. So looking at the equation with P written as three planes, C lies in the null space of P, which also means C is the incidence point of the three planes, and this can be easily solved using the SVD: we decompose P = U Sigma V^T into the left orthogonal matrix, the singular values and the right orthogonal matrix, and the solution for C is the column of V corresponding to the smallest singular value; if you work out the maths of the SVD, all the elements of C are found from this. Having seen how to find the camera centre, we proceed to the remaining parameters in the camera projection matrix, essentially the rotation of the camera and the internal parameters, i.e. the intrinsics consisting of the focal length, the principal point and the skew factor. We have seen many times in the lecture that the camera matrix can be rewritten as [M | -M C~] = K [R | -R C~], where K is the intrinsic matrix and -R C~ corresponds to the translation of the camera centre. What is interesting is that if we bring K in to form K R, this is exactly the 3x3 matrix M in the decomposition, so we can identify M = K R. What is even more interesting is that K is an upper-triangular matrix, because the intrinsic matrix consists of the focal length, the principal point and the skew factor, and R, being a rotation matrix, is orthogonal. So our objective becomes: decompose the 3x3 matrix M into an upper-triangular matrix times an orthogonal matrix, and this can be done with the RQ decomposition of M. In the RQ decomposition we write M as an upper-triangular factor times an orthogonal factor; post-multiplying by the transpose of the orthogonal factor Q brings it to the left, M Q^T = (upper triangular), and since the result is upper triangular its lower-triangular entries must be zero (and in the case of the camera projection matrix the last diagonal entry is one, up to scale). We can express Q, since it is a rotation matrix, using roll, pitch and yaw angles, so there are just three unknowns in it, while M is known because it comes directly from the projection matrix. Multiplying out M Q^T gives nine equations altogether, since it is a 3x3 matrix, of which four equal known values, the zeros and the one; we can use these four equations to solve for the three unknowns (an over-determined system), or simply use the three zeros to solve for the three Euler angles. These are essentially the steps of the RQ decomposition. Since in the intrinsic matrix K the diagonal contains the focal lengths, which have to be strictly positive, we can also use this as an additional constraint, a sign convention, when solving for the Euler angles of the orthogonal factor, and this is easily done. Having solved for the rotation and the intrinsics, the last thing to do is to read off all the parameters of the camera intrinsics: assuming the focal length differs in the x and y directions, and including the skew factor and the two principal-point coordinates, there are five unknowns altogether, and since the RQ decomposition of M gives us the orthogonal matrix R and the upper-triangular matrix K, all five can be read directly off the entries of the upper-triangular matrix.
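Here is a short sketch of recovering K and R from M with an RQ decomposition, using scipy's rq and fixing the signs so that the diagonal of K (the focal lengths) is positive. The ground-truth camera below is a made-up example used to check the round trip; this is one straightforward way to implement the step described above, not the only one.

```python
import numpy as np
from scipy.linalg import rq

def decompose_M(M):
    """M = K R with K upper triangular (intrinsics) and R a rotation matrix."""
    if np.linalg.det(M) < 0:
        M = -M                             # P is only defined up to an overall sign
    K, R = rq(M)
    S = np.diag(np.sign(np.diag(K)))       # RQ is unique only up to these signs
    K, R = K @ S, S @ R                    # (K S)(S R) = K R because S S = I
    return K / K[2, 2], R                  # normalise so that K[2, 2] = 1

# build a camera with known ground truth, then recover it
K_true = np.array([[700.0, 2.0, 310.0], [0.0, 710.0, 250.0], [0.0, 0.0, 1.0]])
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
M = K_true @ R_true
K, R = decompose_M(M)
print(np.allclose(K, K_true), np.allclose(R, R_true))   # True True
```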
So far all the development of our camera projection matrix has been based on Euclidean geometry, or a Euclidean coordinate system: when we talked about the camera coordinate system we used a Euclidean reference frame, and the world frame is also a Euclidean system. This might seem confusing at this point, because everything we used to describe the camera projection matrix is Euclidean, yet we insist that the camera is a projective device that maps a P3 space into a P2 space. This can easily be seen from another way of decomposing the camera matrix P: since it is a 3x4 matrix, we can decompose it into a 3x3 matrix, followed by a 3x4 matrix, post-multiplied by a 4x4 homography. The 4x4 matrix is the part that acts on X first (in the projection x = PX it is directly in contact with the 3D space), and it can be any projective transformation of 3D space: it acts on X in a projective manner, which is exactly what we learned in lectures two and three, a mapping of a P3 space back onto a P3 space, so this part is projective in nature. Then we pre-multiply by the 3x4 matrix, which is simply the identity with a last column of all zeros; it applies no transformation at all but just removes the last dimension, turning the P3 space into a P2 space, which is the projection from 3D to 2D proper. Once this action is done we are living in a projective 2D space, and the leading 3x3 matrix can act on this space as any form of planar transformation, in particular a projective transformation or homography, which is just what we saw earlier, x' = Hx, a transformation of a P2 space into the P2 space itself, simply mapping one image to another. Consequently, the whole operation of the camera projection matrix should be seen as a projective operation. Having looked at projective cameras whose centre lies at a finite coordinate, let us now switch our attention to another class of projective cameras where the camera centre lies at infinity. We will see that the camera matrix of this type of camera, which we call the affine camera, is still a 3x4 matrix, but now the last row of the projection matrix is (0, 0, 0, 1). Let's see why this is so; there are two reasons that explain it, and let's look at the second one first. We saw earlier that for a finite camera the last row of the projection matrix is the principal plane: for a finite camera both the camera centre and the principal plane take finite values, but for an affine camera the principal plane and the camera centre lie on the plane at infinity, and this is why the principal plane, given by p3^T, has to be the plane at infinity (0, 0, 0, 1). We can also see that, since the camera centre has to lie on the principal plane and hence on the plane at infinity, C now becomes an ideal point, which we conveniently denote as (d^T, 0)^T, where d is a direction vector pointing in the direction of the camera centre. This direction d of the camera centre at infinity can be obtained from the null space of the 2x3 matrix M_2x3 formed by the first two rows of M, i.e. M_2x3 d = 0. The reason this is true is that substituting C = (d^T, 0)^T back into PC gives zero: the last row, which we said has to be (0, 0, 0, 1), multiplied by a vector whose last entry is zero always gives zero, and we are left with the first two rows, which give M_2x3 d, and that is zero because d is defined as the null space of M_2x3; hence PC = 0 is satisfied, proving that this really is the camera centre.
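A small numeric check of the two facts just stated is sketched below: the direction of the camera centre at infinity is the null space of the 2x3 block formed by the first two rows of M, and the resulting ideal point satisfies P C = 0. The affine camera matrix here is a made-up example.

```python
import numpy as np

# a made-up affine camera: the last row is (0, 0, 0, 1)
P_A = np.array([[2.0, 0.0, 0.5, 3.0],
                [0.0, 1.5, 1.0, 4.0],
                [0.0, 0.0, 0.0, 1.0]])

M2x3 = P_A[:2, :3]
_, _, Vt = np.linalg.svd(M2x3)
d = Vt[-1]                                 # null space of the 2x3 block: viewing direction
C = np.append(d, 0.0)                      # the camera centre is the ideal point (d, 0)

print(d)
print(P_A @ C)                             # ~ (0, 0, 0): C lies in the null space of P_A
```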
We can also see that the 3x3 matrix M, the first 3x3 block of the affine projection matrix, is singular, because its last row is zero, so M has only two linearly independent rows. The affine camera can likewise be decomposed into three factors, because in the projection equation x = PX the 4x4 transformation has to act on X first, and we will see that for an affine camera this 4x4 matrix, when it acts on X, is always an affine transformation; the simple reason is that a point at infinity remains at infinity after this operation, and hence this is an affine camera. Besides having the 4x4 matrix be an affine rather than a projective transformation, the other difference is the 3x4 projection matrix that brings a point from 3D space into 2D space: instead of the identity with a last column of zeros, the column of zeros is swapped into the third position, so the projection drops the z-coordinate, turning the perspective projection into an orthographic (parallel) one. Finally, after this projection that converts the 3D point into a 2D point, we can apply any affine transformation to the 2D point by simply pre-multiplying it with a 3x3 affine transformation matrix. Instead of writing the matrix P solely as a 3x4 matrix with last row (0, 0, 0, 1), we can also decompose this projection matrix into the same form as we saw for the projective camera: first a 4x4 matrix, which here is the rigid transformation given by R and t, followed by the 3x4 orthographic projection, and finally a 3x3 calibration matrix in a similar format to our intrinsic matrix, which can be expressed in terms of the scaling factors, the skew factor and the principal point. Since the rotation matrix consists of the rows r1^T, r2^T, r3^T, multiplying it by the orthographic projection, whose third column is zero, effectively removes the third row of R, keeping only the first two rows, and similarly the effect on t = (tx, ty, tz)^T is to remove the last entry, so only tx and ty remain. It is standard practice, commonly found in the literature, to simply set the principal point to zero: because this is an affine transformation there is no projective distortion in the image, so it is more convenient to just centre the principal point on the centroid of the scene. Here 0^T is simply a row of zeros. After seeing the decomposition of the affine camera matrix into intrinsic and extrinsic values, we can count altogether eight degrees of freedom, in contrast to the eleven degrees of freedom of a projective camera. These correspond to all the non-zero, non-unit matrix elements: three degrees of freedom from the intrinsics, three from the rotation, and finally two from the translation, t1 and t2. You might wonder why the rotation still contributes three degrees of freedom when the original rotation matrix with rows r1^T, r2^T, r3^T has three degrees of freedom and we remove the last row: the reason is simple, the three rows of the rotation matrix are not linearly independent, because r3 is the cross product of the first two rows, so any two rows of the rotation matrix already contain all the necessary three degrees of freedom. If we multiply everything out and express the projection matrix explicitly, we can also easily see the eight degrees of freedom, because there are altogether eight non-zero, non-unit matrix elements, and the last row is fixed at (0, 0, 0, 1). This also means M has only two linearly independent rows, so M is rank-deficient with rank 2; contrast this with the projective camera, where the corresponding M has rank 3. Here are some proofs of the affine property of the camera at infinity; there are two points to note, and they are really referring to the same thing. As we have seen earlier, certain properties are invariant under an affine transformation, and one of them is that a point at infinity is mapped to a point at infinity: a point (or plane) at infinity in the world, when transformed by the affine camera, is mapped to a point at infinity on the image, since we are mapping from 3D space onto 2D space. This is easily shown by computing the product of the affine camera projection matrix with an ideal point sitting on the plane at infinity, represented by (X, Y, Z, 0)^T with last entry zero: after the transformation this maps to (x, y, 0)^T in the image space, which will not be found on the finite image because its last entry is zero, and the last entry is zero because the last row (0, 0, 0, 1) multiplied by a vector whose last entry is zero always gives zero. So we have shown that the affine camera projection matrix maps points at infinity to points at infinity, and hence it must be an affine transformation. We can also see that parallel lines in the world are projected to parallel lines in the image, another property of affine transformations, which preserve the parallelism of lines and planes. We can think of it this way: two parallel lines in the world both intersect at a point, but this point sits on the plane at infinity; it is an ideal point, which can conveniently be represented as (X, Y, Z, 0)^T. If we take this intersection of the parallel lines and pre-multiply it by the projection matrix, i.e. project the point onto the image, then from the first proof this point also ends up at infinity, at (x, y, 0)^T. What this means is that when two parallel world lines are projected by the affine projection matrix onto an image, the point where they meet is also at infinity, and hence the two lines after projection must also be parallel. So the affine camera transformation matrix indeed preserves the affine properties. There are several different types of affine cameras; let's look at a few popular ones. The first is what we call the orthographic projection. This is the case where the object is mapped onto the image without any change in scale: we simply remove the z-direction, the depth of the object, and project it onto the image. Another interesting point is that for every point correspondence, the light rays joining the corresponding points are all parallel to each other, and of course if we extend these light rays they converge at the camera centre on the plane at infinity, which shows that this is indeed an affine camera fulfilling the property that the camera centre must be on the plane at infinity. What is interesting is that this projection ignores the depth altogether: it does not matter whether the person stands here or further back, every location projects to the same image. Mathematically we can write this as follows: we ignore the calibration matrix K, because K contains a scaling factor that would scale the projection onto the image, and orthographic projection has no scaling effect, so K is just the identity; what remains is the orthographic projection matrix together with the rigid transformation matrix, and we obtain a 3x4 matrix whose last row is again (0, 0, 0, 1). The orthographic projection has altogether five degrees of freedom, because ignoring the 3x3 calibration matrix K removes the three intrinsic degrees of freedom (alpha_x, alpha_y and s), leaving 8 - 3 = 5, which we can also see directly: r1 and r2 together give three degrees of freedom, and t1 and t2 give two more, adding up to five. There are also constraints on the M matrix, i.e. the first 3x3 block of the projection matrix: its last row is always zero, its first two rows are orthogonal and of unit norm, since they are rows of a rotation matrix, and t3 should always be equal to one; a sketch of such a camera follows below.
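Below is a small sketch of an orthographic affine camera built from a rotation and a translation, checking the two properties just mentioned: with this rotation about the z-axis, points that differ only in depth project to the same pixel, and parallel world lines remain parallel in the image. R and t are arbitrary example values.

```python
import numpy as np

def orthographic_camera(R, t):
    """P = [r1^T t1; r2^T t2; 0 0 0 1]: orthographic projection, no scaling."""
    P = np.zeros((3, 4))
    P[:2, :3] = R[:2, :]
    P[:2, 3] = t[:2]
    P[2, 3] = 1.0
    return P

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

c, s = np.cos(0.4), np.sin(0.4)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # rotation about z
t = np.array([0.1, -0.2, 5.0])
P = orthographic_camera(R, t)

# the same (x, y) at different depths projects to the same image point
print(project(P, np.array([1.0, 2.0, 3.0])), project(P, np.array([1.0, 2.0, 30.0])))

# two parallel world lines stay parallel in the image
v1 = project(P, np.array([1.0, 1.0, 2.0])) - project(P, np.array([0.0, 0.0, 0.0]))
v2 = project(P, np.array([1.5, 0.0, 6.0])) - project(P, np.array([0.5, -1.0, 4.0]))
print(v1[0] * v2[1] - v1[1] * v2[0])       # ~ 0: the image directions are parallel
```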
Another form of affine camera projection is what we call the scaled orthographic projection. This is essentially a two-step process: in the first step we simply map the object onto a virtual image plane using an orthographic projection, so there is no scale change; then we apply a scaling to this orthographic projection, and the image of the person becomes smaller (or it could also become bigger). So it is a two-step process, an orthographic projection followed by a scaling, which also hints that the matrix should be the orthographic projection we saw earlier multiplied by a scale. Notice that in the scaled orthographic projection the definition is that we scale the object by the same amount in both directions, so the scaling factor k is the same in x and y, and there is no skew factor. Altogether we have six degrees of freedom: the five from the orthographic projection we saw earlier plus one additional degree of freedom for k. The characteristics of the scaled orthographic projection matrix are that the last row of M is zero, the same as for the orthographic projection, and the first two rows are orthogonal and of equal norm, but t3 is no longer equal to one; it is equal to 1/k. The third type of affine camera is what we call the weak perspective projection. This is similar to the scaled orthographic projection except for one difference: instead of scaling by the same amount in both directions with a single constant k, we scale by a different amount in each direction, by alpha_x in the x-direction and by alpha_y in the y-direction, while the orthographic part remains the same. In addition to the five degrees of freedom from the orthographic projection we have two additional degrees of freedom from the scaling factors, so altogether the weak perspective projection has seven degrees of freedom, and its projection matrix is characterized by the last row of M being zero and the first two rows being orthogonal, with no need for equal norm, because of the different scaling in the two directions |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_5_Part_3_Camera_models_and_calibration.txt | so now let's finally re-look at the projective camera projection matrix. We have seen earlier that there are altogether 11 degrees of freedom in the camera projection matrix: 5 degrees of freedom from the intrinsics K, and another 3 + 3 = 6 degrees of freedom from the extrinsics of the camera. Now the million-dollar question is: since there are altogether 11 unknowns that constitute this camera projection matrix, how can we find all 11 parameters, i.e. the 11 degrees of freedom? The answer is calibration, and we will look at how to do this. Besides calling this procedure calibration of the camera, it is also very commonly known as camera resectioning; resectioning simply means that we want to find the parameters that make up the camera projection matrix, the intrinsics as well as the extrinsics. One of the simplest and most commonly used approaches is what we call calibration with a 2D calibration pattern: it might look like a pattern of circles, but more commonly a checkerboard pattern is used, which you have to physically print out and paste onto a flat board, i.e. a plane, and we will see why. I will mainly be describing the technique by Zhengyou Zhang in the paper 'A flexible new technique for camera calibration', published in TPAMI in the year 2000. Here are some open-source calibration toolboxes that you can download and play with yourself; they are all mainly based on this paper. The first is the camera calibration toolbox by Bouguet, commonly known as the Bouguet calibration toolbox, which was written in MATLAB; later people took this code and developed it into C++, and I think there is also a Python version now, in OpenCV (I put version 2.4 here, but by now it is probably version 3-point-something or even version 4). The Bouguet calibration toolbox has also been migrated into the MATLAB image processing toolbox; the other versions are free, but that one is a paid version because it is part of a MATLAB toolbox. Let's describe the actual procedure for calibrating a projective camera with the checkerboard. Given a checkerboard like this, we first have to print out the checkerboard pattern and paste it onto a flat planar board. The first thing we need to do is assign a reference frame to the checkerboard, and we conveniently choose the assignment such that the x-y plane lies on the plane of the checkerboard, so that the z-coordinate of all the points on the checkerboard is always zero. The next step is to put this into our camera projection equation x = PX; since the projection is always up to a certain scale, as we have seen, the unknown scale s can be factored out so that the image point becomes a normalized homogeneous coordinate (x, y, 1)^T, and P is rewritten in the form K [R | t] = K [r1 r2 r3 t]. We multiply this with the corners observed on the checkerboard: the checkerboard consists of alternating black and white squares, so we can detect the corners, and every corner is represented by (X, Y, 0, 1)^T, since the world frame is attached to the checkerboard with the x-y plane lying on the board, which means the z-coordinate is always zero. If we evaluate the right-hand side, the Z term can be ignored because Z = 0 multiplies the third column r3, so effectively we get s (x, y, 1)^T = K [r1 r2 t] (X, Y, 1)^T. What we have is a 2D projective mapping from the checkerboard, which is a P2 space, into the image, which is also a P2 space, i.e. a mapping from a plane to a plane, which simply means that the transformation K [r1 r2 t] is a homography, which we write as H = [h1 h2 h3], where h1, h2, h3 are the columns of the homography matrix: sH = K [r1 r2 t], with the scale factor s unknown. This homography is unknown, but the image coordinates (x, y, 1) are known, because given the image of the checkerboard we can detect where the corners are, and the world coordinates (X, Y) are also known, because we attach the world frame to, say, the top-left corner of the checkerboard, we know the checkerboard has squares of a regular, known size (say one centimetre by one centimetre), and we know how many squares there are and where the black and white boxes lie; this means we can find the one-to-one correspondences between the 2D corner points in the image and the corresponding corners of the checkerboard in the world. So this part is known, and the only unknowns are the homography and the scale factor; based on the known (x, y) in the image and the known (X, Y) in the world coordinates, the objective is now to find the homography and the unknown scale factor.
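The homography itself can be estimated from the known corner correspondences; the lecture comes back to this estimation step a little later, but here is a minimal DLT sketch of it for reference. Point normalization, which matters in practice, is omitted for brevity, and the test homography is a made-up example.

```python
import numpy as np

def estimate_homography(pts_plane, pts_image):
    """DLT: each correspondence (X, Y) <-> (x, y) gives two rows of A; solve A h = 0."""
    A = []
    for (X, Y), (x, y) in zip(pts_plane, pts_image):
        A.append([-X, -Y, -1, 0, 0, 0, x * X, x * Y, x])
        A.append([0, 0, 0, -X, -Y, -1, y * X, y * Y, y])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)            # null-space vector -> homography (up to scale)

# toy check: project a few board corners with a known homography and recover it
H_true = np.array([[1.2, 0.1, 5.0], [0.05, 1.1, -3.0], [0.001, 0.002, 1.0]])
board = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 0.25)]
image = []
for X, Y in board:
    p = H_true @ np.array([X, Y, 1.0])
    image.append((p[0] / p[2], p[1] / p[2]))

H = estimate_homography(board, image)
print(H / H[2, 2])                          # matches H_true up to scale
```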
so we apply this dot product and equate it to zero: substituting s k inverse h1 and s k inverse h2, we get h1 transpose k inverse transpose k inverse h2 equals zero, which is constraint (1). we also know from the orthonormality constraint that the norms of the two vectors r1 and r2 should be equal, and hence we can write the second relation: h1 transpose k inverse transpose k inverse h1 equals h2 transpose k inverse transpose k inverse h2. we will see in the next lecture that this interesting term, k inverse transpose times k inverse, is the image of the absolute conic, which we will call omega. now we have two equations that contain all the unknowns of the homography as well as the intrinsic values. let us denote k inverse transpose multiplied by k inverse by a matrix b; it is a three by three matrix, and since it is formed from a matrix multiplied by its own transpose, b must be symmetric as well as positive definite (it can be seen as the square of a matrix, so its eigenvalues have to be strictly positive). since b is symmetric, it has only six distinct entries, so we can rewrite it in terms of a six by one vector of unknowns, represented by small b. finally we can rearrange equations (1) and (2): if we compactly write k inverse transpose k inverse as the six by one vector b, each equation becomes a linear constraint of the form a b = 0; similarly for the second equation, bringing one term over so that the difference equals zero and rewriting k inverse transpose k inverse as b gives another linear constraint. if we stack these two equations together, we can rearrange the homography terms into a matrix a, where a is made up of the homography terms h1 and h2, and b is made up of the unknowns from the intrinsics, i.e. from k inverse transpose k inverse; a is a two by six matrix per view, because b is a six by one vector and we have two equations. so since we have six unknowns in a b = 0, and each view gives us a two by six block in a, we need a minimum of three different views: each view gives me one homography, with its columns h1 and h2, that relates the real checkerboard to the image. this is one view, and altogether i need three different views; i can either move the checkerboard or the camera, for example fix the camera and move the checkerboard around, just like what the person in the figure is doing, into a second view, and then we get another set of correspondences that relates a second homography.
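as a rough sketch of how the two constraints per view might be assembled into the linear system, the following uses the packing b = [B11, B12, B22, B13, B23, B33] from zhang's paper; the exact ordering is a convention and is assumed here rather than taken from the lecture slide:

```python
import numpy as np

def v_ij(H, i, j):
    # six-vector such that h_i^T B h_j = v_ij . b, where B = K^-T K^-1 and
    # b = [B11, B12, B22, B13, B23, B33] (the packing order is a convention)
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def stack_constraints(homographies):
    # each view contributes two rows: h1^T B h2 = 0 and h1^T B h1 - h2^T B h2 = 0,
    # so three or more views give a system A b = 0 with at least six rows
    rows = []
    for H in homographies:
        rows.append(v_ij(H, 0, 1))
        rows.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))
    return np.vstack(rows)
```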
homography, so we have to do this for at least three different views; the third view gives me yet another homography to the image. once this is done, we altogether have a six by six a matrix multiplied by a six by one unknown b, and this is equal to zero. the first thing we need is a itself, in order to solve for the unknown b, and what we can do here is use the fact that the mapping between the checkerboard and the image is given by a homography: from lecture three we know that if we have four correspondences in each view we can solve for the homography, since (x, y, 1) equals a homography multiplied by (X, Y, 1), and we can ignore the scale because a homography is only defined up to scale. so for each view, with at least four point correspondences we can solve for its homography, and if we have at least three views we get three sets of constraints from which we can solve for three different homographies; we can plug them into the equation to get a known six by six a matrix multiplied by the six by one unknown vector b that encodes our intrinsics k. since a b = 0 with a being six by six and b being six by one, we can simply take the svd of a, which is u sigma v transpose, and the solution for b is the column of v that corresponds to the smallest singular value. this is the right null space of a, and as we have also seen in the last lecture, it is essentially a least-squares solution of a x = 0. since the correspondences are corrupted with noise, a is never exact, it is actually quite noisy, so if we can get more views, for example by fixing the camera and simply moving the calibration board around, they provide better constraints on a b = 0 and we will be able to solve for a more accurate b vector. once b is recovered, we can put it back into capital b, the symmetric three by three matrix (small b is a six by one vector which can be placed back into the symmetric matrix b), and finally k can be obtained: since b equals k inverse transpose times k inverse, its inverse is b inverse = k k transpose, so taking a cholesky-style factorization of b inverse into a triangular factor times its transpose gives us k, the upper-triangular matrix which is the intrinsics of the camera. once k is known, what we are left with are the extrinsic parameters of all the views; it is no longer just one r and t. if we take three views, so that a is at least six by six, we have three different views, and for a camera that stays fixed and looks at these views, view one has an r and t with respect to the camera, view two has another r and t with respect to the camera, and so on, so we also need to solve for all the extrinsic parameters.
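a minimal sketch of this step, assuming the stacked constraint matrix a from the previous sketch and data clean enough that b comes out positive definite (the sign flip below handles the sign ambiguity of the svd solution):

```python
import numpy as np

def intrinsics_from_constraints(A):
    # b is the right singular vector of A with the smallest singular value,
    # i.e. the (approximate) right null space of A
    _, _, Vt = np.linalg.svd(A)
    b = Vt[-1]
    B = np.array([[b[0], b[1], b[3]],
                  [b[1], b[2], b[4]],
                  [b[3], b[4], b[5]]])
    if B[0, 0] < 0:                  # b is only defined up to sign; keep B positive definite
        B = -B
    # B = K^-T K^-1; with B = L L^T (lower-triangular Cholesky factor L),
    # K^-1 = L^T and hence K = inv(L)^T, an upper-triangular matrix
    L = np.linalg.cholesky(B)
    K = np.linalg.inv(L).T
    return K / K[2, 2]               # normalise so that the (3, 3) entry is 1
```

working with the cholesky factor of b directly is algebraically the same as the description above of factorizing b inverse = k k transpose; it just avoids forming the explicit inverse.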
once this is done, together with the remaining scale factor, let's look at how to solve for all the remaining unknowns. we have seen earlier that r1 equals s multiplied by k inverse multiplied by h1, where h1 is the first column of the homography of that particular view; since k remains fixed for all views and we already know it from the cholesky step, we can plug those values in. for example, if i want to solve for r1 in a given view, h is taken from the homography that maps the checkerboard onto that image, and similarly i can solve for r2, since everything else in those relations is known. r is [r1 r2 r3], and its columns are orthonormal, which means the cross product of r1 and r2 gives r3. the only remaining unknown is s, which can be easily solved because the norm of r1 has to be equal to one: taking the norm of k inverse h1 and requiring it to equal one gives us s, which we can put back into the equation. with everything known, we can easily solve for r1, r2 and r3 for each respective view, and finally the translation vector, which is also an unknown, can be solved in the same form from the last column of the homography, since everything else in that relation is now known. so what we have looked at so far is how to solve for all the 11 degrees of freedom in the camera projection matrix.
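before moving on, here is a minimal sketch of the extrinsics-recovery step just described for one view; the final projection onto the nearest rotation via the svd is a common practical touch for noisy data and is not from the lecture:

```python
import numpy as np

def extrinsics_from_homography(K, H):
    # recover R = [r1 r2 r3] and t for one view from its homography H = [h1 h2 h3]
    K_inv = np.linalg.inv(K)
    s = 1.0 / np.linalg.norm(K_inv @ H[:, 0])   # scale from the unit norm of r1
    r1 = s * (K_inv @ H[:, 0])
    r2 = s * (K_inv @ H[:, 1])
    r3 = np.cross(r1, r2)                        # third column from orthogonality
    t = s * (K_inv @ H[:, 2])
    R = np.column_stack([r1, r2, r3])
    # with noisy data R is only approximately a rotation; projecting it onto
    # the nearest rotation matrix via the SVD (R <- U V^T) is a common fix
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```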
there is still another set of unknowns in the camera calibration that we need to solve for, and that is what we call the lens distortion. usually when we take a picture with any camera, a cell phone camera, a dslr or even a pocket camera, we will observe some form of distortion in the image. the most commonly observed one is radial distortion: if you use a highly convex lens, a fisheye lens for example, and even with a normal camera lens, you can observe some form of it. it could be a negative radial distortion, where all the pixels are squashed towards the centre, or, more commonly observed, a positive radial distortion, where the image is distorted along the radial direction compared with the original image in which every pixel is regular. there is also a less commonly observed distortion called tangential distortion, which occurs more often in cheaper cameras and may be due to manufacturing imprecision: if the photo sensor is not aligned in parallel with the lens placed inside the camera, or the lens itself is misaligned, the light rays that are supposed to be focused onto the sensor are misaligned, and we observe a tangential distortion. to complete the camera calibration we also have to model the lens distortion mathematically and find the parameters of that model. let's first look at how to model the radial distortion. let x = (x, y) be the image projection of a 3d point without distortion; what we need is a function that takes in x and maps it to y, the new, radially distorted pixel location. the question is what this function is, and we choose a parametric way of describing it: this function f(x), which takes the pixel without distortion and maps it to a pixel with distortion, is given by the equation first proposed by brown in the paper "close-range camera calibration". the input is the pixel location (x, y) without distortion, and it is scaled by 1 + k1 r^2 + k2 r^4 + k5 r^6, where r is simply the circle equation, r^2 = x^2 + y^2, so r is a function of x and y. here k1, k2 and k5 are the radial distortion coefficients that parameterize this mapping function; they are unknowns that need to be found during the calibration process. we can also model the tangential distortion using the model proposed in the same paper by brown in 1971. in comparison to the radial distortion, where we find a mapping that takes a pixel without distortion directly onto a pixel with distortion, here we find the delta change in the pixel that is added on top of the radial distortion: if (x, y) is the perfect pixel on the image without any distortion, and (xr, yr) is its location after the radial distortion, then dx is the additional shift due to the tangential distortion, and the final location is simply xr plus dx. dx is also a function of the undistorted point, given by an equation where r is still the same, r^2 = x^2 + y^2, and it introduces two more unknowns, which we call k3 and k4. altogether i have five unknown parameters: k1, k2 and k5 for the radial distortion and k3 and k4 for the tangential distortion, and the final equation maps a perfect pixel onto the tangentially and radially distorted image. now the question is how we can find k1, k2, k3, k4 and k5. we will use a very simple method: we put them through an iterative refinement, a maximum likelihood estimation, and the steps to estimate them are left to the last, after we have estimated the intrinsic and extrinsic calibration.
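as a sketch of the combined distortion model in the lecturer's numbering (k1, k2, k5 radial and k3, k4 tangential); the exact form of the tangential terms below follows the usual brown/bouguet convention and is an assumption, since the lecture only displays the equations on the slide:

```python
import numpy as np

def distort(x, y, k1, k2, k3, k4, k5):
    # map an undistorted point (x, y) to its distorted location, with
    # k1, k2, k5 for the radial part and k3, k4 for the tangential part
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k5 * r2 ** 3
    dx = 2.0 * k3 * x * y + k4 * (r2 + 2.0 * x * x)   # tangential shift (Brown/Bouguet form)
    dy = k3 * (r2 + 2.0 * y * y) + 2.0 * k4 * x * y
    return x * radial + dx, y * radial + dy
```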
this is done using the method proposed by zhengyou zhang that we have seen earlier on; once we have obtained those unknowns, k as well as ri and ti for the different views of the calibration target, we can put them into this cost function, which we call the reprojection error. what this means is that pi here is simply the projection function. for example, i have an image with a point that i know corresponds to a corner of the checkerboard, and that corner is being reprojected onto the image: let's call the 3d point X_j and the reprojected point x_ij, where j indexes the 3d point and i indexes the camera view. this means i am reprojecting the point on the checkerboard onto the i-th camera view, and its correspondence is x_ij. this reprojection goes through p(X_j), which we have seen earlier, where p consists of the three sets of parameters k, r and t. but when we use p to reproject X_j back onto the image, this is on the assumption that the mapping is perfect, i.e. that there is no distortion. what we need to do to compensate for the distortion is simply to put p(X_j) through the distortion function that we have seen earlier to get the new distorted location. since the distortion consists of the five unknown parameters, this gives us an equation that is a function of those five unknowns, while p is assumed to be known already because we have obtained it from the first step of the calibration. so the reprojected point can be thought of as a function of j as well as of the unknowns k1 to k5.
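a rough sketch of this reprojection-error cost, reusing the distort function from the sketch above; for brevity only the five distortion coefficients are treated as unknowns here (a full implementation would also stack k and every view's r and t into the parameter vector), and the packing of the arguments is an assumption:

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(dist_params, K, poses, world_pts, image_pts):
    # dist_params = (k1, k2, k3, k4, k5); poses[i] = (R_i, t_i) from the linear step;
    # world_pts[j] is the checkerboard corner X_j; image_pts[i][j] is its observation x_ij
    res = []
    for (R, t), obs in zip(poses, image_pts):
        for X_j, x_ij in zip(world_pts, obs):
            Xc = R @ X_j + t                          # corner in the camera frame
            x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]       # ideal (undistorted) normalised point
            xd, yd = distort(x, y, *dist_params)      # distortion model from the sketch above
            proj = K @ np.array([xd, yd, 1.0])        # back to pixel coordinates
            res.extend(proj[:2] / proj[2] - x_ij)
    return np.asarray(res)

# the distortion coefficients are small, so zero is a reasonable initial guess:
# result = least_squares(reprojection_residuals, np.zeros(5),
#                        args=(K, poses, world_pts, image_pts), method='lm')
```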
so we can compare this reprojected point with the point x_ij that we actually observe in the image; essentially we want to minimize the squared errors. we will see in the lecture on bundle adjustment how to make use of levenberg-marquardt to minimize this; it is essentially a continuous unconstrained optimization which can be done using levenberg-marquardt, and we will also do a refinement over r and t as well as k. we initialize the distortion parameters to zero, and the reason we can do so is that these parameters are usually very small, close to zero, so zero initialization is actually a pretty good initialization in this sense. once we put it through the optimization, we get new estimates of all the parameters: the intrinsic parameters, the extrinsic parameters, as well as all the unknown parameters that parameterize the radial and tangential distortion. this is also called minimizing the reprojection error, because we are reprojecting the 3d points onto the 2d image: a point is reprojected here, its correspondence might be there, and there is an error between them that we want to minimize. in other words it is a geometric error, which we have seen in the last lecture is far more accurate than the algebraic error: earlier, using zhang's method, we were solving a b = 0, and that is an algebraic error, which we saw in the lecture on homography is less accurate than the geometric error. hence, by minimizing the reprojection error as the last step, we are guaranteed to have a much more accurate calibration result. once we have the calibration for the radial and tangential distortion, we can actually do the mapping the other way around. previously i mentioned the function f that takes any point x without distortion and maps it to the point y with distortion; once we know this function exactly, we can compute a lookup table as mentioned before: for every pixel of the corrected image we look up the corresponding pixel in the distorted image and fill that part in. originally we only observe the distorted image, but by computing the corresponding y for every pixel and using it as a lookup table to fill in the entries, we eventually recover the undistorted image; this is what we call lens distortion correction.
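a minimal sketch of such a lookup table, reusing the distort function sketched earlier; for every pixel of the corrected image it records where to sample the observed, distorted image (nearest-neighbour sampling, for simplicity):

```python
import numpy as np

def build_undistort_map(K, dist_params, height, width):
    # for every pixel (u, v) of the corrected image, record where to sample the
    # observed (distorted) image; distort() is the function sketched earlier
    K_inv = np.linalg.inv(K)
    map_x = np.zeros((height, width), dtype=np.float32)
    map_y = np.zeros((height, width), dtype=np.float32)
    for v in range(height):
        for u in range(width):
            x, y, _ = K_inv @ np.array([u, v, 1.0])   # normalised, undistorted
            xd, yd = distort(x, y, *dist_params)       # where that point lands after distortion
            ud, vd, _ = K @ np.array([xd, yd, 1.0])    # back to pixel coordinates
            map_x[v, u], map_y[v, u] = ud, vd
    return map_x, map_y

# corrected[v, u] = observed[int(round(map_y[v, u])), int(round(map_x[v, u]))]
```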
so this ends my lecture. in summary, today we have looked at how to describe the camera projection with the pinhole camera model, the most basic camera model that we can use to describe a projection; we have looked at how to identify the camera centre, the principal plane, the principal point and the principal axis from the camera projection matrix, and how to use the camera projection matrix for forward and backward projection. next we looked at the properties and some definitions of the commonly seen affine cameras, and then at the technique proposed by zhengyou zhang to do calibration, i.e. to find the intrinsic and extrinsic values of a projective camera. furthermore, we also looked at the radial and tangential distortion, how to obtain the radial and tangential distortion parameters from calibration, and how to undo the distortion effect. thank you |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_2_Part_1_Rigid_body_motion_and_3D_projective_geometry.txt | hello everyone, welcome to the lecture on 3d computer vision. today we are going to talk about rigid body motion and 3d projective geometry. hopefully by the end of today's lecture you will be able to explain the concept of the special euclidean group SE(3) and use it to describe rigid body motions in 3d space. we will also look at how to represent points and planes in the projective 3-space and describe the point-plane duality. next we will look at how to describe a line in the projective 3-space using the null-space and span matrix representation, as well as what we call the plücker coordinate representation. finally, we will extend the conic properties of the projective 2d space to quadrics in the projective 3d space. of course i didn't invent any of today's material; a lot of the contents and slides of today's lecture are again taken from two textbooks on projective geometry. the first one is written by richard hartley and andrew zisserman, multiple view geometry in computer vision, especially chapters 2 and 3, and the second reference here is the textbook by yi ma and colleagues, an invitation to 3-d vision, especially the first part of chapter 2, which covers rigid 3d motion. the projective geometry, the p3 geometry that i am going to talk about, is mostly taken from the end of chapter 2 as well as chapter 3 of hartley and zisserman's textbook. before we can look at the special euclidean group to describe 3d rigid body motion, let us first look at some definitions in the three-dimensional euclidean space. we use the notation e^3 to denote the familiar 3d euclidean space; this means we are looking at the cartesian or euclidean space with the cartesian coordinates x, y and z represented in the 3d space. every point in the euclidean space can be identified with a point in r^3 with a 3d cartesian coordinate, which means every point is represented by a vector of three real numbers, x = (x1, x2, x3), corresponding to the x, y and z coordinates of this cartesian space. through such an assignment we establish a one-to-one correspondence between the euclidean space e^3 and the cartesian coordinates in r^3. once we have defined this euclidean space, we also need the definition of a 3d vector in order to further describe rigid body motion. as we all know from high school mathematics, in euclidean space a vector v is determined by a pair of points p and q: in the euclidean three-space that is characterized or formalized by the cartesian coordinates x, y, z in the real number space, suppose that i
have two points over here p and q in this particular space in the euclidean space euclidean three space over here the vector is defined as the direction arrow that connects p to q so the direction vector that connects p to q over here we can define it as a vector v similarly it can also be a the other way around the other direction where it's from q pointing towards p and a vector the direction of this arrow matters in the definition of a vector so suppose that p has the coordinates of x this means that p has a coordinate of x 1 x 2 x 3 over here and q has a coordinate of y this means that q here takes a coordinate of y 1 y 2 and y 3 over here then v has the coordinate of y minus 3 and v here the vector will also lie in the 3d cartesian space or 3d cartesian coordinates that we have defined earlier on for a point in the euclidean three space and the preceding definition of a vector that means that this particular definition here is also referred to as a bound vector a bound vector because we are the the base of the vector or the starting point of the vector which is the point p here is important to define this particular vector as well as the end point of q so it's defined by two different points hence it's a subtraction over these two different points in the euclidean space and there is also a definition of what we call the free vector whose definition does not depend on the base point so in comparison to what we have seen earlier the base vector where it's defined by p and q over here this means that the two points over here the base point of p is important to define this particular vector that's why it's bounded by the base point over here and in the case where it's a free vector we do not care where is the origin of this particular vector this means that any two pairs of points p here takes the coordinates of x and q here takes the coordinate of y in the cartesian space and p prime will take the coordinate of x prime as well as q prime taking the coordinate of y prime and as long as the subtraction of these two this means they refer to the same thing over here then we say that it defines the same free vector this means that it doesn't matter which particular reference point uh it's uh or the base point is where that this vector is referencing with respect to so since this particular free vector over here it's not bounded by any base point this means that any of this vector in the 3d space as long as they have the same magnitude and pointing in the same direction parallel to each other hence this equating these two eq parts of the equations over here then we will say that this is a free vector and it allows it to be transported anywhere parallel in the euclidean 3-space defined by the 3-d cartesian coordinates as we have seen in the previous slide so without a loss of generality we can assume that the base point is the origin of the cartesian frame for the free vector such that x is actually equals to zero since this base point is not important and y would be the vector itself would be the point that there is defines the direction of the vector hence the base point is at the origin of this euclidean space or the cartesian coordinate and the vector here would directly define the point of uh y over here and this vector here this this arrow here would be the our vector and uh the set of all three vectors forms a linear space well with the linear combination of two vector this means that any two vectors it will form any two vectors or any two three vectors or even a bounded vector if we 
take the linear combination of this this means that i'm let's say in this case here i'm looking at the two vectors v and u over here any linear combination of alpha v plus beta u over here this means that i'm taking i'm moving in this direction by a magnitude of alpha suppose that v here is my unit vector or it need not necessarily be a unit vector but i'm multiplying it with alpha this means that i'm stretching it or shrinking it by a magnitude of alpha over here it has to be in the same direction and a linear combination of this together with beta u let's say this is alpha v over here and uh beta u will be also in the direction of u over here but to a certain magnitude let's say this is uh beta u then what i'm getting here is this particular addition of these two of these two vectors over here and uh this will be a linear combination this means that the linear combination of this will still lie in the subspace of v and u you will not move out of this particular space that is defined by v and u and in this case the subspace is actually our r3 space over here and the another thing that i need to define in order for us to understand the rigid body motion is the inner product of two 3d vectors denoted as u and v here and this inner product or otherwise known as the dot product can be defined by is given by this equation over here is denoted by either an anchor bracket of u and v or it could also be written in this way as a u transpose v because u here and v here they are both in the r 3 space they are defined as a vector of x1 x2 and x3 so what this means is that if i have u u1 u2 u3 i'm transposing it and then i'm multiplying it with the v vector v1 v2 and v3 and by evaluating this dot product over here or the product over these two vectors over here i'll get u1 v1 plus u2 v2 plus u3 v3 and this would be equals to my inner product and what's interesting here is that the inner product can be used to measure uh distance because we can simply see that the distance of the vector v is simply given by the components respective components sum of the respective components square and square root everything so we can see that this is equal directly equivalent to the square root of the dot product of the vector itself and it can also be used to define the anchor between two vectors suppose that i have two vectors u and v over here and they are both base vector this means that they are sharing the same common reference point of the origin 0 0 0 over here so the these two vectors the angle between these two vectors can be defined as the cosine theta equals to the dot product of the division of the multiplication of the two magnitudes so in another words the dot product of u and v is actually equals to the magnitude of u and multiply by the magnitude of v multiplied by the cosine of the data of the angle that is produced by the between these two vectors another thing that we need to define would be the cross product or the outer product so this cross product is also known as the outer product in comparison to the inner product which is referring to the dot product which we have seen in the previous slide so here given two vectors v and u that lies in the r3 space the cross product their cross product between these two vectors over here is simply a third vector with the coordinates that is given by this guy over here so this is the definition of a cross product but what it means simply means is that if i have a vector u and i have a vector v defined by these two arrows over here then the cross product 
between u and v simply follows the right-hand rule: if i take my right hand with the fingers pointing in the direction of u and sweep them across towards v, in the order of the cross product u cross v, then my thumb points downwards here, and this is the direction of the cross product between u and v. what is interesting is that this cross product vector is perpendicular to both u and v. another property of the cross product of two vectors is that it is linear in each of its arguments: suppose i take the cross product of u with a linear combination of two vectors, alpha v plus beta w; i can take the cross product of u with each term separately, and since alpha and beta are scalar values i can factor them out of the cross product, giving alpha times (u cross v) plus beta times (u cross w). another property is that the cross product of two vectors is orthogonal to each of its factors. as i mentioned earlier, u cross v gives a vector perpendicular to both u and v, in the direction where you point your fingers towards u and sweep them across towards v. since u cross v is perpendicular to both u and v, the dot product of u cross v with either u or v is equal to zero; this follows from the definition of the dot product, a dot b equals the magnitude of a multiplied by the magnitude of b multiplied by cosine theta, where theta is the angle between the two vectors a and b, and here the angle is 90 degrees and cosine 90 degrees equals zero, which proves the relation. we also have the property that u cross v equals the cross product of negative v and u. we can easily verify this with the right-hand rule: in the first case, u cross v, pointing our fingers towards u and sweeping them across towards v gives the direction of u cross v, while in the case of negative v and u we have the vector minus v pointing in the opposite direction, and pointing our fingers towards negative v and then sweeping them across u, our thumb again ends up pointing in the direction of u cross v, which verifies the equality of this equation.
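these properties are easy to check numerically; a small numpy sketch with arbitrary vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([-2.0, 0.5, 4.0])
w = np.cross(u, v)

print(np.isclose(w @ u, 0.0), np.isclose(w @ v, 0.0))   # orthogonal to both factors
print(np.allclose(np.cross(u, v), np.cross(-v, u)))      # u x v = (-v) x u
print(np.cross([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))        # x cross y = z (right-hand rule)
```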
this identity, u cross v equals (negative v) cross u, also tells us that the cross product defines orientation: when u stays the same and v swaps its direction, the only way for the cross product to keep pointing in the same direction is to also swap the order of the factors. a cross product can also be represented by a mapping function from r3 to r3, because taking u cross v maps the vector v to another vector in 3d space, with both the inputs u and v and the result lying in r3. this map is interestingly linear in v, and therefore it can be represented directly by a matrix multiplication with the vector: we can rewrite the cross product u cross v as a matrix u-hat multiplied by the vector v. we can verify that u-hat has to be a three by three matrix, multiplied by a three by one vector, with the result being a three by one vector, so it fulfils this mapping. what is interesting is that u-hat, the cross product operator, is a skew-symmetric matrix: its diagonal is zero, and if u is given by (u1, u2, u3) in r3 we can form u-hat, the matrix or cross product operator, from the entries u1, u2 and u3, and it always follows this same format. since it is a three by three skew-symmetric matrix, u-hat transpose equals minus u-hat; you can verify this yourself from the definition of the skew-symmetric matrix. as i mentioned earlier, to define or check the direction of the cross product we use the right-hand rule, and for standard cartesian coordinates the cross product of the principal axes x and y gives the principal axis z, so the cross product conforms to the right-hand rule: pointing our fingers towards the direction of x and sweeping them over towards the direction of y, our thumb points upwards, and this gives the direction of the z vector. it is easy to verify that the right-hand rule holds: suppose x and y are simply the basis vectors (1, 0, 0) and (0, 1, 0); taking their cross product gives the z axis (0, 0, 1) pointing in this direction. having defined the basic notions of a point, a vector, the inner product and the outer product, we are ready to look at the definition of coordinate frames as well as the definition of the special euclidean group to describe rigid motions, or 3d motions of rigid bodies, in the 3d euclidean space. a rigid body can always be associated with a right-handed orthonormal frame, which we will call the object coordinate frame or body coordinate frame; this means that for any rigid body object in the 3d space we can always arbitrarily fix
a fixed frame with respect to this object over here so suppose that this is our object and we can call this the f0 over here which is also known as the object coordinate frame or the body coordinate frame it's a three three axis which is octagonal to each other and this rigid body motion can be entirely specified by the motion of such frame with respect to the reference frame which we will call the world frame so suppose that there is also a fixed frame in the world which we will call the world frame f denoted by fw over here we'll see that by doing this definition that we have arbitrarily assigned of fix a frame to onto the rigid body so this means that this frame over here it will rotate and translate with rigidly together with the object and this particular world frame here is fixed rigidly onto the world so as the object is in motion this means that the relative transformation of these two frames can be used to describe the motion or the relative location or relative pose of this particular object with respect to the world frame an example is a camera frame suppose that we have a camera here denoted by c and we can rigidly fix a frame onto the camera as well as we can fix an arbitrary world reference frame which we call fw here in advance so the motion between these two or the motion between or of the camera since this uh world frame is fixed it sits rigidly on in the in the world and the motion of this particular 3d object or the rigid body here would be described by the motion of the 3d frame that is rigidly fixed onto this object which is our camera here with respect to the world frame and we can see that there are two components of this to describe the motions between these two frames the body frame or the object coordinate frame with respect to the world frame so the first component here is the translation vector this means that we want to figure out what's the translation or what is the vector that points from the origin of the world frame to the origin of the body frame and there is also a relative orientation between the coordinate axis of f w and fc so what this means is that now if i were to align the two origins i can see that the camera vector over here the camera axis over here is misaligned with the it is misaligned with the world frame by a certain orientation of uh denoted by row pitch and your three angles i'll talk about this in more detail later on when i talk about the special octagonal group the rotation matrix in particular and uh we need to align this so these three angles that help us align the camera frame with respect to the world frame would be our relative rotation here and we need this translation vector which is a 3 vector and a relative orientation rotation matrix we'll see that this is defined by a three by three matrix to denote the relative transformation between the body frame and the world frame to start describing the rigid body motion we need to observe some invariance of rigid body transformation and we can see that a motion of the rigid body preserves the distance between any pairs of points on it now suppose that i have a rigid body over here which i assign a fixed reference frame or body frame to this rigid body here and suppose that i have two points defined with respect to this reference frame or the object frame over here which i call p and q now the distance between p and q is given by d which we can simply compute by p minus q and this vector here we can take the dot product of this uh vector and take the square root of it to get d so uh 
we can see that after a rigid body transformation so the definition of rigid body this means that every point in this mass over here is going to be fixed rigidly onto this body and this means that after this body this rigid body undergoes a transformation of g the two points are still going to stay relative with respect to each other hence the distance d between them is going to be preserved first if x t and y t are the coordinates of two points uh suppose this p and q that is given by x in r3 space and y in r3 space respectively then the distance between them is constant when i do any transformation this means that at any after any transformation of given by t over here or g over here the distance between these two points are always going to stay constant regardless of what is the transformation denoted by t here so a rigid body motion or rigid body transformation is then a family of maps that map the this x into a new coordinate space in the 3d euclidean world and defined by the linear transformation here or defined by a certain mapping function which we call g to be more specific g describes how the coordinates of every point on the rigid object change in time while satisfying the distance constraint this means that after we have applied g between any two points for any two points the distance between these two points suppose that is given by d over here between these two points after applying g the transformation individually on p and q here the distance between these two points p and q must remain the same as d then we will say that the p and q here are light on a rigid body and g is the rigid body motion and considering only the mapping between the initial and final configuration we can drop t here so t here refers to the time of any point in time this particular g mapping function of g is going to transform the two points of p and q but suppose that we are only interested in the start point and the end point or the initial and the final configuration then we have a rigid body displacement which is uh representing represented by this mapping function here we simply drop the t notation for time here so here we will say that this guy over here x is in the initial point and g of x here is going to be my final configuration and g also induces a transformation of on vectors now suppose that if v here is my vectors defined by two points p and q so i have two points p and q here and it forms a vector of v and what happens here is that after the transformation if i apply my transformation on this vector v then the final result is equivalent applying the transformation on y minus the transformation on x this is because it should preserve this g here should preserve the distance between any two points this means that uh it would be the same as uh simply taking the transformation on the two points of x and y uh denoted by p and q here so after the applying the transformation g this vector is going to point in another direction but the distance between p and q or x and y here is going to be preserved and hence it doesn't matter whether i take this g here directly on this r3 vector over here or i simply since v here is equals to x minus y it's equivalent to taking the transformation on x minus the transformation on y since the distance has to be preserved so let us now call the mapping of g that preserve the distance between any two points or preserve the vector as euclidean transformation that is denoted by e bracket 3 over here in the 3d space all of this euclidean transformation shall be 
denoted by e bracket 3 here however what's uh interesting here is that the preservation of these lenses between two points is not sufficient to characterize the rigid body uh moving in the 3d space because as i have mentioned earlier that to define or to describe the transformation the motion between the world frame and a rigid body or object frame that is assigned to the object here to define this transformation here we need two things we need two components the first one is the translation vector and the second component is the rotation matrix so the preservation of distance as we have seen it is to describe the translation vector or the translation is sufficient to describe the translation of the object with respect to the word frame but it's not sufficient to describe the rotation of the object in the 3d space so let's see an example on why is this so suppose that we are only depends on the preservation of distances to describe the euclidean transformation there there are actually some exceptional cases that fulfills the preservation of uh the distances but they are not physically realizable an example here is that the mapping function of f it can map a point from or described by x1 x2 x3 here to a point of x uh 1 x2 and x minus x3 so what this means is that since x1 x2 suppose the x1 x2 x3 here represents the 3d object and i have a fixed frame onto the 3d object here denoted by x1 x2 and x3 as we have mentioned that uh we need these three axis here to be octagonal to each other and following the right hand rule so now we can see that this particular mapping function of f is going to preserve the uh distance if i were to just simply flip the sign of x over here if i were to x3 over here if i simply flip the sign of this then any distance any point p and q defined in the new frame here uh with respect to the old frame of x1 x2 x3 so if i were to take these two points here with respect to this particular reference frame of minus x3 here then the distance we can see that the distance is pretty much preserved over here it's fully preserved over here but what's interesting here is that this particular transformation this particular mapping function over here is not physically realizable the reason is because uh then this would violate the right hand rule if we were to take our thumb or fingers pointing towards x1 and sweeping across x2 then the according to the right hand rule the x 3 axis should point upward instead of downward so this is not physically possible and this has got to do with the orientation because we are not preserving the orientation to follow the right hand rule in this case here so to rule out this kind of mappings we require that any rigid body motion besides preserving the distances it should also preserve the orientation as well hence we'll now look at the second part so in addition to preserving the norm of the vectors what this means is that preserving the normal vectors it means that uh the preservative preservation of the distance as we have seen so the norm of the vector v is simply equals to v transpose v and the square root of it and uh so this will preserve the distance but it will not preserve the orientation uh in order to preserve the orientation we can we will see that we need to preserve the cross product of the vectors as well so this means that if i have two vectors u and v then the cross product between these two vectors must be preserved under a certain transformation and the map or transformation induced by the rigid body or motion is called a 
special euclidean group or special euclidean transformation notice that just now in the previous slides we mentioned that euclidean transformation or the euclidean group e3 it's only preserving the distance the norm of the vector but it does not preserve orientation so special euclidean transformation here means that if this particular mapping here the g over here the g function over here has to fulfill the preservation both the preservation of the distance as well as the cross product this means that the distance and the orientation needs to be both preserved and the word special here indicates special here indicates that the transformation g has to be orientation preserving let's see how this is done the definition or more formally the definition here of the special euclidean transformation refers to a map g that maps the r3 space into another r3 space is a rigid body motion or special euclidean transformation if it preserves the norm so this is the first condition this means that the distance between any two point has to be preserved and the cross product of any two vectors this is the second point that we saw earlier and this simply refers to the orientation of the two any two vectors has to be preserved after applying this mapping from the special euclidean group and the first part here which is the norm can be mathematically denoted as this this means that i have a vector v the norm of v has to be the same as the norm of the v after applying the transformation after applying this mapping of g onto v so this specifies the preservation of the distance here and the cross product here would simply be the cross product of any two vectors of these two vectors over here uh u and v it has to be the same after i have applied uh this cross product so the cross product suppose i'm looking at this cross product over here this would be my final result of u cross with v so the result of u cross and v with v should be the same after a certain transformation on of g on v as well as on g on u so this particular vector over here should be the same as this guy over here after i have transformed this guy from u cross v towards g from uh to by applying g over here towards to become g of u cross v so this guy over here it should be the same as after i met the v and u respectively with g and the collection of all such motions of transformation is denoted by se3 group it's a group we'll see the reason why is a group i won't go into the detail of group theory here but the reason why is in group is that because it's closed in mapping this means that if i have any uh if i have any transformation g i can actually uh g1 and i have any transformation of g2 then the concatenation of these two the product of these two will also give me another g that is in the r3 space so this means that it's close under uh this the the product of each other and but i won't go into the detail of the group theory you know so uh instead i will look at the properties of the special euclidean transformation the first property here is that the preservation of angers and the inner product of the angle it's given by the can be expressed as the polarization identity given by this particular equation here uh i won't go into the definition or the derivation of this polarization identity this is uh you can actually verify this that uh from here we can actually get back to u transpose or rather uh the the product of the dot product of u uh transpose v would be the same as this uh polarization identity here you can easily prove this by yourself 
by doing this u 1 u 2 u 3 plus v1 v2 v3 and then put it into this norm evaluating this so this would be this term here and you do the same here this norm over here and then you can see that the right hand side the end result of the right hand side and the left hand side would be the same you end up with u 1 v 1 plus u 2 v 2 plus u 3 v 3. i'll leave this as a practice for you so since we have this relation the polarization identity we can see that uh u plus v norm is equals to the g of applying g on u plus applying g on v respectively for any rigid body motion given by this particular relationship here so the dot product of u and v is equals to the dot product of u after the transformation by g and with v after the transformation with g here so since the dot products are the same after the transformation what this means is that the angle between the two vectors must be preserved the angle between the two vectors of u and v this particular angle over here u and v and the angle of applying g on v as well as the applying g on u this particular angle over here they must be the uh they must be the same since the dot product is given by the magnitude of u multiplied by the magnitude of v cosine data here and this guy over here it will be g of u uh the magnitude of g of u and the magnitude of g of v or cosine data so here the two data since the dot product are the same these two are equal then this means that the this means that the anchor should be the same this is because the norm of this this part here the norm of u af and the norm of g of u they are the same because the preservation of the distance and the norm of v and the norm of g of v should be the same hence this two angle over here must also be the same that's the another property of the euclidean special euclidean transformation it preserves the angle between any two vectors after the mapping by g respectively another property here interesting property here is the preservation of volume so from the definition of a rigid body motion we can show that it also preserve the so-called triple product among three vectors and the triple product is given by this equation over here the definition is simply the cross product of two vectors v and w and then take u a third vector to dot product with this so these two here are the triple product and they should be equals to each other before and after applying the transformation of g on each one of this respective vector here and we can see that this is we can see that this is true because the cross we have proven the dot product to be equal so this part here these two parts here it needs to be equal for inside the anchor bracket and we have also seen that the cross product needs to be preserved in order for the mapping to fulfill the special euclidean transformation definition hence these two terms here must be equal and what's interesting here is that we can see that the triple product actually defines the volume of a cuboid here so here we can see that the dot product suppose that we have the two vectors a and b here which is uh the cross product of a and b would be the area of this base let's say i take a multiplied by b i'll get the area of this base over here and then i'm going to multiply by the height of this object over here and that would be simply given by the third vector of c which defines the height and then we are going to take the dot product of a cross b with c which is simply given by this equation over here so we can see that this cosine phi over here defines the angle between these 
two, that is, between the perpendicular vector a cross b and the third vector c: the cross product of a and b gives us this vector perpendicular to the base, and taking the dot product with c projects c onto that perpendicular direction, so the magnitude of c multiplied by cosine phi is exactly the height of the cuboid. multiplying this height by the base area given by the cross product of a and b gives the volume of the cuboid, which is simply the magnitude of the triple product. hence, by saying that the triple product is equal before and after applying g to the respective vectors, we are saying that the volume is preserved before and after the transformation. so far we have already looked at the translation component of the special euclidean group, or special euclidean transformation, that we defined earlier, and we have also looked at some of its properties: the preservation of volume, the preservation of angles, as well as of the inner and the cross product. we said that the cross product defines orientation; let us now properly define another entity that we can use to describe the orientation of the body with respect to the world frame, and this leads us to the orthogonal matrix representation of rotation. suppose that we have a rigid body rotating about a fixed point o, the origin. suppose i have the world frame defined by the solid x, y, z axes, and the object has another frame, the dotted x, y, z axes, rigidly fixed to the rigid body, with the origins of the object frame and the world frame coinciding. now suppose that i rotate this object about a certain axis, which i define to be the dotted line omega. the configuration, or orientation, of the object frame c relative to the solid world frame is determined by the coordinates of three orthonormal vectors: r1, which is simply the mapping g of the basis vector e1, the x axis given by the solid line, so applying the transformation g to the x axis gives the new axis, the dotted small x axis; doing the same with the basis vectors e2 and e3 and applying the same mapping g gives the dotted y axis and the dotted z axis respectively. these three vectors r1, r2 and r3, defined by the g mapping of each of the respective basis vectors that represent the world frame, are the ones used to define the orientation of my object frame. they are simply unit vectors, since the original axes can be represented by the unit basis vectors e1, e2 and e3, so the final axes after the transformation are also unit vectors along the three principal axes x, y, z of the object frame c. the configuration of the rotating object is then completely defined by the three by three matrix given by the three transformed basis vectors stacked in order into its three columns.
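a small sketch of this construction, with an assumed example rotation g of 30 degrees about the z axis; the columns of R are the images of the basis vectors under g, and R maps coordinates from the object frame to the world frame:

```python
import numpy as np

theta = np.deg2rad(30.0)                        # example rotation about z (assumed)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
g = lambda p: Rz @ p                            # the rigid rotation applied to the body

# the columns of R are the images of the basis vectors e1, e2, e3 under g
R = np.column_stack([g(np.array([1.0, 0.0, 0.0])),
                     g(np.array([0.0, 1.0, 0.0])),
                     g(np.array([0.0, 0.0, 1.0]))])

p_object = np.array([0.2, -0.1, 0.5])           # a point expressed in the object frame
p_world = R @ p_object                          # the same point expressed in the world frame
print(p_world)
```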
the three transformed basis vectors in order as its columns, giving the 3 × 3 rotation matrix R = [r1 r2 r3]. This rotation matrix means the following: if I have a point p expressed in the object frame C — that is, its coordinates are measured along the dotted x, y and z axes — then multiplying [r1 r2 r3] by p gives the coordinates of the same point with respect to the world frame F_W. This 3 × 3 matrix therefore completely determines the orientation of the rotating object with respect to the reference frame. Since r1, r2 and r3 form an orthonormal frame, the dot product of any pair satisfies r_i · r_j = 1 if i = j and 0 if i ≠ j. The reason is that the three vectors are mutually perpendicular: the dot product of two different axes gives cos 90° = 0, while the dot product of a unit vector with itself gives cos 0° = 1. This relationship can be written in matrix form as Rᵀ R = I, or equivalently R Rᵀ = I, which is easy to verify because R = [r1 r2 r3].
We can see this by taking the transpose: Rᵀ R stacks r1ᵀ, r2ᵀ, r3ᵀ as rows and multiplies them with the columns r1, r2, r3, so the diagonal entries are r1ᵀ r1 = 1 and so on, the off-diagonal entries such as r1ᵀ r2 are 0, and the product is the identity matrix. Any matrix satisfying this identity is called an orthogonal matrix, because its columns are orthonormal to each other. It follows directly from this definition that the inverse of an orthogonal matrix is simply its transpose: any matrix multiplied by its inverse gives the identity, and that is exactly what Rᵀ R gives, as shown above. Since r1, r2 and r3 form a right-handed frame, we have the further condition det(R) = +1, and hence R_wc is a special orthogonal matrix; the word "special" indicates that it is orientation preserving — compare this with the special Euclidean group, where the word likewise rules out reflections. The space of all such special orthogonal matrices is denoted SO(3) = {R ∈ ℝ^{3×3} | Rᵀ R = I, det(R) = +1}, where the determinant condition encodes the right-hand rule. Traditionally the 3 × 3 special orthogonal matrices are called rotation matrices. Directly from this definition it can be shown that rotations preserve both the inner and the cross product of vectors. I will only give a rough sketch of the proof. For the inner product of two vectors v and u: after rotating both vectors, (Rv)ᵀ(Ru) = vᵀ Rᵀ R u = vᵀ u, since Rᵀ R is the identity by definition, so the dot product is the same before and after applying the rotation. For the cross product we can likewise show that Rv × Ru = R(v × u), by expanding the skew-symmetric matrix in terms of the columns r1, r2, r3 and using the fact that their pairwise products are 0 or 1; I leave this for you to work out and verify yourself.
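As a quick numerical illustration of these properties, here is a minimal sketch in NumPy (the axis–angle construction via Rodrigues' formula and all variable names are my own choices, not something from the lecture) verifying RᵀR = I, det R = +1, and the preservation of inner and cross products:

```python
import numpy as np

def rotation_from_axis_angle(axis, angle):
    """Rodrigues' formula: rotation matrix for a unit axis and an angle (radians)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])      # skew-symmetric matrix of the axis
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

R = rotation_from_axis_angle([1.0, 2.0, 0.5], 0.7)

# R is special orthogonal: R^T R = I and det(R) = +1
assert np.allclose(R.T @ R, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)

# Rotations preserve the inner and the cross product
v, u = np.array([1.0, -2.0, 0.3]), np.array([0.4, 0.0, 2.0])
assert np.isclose((R @ v) @ (R @ u), v @ u)                      # inner product
assert np.allclose(np.cross(R @ v, R @ u), R @ np.cross(v, u))   # cross product
```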
We can define a rotation matrix by three angles — the minimal representation — which we call the Euler angles. The first is the yaw angle, defined as the rotation about the z-axis: pointing the thumb along the z-axis, a positive yaw is the direction in which the fingers curl, and the yaw angle γ is the angle between the x-axis before and after the rotation. Any point defined with respect to the original frame x0, y0 (we can ignore z here, since the z-coordinate is unchanged by a rotation about z) can then be expressed in the new frame x1, y1 through the corresponding rotation matrix. We can do the same for the pitch angle, a rotation about the y-axis by an angle β: a point defined with respect to the rotated axes can be expressed with respect to the axes before the rotation, because the rotated z-axis has a cos β component along the original z-axis and a sin β component along the original x-axis, and this gives the rotation matrix for the pitch angle about the y-axis. Similarly we define the roll angle α as the rotation about the x-axis, where the angle is formed between the y- and z-axes before and after the rotation: the rotated y-axis has a cos α component along the original y-axis and a sin α component along the original z-axis, and the rotated z-axis correspondingly has components cos α and −sin α; this gives the rotation matrix for the roll angle. Putting the Euler angles together to form a full rotation matrix, many different orderings are possible — roll followed by pitch followed by yaw, yaw followed by roll followed by pitch, and so on — but there is a special convention, used in aviation to describe the orientation of an aeroplane, called the Tait-Bryan angles. It is the z–y′–x″ sequence: first a rotation about z, then a rotation about the new y-axis y′ obtained after the z-rotation, and finally a rotation about x″, the new x-axis obtained after rotating about z and then y′. The resulting matrix is the product of the roll, pitch and yaw rotations defined previously, written on the slide as R_32, R_21 and R_10.
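To make the convention concrete, here is a hedged sketch in NumPy of the three elementary rotations and their Tait-Bryan z–y′–x″ composition; the function names are my own, and the interpretation that the composed matrix maps coordinates in the rotated (body) frame back to the original frame follows the description above rather than any code from the lecture:

```python
import numpy as np

def Rz(yaw):    # rotation about the z-axis
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Ry(pitch):  # rotation about the y-axis
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rx(roll):   # rotation about the x-axis
    c, s = np.cos(roll), np.sin(roll)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def tait_bryan(yaw, pitch, roll):
    """z-y'-x'' sequence: rotate about z, then the new y, then the new x.
    The returned matrix maps coordinates in the rotated (body) frame
    back to the original (world) frame."""
    return Rz(yaw) @ Ry(pitch) @ Rx(roll)

R = tait_bryan(np.deg2rad(30), np.deg2rad(10), np.deg2rad(-5))
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```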
Concatenating everything together, this gives the final rotation matrix for all three angles of the Euler representation. It means that, given a point or vector x3 defined with respect to the final frame after the three rotations, I can obtain its coordinates with respect to the original frame by multiplying x3 by this composed rotation matrix. Having defined the rotation matrix, we are now ready to describe a full rigid body motion and its representation. Suppose we have a point p on the object, and let F_W denote the world frame. The vector x_w represents this point with respect to the world frame origin, and it is the sum of two terms: the translation vector t_wc, which relates the origin of the world frame to the origin of the object frame F_C rigidly attached to the body, and the vector x_c, which represents the point p in the body frame F_C — after x_c has been rotated by R_wc so that it is oriented with respect to the world frame. The rotation aligns the object (or camera) frame with the world frame, and the translation then brings the two origins together. Hence the point p expressed in the world frame is x_w = R_wc x_c + t_wc, and this completes the rigid body transformation: a point expressed in the object frame F_C is mapped into the world frame. This transformation can be written as a single linear map by going to homogeneous coordinates: stacking x_c with a 1 gives the homogeneous coordinates of the 3D point, and the mapping becomes a 4 × 4 matrix whose top-left block is R, whose top-right block is t, and whose last row is 0 0 0 1. Let us call g the complete mapping that takes a point from one reference frame into another — here, from a specific body frame into the world frame. The homogeneous representation of g gives rise to a natural matrix representation of what we call the SE(3) transformation, denoted SE(3). This means that g,
denoted by this 4 × 4 matrix g = [R t; 0 1], is defined to be in the SE(3) group under two conditions: the rotation matrix R must be in SO(3), i.e. it satisfies Rᵀ R = I and det(R) = +1 as defined earlier, and t is simply a 3 × 1 vector in Euclidean space. Here we can also see that this is a group, as mentioned earlier: SE(3) is closed under composition, because if g1 and g2 are in SE(3), then transforming g2 by g1 — that is, multiplying the two matrices — still gives a valid SE(3) transformation, so the set is closed and follows the principles of group theory. Furthermore, the inverse of g is given by g⁻¹ = [Rᵀ  −Rᵀt; 0  1]; I leave it to you to verify this by checking that g⁻¹ g equals the identity. We can also use SE(3) transformations to represent compositions of rigid motions. Suppose we are given three camera frames at times t = 1, 2 and 3, i.e. reference frames f1, f2 and f3. Moving from f1 to f2 and then to f3, the individual motions, denoted g21 and g32 and both in SE(3), can be concatenated to form the motion from f1 to f3, which is simply the product of the two transformations. More formally, we have the following relation between the coordinates of the same point in different frames. Suppose a point x is expressed as x1 in the first frame f1; to express it in the second frame f2 we apply x2 = g21 x1, where g21 is built from the rotation R21 and translation t21 with last row 0 0 0 1. If there is a third frame f3 and the same point is denoted x3 in that frame, I can compute x3 from x2 with the SE(3) transformation built from R32 and t32, which maps x2 into the reference frame of f3.
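Here is a minimal sketch (NumPy; the frame numbering and example values are purely illustrative) of building an SE(3) matrix, its closed-form inverse, and chaining two transformations:

```python
import numpy as np

def se3(R, t):
    """Build the 4x4 homogeneous matrix [R t; 0 1]."""
    g = np.eye(4)
    g[:3, :3] = R
    g[:3, 3] = t
    return g

def se3_inv(g):
    """Closed-form inverse: [R^T, -R^T t; 0, 1]."""
    R, t = g[:3, :3], g[:3, 3]
    return se3(R.T, -R.T @ t)

# Illustrative frames: g21 maps frame-1 coordinates to frame 2, g32 maps frame 2 to frame 3.
R21 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])   # 90 degrees about z
g21 = se3(R21, np.array([0.5, 0.0, 1.0]))
g32 = se3(np.eye(3), np.array([0.0, 2.0, 0.0]))

x1 = np.array([1.0, 2.0, 3.0, 1.0])        # homogeneous point in frame 1
x3 = (g32 @ g21) @ x1                      # composed transformation g31 = g32 g21
print(x3[:3])                              # the same point expressed in frame 3
assert np.allclose(se3_inv(g21) @ g21, np.eye(4))
```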
What is interesting is that, given x1, I do not have to go through the two transformations g21 followed by g32 to bring the point from the first frame into the third: I can directly define a single transformation g31 that maps x1 to x3, and it obeys the composition rule g31 = g32 g21. This follows from the individual transformations: x3 = g32 x2, and since x2 = g21 x1 we can substitute to get x3 = g32 g21 x1; but x3 = g31 x1 by definition of the direct transformation, so the two expressions must be equal, which proves the composition rule. The same rule gives the inverse: g21 g12 = g22, which is the identity. To see this, take two frames f1 and f2 and a point expressed as x1 in f1 and x2 in f2. Transforming x1 into f2 gives x2 = g21 x1, and transforming back gives x1 = g12 x2; substituting one into the other yields x2 = g21 g12 x2, so g21 g12 must be the identity. In general, the composition rules in the homogeneous representation are as follows: to transform a point represented in frame j into the same point represented in frame i, we need the transformation g_ij = [R_ij  t_ij; 0  1] — think of the order as "into frame i, from frame j". In general g_ik = g_ij g_jk, as we saw above holds true, and the inverse relation g_ij = g_ji⁻¹ also holds in general: the special case with two frames shown above proves that the product must be the identity, and this generalizes to any pair of frames i and j. |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_8_Part_3_Absolute_pose_estimation_from_points_or_lines.txt | We saw that three point correspondences are needed to solve the camera pose estimation problem in Grunert's formulation. Although it uses a minimal set of three correspondences, the disadvantage of this formulation is that it ends up with a fourth-degree polynomial, which means it can give up to four possible solutions — and that is a problem, because we do not know which solution to choose. To select the correct solution, at least a fourth point is needed. What we usually do is this: with Grunert's algorithm we use three points to solve the P3P problem, giving the camera pose R and t; since we also know K, this gives the full camera projection matrix. We then use this projection matrix to project the fourth point back onto the image and check the reprojection error against its 2D image correspondence. Out of the four possible solutions, the one giving the minimal reprojection error is taken as the correct solution. Since a fourth point is needed anyway to check the correct solution, the question becomes: can we find the solution directly from four or more point correspondences such that it is unique? The answer is yes, and it was formulated by Quan et al. in a paper published in the 1990s. First I will talk about the linear four-point algorithm from that paper. Recall that we have a system of three polynomial equations from the cosine rule, where s1, s2 and s3 are the unknown depths of the three points along the rays that meet at the camera center; we know the distance between every pair of 3D points, and we know the angle made by the rays of every pair. Since this system is a function of s1, s2, s3, it can be rewritten so that each equation involves only two of the unknowns: f12(s1, s2), f13(s1, s3) and f23(s2, s3). In general we can start directly from these three equations and eliminate, for example, s2 and s3 to obtain a univariate polynomial in s1, and it turns out to be a fourth-order polynomial g(x), where we write x = s1². This agrees with Grunert's algorithm, which also ends in a fourth-order polynomial when only three correspondences are used. Now, if we have four point correspondences, we can add equations and over-constrain the system: we first obtain six polynomials in the unknowns s_i and s_j, because we
have four points and each pair of points gives one polynomial: 4-choose-2 = 6 polynomials, each a function of a pair of the unknown depths s1, s2, s3, s4. A straightforward approach is to use these six polynomial equations f_ij(s_i, s_j) = 0 to generate fourth-degree polynomials: fixing one unknown, say s1, each triplet of points that contains point 1 eliminates the other depths and yields one fourth-degree polynomial in x = s1², and with four points there are three such triplets, so we obtain three quartics, which we denote g(x), g′(x) and g″(x). One could then simply solve for x independently in each of the three quartics and look for a common solution, but this is not a good approach, for three reasons. First, we have to solve several fourth-degree polynomial equations, which is time-consuming. Second, it is difficult to find a common solution: solving g(x) gives one x, solving g′(x) gives another, and although they should in principle be the same, with noisy data there is no guarantee that the independently obtained solutions agree. Third, and probably most important, we cannot profit from the data redundancy, which should increase stability; because the solutions do not agree under noise, the estimate is not stable at all. A better solution was proposed by Quan and Lan in a paper published in IEEE TPAMI in 1999. For n = 4, they propose a linear four-point algorithm. As we have seen, the six polynomial equations f_ij(s_i, s_j) = 0, with i and j between 1 and 4 for the four point correspondences, give rise to three fourth-degree polynomial equations in x, where x stands for one of the squared depths, say s1². Writing out all six combinations of f_ij, each is a function of just two of the four unknown depths, and any three of them sharing a common point yield one fourth-order polynomial; in this example I simply take x = s1².
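For concreteness, here is a small sketch of the basic building block, the cosine-rule constraint for one pair of points, f_ij(s_i, s_j) = s_i² + s_j² − 2 s_i s_j cos θ_ij − d_ij² = 0, where θ_ij is the angle between the two viewing rays and d_ij the known distance between the 3D points; the toy camera and all helper names are my own assumptions, not code from the paper:

```python
import numpy as np

def ray_from_pixel(u, K):
    """Unit viewing ray for pixel u = (x, y) of a calibrated camera with intrinsics K."""
    r = np.linalg.inv(K) @ np.array([u[0], u[1], 1.0])
    return r / np.linalg.norm(r)

def cosine_rule_residual(si, sj, ri, rj, dij):
    """f_ij(s_i, s_j) = s_i^2 + s_j^2 - 2 s_i s_j cos(theta_ij) - d_ij^2,
    which is zero when the depths s_i, s_j are consistent with the known
    inter-point distance d_ij and the angle between the rays r_i, r_j."""
    cos_theta = float(ri @ rj)
    return si**2 + sj**2 - 2.0 * si * sj * cos_theta - dij**2

# Toy check: generate a consistent configuration and verify the residual vanishes.
K = np.diag([800.0, 800.0, 1.0]); K[0, 2], K[1, 2] = 320.0, 240.0
Pi, Pj = np.array([0.2, -0.1, 4.0]), np.array([-0.3, 0.4, 5.0])   # points in the camera frame
ui = (K @ Pi)[:2] / Pi[2]
uj = (K @ Pj)[:2] / Pj[2]
ri, rj = ray_from_pixel(ui, K), ray_from_pixel(uj, K)
si, sj = np.linalg.norm(Pi), np.linalg.norm(Pj)                   # depths along the rays
dij = np.linalg.norm(Pi - Pj)
assert abs(cosine_rule_residual(si, sj, ri, rj, dij)) < 1e-9
```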
Given these three fourth-degree polynomials, we can stack them together: we factorize out all the coefficients into a matrix, and the unknowns x⁴, x³, x², x and 1 are written as a vector. As a result we get a 3 × 5 coefficient matrix multiplying a 5 × 1 vector and equal to a 3 × 1 zero vector — the familiar homogeneous linear equation, which we write as A t5 = 0. The matrix A is known, and the unknowns are contained in the vector t5. Since A is 3 × 5 it has rank at most 3, while t5 is a 5 × 1 vector living in a five-dimensional space, so the system of homogeneous linear equations is under-constrained. What we can do is take the SVD of A, A = U Σ Vᵀ; since the rank of A is at most 3, the null space of A is two-dimensional, and it is spanned by the last two columns v4 and v5 of the right singular matrix V. This means we can write the solution as t5 = λ v4 + ρ v5, parameterized by two unknown scalars λ and ρ, and we now have to determine λ and ρ in order to recover t5, which in turn leads to the solution for s1. To do this, note that t5 is a vector whose entries are all powers of the same quantity: writing t_i for the component corresponding to x^i, we observe that the product of any two entries satisfies t_i t_j = t_k t_l whenever i + j = k + l. We can verify this: take i = 1 and j = 2, so that choosing k = 3 forces l = 0; then t_1 t_2 = x · x² = x³ and t_3 t_0 = x³ · x⁰ = x³, so the constraint holds, and you can try for yourself that any combination satisfying i + j = k + l works. Making use of this constraint, we substitute the components of t5, expressed through λ, ρ, v4 and v5, into the relation — for example with i = 1, j = 2, k = 3 and l = 0.
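A minimal sketch of extracting the two-dimensional null-space basis v4, v5 of the 3 × 5 system A t5 = 0; here A is only a random placeholder rather than a matrix built from real correspondences, and the function name is mine:

```python
import numpy as np

def nullspace_basis(A, dim):
    """Return the `dim` right singular vectors of A with the smallest singular
    values; for a 3x5 matrix of rank 3 and dim=2 these span the null space."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-dim:].T          # columns are the basis vectors

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))      # placeholder for the stacked quartic coefficients
V2 = nullspace_basis(A, dim=2)       # columns v4, v5
v4, v5 = V2[:, 0], V2[:, 1]

# Any t5 = lam * v4 + rho * v5 satisfies A t5 = 0 (up to numerical error)
lam, rho = 0.3, -1.2
t5 = lam * v4 + rho * v5
assert np.allclose(A @ t5, 0.0, atol=1e-10)
```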
What we can do is take the components of t5 that correspond to x, x², x³ and x⁰ and substitute them into this constraint. Since t5 = λ v4 + ρ v5, each component can be written in terms of λ, ρ and the corresponding entries of the basis vectors v4 and v5, and we substitute these into the constraint for any combination of i, j, k and l that satisfies i + j = k + l. Moving the right-hand side over to the left, we get an equation in the unknowns λ and ρ whose coefficients are built from the components of v4 and v5 for the chosen combination of i, j, k and l. Enumerating all the combinations that satisfy i + j = k + l within the allowed range, we obtain in total seven such equations, as listed in the table. Each of these equations is linear in the monomials λ², λρ and ρ², so we can stack them into a homogeneous linear system B y = 0, where B is the 7 × 3 matrix of known coefficients and y = [λ², λρ, ρ²]ᵀ is the 3 × 1 vector of unknowns we want to solve for. This is an over-determined system, since we have seven equations and only three unknowns. Taking the SVD of B = U Σ Vᵀ, the solution is the right singular vector corresponding to the smallest singular value, i.e. the last column of V. Once we have solved for the null-space vector y, we can proceed to recover λ and ρ, because we can equate λ² to the first component of y, λρ to the second component and ρ² to the third component.
We end up with λ² = y0, λρ = y1 and ρ² = y2. Dividing the first by the second cancels one λ and gives the ratio λ/ρ = y0/y1; similarly, dividing the second by the third cancels ρ and gives the same ratio, so we obtain two estimates of λ/ρ. After obtaining the ratio, we substitute it back into the expression t5 = λ v4 + ρ v5 that came from the two null-space basis vectors: the component of t5 that corresponds to x⁰ must equal 1, and it is λ times that component of v4 plus ρ times that component of v5, which gives one equation; together with the ratio constraint on λ and ρ this gives two equations in two unknowns, so we can solve for λ and ρ uniquely and substitute them back to obtain a unique solution for t5. In the case where the two ratios y0/y1 and y1/y2 are not the same because of noise, we simply take the average of the two values as λ/ρ and then solve for λ and ρ as before. Once t5 is recovered — a vector made up of 1, x, x², x³ and x⁴ — we can take the ratios of consecutive components to solve for x; this gives four estimates of x, which are normally very close to each other, so we simply take their average as the final value. Since x = s1², the depth is s1 = √x; the equation also admits −√x, but we discard that solution because a depth must be positive. Once s1 is solved, we back-substitute it into the polynomial equations f_1j(s1, s_j) = 0 to solve for s_j, and then back-substitute again into the remaining equations to obtain the other unknown depths. After all the depths are recovered, we apply absolute orientation to recover the camera pose, exactly as in Grunert's algorithm. What is interesting is that with four point correspondences λ and ρ can be determined uniquely, and x is also determined uniquely because the four estimates are close and we simply average them; as a result we obtain a unique solution, provided the four points are not degenerate.
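Putting the steps together, here is a hedged sketch of recovering λ, ρ and finally x = s1² from the two basis vectors, using the seven quadratic constraints and the normalization of the x⁰ component to 1; I index the entries of t5 by the power of x, and a synthetic null-space basis stands in for one obtained from real data:

```python
import numpy as np

def recover_x(v4, v5):
    """Quan-Lan style recovery of x from a 2D null-space basis of A t5 = 0,
    where t5 = [x^0, x^1, x^2, x^3, x^4] (indexed here by the power of x)."""
    # All index quadruples (i, j, k, l) with i + j = k + l: the 7 quadratic constraints.
    quads = [(0, 2, 1, 1), (0, 3, 1, 2), (0, 4, 1, 3), (0, 4, 2, 2),
             (1, 3, 2, 2), (1, 4, 2, 3), (2, 4, 3, 3)]
    B = []
    for i, j, k, l in quads:
        B.append([v4[i] * v4[j] - v4[k] * v4[l],
                  v4[i] * v5[j] + v4[j] * v5[i] - v4[k] * v5[l] - v4[l] * v5[k],
                  v5[i] * v5[j] - v5[k] * v5[l]])
    B = np.asarray(B)                              # 7 x 3, acting on [lam^2, lam*rho, rho^2]
    y = np.linalg.svd(B)[2][-1]                    # null vector of B
    ratio = 0.5 * (y[0] / y[1] + y[1] / y[2])      # lam / rho (two estimates, averaged)
    rho = 1.0 / (ratio * v4[0] + v5[0])            # from t5[0] = lam*v4[0] + rho*v5[0] = 1
    lam = ratio * rho
    t5 = lam * v4 + rho * v5
    return float(np.mean(t5[1:] / t5[:-1]))        # four estimates of x, averaged

# Synthetic check: build a 2D basis whose span contains the true t5.
x_true = 2.3
t5_true = x_true ** np.arange(5)
rng = np.random.default_rng(1)
u1 = t5_true / np.linalg.norm(t5_true)
u2 = rng.standard_normal(5)
u2 -= (u2 @ u1) * u1
u2 /= np.linalg.norm(u2)
v4 = (u1 + 0.5 * u2) / np.linalg.norm(u1 + 0.5 * u2)
v5 = (u1 - 2.0 * u2) / np.linalg.norm(u1 - 2.0 * u2)
assert np.isclose(recover_x(v4, v5), x_true)
```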
In this linear four-point algorithm there are still two degenerate cases. The first is when the four points are collinear, because the camera center then lies on a common plane with the four points, which gives degenerate solutions. The second is when the four points are coplanar with the camera center: if p1, p2, p3, p4 and the camera center p all lie on one plane, the configuration is again degenerate, as we saw earlier. It turns out that the formulation of the linear four-point algorithm can also be applied to more than four points. For example, when n = 5, i.e. with five point correspondences, we end up with a system of the same form, A t5 = 0, where t5 remains the 5 × 1 vector built from x = s1² as before. Since there are more points we obtain more fourth-order polynomial equations: each triplet of points containing point 1 gives one quartic, and with five points there are 4-choose-2 = 6 such triplets, so in total we get six quartic equations. Stacking them all up, from the first to the sixth, gives a coefficient matrix A of dimension 6 × 5, so this is an over-determined homogeneous linear equation; for a non-trivial solution to exist, A should have rank at most 4. We take the SVD of A = U Σ Vᵀ and pick the right singular vector corresponding to the smallest singular value, which minimizes ‖A t5‖ subject to ‖t5‖ = 1. In this case the solution is a one-parameter family t5 = λ v5, where v5 is the last column of V; but since the component of t5 corresponding to x⁰ must equal 1, we can use that component to solve for the unknown λ and substitute it back to obtain a unique solution for t5, and from it a unique x = s1² and hence s1. It turns out that the same algorithm used for the linear four-point and five-point cases can be applied to any number n ≥ 4 of point correspondences; we just need to take the SVD of an ((n − 1)(n − 2)/2) × 5 matrix A to obtain the vector t5. The problem is the cost of this SVD: it can be proven — I will skip the proof — that the
overall complexity is cubic in the number of points used to form A, and this cubic complexity becomes the limiting factor when we want to apply the algorithm to a significantly large number of point correspondences. In 2006, a paper by Vincent Lepetit and colleagues introduced the EPnP algorithm, which mitigates this problem of the linear n-point algorithm of Quan and Lan (1999) with its cubic complexity: EPnP is a solution with linear complexity with respect to the number of points. We will look at its formulation and see why the complexity is linear. The brief idea of the paper is that, instead of using every single point correspondence directly, we use all of the given points to define four control points — recall that in the linear four-point algorithm a set of four points is the minimum needed to solve for a unique camera pose. So, given the set of 2D-to-3D correspondences, we define four control points and solve for the camera pose using these four control points derived from the given n correspondences. This is much more tractable because, even for a very large n, the number of control points stays constant at four. We first look at the case where the four control points are non-coplanar, i.e. they do not all lie on a plane. Let us define the 3D points in the world as p_i^w, where the superscript w means the point is expressed with respect to a world frame F_W. Similarly we define four control points expressed in world coordinates: given all the points p_i^w, we want to define four control points in the same space, which we call c1, c2, c3 and c4. The coordinates of these control points in the camera frame will become the unknowns we need to solve for, in addition to the camera rotation and translation, whereas the given 3D points are known. Having defined the control points, we express each reference point as a weighted sum of the control points: a point p_i is written as p_i = Σ_j α_ij c_j, where α_ij is a weight specifying how much of control point c_j contributes to the sum, and the weights must sum to 1. We apply the same relation for the 3D points in camera coordinates: the same point, now called p_i^c, is defined with respect to the camera frame, and the objective is to find the relative transformation between the camera frame and the world frame. Notice that, with this way of defining a set of control points, the approach avoids
expressing the constraints in terms of the unknown depths altogether: there are no unknown depths here, and this matters because the number of unknown depths grows with the number of points, whereas the control points stay fixed at four, so we end up with far fewer unknowns. We use the same set of weights α defined earlier to weigh the control points in the camera frame (denoted with the superscript c): α_ij is identical for the world frame and the camera frame, and the unknowns are now the control points c_j in the camera frame. Let us first see how to obtain α and the control points c_j in the world frame, using the simple technique proposed in the paper. One way to select the control points in the world frame is as follows. The first step is to compute the centroid of the given world points p_i^w and assign it to be the first control point. We then select the other three control points — four are needed in total — and, since we already have the centroid, the easiest way is to use the distribution of the 3D points: the points are distributed along their principal axes, and the three principal axes together with the centroid give us four control points. The principal axes are computed by taking the SVD of the covariance matrix V = (1/n) Σ_i (p_i^w − centroid)(p_i^w − centroid)ᵀ, i.e. the sum of the outer products of the differences between every point and the centroid, normalized by the number of points; this covariance matrix tells us how the 3D points are distributed in 3D space. Since the covariance matrix is a symmetric square matrix, its SVD has the same left and right singular matrix U, which is 3 × 3, and its three singular vectors are the three principal axes, which we assign to the remaining control points c1, c2 and c3 in the world frame. After computing the control points c1, c2, c3 and c4 with the superscript w in the world frame, we can proceed to solve for α; note that there is one set of weights for every 3D point p_i, namely α_i1, α_i2, α_i3 and α_i4.
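Here is a hedged sketch of one way to choose the world-frame control points as described: the centroid plus points offset from it along the three principal directions. The offsets are scaled here by the spread along each axis; the exact scaling is a free choice of mine, as long as the four control points end up non-coplanar.

```python
import numpy as np

def choose_control_points(pts_w):
    """Choose 4 non-coplanar control points from an (n, 3) array of world points:
    the centroid, plus the centroid offset along each principal direction.
    The offsets are scaled by the standard deviation along each axis; any
    non-degenerate scaling would do."""
    centroid = pts_w.mean(axis=0)
    diff = pts_w - centroid
    cov = diff.T @ diff / len(pts_w)          # 3x3 covariance of the point cloud
    U, s, _ = np.linalg.svd(cov)              # columns of U are the principal axes
    ctrl = [centroid]
    for k in range(3):
        ctrl.append(centroid + np.sqrt(s[k]) * U[:, k])
    return np.array(ctrl)                     # shape (4, 3)

rng = np.random.default_rng(0)
pts_w = rng.standard_normal((50, 3)) * np.array([3.0, 1.0, 0.5]) + np.array([1.0, 2.0, 10.0])
C_w = choose_control_points(pts_w)
print(C_w)
```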
Solving for these weights is easy, because p_i in homogeneous coordinates is a 4 × 1 vector, and the relation can be written as a 4 × 4 system: the matrix whose columns are the control points c1, c2, c3, c4, each augmented with a 1 in the last row, multiplied by the vector of weights [α_i1, α_i2, α_i3, α_i4]ᵀ, equals [p_i; 1]. Everything except the weights is known — the 3D point p_i^w is given and the control points have just been computed — so the only unknowns are the four weights. Every row gives one equation, so we have four equations and four unknowns and can solve for α_i1, α_i2, α_i3, α_i4 for every single point. It is also interesting to note that we do not have to impose the constraint that the weights sum to 1 explicitly, because it is implicitly expressed by the last row of this equation. Once we have solved for the α's and the world-frame control points, we use the same α_ij in the expression of the 3D points in the camera frame; the only remaining unknowns are the control points in the camera frame, denoted c_j^c for j = 1, 2, 3, 4. As we saw, p_i^c is the weighted sum of the camera-frame control points, so solving for these control points is now our objective, and through them we will indirectly obtain the 3D points in the camera frame. What makes this possible is that we are given 2D-to-3D correspondences: besides the 3D points p_i^w, we also know the projection of each p_i^c into the image. Denote the camera intrinsics by A — this is the notation used in the paper, rather than the usual K, but it means the same thing; the camera is calibrated, so A is known. Projecting the 3D point in the camera frame onto the image gives w [u_i; 1] = A p_i^c, where u_i is the inhomogeneous pixel coordinate and w is a projective scale. Substituting p_i^c with the weighted sum of the control points gives an equation that can be expanded into matrix form, where the 2D image points are known, the intrinsics A are known because the camera is calibrated, and the α's are known from the previous step; the only unknowns are the control points in the camera frame. Given this equation, the objective becomes to solve for these unknown control points using all the known quantities.
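Continuing the previous snippet, here is a small sketch of computing the barycentric weights α_i for every point by solving the 4 × 4 system above; the sum-to-one property falls out of the last row automatically. The function name and array shapes are my own choices.

```python
import numpy as np

def barycentric_weights(pts_w, ctrl_w):
    """Solve [c1 c2 c3 c4; 1 1 1 1] * alpha_i = [p_i; 1] for every point.
    pts_w: (n, 3) world points, ctrl_w: (4, 3) world control points.
    Returns (n, 4) weights, each row summing to 1."""
    C = np.vstack([ctrl_w.T, np.ones((1, 4))])            # 4x4 system matrix
    P = np.vstack([pts_w.T, np.ones((1, len(pts_w)))])    # 4xn right-hand sides
    return np.linalg.solve(C, P).T                        # (n, 4)

# Continuing from the previous snippet (pts_w, C_w):
alphas = barycentric_weights(pts_w, C_w)
assert np.allclose(alphas.sum(axis=1), 1.0)
assert np.allclose(alphas @ C_w, pts_w)   # each point is recovered as the weighted sum
```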
Observing the last row of this matrix equation: since the last entry of the homogeneous pixel is 1, the last row expresses w as a function of the α's and the z-coordinates of the respective control points, so we can make w the subject and substitute it back into the first two rows to eliminate the projective scale. This leaves two independent equations per point, in terms of the camera-frame control points and the known parameters. So each point correspondence gives two independent equations, while we have four control points, each parameterized by three unknown coordinates, i.e. 3 × 4 = 12 unknowns in total. Stacking the two equations from each of the n correspondences gives a 2n × 12 matrix, which we call M, and we solve for the 12 × 1 vector x of unknown control point coordinates from the homogeneous linear equation M x = 0. The number of basis vectors in the solution of M x = 0 depends on the rank of M, which in turn depends on the number of point correspondences. One way to solve it is to take the SVD of M, and this can be computed efficiently via the eigen-decomposition of MᵀM, which gives the same right singular vectors as the SVD of M itself. Since MᵀM has constant size 12 × 12, the most expensive step is forming MᵀM, and because M is linear in the number of points n, this step has time complexity linear in n — which is why EPnP is a linear-complexity solution. Now let us look at the various cases of the number of point correspondences, which give rise to different numbers of basis solutions. In the case where there is only one basis solution, N = 1, we can write the solution as x = β v, where v is the null-space vector of the equation M x = 0, with M of size 2n × 12 and x of size 12 × 1.
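A hedged sketch of assembling the 2n × 12 matrix M from the two equations per point (assuming zero skew) and of taking the null-space basis from the 12 × 12 matrix MᵀM; the synthetic camera at the end is only there to check that a consistent solution lies in the null space, and the pose, focal length and function names are my own assumptions.

```python
import numpy as np

def build_M(alphas, pixels, fu, fv, uc, vc):
    """Build the 2n x 12 matrix M such that M x = 0, where x = [x1,y1,z1,...,x4,y4,z4]
    stacks the 4 camera-frame control points. Assumes zero skew; alphas is (n,4),
    pixels is (n,2)."""
    n = len(alphas)
    M = np.zeros((2 * n, 12))
    for i in range(n):
        u, v = pixels[i]
        for j in range(4):
            a = alphas[i, j]
            M[2 * i,     3 * j:3 * j + 3] = a * np.array([fu, 0.0, uc - u])
            M[2 * i + 1, 3 * j:3 * j + 3] = a * np.array([0.0, fv, vc - v])
    return M

def null_basis(M, N):
    """N eigenvectors of M^T M with the smallest eigenvalues (a 12x12 problem, so the
    cost is dominated by forming M^T M, which is linear in n)."""
    eigval, eigvec = np.linalg.eigh(M.T @ M)   # ascending eigenvalues
    return eigvec[:, :N]                        # shape (12, N)

# Toy usage (continuing the earlier snippets): synthesize a camera and check that the
# true camera-frame control points lie in the recovered null space.
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([0.1, -0.2, 0.5])
fu = fv = 800.0; uc, vc = 320.0, 240.0
pts_c = pts_w @ R_true.T + t_true
pixels = np.stack([fu * pts_c[:, 0] / pts_c[:, 2] + uc,
                   fv * pts_c[:, 1] / pts_c[:, 2] + vc], axis=1)
M = build_M(alphas, pixels, fu, fv, uc, vc)
ctrl_c_true = (C_w @ R_true.T + t_true).reshape(-1)
assert np.allclose(M @ ctrl_c_true, 0.0, atol=1e-6)
```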
This means the rank of M must be 11, i.e. there are 11 independent constraints forming M. Since each point correspondence gives two constraints, 11 constraints would correspond to 5.5 points, which in practice means six point correspondences. So with six or more correspondences we typically have a single null-space vector, and the solution is parameterized by β. The objective now is to solve for this unknown β in order to recover x, the 12 × 1 vector that stacks the four unknown control points in the camera coordinate frame. To do this we use the known distances between the control points: the distance between any two control points is the same regardless of whether they are expressed in the camera frame or in the world frame. Since every group of three entries of x represents one control point in the camera frame, we can take any pair of control points, compute the distance between them in the camera frame, and equate it to the known distance — known because we already have the control points in the world frame — and then solve for β. In principle one such distance is enough, but since we have four control points there are six pairwise distances, i.e. six constraints, which we combine into a single weighted sum, normalized over all the combinations of distances, to solve for the unknown β. In the case where there are two basis vectors, we write the solution as x = β1 v1 + β2 v2, so β1 and β2 are the unknowns we need to find in order to recover the control points in the camera frame, parameterized as x. Here N = 2, and since M is 2n × 12 and x is 12 × 1, having two null-space basis vectors means the rank of M is 10, i.e. there are 10 unique constraints; since each point gives two constraints, this corresponds to five point correspondences. We can use the same trick as before to solve for β1 and β2: with four control points we have six distance constraints, which is still enough to solve for the two unknowns. More specifically, we stack the six constraints into a non-homogeneous linear system L β = ρ, where β = [β11, β12, β22]ᵀ is made up of β1 and β2 (β11 = β1², β12 = β1β2, β22 = β2²); since there are six constraints, L is a 6 × 3 matrix formed from the basis vectors
v1 and v2, and ρ is a 6-vector of the squared distances between the control points for the six constraints, which are known from the world-frame control points; β is simply the vector of unknowns built from β1 and β2. This is an over-determined system, so we take the pseudo-inverse of L, β = L⁺ ρ, which gives the solution for β and from it β1 and β2. In the case N = 3, the solution is a linear combination of three null-space basis vectors, x = β1 v1 + β2 v2 + β3 v3, again for the system M x = 0 with M of size 2n × 12 and x of size 12 × 1. Three null-space vectors mean the rank of M must be 9, which corresponds to 4.5 point correspondences; so we would only end up with three null-space basis vectors if, for some reason, we chose to ignore one of the equations from one of the correspondences — in practice we do not end up here, because we take a minimum of five points to form M x = 0. We solve for β1, β2 and β3 using the same method as before: the six distance constraints give a non-homogeneous linear system L β = ρ, where the unknown vector β now collects the quadratic terms of β1, β2 and β3, and L is a 6 × 6 matrix — just enough to solve for β — so, since this is a square non-homogeneous linear system, we simply take the inverse, β = L⁻¹ ρ, and this gives the solution for β1, β2 and β3.
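As an illustration of the distance constraints, here is a hedged sketch of the simplest case, N = 1, where every pair of control points gives β·‖v_[a] − v_[b]‖ = ‖c_a^w − c_b^w‖ and the least-squares β over the six pairs has a closed form; the sign handling and the reuse of the earlier synthetic example are my own assumptions rather than prescriptions from the lecture.

```python
import numpy as np
from itertools import combinations

def solve_beta_case1(v, ctrl_w):
    """N = 1 case: x = beta * v. Least-squares beta from the 6 pairwise distance
    constraints beta * ||v_[a] - v_[b]|| = ||c_a^w - c_b^w||.
    v: (12,) null-space vector, ctrl_w: (4, 3) world control points."""
    V = v.reshape(4, 3)
    num, den = 0.0, 0.0
    for a, b in combinations(range(4), 2):
        d_cam = np.linalg.norm(V[a] - V[b])               # distance up to the unknown beta
        d_world = np.linalg.norm(ctrl_w[a] - ctrl_w[b])   # known distance
        num += d_cam * d_world
        den += d_cam ** 2
    ctrl_c = (num / den) * V
    # If the recovered points end up behind the camera (negative z), flip the sign.
    if np.mean(ctrl_c[:, 2]) < 0:
        ctrl_c = -ctrl_c
    return ctrl_c

# Continuing the earlier snippets: recover the camera-frame control points and compare.
v = null_basis(M, 1)[:, 0]
ctrl_c = solve_beta_case1(v, C_w)
assert np.allclose(ctrl_c, ctrl_c_true.reshape(4, 3), atol=1e-4)
```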
so now let's look at the case where all the four control points are coplanar, which means that all the four control points lie on a plane. since the four of them are lying on the same plane, any one of the points is dependent on the other three points, which means that we have fewer constraints right now, but we still can solve the equation where we will simply take any three of these four coplanar control points, which are always going to define a plane because any three points define a plane, and we'll use only three of these control points to solve for the camera pose. here, because we are down to three control points where each control point gives three coordinates, three multiplied by three means that we have nine unknowns in the control points. we'll still use the same formulation as before to formulate m, but now m is going to be only 2n by nine, and there's a nine-dimensional vector that we need to solve for as the null space basis solution. the main difference here would be the number of quadratic constraints that we have between the points: just now we said that we are solving for beta1, beta2 and so on using the constraints of the fixed distance between any two control points, and now we are reduced to only three constraints from the six constraints that we had earlier on. what this means is that since we have only three constraints, we can only solve for the first two cases where n equals to one and n equals to two in the same way. now finally, after solving for the control points in the camera reference frame, we can substitute them back into this equation to recover the 3d points defined in the camera frame, and once we know this we will be able to solve for the rotation and translation between these two sets of 3d points. they are actually the same 3d points defined in two different reference frames: let's say the 3d points p i w are all expressed with respect to f w, and then we have another frame which we have solved for using the camera control points, f c, where these points would be defined as p i c. so given these two sets of points we can solve r and t between these two frames using the absolute orientation algorithm. now in summary, we have looked at how to define the perspective-n-point (pnp) problem to solve the camera pose estimation, and in particular we have looked at two cases. the first would be the uncalibrated camera case where k, the intrinsics, is unknown, and we simply solve for the camera projection matrix using n 2d-3d line or point correspondences. then we looked at the case of a calibrated camera where k is known; in this case we looked at the two solutions which make use of three points and four points, and these are the two-stage approaches where we solve for the unknown depths and then make use of the depths to solve for the rotation and translation using the absolute orientation. and then finally we looked at the efficient pnp (epnp) algorithm, which is an o(n) complexity algorithm that solves for the camera pose using a set of control points, four control points in particular, and we saw that the complexity is linear in terms of the number of points, which is much easier to compute. then we also saw the degeneracy cases for the camera pose estimation problem: in particular, we saw that if the points are all collinear, which means that they form a plane with the camera center, then this is a degenerate case, and we also saw that if all the points plus the camera center form a plane then this is also a degenerate case. and that's the end of today's lecture, thank you |
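As a companion to the absolute orientation step used in the summary above (aligning the same 3d points expressed in the world frame and in the camera frame), here is a minimal numpy sketch of one common closed-form solution in the SVD/Kabsch style. The function name and argument layout are illustrative assumptions, and scale is taken to be one since both point sets are metric.

```python
import numpy as np

def absolute_orientation(p_world, p_cam):
    """Closed-form rigid alignment between two 3-D point sets.

    p_world, p_cam : (n, 3) arrays of the same points in the world frame and the camera frame.
    Returns R, t such that p_cam ~ R @ p_world + t.
    """
    cw = p_world.mean(axis=0)                      # centroids
    cc = p_cam.mean(axis=0)
    H = (p_world - cw).T @ (p_cam - cc)            # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # correction term to guarantee a proper rotation (det(R) = +1, no reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cc - R @ cw
    return R, t
```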
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_12_Part_2_Generalized_cameras.txt | so unfortunately the linear 17-point algorithm that we've seen will not work for every configuration of the general camera. we'll look at the degenerate cases under three configurations in particular: the locally central projection, the axial cameras, and the locally central and axial camera configuration. we'll follow the notation of the paper that proposed this analysis of the degeneracy, written by hongdong li, that i have mentioned in the acknowledgement slide earlier on. so suppose that we have an image ray, in the same general camera, which we call l, with a point that we conveniently choose as the camera center, as mentioned earlier on when we defined the plücker line, or plücker ray, that denotes a ray in the general camera. so here we are going to denote this camera center, or any point on the light ray, as v, and this light ray would also have a unit direction which we denote as x. hence we can denote the light ray as the six-vector that we have seen earlier on: x transpose, the unit direction, and a moment vector v cross x, which is any point on the line cross the unit direction of the line. using this representation we can then restate the generalized epipolar geometry, which is what we have defined previously, in this form in the coordinates of x and v that we have defined. so r and t here enter as r and e, where e equals the skew-symmetric matrix of t multiplied by r, and t and r are simply the relative transformation between the two general views. so in the two generalized camera views here i have a pair of corresponding light rays which intersect at a 3d point, and r and t simply represent the relative transformation between these two generalized frames, but now we are rewriting it in terms of x, the unit direction, and the chosen point on the ray which we denote as v. so our goal here is to examine the various structures, in particular the three different configurations of the general camera, under which this set of equations would produce a different set of solutions. we'll identify several degeneracies for the set of equations from this generalized epipolar geometry such that it has a rank that is smaller than what we expect. as we have seen earlier on, given sufficiently many points we can rewrite this generalized epipolar geometry equation into the form a e equals to 0, where e here is an 18 by 1 vector and a here would be n by 18, made up of the known variables from the plücker line coordinates, and the 18 by 1 vector e would be the unknowns: the first nine by one entries would be from the essential matrix e and the next nine by one entries would be from r. so we saw that for the linear 17-point algorithm we make use of a e equals to 0, the homogeneous linear equation, and take the svd of a such that the solution e will be the right singular vector, the last column of v, that corresponds to the least singular value after taking the svd of a. so here we saw that we will end up with a one-parameter family of solutions, provided that, since this is an 18 by 1 vector, for this homogeneous linear equation to have a one-parameter family of solutions the rank of a must be equals to 17.
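Here is a minimal numpy sketch of the linear 17-point solve just described. It assumes the common form of the generalized epipolar constraint, x'ᵀ E x + x'ᵀ R (v × x) + (v' × x')ᵀ R x = 0, with the unknowns stacked column-major as [vec(E); vec(R)]; the function names, the (x, v) ray parameterization, and the exact sign convention are assumptions for illustration rather than the lecture's own notation.

```python
import numpy as np

def ge_constraint_row(x, v, xp, vp):
    """One row of the n x 18 constraint matrix A for a ray correspondence.

    Encodes x'^T E x + x'^T R (v x x) + (v' x x')^T R x = 0 against [vec(E); vec(R)]
    with vec() taken column-major (Fortran order)."""
    e_part = np.kron(x, xp)                                        # multiplies vec(E)
    r_part = np.kron(np.cross(v, x), xp) + np.kron(x, np.cross(vp, xp))  # multiplies vec(R)
    return np.hstack([e_part, r_part])

def linear_17_point(rays1, rays2):
    """rays1, rays2: lists of (x, v) pairs (unit direction, point on ray) in the two
    generalized-camera frames. Returns (E, R_raw) in the non-degenerate case."""
    A = np.array([ge_constraint_row(x, v, xp, vp)
                  for (x, v), (xp, vp) in zip(rays1, rays2)])      # n x 18
    _, _, Vt = np.linalg.svd(A)
    sol = Vt[-1]                       # right singular vector of the least singular value
    E = sol[:9].reshape(3, 3, order="F")
    R_raw = sol[9:].reshape(3, 3, order="F")   # only meaningful when rank(A) = 17
    return E, R_raw
```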
so in the case where degeneracy occurs we can say that the rank of a would drop below 17. this means that we will have a family of solution which can be parameterized by more than one particular parameter we could have two parameters of solutions or more parameters of solutions depending on the severity of the degeneracy over here so in general we will say that if the rank of a4 below 17 then we would have a degenerate case so assuming that there are at least a number of equations that arise from the point correspondences via the generalized epipolar geometry we'll see that the number of solutions that arise from the equation of a e equals to 0 would be 18 minus r that means that the system must a rank of no greater than r over here for example in this particular case where we have 17 point correspondences suppose that this 17 points where r equals to 17 they are all linearly independent so what this means is that we'll have all together 18 minus 17 equals to one parameter family of solution over here and this is the case that we have seen earlier on so in the case where the rank dropped below 17 this means that r here is going to be lesser than 17 which also means that all of these poi correspondences or the r number of point correspondences they are not all linearly independent so we will have lesser than 17 linearly independent constraints over here and as a result we will get 18 minus our parameters of a solution where the rank is no greater than r so in the most general case the camera is simply a set of unconstrained images in general position so in this case here the rank is going to be 17 and from 17 point correspondences a unique solution can be obtained via the linear 17 point algorithm from uh solving the svd and in this case here what it means is that uh suppose that this is view one of the generalized camera so v uh one and this is a view two v two of the generalized camera where the v and v2 v1 and v2 are the reference frame of the generalized camera so here uh suppose that this generalized camera is made out of a multi-camera system where c1 c2 c3 c4 their individual respective camera so in the most general case what it means here is that suppose that this c1 sees a particular 3d point in the space and in the most general case the its correspondence the point that is being seen in the next view the same point here seen by c1 uh in the next view or of the generalized camera would not be from the same camera itself it would not be from c1 this means that this particular 2d point or 3d point here is not track over the generalized camera in the same camera under the multi-camera setting so in this particular case this would be the most general uh scenario where the 17-point correspondences will be linearly independent and there will be a unique solution as we have seen earlier on in the case of locally central camera we will still have the tool view general camera and but in this particular case we can see that the same point the is tracked by the same camera in the multi-camera system setup suppose that this is our generalized camera for view number one and this is the generalized camera for view number two where it consists of a multiple camera setting here all together there are six cameras uh on the general camera so suppose uh we can see an example here suppose that we have a camera c1 over here it's looking at a 3d point suppose we call this x and the same 3d point it's being tracked by the same camera in the second view of the general camera here we can see this happening 
for every other point it's always track over the same camera for example this second point here is tracked by camera number two in the multi multi camera system setup and so on so forth so in this particular case over here since the light rays are represented in the coordinate system attached to the camera rig we can write the correspondence between the points as x i v i its corresponding point would be x there's a missing prime over here x i prime and v i prime where v i and v i prime are the camera center of the respective two views so what this means is that i have the first view and the second view of the general camera and here in this general camera it consists of multiple camera setup where vi over here it's the camera center of the first view and vi prime over here would be the camera center of the exactly the same camera on in the multi-camera setting in the second view over here and x i would be a point that is seen by this particular camera in the first view and x i prime would be the corresponding point that is seen by the same camera in the second view so these two uh light rays will intersect at a certain 3d point that gives rise to x i and x i prime the correspondence so substituting this pair of corresponding points and the camera centers into the general epipola constraint that we have derived earlier on we'll get this equation over here where now the transformation of the second view can be written as a camera center of the first view instead of having a vi prime in this case here so now we will have vi which is coherent since they are the same camera center with respect to the same reference frame so this after the rigid transformation the reference frame of this generalized camera with respect to the camera that we are interested in would not change now we can easily see that e and r it's one solution of the family of solutions that we can obtain here this is because of the generalized epipolar constraint that we have seen earlier on that e and r uh plugging into this equation here would fulfill the constraint to be zero and we can also easily see that zero and identity where the essential matrix equals to zero and the rotation matrix now equals to identity it's also a solution to the generalized epipolar constraint this is because substituting this guy here the essential matrix here to be 0 and the rotation matrix here to be identity we can see that this guy here it cancels out and here since r here becomes an identity we can see that this terms here becomes the sum of an anti-symmetry of the triple product which is shown by this term over here and this term here in linear algebra we know that this is always equals to zero hence zero and identity would fulfill the generalized essential uh constraint and generically the rank is now not lesser than 16 that means that a complete solution to this generalized epipolar constraint will therefore have a two-dimensional linear family of solution given by e and r which is the general solution that we have seen early on so this can be parameterized by one family of solution and as well as the other solutions that we have seen which is zero and identity so we can combine these two sets of equations to give us the two two parameter family of solution which is lambda e and lambda r plus mu and i we can easily see that family of solution over here fulfills the generalized epipolar constraint by plugging it into the generalized epipolar constraint that we have derived earlier on we can see that this can be easily decomposed into the 
addition of two terms the first term here which will see that it's exactly the original generalized epipolar geometry constraint which is always equal to zero and the second term here which is parameterized by mu the second parameter that we have defined earlier on for the solution would also be always equals to zero since this is a result of a triple product hence the addition of these two terms will always be equal to zero that means that the this particular solution that we have defined here it has to fulfill the generalized epipolar constraint an interesting property of this set a two-parameter family of solution is that the ambiguity is contained entirely in the rotation matrix because we can see that the solution here e essential matrix is it basically remains as the essential matrix which is up to a certain scale and but the rotation matrix here would become a parameter two parameter family of solution lambda r plus mu multiplied by identity and this means that the essential we can still determine it uh up to a scale but the rotation matrix will not be able to identify it uniquely anymore so it's also interesting to see under this particular configuration the locally central projection camera we can see that under pure translation this means that if the rotation here becomes to identity we can see that this whole term here would become a triple product because this would be lambda multiplied by identity we can split this up into two terms of identity which means that we can we will get this twice we'll get this whole term here multiply by lambda plus this particular equation over here uh in this case this term here would become this term here and we will get this particular term here to be equals to zero so what it means here is that this triple product will vanish under pure translation this means that t is not equal to zero but r is equals to identity and we can also see that under this circumstance that uh the generalized epipola geometry it actually degenerates to the epipolar geometry of the normal pinhole camera because only this particular term here remains since these two terms here is going to they are all going to vanish because of the triple product uh symmetry and hence in this case we can see this here so in this particular case under pure translation we can conclude that it's never possible to recover the scale of the translation here because it's equivalent to the single pinhole camera epipolar geometry the second configuration that we are going to look at would be the axia camera configuration so this particular general camera is defined as the case where all the light rays they are all going to intersect in a single line this means that there's a single line axis which defines the camera center of the multi-camera system set up one example here would be suppose that this is the rigid body or rigid rig that contains the multi-camera system set up all the individual cameras they're all going to be mounted along a particular axis over here this particular line over here is what we call the axis and uh there are several examples where this could be of a practical interest the first would be the stereo setup that we have seen earlier on so in this case for it to be a general camera this means that the pair of cameras they would not have overlapping fuel view it could be placed very far away where the amount of overlap in the view would be minima or it could even be a pair of cameras set up where one is looking forward and one is looking backward so in this case here a 
practical setup can be found in actually your mobile phone where you can see that generally your mobile phone has two cameras one in front and one looking at the back so because this two cameras they are mounted so close to each other that means that the thickness of your cam mobile phone is actually very small and hence these two cameras here can be treated as an axial camera another case would be a set of central cameras with a collinear center and this is what i have shown earlier on in this case here where we can see that there's a set of central camera a multi-camera system set up with a set of cameras that where all the centers of these cameras sits on the straight line on the exit which we call the axis over here so in this case it would also be a axial camera and in the third case it would be a set of non-central catalytics cameras or fish eye cameras mounted with collinear axis and the first two cases here since they are also central camera with a single point of convergence for every particular camera in the multi-camera setup this would be both a locally central and axia camera setup here's an example of a camera where that it can be realized using a central camera that is looking at a semi-sphere lens that looks something like this or a mirror that looks something like this so when we project when lights are being projected onto this camera they are scattered in different ways where we treat the mirror here as our lens this light rays that are reflected from this particular lens the semi-sphere lens over here they do not converge at the single center of projection and similarly for fish eye camera here's an example of fisheye camera that you can purchase to mount it onto your mobile phone for example so uh in this case here since there's a wide angle fuel of view well we can also treat this cam this light rays not to be converging at a single center of projection i will not go too much into the detail of the two models or of the camera models of the cutter the optrix and the fisheye cameras but here these two cameras can also be generally treated as a non-central camera so now let's assume that the origin of the world coordinates of this two general cameras lies on the axis so what it means is that suppose that i have this general camera where there is a axis and all the cameras are going to lie here and this is the same but of the second view so here i'm going to define the reference frame of this particular general camera on the axis itself hence we may write the center of the cameras which we denote as vi to be equals to alpha i multiplied by w where w is actually the unit vector or along this particular axis with respect to the origin or reference frame and v similarly for v i prime in the second view here these two are the same camera except for they are in two different views and what this means is that the this particular guy over here the axis is still going to be along the direction of w except for now there's a different scale of alpha i prime and in the direction of the axis and by plugging in these two v values the camera center values into the generalized epipola constraint we'll get this particular equation over here instead of being expressed with respect to vi and vi prime now we express it in terms of alpha i and alpha i prime and w over here so generically we can see that this has a rank of 16 the reason is because here in this case here we can work it out where the e and r would generally be still be a solution of this generalized epipolar constraint but we 
will have an additional eq solution over here which is w and w uh transpose for r and 0 for e hence by combining we'll do the same thing we combine these two sets of equations together we'll get this family of two parameters solution for this particular equation over here and i won't go on to prove it uh here but you can verify this by plugging in this equation back into this generalized epipolar constraint equation over here and you will see that uh you still end up with a set of equation that is equivalent to the original generalized epipolar constraint and the other set of the equations which is addition to be consisting of the triple product which is always going to be zero hence the solution here the family of two parameter solutions here is always going to fulfill the generalized epipolar constraints to be equals to zero now uh what's interesting here is that we can also see that in this particular case the ambiguity only lies in the r part of the solution which can be seen here so e here can still be determined up to a certain scale but r here we will have a family of two parameter solution where there's an additional component over here so in contrast or compare this with the previous case of the central uh projection camera where we have the solution to be delta uh lambda e and mu or lambda r plus mu of i so the ambiguity also lies in the rotation matrix similarly in this particular case we can see that the ambiguity also lies in the rotation matrix and it is important to note that the fact that this happens or this particular ambiguity only lies in the rotation matrix but not the essential matrix the reason is because we choose the coordinate system to be on the axis itself so here it means that we chose this particular reference frame to be on the axis hence we can derive this particular form of solution where the ambiguity only lies in the rotation component of the solution instead of the essential matrix it's important to also note that in general if we were to choose a axis or a reference frame that is anywhere not on the axis of the cameras then this ambiguity cannot be so well defined this means that the ambiguity generally spills over to the essential matrix as well but the rank deficiency will still occur so we have also seen in the earlier case that of a locally central camera that there's a degeneracy and we also seen that for axial cameras there are two cases where it can also be defined as both axial camera as well as a locally central camera that's the case where we have either a pair of camera that lies on the axis it could be non overlapping view or view camera or it could be an array of cameras that lies on the the same axis so these are cameras with central projection or locally central camera that lies on a axial camera configuration hence it has both the property and we will see now that in this particular configuration further degeneracy occurs so the condition of the local centrality means that alpha i equals to alpha i prime that means that the scale of both the frames having these two general uh camera frame uh defining a reference frame over here suppose that this is the camera i and this is camera i we can see that even though the view changes alpha i over here and alpha i prime over here that are used to define the position of the camera with respect to the reference frame will not change because essentially these two are the same camera so we are not moving this particular camera is rigidly fixed onto the axis and by plugging this relation alpha i 
equals to alpha i prime back into the generalized epipola constraint we'll get this particular equation uh over here so from the new generalized epipolar constraint that we have obtained from the locally central and axial cameras we can further identify a solution which is zero and the skew symmetric matrix formed by w which is the directional vector of the axis in a generalized camera that we have defined earlier on so we can see that by plugging this back into the generalized epipolar geometry we'll end up with this particular equation over here which is always zero the reason why this is uh always zero is that because we can see that this here and here they are the same but in this particular case here we are taking w cross with i this means that it's going in this direction for example and now we are swapping the position of x and w and do the cross product again this means that we will get a vector that is equal and opposite in the direction of this vector over here and adding both up this means that they will become zero over here which means that this guy over here is one of the solution that we will get from the locally central and axia camera configuration and putting everything back into the previous solutions that we have from the locally central projection camera as well as the axial camera we'll get a whole family of solutions which consists of four parameters over here alpha beta gamma and delta over here so in this case we can see that e and r it's from the original epipola constraint and then we will get zero and identity which is from the locally central configuration and then we will see that there is also the omega from this particular locally central and axial configuration and finally w multiplied by w transpose it's from the axial camera setup so similar to the axia camera configuration the complete solution set is under the assumption that the origin lies on the camera axis so in this particular case we have a general camera and there's also an axis so every one of this multi-camera system that is fixed onto this general camera lies on the camera axis and now we made the assumption that the camera reference frame has to be lying on this particular axis in order for the ambiguity to lie only on the rotation part so in general if we were to fix this particular axis anywhere on the rigid body over here or anywhere in the 3d space then the ambiguity can get spilt over to the essential matrix and this will be much more difficult to handle as we will see later in our lecture so uh once more the e part over here can be determined uniquely up to scale and there's a four dimensional family of solutions here we'll see later that we would not need to decompose e into four different solutions in fact we'll only take the two rotation matrices that we can decompose from the essential matrix to get the solution so having defined the three different configurations of the degeneracies for our generalized camera we'll now look at how to solve the equations under this degeneracies we'll see that this will still remain as a linear algorithm which will allow us to solve for the solution uniquely instead of having a family of uh solutions that is parameterized by two or more parameters that we have seen earlier on so uh we know that for each point correspondence under the generalized epipolar constraint this is the constraint that we will get from one particular point correspondence and we have also seen that if we are given many point correspondences we can stack these equations up into 
the homogeneous linear equation of this form where a here is generally n by 18 and the vector is a 18 by 1 vector where the first nine by one entries would be formed by the unknowns from the essential matrix the last nine by one vector over here would be made out of the three by three entries in the rotation matrix and however we also see that typically under the degenerate cases if we were to apply the standard svd to this particular homogeneous linear equation over here we'll get a family of solutions which is more than one parameter if the rank falls below 17 so in this case because we saw that the solution would correspond to the svd of a in this case it would be let's denote this as u sigma and v transpose and the solution will corresponds to the last row of this v matrix that corresponds to the least singular value in the case where the rank of a equals to 17 but in the case where the rank of a falls below 17 this means that we would be under one of the degenerate cases as we have seen earlier on where the rank could be 16 or lesser for example if the rank of a call equals to 16 then what it means is that we would have to take the last two columns of v here to be the solution and the solution of this vector over here the 18 by 1 vector over here will be parameterized by v 1 plus gamma v 2 for example so where v1 and v2 corresponds to the last two columns of the uh right octagonal matrix of the from the svd of the metric a so this means that we would have a whole family of solutions if we completely ignore the rank deficiency here or worst case is that if we completely ignore the rank deficiency that means that we don't care about the rank efficiency the svd of a still exists this means that this guy over here the factorization into this form over here it still exists but we might be taking the wrong solution assuming that we thought that this a has a rank of 17 which in fact that it is lesser than a rank of 17 so we might mistook the solution to be just the last column of v over here and that would be a totally wrong solution hence uh we will now show the derivation of a solution that completely avoid degeneracy or this ignorance from happening and the observation here would be that the e part of the solution is always unchanged by the ambiguity uh and this is what we have observed in the solutions under the three degenerate cases that we have defined earlier on and we'll make use of this particular property that the e part remains unchanged to derive the linear algorithm under the degeneracy and first we'll rewrite the this equation over here which is the homogeneous linear equation that we seen many times in this lecture now so this is the general essential constraint that we have seen many times and what we aim to achieve by finding the solution would be that we want to minimize this uh generalized uh epipolar constraint such that it's subjected to the norm of the essential matrix to be equal to one hence we are ignoring the norm of the whole vector over here which we have been doing for the generalized linear 17 point algorithm so in that particular case let's write this as the vector as x so we are solving ax equals to zero subjected to the norm of x to be equals to 1. 
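Before turning to the fix described next, a quick numerical diagnostic of the situation above is to inspect the rank of the stacked matrix a and the size of its null space; a small sketch follows, where the tolerance and function name are assumptions.

```python
import numpy as np

def null_space_family(A, tol=1e-9):
    """Return the numerical rank of A (n x 18) and a basis of its null space.

    rank == 17 gives a one-parameter family (unique solution up to scale);
    rank == 16 or lower signals one of the degenerate configurations, where
    taking only the last right singular vector would be wrong."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * s[0]))
    basis = Vt[rank:]            # each row is one 18-vector spanning the solution family
    return rank, basis
```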
if we simply take the svd of a and solve for the solution from the svd of a, this means that we are doing what we did before, but now we are not going to do this anymore, because we noted that under the degeneracy the only thing that remains consistent would be the solution for the essential matrix. hence we can ignore the constraint from the rotation matrix and simply solve this minimization equation subject to the fact that the norm of the essential matrix must be 1. so we can rewrite the generalized epipolar constraints into this particular form, where we have the generalized epipolar constraint a multiplied by a vector whose first nine by one entries are the vector of e and whose second part is the vector of r, equals to zero. so we can split a into two parts, this means that i can write a as a_e and a_r, where a_e is the first n by nine block of a and a_r is the next n by nine block of a, and these are multiplied by the vector of e and the vector of r respectively. so we can rewrite this into a summation of a_e multiplied by the vector of e plus a_r multiplied by the vector of r, and this would still be equals to 0. now we can rearrange this equation into a_e times the vector of e equals to minus a_r times the vector of r, simply by moving this term onto the right side of the equation. so we can now solve for the vector of r by simply taking the pseudo-inverse of a_r, and that would be equals to minus a_r pseudo-inverse, denoted by a superscript plus, multiplied by a_e and the vector of e. plugging this back into the equation, we can eliminate the vector of r and rewrite it into the form a_e times the vector of e plus a_r multiplied by minus a_r pseudo-inverse a_e times the vector of e equals to zero. so we can factorize out a_e multiplied by the vector of e, and the known matrix multiplying it would simply be the identity minus a_r multiplied by a_r pseudo-inverse. so we can rewrite the equation into this form, which as a result still ends up as a homogeneous linear equation, because this component made out of a_r, a_r pseudo-inverse and a_e is actually a known matrix, which now simply becomes an n by nine system, where the vector of e would be a nine by one vector that consists of the three by three entries of our essential matrix. since this n by nine matrix is known, this whole equation would still be a homogeneous linear equation that we can easily solve by taking the svd of this whole n by nine block, and the last column of the v matrix that corresponds to the least singular value would be the solution for this homogeneous linear equation, subject to the fact that the norm of the vector of e is now equals to 1.
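The elimination of the rotation block just derived is short enough to show directly; here is a sketch in numpy, assuming the same column layout [vec(E) | vec(R)] of a as in the earlier code and the same column-major vec() convention.

```python
import numpy as np

def solve_E_degeneracy_safe(A):
    """Solve only for the essential matrix, eliminating the ambiguous rotation block.

    A : (n, 18) stacked generalized-epipolar constraint matrix, columns [vec(E) | vec(R)].
    Implements (I - A_R A_R^+) A_E e = 0 and returns E with ||vec(E)|| = 1."""
    A_E, A_R = A[:, :9], A[:, 9:]
    B = A_E - A_R @ np.linalg.pinv(A_R) @ A_E     # (I - A_R A_R^+) A_E, an n x 9 known matrix
    _, _, Vt = np.linalg.svd(B)
    e = Vt[-1]                                    # least-singular-value right singular vector
    return e.reshape(3, 3, order="F")
```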
hence we will be able to solve this without any ambiguity on the essential matrix. so as i mentioned earlier, solving this particular equation with the standard svd method will give us a unique solution for the essential matrix, and we can then decompose the essential matrix according to the method that we have seen in the epipolar geometry lecture to get a pair of rotation matrices which we denote as r and r prime. now instead of further using this r and r prime and the essential matrix to get the pair of translation vectors, we're going to make use of the generalized epipolar geometry that we have seen earlier on to solve for the translation vector. we'll do this by plugging each rotation matrix r into the generalized epipolar constraint individually, and then we will solve for the translation vector. note that after we have plugged r into this generalized epipolar constraint, the equation would be linear with respect to the translation vector, and we can form this into an equation a t equals to 0, where t here would be a vector of t x, t y, t z and 1 transpose, and then we can solve for this t without any ambiguity by enforcing the constraint that the last element equals to one. this can be simply solved with the svd of a, which would be equals to u sigma v transpose; we take the last column of v that corresponds to the least singular value entry in sigma, and since we have a one-parameter family of solutions, which means that we have lambda times v i, the last column of this particular matrix, we'll be able to solve for the parameter lambda by simply enforcing that the last element of this t should be equals to one. hence as a result we'll get a unique solution for both r and t. since t here is solved by using this equation, where we have mentioned earlier on that x and x prime are from the plücker lines of the light rays expressed in terms of the translations from the extrinsics with absolute metric scale, this also means that the t vector that is being solved from this equation would also contain an absolute metric scale; compare this to the decomposition of the translation vector from the essential matrix, where we will not be able to recover the absolute scale. so once we have the solution, because we have r and r prime from the decomposition of our essential matrix, this means that we get two possible solutions, and for each one of r and r prime we will plug it into the generalized epipolar constraints to solve for t. this means that we have a pair of solutions, r and t, and r prime and t prime, and we will check for the solution such that the 3d points are lying in front of both cameras. so for example, if r and t gives us the configuration such that all the 3d points lie in front of the camera, then we will choose this as the correct solution. as i have mentioned earlier on, we do not take the translation vector from the decomposition of the essential matrix because of the unknown scale, and in the case of the generalized epipolar geometry we'll be able to get the absolute scale because of the knowledge of the metric scale in the extrinsics of the translation |
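For the linear-in-t step just described, a short numpy sketch is given below. It relies on the identity x'ᵀ [t]× (R x) = ((R x) × x')ᵀ t applied to the same generalized epipolar constraint as in the earlier code; the function name, the (x, v) ray layout, and the sign convention are assumptions, and the cheirality check that picks between (r, t) and (r', t') is only noted in a comment.

```python
import numpy as np

def solve_t_given_R(rays1, rays2, R):
    """Given a candidate rotation R from decomposing E, solve linearly for the metric t.

    rays1, rays2 : lists of (x, v) pairs (unit direction, point on ray) in the two frames.
    Builds A @ [t; 1] = 0 and normalizes so that the last element equals one."""
    rows = []
    for (x, v), (xp, vp) in zip(rays1, rays2):
        a = np.cross(R @ x, xp)                                   # coefficient of t
        c = xp @ (R @ np.cross(v, x)) + np.cross(vp, xp) @ (R @ x)  # t-independent part
        rows.append(np.hstack([a, c]))
    A = np.array(rows)                                            # n x 4
    _, _, Vt = np.linalg.svd(A)
    sol = Vt[-1]                                                  # one-parameter family lambda * sol
    t = sol[:3] / sol[3]                                          # enforce last element = 1
    # run this for each of the two rotations and keep the (R, t) that places the
    # triangulated points in front of both generalized cameras.
    return t
```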
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_6_Part_2_Single_view_metrology.txt | so now let's look at two applications of the camera under pure rotation the first application that we are going to look at would be to generate what we call the synthetic views suppose that we are given a source image of a corridor and we know that this corridor contains of multiple planes for example the floor plane that we can see over here so what we the objective of generating synthetic view would be to map this particular source image into a target image where the plane of the interest for example this ground plane over here would become frontal parallel to the target image plane and we know that since the world scene that we are interested in it consists of a plane and this plane is being projected onto the image and we have seen that the projection is equivalent to a homography that means that the warping of this we can be done with a planar homography h which is equivalent to pure rotation similarly we can select another plane of interest for example this wall plane over here we can select this wall plane over here and then map it and warp it such that this wall plane becomes frontal parallel to the target image under uh another homography which is equivalent to a pure rotation as we have what we have seen earlier on let's look at the algorithm to achieve this the first step of this algorithm is simply to compute given a source image suppose that this is my source image and this is my target image that i want to unwant so given a source image i would have to first identify the quadrilateral shape of the plane in this particular source image and from here i will be able to identify the four vertices of this uh qualitative shape that is in the source image and the objective here is that i want to find the homography such that i can map this particular quality shape from the source image into a perfect rectangle of a certain predefined expect ratio so this means that i need to define the aspect ratio the desired aspect ratio of this whopping we can see that we have four point correspondences from the source image with the target image and this is related by the homography and we have seen in earlier on in lecture three that every point let's call the target one point here on the target image as x prime and the source image at the corresponding point as x we have seen that x prime is essentially equals to h multiplied by x and with four such correspondences we can solve for h using the linear method that we have seen in lecture three once we have computed the h what we can do here is that since we have the source image and we want to warp it into the target image we know that the relation between this is a homography just now we denote this as x and x prime every pixel here and every pixel here as x so we know that uh pretty much x prime equals to h multiplied by x and what we can do since h is already computed in the first step over here what we can do to transform every pixel from the source to get the target image will be simply to compute the lookup table by simply doing this transformation here inverse of x prime so every pixel here i have the pixel coordinate i can simply input it into x prime and then with the known h homography i can compute the corresponding location in the source image and by doing this i will be able to look out for the corresponding rgb value in the source image and fill it up in the target image and hence i'll be able to walk the source 
image into the target image as given by the homography the next application that we're going to look at from pure camera rotation would be the example of planar panoramic mosaic king let's suppose that we are given uh several images that is taken by a camera that undergoes pure rotation so it's only going undergoing pure rotation and we know that uh earlier on from uh what we have seen that this images every point correspondences of these images any pair of these images they are actually related by a homography so suppose at this point and this point because of a fixed camera center and it only under the images only undergoes a pure rotation these two points suppose i call this x prime and x over here so this two point x prime is in fact equals to h multiplied by x which is a planar homography and we'll see how from these three images that i have taken of the scene i can create a panoramic image that means i'm going to stitch these three images out to create a panoramic view of the scene and here's an example suppose that i give you all together eight images that is taken under pure rotation i'll be able to make one of the image for example uh say this this particular image here as my reference image so if this is my reference image i can pretty much compute the homography of every other image with respect to this image and make use of this image as a reference frame and simply walk all the other images uh into the reference frame to get the panoramic image that looks like this so let's see how this can be done as i have mentioned earlier on we can choose any one of the images suppose that we are given uh five images for example so i1 i2 i3 i4 as well as i5 suppose that i choose i3 as my reference image what i can do here is that i can detect four point correspondences from i2 and to i3 and then i can compute the homography that brings me from 2 to 3 for example i'll be able to projectively walk this image 2 into the reference frame i can actually create a larger image but i'm going to fix this the reference of i3 in the as the reference frame on this particular image after i compute the homography to walk i2 into the reference frame of i3 i will be able to check for every pixel here for example because this undergoes rotation so it will be the the image the image scene that is seen in this image would be outside the field of view of i3 so but what i can do here is that i can actually compute every pixel here via h2 the homography of h2 h3 and then i will be able to look up from every pixel here suppose i call this pixel here h prime and i know let's say h prime is related to x uh so suppose that x prime is related to h and x via this relation here that we have seen let's say the points on i2 is denoted by x and the whopping of i2 onto the reference frame is denoted by x prime for every pixel of x prime over here i'll be able to compute the corresponding location on x on image two and then i'll be able to create a lookup table to look up every rgb value here to fill in to this reference frame so i can do this pretty much for all the images let's say in this case i'm going to compute h4 that is relating h4 and h3 and then do the same step and walk the all the pixels on i4 onto the reference image that is shown here for images that are further away from i3 for example i5 if there is still an overlapping field of view i can pretty much take four point correspondence and compute the homography from five to three but what if in in the case where in the case where this direct correspondence from 
i3 to i5 is not possible uh this means that this this do not share overlapping field of view anymore you can just imagine that you are rotating the camera and i3 might be looking in front of the camera but i5 might be looking at the side of the camera where the views in the two images might not share any overlapping scene so in this particular case the computation of h the homography that relates the fifth image to the third image would not be possible anymore but what we can do here is that we can simply compute the homography using any four point correspondence between image four and five this will be bringing an image point from the fifth image onto the fourth image and then by once i have this homography that's being computed i can easily compute since homography is a linear mapping so what this means is that the linear relation exists so this means that i can easily compute uh the homography of five to three and that this this would be equals to simply the homography of four to three multiplied by the homography of five to four so this linear relation uh works here and what it simply means is that if i have a point or in image five that i call x prime for example and uh image pointer is an i3 x the image point in x which is in i3 will be simply given by the homography that brings the point x prime to in image 5 to x3 and this can be simply written in the linear chain of transformation which is given by this computation over here and we simply have to do this for every pair of images that is referencing to i3 for example i2 to i3 i1 to i3 and i4 to i3 and i5 to i3 so once we have they all computed all the homographies that relates every other image to the reference image we can pretty much compute the lookup table and such that we can map every pixel from other images onto the reference frame in order to get this particular panoramic image over here now let's move on to look at what does the knowledge of a camera or calibration gives us so what this means is that what this is simply referring to the intrinsic value the knowledge of the intrinsic value which we have saw that this three by three k matrix so if this is known that there are several things that can be deduced directly from this known intrinsic value and on the points image points that is identified on a single image so suppose that we denote let's let us denote the points on the array that is given by this inhomogeneous coordinate of a 3d point so suppose that this is my 3d point and this is the camera center so this is the light ray that intersects the image plane on this 3d point suppose that i denote the direction of this light ray as lambda d or the what this simply means is that this is the inhomogeneous coordinate of this uh this so this is equals to x y z and one so what i'm doing here is that i'm taking the first three value and denoting denoting it as a vector of d and uh assigning it with lambda this simply means that i turn this into a whole family of points that is lying on this light ray so now let's denote this light ray a whole family of points that's lying on the light ray as x tilde and we know that from what we have seen earlier on that this uh this whole family of points they are all going to be projected onto the image at this particular point which you call small x and this is simply given by uh the equation that we have looked at many times uh it goes to the projection matrix multiplied by the 3d point x and which can be rewritten as uh into the intrinsics and extrinsic so here the extrinsic is 
identity this means that we are treating the camera at the canonical location where the camera coordinate aligns with the world frame and then we are going to multiply by the homogeneous coordinates of the this light ray or this particular point that is denoted by lambda d transpose and one so this is essentially the ray that joins the camera center to the image coordinate as well as the 3d point because there's a zero here so multiplying this here simply gives us the intrinsic value of the three by three intrinsic multiplied by the three by one uh directional vector of the 3d point uh what this means is that all the points in this direction are going to be projected onto the same image point which you denote by x so conversely uh if we were to take the inverse of this guy since we know that small x equals to k multiplied by d now and k here unlike the camera projection matrix is actually an upper triangular matrix a upper triangular 3x3 matrix that is invertible so what this simply means is that we can do the reverse we can invert this guy here to get the direction of the light ray and this would be simply equals to k inverse of x over here and what this means is that just from the knowledge of the camera intrinsics and given a 2d coordinate point on the image we will be able to express the light ray the mathematical equation of the light ray as well as we'll also be able to say for example the converse so we if we know the direction of this guy we'll also be able to find the projection onto the particular image point a further application of this is we can actually use this knowledge to find the true angle between two points in the world as illustrated here so suppose that in the real world that i have a point which i call x1 and i have another point which i call x2 over here and what i want to do here is that given a camera i want to find the angle that is defined by the light ray the join is x1 to c and light ray that joins x2 to c and we know that we can actually observe these two 3d points on the image which we denote by small x1 and small x2 so from what we have seen earlier on that the direction d1 is simply given by the inverse of the camera multiplied by x1 and the direction of d2 is simply given also by the inverse of the camera intrinsic multiplied by x2 over here the angle would be simply the dot product between the two directional vector which is given by this guy over here what we have here it will be d1 transpose d2 which is the dot product and this is simply equal to the magnitude of d1 multiplied by the magnitude of d2 multiplied by the cosine of the angle between the two light ray and this by moving the magnitude across we can see that the cosine of the angle is simply given by the dot product of the two directions divided by the magnitude of the two directions which is given by this guy over here if we were to substitute the correction factor that we have found earlier on with respect to the camera intrinsics and the image point back into this particular cosine relation over here this essentially gives us this particular equation over here in terms of the image point and the camera intrinsics and we can further simplify this expression into this particular expression here what is this simply means is that if a camera is calibrated we can actually use it to measure the angle between any two corresponding image point and this is equivalent to the angle between any two 3d points in the real scene that's what we have seen earlier on so as a result we can also think of it 
this way that a calibrated camera is actually a directional sensor it acts like a 2d protractor that allows us to measure the angle between any two points x1 and x2 we can also further use the knowledge of the unknown camera matrix to define the normal direction of a plane that results from a back projection of a 2d line the normal direction of this plane is simply given by the transpose of the camera intrinsics multiplied by l and uh let's look at the proof on why is this true suppose that we have a point uh on that we call x on the 2d line over here and we know that this particular point it backs projects to a light ray which is given by this guy over here from the equation that we have seen earlier on using the known camera intrinsic value and we also know that this particular direction is going to be octagonal to the normal the back projected plane of the line and we know that this normal vector is octagon to this particular plane that is created by the back projection of the line and what it simply also means is that the ray d over here is going to be octagonal to this particular normal direction and hence the dot product of the light ray and the normal direction of the plane that is created by the back projection of the line is going to be equals to zero so we can substitute this guy over here in terms of the intrinsic value back into this dot product defined earlier on and this is essentially the equation that we get due to the point and line relation we can also see that uh here it simply means that x here is lying on a line which is l that is defined by k inverse transpose of n and that's why the line equation can be rewritten as a k inverse transpose of n whether we can see further that the normal direction here is simply by bringing this k over or to the left hand side and that would be equal to k transpose of l which is what we have claimed earlier on and this shows the proof so another thing that we can obtain from a known camera intrinsic value is what we call the image of our absolute conic on a plane at infinity there's a conic and this particular conic is called the absolute konig so by knowing the camera intrinsic value what we're claiming here now is that we'll be able to define the projection the absolute conics onto the image plane which we call the image of absolute conics that is we'll see that it's denoted by omega over here now let's look at why is this uh true suppose that we denote this plane at infinity with pi uh infinity and all the points uh it can be written as consisting of a set of points a set of infinite ideal points that is a bit lying on this plane that we denote by x infinity over here and this is simply a collection of all the direction vector on the plane at infinity so the last coordinate over here must be zero and now suppose that this set of ideal points that's lying on the plane at infinity it's bring image onto a image using a general camera p over here which is denoted by uh kr and multiply by i minus c tilde so this is the intrinsic and this is the extrinsic of the camera is a general camera because the extrinsic value here is no longer identity so suppose we apply this particular projection matrix onto the set of infinite points that lies on the plane at infinity we will take p multiplied by x infinity or substituting all the equations into this expression over here this is what we will get and we will see that uh the end result would be simply k multiplied by r times d over here and what's interesting here is that we can see that the 
projection of the ideal points at infinity is actually independent of the camera center or the camera translation in the xtreme 6. this is only dependent on the direction of the ideal point as well as the intrinsics and the rotation of the camera and what's even more interesting here is that since we get this relation of x equals to k r multiplied by d which is simply a three by one vector and three by one vector here which is simply a p2 projection onto the same space itself hence we can rewrite k r over here k multiplied by r over here as a homography so what this also means is that the set of ideal points on the plane of infinity maps onto the image plane via a relation of homography so this shouldn't comes as a surprise result because we all know that homography is a general relation that relates to planes so since x infinity are all the points that lies on the plane at infinity and this is a plane and uh what this and what we are going to do is that we are going to map this points at infinity uh onto an image plane which is also a plane so the relation between these two the image plane and the plane at infinity must be a homography and we have seen earlier on that this relation is independent on the camera center it's only dependent on the intrinsics as well as the orientation of the camera so now suppose that we we have the absolute conics that lies on the plane and infinity so suppose that this is my absolute conics that lies at length infinity what we want to find here is that we want to find the relation of this absolute conics when it's being mapped onto a image plane and the this should also be a cloning since it's a 2d cloning on a plane mapping onto another entity on the image plane which should also be a 2d uh coding and we'll see that this particular image of the absolute chronic is going to be defined by omega over here which is simply k transpose k multiplied by k transpose inverse and that works out to be k inverse transpose multiplied by k so essentially what this means is that if we know the camera intrinsics we will be able to find the image coding the interesting thing about this image of a cloning is that it consists of only imaginary points and there's no real points at all that this means that we cannot physically observe this image of the absolute conic on any image nonetheless we'll see some practical usage of this for example a hint here would be it should become pretty obvious that since the absolute conics is defined by only the camera intrinsics what it simply means is that if there is a way to find the image of the absolute conic then we will pretty much be able to find the intrinsics of the camera and this simply means that the image of the absolute conics can be used to calibrate the camera before we look at this let's prove the relation on why is it that omega over here the image of absolute conic is equals to k multiplied by k transpose in inverse of the whole thing so we know that under point homography or there's a missing prime over here under the point homography or one image x here it's going to be mapped onto another image which we denote by x prime via a homography and that relation is given by x h x equals to x prime which is seen in this equation over here if this particular relation that maps the x into x prime via homography holds true then a conic on the image or chronic on the image is also going to map onto a conic on the other image via this particular relation over here so i'm not going to prove this particular relation because we have 
we are not going to prove that conic transformation rule here, because we have already seen it in lecture 1; please refer to lecture 1 for the proof. what it means is this: if we have a conic on the plane at infinity, and this conic is none other than the absolute conic omega_infinity, whose matrix is simply the identity (as we saw in the previous lecture), then it is also mapped onto the image plane via the same rule. since the absolute conic equals the identity and the homography relating the two planes is simply H = K R, as defined earlier, we can substitute this into the conic transformation: the identity plays the role of C and (K R)^{-1} plays the role of H^{-1}. multiplying this out gives the mapping of the absolute conic onto the image, which we denote omega, and because the conic is the identity the expression simplifies further: the rotation terms cancel out to the identity, so what we are left with is omega = K^{-T} K^{-1}, which is exactly the image of the absolute conic stated earlier. a few remarks on the image of the absolute conic. first, it depends only on the internal (intrinsic) parameters K of the camera matrix, and not on the camera orientation or position; we will see later that we can use knowledge of omega to calibrate the camera, i.e. to find the unknown intrinsics. the second remark is that the angle between two rays, which we saw earlier, can also be expressed with respect to the image of the absolute conic as cos(theta) = (x1^T omega x2) / ( sqrt(x1^T omega x1) sqrt(x2^T omega x2) ), and this relation remains unchanged under projective transformation. it holds because the two directions d1 and d2 project onto the image as x1 and x2: previously we saw that the numerator d1^T d2 can be rewritten using d1 = K^{-1} x1 and d2 = K^{-1} x2, which gives x1^T K^{-T} K^{-1} x2, and since we have now defined the image of the absolute conic this term is indeed x1^T omega x2, giving the relation above. we can further prove that this expression remains unchanged under a projective transformation H of the image. let's just look at the numerator (the denominator follows the same proof). starting from x1^T omega x2, under a general projective transformation that maps a point x to x' = H x, we have x1'^T = x1^T H^T, the conic transforms as omega' = H^{-T} omega H^{-1}, and x2' = H x2.
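as a numerical illustration of this second remark, here is a minimal numpy sketch; K, the two image points, and the projective transformation H are all made-up values.

import numpy as np

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])   # hypothetical intrinsics
omega = np.linalg.inv(K @ K.T)

x1 = np.array([100., 200., 1.])            # two arbitrary image points
x2 = np.array([400., 50., 1.])

def cos_angle(a, b, w):
    return (a @ w @ b) / np.sqrt((a @ w @ a) * (b @ w @ b))

# the same angle computed from the back-projected ray directions d = K^{-1} x
d1, d2 = np.linalg.inv(K) @ x1, np.linalg.inv(K) @ x2
print(np.isclose(cos_angle(x1, x2, omega),
                 d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))))

# invariance: transform the points and the conic by a random projective H
H = np.random.default_rng(0).normal(size=(3, 3))
omega_p = np.linalg.inv(H).T @ omega @ np.linalg.inv(H)
print(np.isclose(cos_angle(H @ x1, H @ x2, omega_p), cos_angle(x1, x2, omega)))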
putting these together, all the H terms of the projective transformation cancel out and we are left with the original expression; hence the angle between any two rays defined via the image of the absolute conic is invariant to any projective transformation of the image. a third remark on the image of the absolute conic follows directly from remark two: when the rays through two points x1 and x2 are orthogonal, we have cos(90 degrees) = 0, so the numerator vanishes and the relation becomes x1^T omega x2 = 0. another remark is that we can also define the dual image of the absolute conic. as we saw in lecture 1, the dual of a conic, C*, is simply the inverse of the conic when the conic is full rank, i.e. non-degenerate. here the image of the absolute conic is always full rank, because it is defined by the intrinsic matrix of the camera, which is always full rank; that means the dual of the image of the absolute conic always exists, and we obtain its expression by taking the inverse: omega* = omega^{-1} = K K^T. this directly follows the definition of the dual of a conic: the original conic is a point conic, defined by points, while its dual is a conic defined by the set of lines that envelope it, although in this case it is a special conic containing no real points, which again means we cannot observe it physically in any image. the dual image of the absolute conic omega* is also equivalent to the image of the dual absolute quadric lying on the plane at infinity, which we denote Q_infinity*; this is simply the direct result of projecting that quadric in 3d space by the projection matrix we saw earlier. once the image of the absolute conic omega, or equivalently its dual omega*, is identified in an image, the intrinsic calibration matrix can be uniquely recovered via a Cholesky decomposition: because omega* is simply K K^T, we can apply a Cholesky factorisation to omega* to recover the upper-triangular matrix K.
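as a minimal sketch of this recovery step: np.linalg.cholesky returns a lower-triangular factor, while K is upper triangular, so one common trick is to factor the row/column-reversed matrix and flip back; the ground-truth K below is a made-up value used only to fabricate omega*.

import numpy as np

K_true = np.array([[800., 2., 320.],
                   [0., 780., 240.],
                   [0., 0., 1.]])           # hypothetical intrinsics (upper triangular)
omega_star = K_true @ K_true.T              # dual image of the absolute conic

P = np.fliplr(np.eye(3))                    # exchange (reversal) matrix
L = np.linalg.cholesky(P @ omega_star @ P)  # lower-triangular factor of the flipped matrix
K = P @ L @ P                               # upper-triangular factor of omega*
K = K / K[2, 2]                             # fix the overall scale so K[2,2] = 1
print(np.allclose(K, K_true))               # True (up to numerical precision)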
next, the imaged circular points lie on omega at the points where the vanishing line of a plane pi intersects omega. this is because at the plane at infinity we have the absolute conic omega_infinity, and the line at infinity of the plane pi, which we denote l_infinity, also lies on the plane at infinity; the two points where this line intersects the absolute conic are the circular points. the projection of the absolute conic onto the image is omega, and the line at infinity projects onto the image as the vanishing line. it is important to note that although we can observe the projection of the line at infinity (the vanishing line) in the image, we cannot observe the projection of the absolute conic or of the circular points: the absolute conic and the circular points are complex, so their projections onto the image cannot be observed physically, but mathematically they do exist. mathematically we can express omega as (K K^T)^{-1}, and the intersection of the vanishing line with omega defines the imaged circular points. we also saw in lecture 3 that the plane pi intersects pi_infinity in a line, and this is the line that intersects omega_infinity, the absolute conic; this is the line at infinity of pi, whose projection onto the image is the vanishing line, so we can take it that the plane pi intersects the plane at infinity at the line at infinity l_infinity. now let's look at how the knowledge of the image of the absolute conic can be used to design a simple calibration device to find the intrinsics of a camera, and it turns out this can be pretty simple. we can easily design a calibration device from three squares, i.e. three planes on which we print a black square with a white border and paste it onto the plane. the important thing is that we must be able to observe the four corner points of each square in the image in order for the calibration to work; by simply having these three planes we can use them as a calibration device to compute K, and we'll see how it's done. one interesting point is that the planes need not be in any particular configuration, for example they need not be orthogonal to one another, but it is important that they are not parallel: if they are placed in a parallel fashion we have simply defeated the purpose, because parallel planes form one larger plane and we would no longer have three independent planes to provide the required constraints. so the planes need not be in any other special configuration, except that we must avoid placing them parallel to one another |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_10_Part_4_StructurefromMotion_SfM_and_bundle_adjustment.txt | so if we were to naively use the Levenberg-Marquardt algorithm, we would quickly find ourselves running into cubic complexity at every iteration, since we have to solve the normal equation J^T J delta = -J^T epsilon. we have seen that this is equivalent to an inhomogeneous linear equation A x = b, and when we solve such an equation for x (which here is delta), we have to invert the A matrix to obtain x = A^{-1} b. this incurs a cubic complexity in N, and N here can be very large: it is on the order of the total number of camera parameters over all cameras, i.e. 12 multiplied by the total number of cameras in our 3d reconstruction, plus the total number of 3d point parameters, i.e. the three coordinates x, y, z multiplied by the total number of 3d points in the scene, which in a 3d reconstruction problem can easily reach the order of millions of points. fortunately, we can exploit the sparse block structure of this A matrix, which is the J^T J matrix, to mitigate the computational complexity. more specifically, we first partition the set of parameters p, an M-dimensional parameter vector p = (p1, p2, ..., pM), into two sets of parameter vectors which we call a and b, and rewrite p as p = (a^T, b^T)^T. in the context of our bundle adjustment problem, a represents all the camera parameters and b represents all the parameters of the 3d points, so we partition the parameters we wish to optimise over into these two sets. now the jacobian J, defined as J = d x_hat / d p, where x_hat is the nonlinear function we optimise (the reprojection function in the context of bundle adjustment), takes the block form J = [A B], where A and B are respectively the partial derivatives of x_hat with respect to the first partition a and the second partition b. after computing the jacobian in this partitioned way, we can put it into the normal equation to compute J^T J, and we will see that it has a very nice structure: delta also has a partitioned structure, with a first part being the increment delta_a for a and a second part being the increment delta_b for b, and similarly the right-hand side of the normal equation, -J^T epsilon, splits into a first part that depends only on the parameters a and a second part that depends only on the parameters b. we can then rewrite the J^T J block in a form where the off-diagonal blocks consist of the same matrix, just transposed.
we write this off-diagonal block as W and the lower-left block as W^T, the top-left diagonal block as U* and the bottom-right diagonal block as V*. we further observe that if we multiply both sides of the normal equation on the left by the block matrix [[I, -W V*^{-1}], [0, I]], we obtain an equation in which the top-right block becomes zero, because the extra term acts as a subtraction that eliminates W in that block. once we have arrived at this equation with a zero top-right block, we can use it to solve for delta_a: multiplying the first row into the column of unknowns, delta_b is eliminated, and we obtain (U* - W V*^{-1} W^T) delta_a = epsilon_a - W V*^{-1} epsilon_b, whose only unknowns are the increments of the a parameters. this is known as the Schur complement, and it lets us solve for just the block of parameters we are interested in. this equation is again an inhomogeneous linear system A x = b, where the right-hand side block is b, the left-hand block is A, and delta_a is x; compared with the original normal equation, we now have a much smaller unknown to solve for, since the original unknown consisted of both delta_a and delta_b together, which means J^T J in its original form is much larger than the system we solve here. another advantage is that this reduced block is much sparser, and it can be solved efficiently using several linear-algebra factorisation techniques. once we have found delta_a, we can subsequently solve for delta_b by back-substitution using the second row of the same equation: W^T delta_a + V* delta_b = epsilon_b, and since delta_a is now known, this is one equation in the unknown delta_b, which is again an inhomogeneous linear system A x = b that we can solve efficiently. once we have computed the update vectors delta_a and delta_b, we put them back into the update equation for p: this is simply p_{i+1} = p_i + delta, where delta is the solution of the normal equation; since we split delta into delta_a and delta_b, we add them back into the respective parts a and b to get the new vector p, and we do this iteratively until convergence, following the same rule for increasing or decreasing lambda as in the naive Levenberg-Marquardt algorithm we saw earlier.
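here is a minimal dense numpy sketch of this elimination and back-substitution step; the block sizes and the toy system are made-up values, and in a real bundle adjustment V* would be block-diagonal and inverted block by block rather than with a dense inverse.

import numpy as np

def solve_schur(U, W, V, eps_a, eps_b):
    # normal equation:  [U   W ] [da]   [eps_a]
    #                   [W^T V ] [db] = [eps_b]
    # eliminate db via the Schur complement, then back-substitute for db
    V_inv = np.linalg.inv(V)
    S = U - W @ V_inv @ W.T                        # Schur complement of V
    da = np.linalg.solve(S, eps_a - W @ V_inv @ eps_b)
    db = np.linalg.solve(V, eps_b - W.T @ da)      # back-substitution
    return da, db

# toy check against solving the full system directly
rng = np.random.default_rng(0)
J = rng.normal(size=(40, 15))                      # 6 "camera" + 9 "point" parameters (made up)
H = J.T @ J + np.eye(15)                           # augmented Hessian (lambda = 1)
g = rng.normal(size=15)
U, W, V = H[:6, :6], H[:6, 6:], H[6:, 6:]
da, db = solve_schur(U, W, V, g[:6], g[6:])
print(np.allclose(np.concatenate([da, db]), np.linalg.solve(H, g)))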
here is a table that summarises the sparse Levenberg-Marquardt algorithm I have just described. now we'll look at the application of the sparse Levenberg-Marquardt algorithm to multiple-image bundle adjustment; in particular, we take advantage of the lack of interaction between the parameters of the different cameras to obtain the sparse structure in our normal equation. we define the measurement data as capital X, where each entry represents a set of observations on the images. we can think of it this way: each X_i is a 3d point in the scene, and it comes with a set of corresponding 2d points on the images; for example x_i1 means X_i is observed in image number 1, and x_i2 is the correspondence in the second image. note that these indices need not be consecutive, because we allow for missing data: it is not always possible to observe the same point in all image frames, so it could be image 1, image 2, and then jump to image 5, for example, giving x_i1, x_i2, x_i5 and so on up to x_im, meaning the 3d point X_i is seen in m images. we denote by x_ij the 2d image point in image j that corresponds to the reconstructed 3d point X_i. as mentioned earlier, we denote the camera parameters by a = (a_1, a_2, ..., a_m), where there are m images in our bundle adjustment and each a_j represents the j-th camera; this could be a 12-by-1 vector containing all the parameters of the projection matrix. now suppose we are given a reprojected point x_hat_ij: we know it is the result of reprojecting the 3d point X_i into the j-th camera, so it is affected by the camera parameters a_k only if k = j. in other words, the partial derivative of x_hat_ij with respect to a_j is in general non-zero, while for k not equal to j the 3d point is not reprojected by that camera, so the partial derivative of x_hat_ij with respect to a_k is zero. in a similar way, the partial derivative of the reprojection with respect to a 3d point is non-zero only for the point that actually causes it: x_hat_ij depends on the point parameters b_k only if k = i, so the partial derivative with respect to b_i is in general non-zero, while for k not equal to i the reprojection x_hat_ij is independent of b_k and the partial derivative is zero.
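as a small sketch of the sparsity pattern this visibility structure induces, assuming 12 parameters per camera and 3 per point; the observation list below is made up purely for illustration.

import numpy as np

observations = [(0, 0), (0, 1), (1, 0), (1, 1), (1, 3), (2, 2), (2, 3)]   # (point i, camera j)
n_points, n_cams = 3, 4
cam_block, pt_block = 12, 3                 # parameters per camera / per point (assumed)

rows = 2 * len(observations)                # each observation contributes a (u, v) residual pair
mask = np.zeros((rows, n_cams * cam_block + n_points * pt_block), dtype=bool)
for r, (i, j) in enumerate(observations):
    mask[2*r:2*r+2, j*cam_block:(j+1)*cam_block] = True          # d x_hat_ij / d a_j block
    col0 = n_cams * cam_block + i * pt_block
    mask[2*r:2*r+2, col0:col0+pt_block] = True                   # d x_hat_ij / d b_i block
print(mask.sum(), "of", mask.size, "Jacobian entries can be non-zero")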
the form of the jacobian after taking these partial derivatives is shown in this figure. we can see that the projection matrices, collected in the vector a, and the 3d points, collected in the vector b, only affect the reprojected points belonging to their respective camera or their respective 3d point. for example, the four points x_11, x_12, x_13, x_14 are seen in the first camera a_1, whose parameters we denote p_1, and these four reprojections are affected only by a_1 (p_1), as noted in the diagram. since p_1 is the parameter vector of the first image, if the second image a_2 (with parameters p_2) also sees four points, then p_1 does not affect the reprojected points in the second image, so all the entries of the partial derivative of x_2j with respect to p_1 are zero in that block, whereas they are not zero in the first block, because the first camera matrix does affect the points reprojected onto the first image. similarly, the first camera matrix does not affect the four reprojected points in the third image a_3, so all the partial derivatives in that block are zero as well. we can see the same thing for the 3d points, collected in the vector b: in the first image we have the four reprojections x_11, x_12, x_13, x_14 mentioned earlier, and the first 3d point X_1 only affects one of them, namely x_11, which is the reprojection of X_1 into the first camera. hence the partial derivative of x_11 with respect to X_1 is a non-zero value, while the other three entries, which are reprojections caused by other 3d points in the scene, have zero partial derivatives with respect to X_1. likewise the second reprojection x_12, caused by the second point X_2, gives a non-zero partial derivative with respect to X_2 and zero everywhere else, and this trend repeats for all the other entries in this block.
once we have computed all of this, we obtain the jacobian matrix in this form, and we can see that it is indeed a very sparse matrix: most of the entries are zero, and it is only sparsely filled in the parts shown in white in this figure. once we have the jacobian we can form the normal equation and compute the Hessian approximation J^T J, which has the block format described earlier; we can see that it is also sparse, with the diagonal blocks themselves block-diagonal and the remaining entries zero. the vector delta is again divided into two parts, the first depending on a, our camera matrices, and the second on the 3d point entries, corresponding to the partition of blocks in the jacobian; similarly the right-hand side splits into epsilon_a, which depends on the camera matrices, and epsilon_b, which depends on the 3d points. we can see this more clearly in the illustration: taking the second camera, i = 2, and its block, the partial derivatives of the four points x_21, x_22, x_23, x_24 reprojected onto the second image are zero with respect to the first camera matrix, and likewise zero with respect to the third camera matrix; the only non-zero case is when we differentiate with respect to the camera matrix p_2 (or a_2), because that is the projection matrix that affects this particular image and in turn the reprojections onto it. here is the corresponding illustration of how the 3d points affect the jacobian: for the four points reprojected onto the second image, the derivative is non-zero only when we differentiate each reprojection with respect to the 3d point that causes it. what's interesting is that when we form the Hessian approximation J^T J from the jacobian, it actually represents the adjacency matrix of the graph relating all the parameters; in this particular case we can see a connection between each 3d point and the cameras that observe it.
so in this example, these are the 3d points and these are the cameras: the 3d point a is seen by cameras 1 and 2 but not by cameras 3 and 4, and the second point b is seen by cameras 1, 2 and 4. in other words, if I have two images a and b and two points x1 and x2, the graph says that x1 is seen by both images and x2 is also seen by both images, while x3, for example, is seen by neither of them but is instead seen by cameras c and d, the third and fourth cameras; and x1 is also seen by camera c concurrently with a, so it reprojects into c as well. this graph relating the camera parameters and the 3d points is exactly what is represented, as an adjacency matrix, by the Hessian matrix we compute in the normal equation. as mentioned earlier, once we have computed the Hessian and the jacobian, they have the structure shown in this diagram: W and W^T are simply transposes of each other, and the diagonal blocks are themselves sparse (block-diagonal), so we can rewrite the system in the form we saw when we looked at the sparse Levenberg-Marquardt algorithm. we also mentioned that we can form the Schur complement by multiplying both sides of the normal equation by a block matrix, where the first block corresponds to the camera parameters and the second block to the 3d structure; because of the zero created in the first row, this effectively decouples delta_b, leaving an equation in delta_a alone. we also know this effectively speeds up the computation, because the number of cameras is far smaller than the number of 3d points in a typical 3d reconstruction problem. so, as mentioned, we obtain the Schur complement equation, which is equivalent to an inhomogeneous linear system A x = b that we can solve for delta_a; and it is easy to invert V*, because it is a block-diagonal matrix that is both sparse and symmetric positive definite, which means its inverse exists. once we have computed delta_a, we back-substitute it into the second row, and with delta_a now a known vector we can solve for delta_b. to solve such an inhomogeneous linear system A x = b we can use many linear-algebra techniques, such as the LU decomposition, where we factorise the A matrix into a lower and an upper triangular matrix; then, writing L U x = b and collectively calling U x = y, we first solve L y = b.
since L is a lower triangular matrix, meaning its upper triangular part is zero, multiplying it by the vector y and equating to b lets us solve directly for the first entry of y, and then gradually substitute forward to solve for all the other entries of y. once we have obtained y, we recall that y = U x, so we solve another linear system where the matrix is upper triangular: with y now known, we solve for the last element of x first and then back-substitute to recover all the remaining elements. similarly we can apply the QR decomposition, where Q is an orthogonal matrix whose inverse is simply its transpose: we write Q R x = b, so R x = Q^T b, which is easy to compute since we only need a transpose rather than an inverse; we then end up with another system R x = y where R is an upper triangular matrix, and we can solve for the unknowns x with the same triangular-solve method. similarly for the Cholesky decomposition, which is probably the most commonly used factorisation technique for solving the normal equation in the Levenberg-Marquardt algorithm: we simply factorise A into a "square root" form L L^T, where L is a lower triangular matrix, and then solve for x in the same way as in the LU factorisation. I won't go through this in much more detail, because you should have learned it in your undergraduate linear algebra course; for more detail I advise you to look at the textbook by Gilbert Strang, Introduction to Linear Algebra, and there are also very nice video lectures online taught by Gilbert Strang himself, which I strongly encourage every one of you to look at if you have forgotten some of this basic background in linear algebra. there are also iterative techniques for solving such inhomogeneous linear systems, such as conjugate gradient and the Gauss-Seidel algorithm, and I also strongly encourage you to look into these; part of this material can likewise be found in that textbook.
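as a compact sketch of these triangular solves, here is a small numpy example that solves a made-up normal equation with a Cholesky factor followed by forward and back substitution.

import numpy as np

def forward_sub(L, b):
    # solve L y = b for lower-triangular L
    y = np.zeros_like(b)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    # solve U x = y for upper-triangular U
    x = np.zeros_like(y)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# toy symmetric positive-definite system standing in for the normal equation
rng = np.random.default_rng(1)
J = rng.normal(size=(20, 6))
A = J.T @ J + np.eye(6)             # J^T J + lambda I
b = rng.normal(size=6)

L = np.linalg.cholesky(A)           # A = L L^T
x = back_sub(L.T, forward_sub(L, b))
print(np.allclose(A @ x, b))        # True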
now, unfortunately the normal equation cannot be solved naively by just taking the LU decomposition, Cholesky decomposition or some other factorisation, because we will more often than not face what is called the problem of fill-in. what this means is that if I have a sparse matrix A in the linear system A x = b, one technique mentioned earlier is to solve it by factorising A into L L^T with a Cholesky factorisation; but by naively doing a Cholesky decomposition on the sparse matrix A, we often end up with a lower triangular factor that is much denser than the original sparse matrix itself, which is not good, because we end up spending more time solving the equations in the substitution steps. one way to resolve this is to reorder the sparse matrix A using a permutation matrix. a permutation matrix is a matrix whose entries are ones and zeros such that every row and every column sums to one, i.e. there is exactly one 1 in each row and each column; applying it to A reorders the rows and columns, i.e. sorts the entries of the matrix. I won't go into the details here, I just want to give you a flavour of what the reordering means. unfortunately, finding the reordering of the sparse matrix that minimises the amount of fill-in after, say, a Cholesky decomposition is an NP-complete problem, which means it can be intractable to compute the best permutation; but there are approximation algorithms such as minimum degree, column approximate minimum degree permutation, reverse Cuthill-McKee, and nested dissection. I will not go into the details of these algorithms; if you are interested you should read the related texts. here, though, we will see the effect of applying such a reordering to prevent fill-in. given a sparse matrix with 18,441 non-zero entries, after applying the reordering (P A P^T) we still have the same number of non-zeros, because we are simply reordering the entries of A; after a Cholesky decomposition of this reordered matrix, the total number of non-zeros becomes 78,120, which is far less than in the example where we naively apply a Cholesky decomposition to the original sparse matrix A, which leads to roughly an 8.28-fold increase in non-zero elements compared to this. similarly, if we apply a minimum degree reordering to the sparse matrix, we again keep the same number of non-zeros after reordering, and after the Cholesky decomposition we end up with 52,988 non-zero elements, still much lower than the roughly 606,000 non-zero entries of the naive factorisation; and we see the same effect with column count as well as nested dissection, which also end up with a much lower number of non-zeros after the Cholesky decomposition.
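as a rough illustration of the effect of reordering, here is a small scipy sketch using the reverse Cuthill-McKee ordering on a made-up sparse symmetric positive-definite matrix; the size and density are arbitrary, and the fill-in reduction shown is typical rather than guaranteed.

import numpy as np
from scipy.sparse import random as sparse_random, identity
from scipy.sparse.csgraph import reverse_cuthill_mckee

B = sparse_random(200, 200, density=0.02, random_state=2)
A = (B @ B.T + 10 * identity(200)).tocsr()          # made-up stand-in for J^T J + lambda I

perm = reverse_cuthill_mckee(A, symmetric_mode=True)  # fill-reducing ordering
A_perm = A[perm][:, perm]

def chol_nnz(M):
    # count non-zeros in the (dense) Cholesky factor
    L = np.linalg.cholesky(M.toarray())
    return np.count_nonzero(np.abs(L) > 1e-12)

print("nnz(A):             ", A.nnz)
print("nnz(chol(A)):       ", chol_nnz(A))
print("nnz(chol(P A P^T)): ", chol_nnz(A_perm))      # typically fewer: less fill-in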
comparing the non-zeros after factorisation, the naive Cholesky decomposition of the original ordering leads to the most non-zeros, and apparently nested dissection has the best performance in this particular example, giving the fewest non-zeros after the Cholesky factorisation. what this means is that, since the Cholesky decomposition turns A into L L^T, and we solve the system by grouping L^T x as y and first solving L y = b, having very few non-zero entries in the lower triangular factor L means the number of operations needed in the substitution steps is far lower than with the naive Cholesky decomposition of the original sparse matrix; this leads to a much more efficient computation of the normal equation, which has to be done at every iteration of our Levenberg-Marquardt algorithm. finally, here is some open-source software that implements the Levenberg-Marquardt algorithm for bundle adjustment, so you don't have to code everything from scratch: these three libraries help you do bundle adjustment, particularly for the structure-from-motion problem. Google Ceres is the most commonly used in the computer vision community for structure from motion, while g2o and GTSAM are more commonly used by the robotics community for visual SLAM, i.e. visual simultaneous localisation and mapping. as a summary, in today's lecture we looked at the pipeline of large-scale 3d reconstruction, in particular data association and structure from motion (we'll leave dense stereo to next week's lecture); we looked at how to do data association using robust two-view geometry as well as the bag-of-words algorithm, and how to initialise the 3d reconstruction with two-view geometry, PnP, and the linear triangulation algorithm, which are in fact the three algorithms we learned in the past lectures; and finally we looked at the definition of bundle adjustment and how to carry it out using iterative methods such as Newton, Gauss-Newton, and the Levenberg-Marquardt algorithm. that's all for today's lecture, thank you |
3D_Computer_Vision_National_University_of_Singapore | 3D_Computer_Vision_Lecture_9_Part_2_Threeview_geometry_from_points_andor_lines.txt | so after defining the trifocal tensor from the incidence relations of three views, we now move on to see that a fundamental geometric property encoded in the trifocal tensor is the homography between the first and third view (or any pair of the views) induced by the back-projection of a line in the second view. more specifically, given a line correspondence l' in the second image, this image line back-projects to a plane which we call pi', and together with the trifocal tensor this induces a homography that can be used to transfer a point observed in the first image to the third image. we can also see that this back-projected plane together with the trifocal tensor defines the same homography that can be used to transfer a line observed in the first image to the third image, without ever computing the 3d line L or, in the first example, the 3d point. more specifically, the homography induced by the trifocal tensor and the back-projected plane of the second image is H_13(l', T): it is a function of l', the image line observed in the second view, and of the trifocal tensor T that induces it. in the subsequent slides we will prove that this is true, but before we do that, note that similarly a line l'' in the third image induces, together with the trifocal tensor, a homography that transfers a point x in the first image to the point x' in the second image, where x and x' are related by the homography H_12(l'', T), formulated from the trifocal tensor and the line correspondence l'' in the third view. now we go on to prove that the homography between any two of the views is induced by the trifocal tensor and the back-projected plane of the line correspondence in the remaining view. more specifically, we learned in the earlier lecture that the homography map between the first and third image can be written as x'' = H x, where H transfers the point x in the first view to x'' in the third view; and a similar relation holds for lines between the first and third view: transferring a line from the third view back into the first view, we have l = H^T l''. we also saw earlier that the incidence relation between the three corresponding lines l, l', l'' in the respective views is given by l_i = l'^T T_i l''.
here l_i is the i-th coordinate of the line l, and T_i is the 3-by-3 matrix that forms the i-th slice of the trifocal tensor. a direct comparison of this incidence relation with the line-transfer equation l = H^T l'' shows that l'^T T_i defines a homography: more specifically, since l_i is the i-th coordinate of the line, the term T_i^T l' gives the i-th column of the homography matrix, which we can write as h_i = T_i^T l' (the transpose appears because we transpose l'^T T_i). this relation holds for every column: to obtain h_1 we simply take T_1^T multiplied by l', for h_2 we take T_2^T multiplied by l', and we do the same for the third column of the homography matrix. in a similar vein, we can also write the line transfer between the first and second view as l^T = l'^T H, where H here is the homography H_12 between the first and second view, and a direct comparison with the incidence relation l_i = l'^T T_i l'' immediately gives that the i-th column of this homography is h_i = T_i l''. now we can take a cross product with the incidence relation derived earlier to eliminate the unknown scale hidden inside it and obtain a homogeneous form: the vector whose i-th component is l'^T T_i l'' equals l only up to scale, so its cross product with l must vanish, which we can write as (l'^T [T_1, T_2, T_3] l'') [l]_x = 0^T; the unknown scale, say lambda, is thereby eliminated. more briefly, instead of writing T_1, T_2, T_3 inside the square bracket we can use the short form [T_i], so the two equations mean the same thing, one being the shorthand of the longer form shown earlier. the symmetry between l' and l'' also means we can take the transpose of this expression, so that l'' and l' swap positions, with a transpose added on one and removed from the other, and the relation still holds: the cross product of the two terms still gives zero.
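here is a small numpy sketch of these induced homographies; the tensor slices and the lines are random made-up values, which is enough to check the algebraic consistency with the incidence relation (the tensor need not come from a real camera configuration for this check).

import numpy as np

rng = np.random.default_rng(3)
T = rng.normal(size=(3, 3, 3))          # made-up trifocal tensor, T[i] is the slice T_i
l_p  = rng.normal(size=3)               # a line l' in the second view (arbitrary)
l_pp = rng.normal(size=3)               # a line l'' in the third view (arbitrary)

# homography from view 1 to view 3 induced by l': i-th column is T_i^T l'
H13 = np.stack([T[i].T @ l_p for i in range(3)], axis=1)
# homography from view 1 to view 2 induced by l'': i-th column is T_i l''
H12 = np.stack([T[i] @ l_pp for i in range(3)], axis=1)

# consistency with the incidence relation l_i = l'^T T_i l''
l = np.array([l_p @ T[i] @ l_pp for i in range(3)])
print(np.allclose(l, H13.T @ l_pp))     # line transfer l = H13^T l''
print(np.allclose(l, H12.T @ l_p))      # line transfer l = H12^T l'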
by writing the equation in these two forms we have eliminated the unknown scale, and now the relation shown by these two equations simply relates the image correspondences l, l' and l'' and the trifocal tensor, with no other unknown parameters in the equation. we can do the same thing for a point-line-line correspondence: in the first image we have an image point, in the second image a line correspondence which I call l', and in the third image another line correspondence which I call l''. these correspondences back-project as follows: the back-projections of l' and l'' from the second and third views intersect in a 3d line L, and this 3d line has to intersect the ray back-projected from the image point x. notice that earlier I said that line correspondences from any two views alone are not enough to constrain the geometry between the two views: if we were to ignore the point correspondence, any two back-projected planes from the two views would always intersect in some line, and this is not enough to constrain the relative pose between the two views. but now we have a third view, represented by the light ray back-projected from the image point correspondence, and we know this ray must meet the 3d line formed by the intersection of the back-projected planes of the second and third views; hence the third view helps constrain the three views together, or in short, the third view acts as a constraint on the relative poses among the three views. now, we know from the first lecture that a point lying on a line must satisfy the incidence relation that their dot product is zero. let us rewrite this dot product: it is (x^1, x^2, x^3) multiplied by (l_1, l_2, l_3)^T, where I use a superscript to denote the i-th coordinate of the point and a subscript to denote the i-th coordinate of the line; the dot product is x^1 l_1 + x^2 l_2 + x^3 l_3, which can be rewritten as the sum over the index i of x^i l_i, and this has to equal zero. we will use this notation to denote the dot product of a point and a line. we also know from the earlier derivation of the trifocal tensor that the i-th coordinate of l is given by l'^T T_i l''.
we can now substitute this into the point-line incidence relation: it becomes the sum over i of x^i l'^T T_i l'' = 0, and since l' and l'' are independent of the index i we can factor them out and rewrite the sum as l'^T ( sum_i x^i T_i ) l'' = 0, with l' pre-multiplying and l'' post-multiplying the bracket. since the dot product has to be zero, this equation also has to be zero, and hence we have derived the incidence relation for a point-line-line correspondence, which is again free of any unknown scale or any other unknown parameters: the only unknowns are the entries of the trifocal tensor, while the correspondences, namely a point-line-line correspondence, are what we observe in the respective images. we can also derive the same thing for a point-line-point correspondence. this means that in the first view I have a point that back-projects to a ray, in the second view a line l' that back-projects to a plane, and in the third view a point x'' that back-projects to another light ray; and x, l' and x'' are correspondences, which means they must meet somewhere in 3d space, as illustrated in the figure: the two rays meet at a point, and they also meet the 3d line that gives rise to the projection l' in the second image. what's interesting is that we can use the homography relation derived earlier to obtain the point-line-point incidence equation. more specifically, the back-projection of l' forms a plane which we call pi', and this plane contains both the 3d line that gives rise to l' in the second image and the 3d point formed by the intersection of the rays back-projected from the first and third images, i.e. from the point correspondences x and x''. since this intersection point, the 3d point that gives rise to the two image points in the first and third views, sits on the plane, there is a homography relating x'' and x: we can write x'' = H x, and this is exactly the homography between the first and third view induced by the back-projected plane of l' and the trifocal tensor, given by the equation we saw earlier. so we can simply write x'' = H_13(l', T) x, and this can be rewritten by noting that l' is common to all three trifocal matrices T_1, T_2 and T_3.
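here is a minimal synthetic check of this point-line-line relation, using the standard construction of a trifocal tensor from three cameras in canonical form (P = [I | 0], with slices T_i = a_i b_4^T - a_4 b_i^T built from the columns of the other two camera matrices); the cameras, the 3d point and the second point defining the 3d line are all random made-up values.

import numpy as np
rng = np.random.default_rng(4)

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])      # canonical first camera [I | 0]
P2 = rng.normal(size=(3, 4))                       # random second and third cameras
P3 = rng.normal(size=(3, 4))

A, a4 = P2[:, :3], P2[:, 3]
B, b4 = P3[:, :3], P3[:, 3]
T = np.stack([np.outer(A[:, i], b4) - np.outer(a4, B[:, i]) for i in range(3)])

# synthetic correspondence: a 3d point X and a 3d line through X (via a second point Y)
X = np.append(rng.normal(size=3), 1.0)
Y = np.append(rng.normal(size=3), 1.0)
x = P1 @ X                                         # point in view 1
l_p  = np.cross(P2 @ X, P2 @ Y)                    # image of the line in view 2
l_pp = np.cross(P3 @ X, P3 @ Y)                    # image of the line in view 3

M = sum(x[i] * T[i] for i in range(3))
print(abs(l_p @ M @ l_pp))                         # ~0: point-line-line incidence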
so we can pull l' out, i.e. factorise it onto one side, and push the point coordinates x^i into the respective trifocal tensor slices, giving x'' = ( sum_i x^i T_i )^T l'. this has the same form as what we saw earlier, except that now each T_i is the respective 3-by-3 slice of the trifocal tensor and the summation runs over x^i, the i-th entry of the image point; we can use this to represent the homography relation. the homogeneous scale factor of the homography relation can now be eliminated by taking the cross product with x'' on both sides: doing so makes the left-hand term vanish, so the unknown scale of the homography equation is eliminated, and this gives rise to the homogeneous incidence relation [x'']_x ( sum_i x^i T_i )^T l' = 0, or equivalently l'^T ( sum_i x^i T_i ) [x'']_x = 0^T, for a point-line-point correspondence. a similar analysis can be done for a point-point-line correspondence instead of point-line-point: the relation is almost the same, except that now the second view contains a point, which we denote x', and the third view contains a line l'' instead of a point; the summation over x^i that arose from the homography applied to x still remains, and the relation becomes [x']_x ( sum_i x^i T_i ) l'' = 0. we can also do the same thing to derive the incidence relation for a point-point-point correspondence across the three views: instead of a line in any of the views, each view now contains a point that back-projects to a light ray, and the three light rays must meet at a single point in 3d space, which is the 3d point X that gives rise to the projections x, x' and x'' in the respective views. I will show that the incidence relation in this case is [x']_x ( sum_i x^i T_i ) [x'']_x = 0_{3x3}, and here is the proof of why this is true. we know that the line l' in the second image can be expressed through the point-line-point incidence relation above, where the first view contains the point x, the second view contains the line l', and the third view contains the point x''. now, this line l' passes through some point, which we write as x': in the second image, the line l' can be given by two points x' and y' lying on it, so the cross product of these two points gives l', and we can write this cross product in matrix-multiplication form as l' = [x']_x y'. consequently we can substitute this cross product of the two points back into the point-line-point incidence relation to replace l'.
line point incidence relation to substitute this l prime over here and this is what we get this particular equation over here where this term over here represents uh l prime trans now uh what's interesting here is that we know that the point line point which is given by this guy over here so this is the same as what we have seen earlier on here the the incidence relation here is going to be true for all every line l prime here passes through any point x prime over here and so this means that this particular equation here which represents the incidence relation of point line and point where the line is being represented by the cross products of any two points that lies on this line l prime is independent of the second point on the line we should call y prime over here which implies that since it's independent of y prime here where y prime is generally not equal to 0 over here so in order for this relation to be true then the the product of the remaining terms over here it must be equals to 0. hence we can rewrite this relation into this relation over here where the product of the remaining terms equates to a three by three zero matrix over here and this as a result gives us the incidence relation between a point point and point correspondences between the three views now here's a table that summarizes all the point and line incidence relations that we have seen earlier on so it consists of line line line and this is what we have seen earlier on we it all started with this uh incidence relation the equation that we have derived when we derive the trifocal tensor so taking the cross product to eliminate the unknown scale we will get this guy over here then we look at the point line line correspondence where the first view has a point and we get this particular equation over here and so on and so forth so we saw point line point point point line and as well as point point and point correspondences in the respective uh three views now the next thing that we can derive from the three view relation would be the respective epipolar lines on the respective images in the three view now suppose that x is a point and in the first view and l prime and l prime prime are the corresponding epipolar lines in the second and third view so what this means is that i have a first view which contains a point which i call x and then in the second view i have a line because we know that the back projected ray of this particular image point x over here the the actual 3d point can be anywhere on this particular back projector line so when transfer of the point from the first view to the second view is going to become a epipolar line over here which means that this is simply the projection of the back projected array from the image point x over here and uh we're going to denote this as a sl prime and this is what we call the epipolar line so similarly in the third view but we are going to forward project this particular light ray onto the third view and this is going to define another line over here the this plane and the image plane is going to intersect at this particular line over here which we call l prime prime over here and this is going to be the epipolar line that is formed by a point correspondence in the first image we'll do the proof later but we now will just state that the relation between the epipolar line and the trifocal tensor as well as the image point that gives rise to this particular epipolar lines is given by this equation over here this is between the first and second view and what it means 
here is that I'm back projecting the point to a ray and then projecting this ray onto the second view to get the epipolar line. Similarly, there is a second null space equation that relates the epipolar line in the third view to the trifocal tensor and the point in the first image that gives rise to it, so that one is between the first and third view. Consequently, from these two equations, since they are null space equations, the epipolar line in the second view that is formed by a point in the first view, which we call x, is given by the left null vector of this matrix, which consists only of the image point in the first view and the trifocal tensor, and the epipolar line in the third view, which we call l prime prime, is solved as the right null vector of the same three by three matrix. In other words, l prime is solved as the left null vector and l prime prime is solved as the right null vector of this three by three matrix. Now we show the proof of the left and right null space characterization of the epipolar lines. As shown earlier, we will consider a special case of the point line line correspondence in three views. In this case the point in the first image, which we denote as x, back projects to a light ray which contains the 3D point, which we call capital X, and we assume that the line correspondence in the second view is formed by the plane pi prime, that is, by the projection of the back-projected light ray of the point correspondence x in the first view. This line back projects to a plane that is seen by the second camera c prime, and the intersection forms l prime; we learned in the lecture on the fundamental matrix that this l prime is what we call the epipolar line. We can see that this particular plane, which contains the camera centers of both c and c prime, the first and second view, as well as the light ray formed by the back projection of x, is what we call the epipolar plane that relates the first and second view. Suppose that capital X is a point on the plane pi prime, so X is the 3D point that gives rise to the image point projection in the first image, denoted by a small x. We further note that the ray defined by X, the 3D point, and c, the first camera center, sits on this epipolar plane, and hence the line l prime that is formed by the intersection of this epipolar plane and the second image plane is what we call the epipolar line; let's denote this by l prime. We will also denote a plane pi prime prime that is formed by the back projection of a line l prime prime in the third view, and note that in this particular case l prime prime can be any line in the third view that is in the incidence relation with the point in the first image and the epipolar line in the second image. Let's
call the intersection of this back-projected plane from the line correspondence l prime prime in the third view with the epipolar plane the line which we denote as capital L. This gives rise to a three-way intersection between the back-projected light ray of the point from the first image and the back-projected planes from the respective line correspondences, which are denoted by l prime and l prime prime respectively. This constitutes the point line line correspondence that we have seen earlier, which must satisfy the incidence relation that we derived earlier. Now the important part to note is that this relation must be true for any l prime prime, for any line in the third view; notice that we did not put any constraint that this l prime prime in the third view must be a particular line. In contrast, in the second view we defined very strictly that l prime is going to be the epipolar line, because it has to be formed by the projection of the light ray from the first image onto the second image, but l prime prime in the third view can be any arbitrary line. What this means is that the following relation must be true: since l prime prime can be any line, and in general l prime prime will not be zero, the product of the remaining terms has to be equal to zero in order for the incidence relation to hold. The same argument holds even if the roles of l prime and l prime prime are reversed. Since we derived the equation above by assuming that l prime in the second view is the epipolar line and making no assumption about the line in the third view, we can also swap the roles of l prime and l prime prime: this means we insist that l prime prime is the epipolar line, and we relax l prime so that it is not necessarily an epipolar line but can be any line in general in the second view. Now, if l prime can be any non-zero line in the second view, the product of the remaining terms must always equal zero, and hence we can rewrite the relation in the form where l prime prime is now the epipolar line of the third view. Hence we get the two equations that we saw earlier, which are equivalent to the left and right null spaces of the 3x3 matrix formed by the image point correspondence in the first view and the trifocal tensor, and this completes our proof. Now we know that a given point, which we denote as x in the first view, gives rise to the epipolar line which we call l prime in the second view, and as x varies in the first view, that is, if I have a different x, it is going to give rise to a different epipolar line. We know from our earlier lectures, when we learned about fundamental matrices, epipolar lines and epipoles, that different points in the first view give rise to different epipolar lines in the second view, but all these different epipolar lines must intersect at a common point, and this common point is what we call the epipole. We are going to denote this epipole in the second view as
e prime. A similar relation can also be obtained for the third view, where a point correspondence in the first view transfers to different epipolar lines in the third view, which we denote as l prime prime, and this epipole we denote as e prime prime. Hence the epipoles in the second and third view can be computed from the intersection of the various epipolar lines obtained by varying the point in the first view, and three convenient choices for x would be the homogeneous coordinates one zero zero, zero one zero, as well as zero zero one. The reason why these three are convenient choices is that we have to compute the left and right null spaces of this equation in order to get the respective epipolar lines, and with these points the computation is easy: for example, taking 1 0 0 as x, the summation becomes 1 multiplied by t1 plus 0 multiplied by t2 plus 0 multiplied by t3, which gives rise to only t1. What this means is that when the point equals 1 0 0, we simply need to compute the left and right null spaces of t1 in order to get the epipolar lines in the second and third view that correspond to this particular point. Similarly, 0 1 0 is also a good choice, and this results in computing just the null spaces of t2, and 0 0 1 results in computing just the left and right null spaces of t3. Once we have obtained the three epipolar lines from these three convenient choices of the point in the first image, we can simply compute the epipoles as the common intersection of these lines, again via a null space computation. So e prime, which is the epipole in the second image, is the intersection of the three epipolar lines that come from 1 0 0, 0 1 0 and 0 0 1 from the first view |
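As a rough illustration of the null space computations just described, here is a minimal NumPy sketch. It assumes the trifocal tensor is available as a hypothetical array T of shape (3, 3, 3) whose slices are T1, T2, T3 (estimated elsewhere); the function names are made up for this example, and the SVD-based null vectors are just one reasonable way to compute them.

import numpy as np

def epipolar_lines(T, x):
    # M = sum_i x_i * T_i; l' in view 2 is the left null vector of M,
    # l'' in view 3 is the right null vector of M.
    M = sum(x[i] * T[i] for i in range(3))
    U, S, Vt = np.linalg.svd(M)
    l2 = U[:, -1]      # epipolar line in the second view
    l3 = Vt[-1, :]     # epipolar line in the third view
    return l2, l3

def epipoles(T):
    # Use the convenient points (1,0,0), (0,1,0), (0,0,1), for which M is just
    # T1, T2, T3, then intersect the resulting epipolar lines: the epipole is
    # the right null vector of the stacked line matrix, since l^T e = 0 for each line.
    lines2, lines3 = [], []
    for x in np.eye(3):
        l2, l3 = epipolar_lines(T, x)
        lines2.append(l2)
        lines3.append(l3)
    e2 = np.linalg.svd(np.vstack(lines2))[2][-1]   # epipole e' in view 2
    e3 = np.linalg.svd(np.vstack(lines3))[2][-1]   # epipole e'' in view 3
    return e2, e3

# T = np.stack([T1, T2, T3])   # hypothetical 3x3x3 trifocal tensor estimated elsewhere
# e2, e3 = epipoles(T)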
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_5_GDA_Naive_Bayes_Stanford_CS229_Machine_Learning_Andrew_Ng_Autumn_2018.txt | Hey, morning everyone. Welcome back. Um, so last week you heard about uh, logistic regression and um, uh, generalized linear models. And it turns out all of the learning algorithms we've been learning about so far are called discriminative learning algorithms, which is one big bucket of learning algorithms. And today um, what I'd like to do is share with you how generative learning algorithms work. Um, and in particular you learned about Gaussian discriminant analysis so by the end of the day, you will know how to implement this. And it turns out that uh, compared to say logistic regression for classification, GDA is actually a um, simpler and maybe more computationally efficient algorithm to implement ah, in some cases. So um, and it sometimes works better if you have uh, very small data sets sometimes with some caveats. Um, and there was a helpful comparison between generative learning algorithms, which is a new class of algorithms you hear about today, versus discriminative learn- learning algorithms. And then we'll talk about naive Bayes and how you can use that to uh, build a spam filter, for example. Okay? So um, we'll use binary classification as the motivating example for today. And um, if you have a data set that looks like this with two classes, then what a discriminative learning algorithm, like logistic regression would do, is use gradient descent to search for a line that separates the positive-negative examples, right? So if you randomish - randomly initialize parameters, maybe starts with some digital boundary like that and over the course of gradient descent, you know, the line migrates or evolves until you get maybe a line like that, that separates the positive and negative examples. And um, logistic regression is really searching for a line, searching for a decision boundary that separates the positive and negative examples. Um, and so if this was the uh, malignant tumors [NOISE] and the benign tumors example, right, that's - that's what logistic regression would do. Now, there's a different class of algorithm which isn't searching for this separation, which isn't trying to maximize the likelihood that you - the way you saw last week, which is um, here's an alternative, just call it generative learning algorithm; which is rather than looking at two classes and trying to find the separation. Instead, the algorithm is going to look at the classes one at a time. First, we'll look at all of the malignant tumors, right? In the cancer example and try to build a model for what malignant tumors look like. So you might say, ah, it looks like all the malignant tumors um, roughly [NOISE] all the malignant tumors roughly live in that ellipse. And then you look at all the benign tumors in isolation and say, ah, it looks like all the benign tumors roughly live in that ellipse. And then at classification time, if there's a new patient in your office with those features, uh, it would then look at this new patient and compare it to the malignant tumor model compared to the benign tumor model and then say, in this case, ah, it looks like this one. Looks a lot more like the benign tumors I had previously seen, so we're gonna classify that as a benign tumor. Okay? 
So um, rather than looking at both classes simultaneously and searching for a way to separate them, a generative learning algorithm, uh, instead builds a model of what each of the classes looks like, kind of almost in isolation, with some details we'll learn about later. And then at test time uh, it evaluates a new example against the benign model, evaluates against the malignant model and tries to see which of the two models it matches more closely against. So let's formalize this. Um, a discriminative learning [NOISE] algorithm learns P of y given x, right? Um, or uh, what it learns um, [NOISE] right? Some mapping [NOISE] from x to y directly. You know, as I learn- Or it can learn, I think Annan briefly talked about the Perceptron algorithm, it's helpful to support vector machines later. But learns a function mapping from x to the labels directly. So that's a discriminative learning algorithm. We're trying to discriminate between positive and negative classes. [NOISE] In contrast, a generative learning algorithm, [NOISE] it learns P of um, x given y. So this says, what are the features like, [NOISE] given the class, right? So um, instead P of y given x, we're gonna learn p of x given y. So in other words, given that a tumor is malignant, what are the features likely gonna be like? Or given the tumor's benign, what are the features x gonna be like? Okay? And then as- and then they'll also- generative learning algorithm, will also learn P of y. So this is a- this is also called the class prior to be this probability, I guess. It's called a class prior. It's just- when the patient walks into your office, before you've even examined them, before you've even seen them, what are the odds that their tumor is malignant versus benign, right? Before you see any features, okay? And so using Bayes' Rule, [NOISE] if you can build a model for P of x given y and for P of y, um, if- you know, if you can calculate numbers for both of these quantities then using Bayes' rule, when you have a new test example [NOISE] with features x, you can then calculate the chance of y being equal to 1 as this, [NOISE] right? Where P of x by the - [NOISE] okay? [NOISE] Um, and so if you learn this term, P of x given y, then you can plug that in here, right? And if you've also learned this term P of y, you can plug that in here. Right. Um, and so P of x in the denominators, goes into denominator, okay? So if you've learned both- both of those terms in the red square and in the orange square, you could plug it into all of those terms and therefore use Bayes' rule to calculate P of y equals 1, given x. So given the new patient with features x, you could use this formula to calculate what's the chance that a tumor is malignant. If you've estimated you know these - these two quantities in the red and in the orange circles. Okay? So um, [NOISE] that's the framework we'll use to build generative learning algorithms. And in fact, today you see two examples of generative learning algorithms. One for continuous value features, which is used for things like the tumor classification and one for discrete features, which uh, you can use for building, like in email spam, for example, right? Or - or I don't know. Or If you want to download Twitter things and see how positive or negative a sentiment on Twitter is or something. right? Well we'll have a natural language processing example later. So um, let's talk about Gaussian discriminant analysis. [NOISE] GDA. 
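To make the Bayes' rule step concrete, here is a small hypothetical sketch in Python: given the two class-conditional densities evaluated at a new x and the class prior phi, it returns P(y = 1 | x), expanding the denominator P(x) by the law of total probability. The numbers are toy values, not from any real model.

def posterior_y1(px_given_y0, px_given_y1, phi):
    # Bayes' rule: P(y=1|x) = P(x|y=1) P(y=1) / P(x),
    # with P(x) = P(x|y=1) P(y=1) + P(x|y=0) P(y=0).
    px = px_given_y1 * phi + px_given_y0 * (1.0 - phi)
    return px_given_y1 * phi / px

# toy values: the "malignant" model likes this x five times more than the "benign" one
print(posterior_y1(px_given_y0=0.02, px_given_y1=0.10, phi=0.3))   # about 0.68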
Um, so uh, let's develop this model, assuming that the features x are continuous values. And when we develop um, generative learning algorithms, I'm gonna use x and Rn. So you know, I'm gonna drop the x 0 equals 1, convention. So I'm not gonna- we're not gonna need that extra x equals 1. So x is now Rn rather than Rn plus 1. And the key assumption in Gaussian discriminant analysis is, we're going to assume that P of x given y [NOISE] is distributed Gaussian, right? In other words conditioned on the tumors being malignant, the distribution of the features is Gaussian. The other features like uh, size of the- size of the tumor, the- the cell adhesion or whatever features you use to measure a tumor um, and condition on it being benign, the distribution is also Gaussian. So um, actually, how many of you are familiar with the multivariate Gaussian? Raise your hand if you are. Like half of you? One-third? No. Two-fifths? Okay. Cool. Alright. How many of you are familiar about a uni-variate, like a single dimensional Gaussian? Okay. Cool. Almost everyone. All right. Cool. So let me- let me just go through what is a multivariate Gaussian distribution. So the Gaussian is this familiar bell-shaped curve. A multivariate Gaussian is the generalization of this familiar bell-shaped curve over a 1-dimensional random variable to multiple random variables at the same time to- to- to vector value random variables rather than a uni-variate random variable. So um, if z, [NOISE] this is due to Gaussian, with some mean vector mu and some covariance matrix sigma um, so if z is in Rn then mu would be Rn as well. And sigma, the covariance matrix, will be n by n. So z is two-dimensional, mu is two-dimensional and sigma is two-dimensional. And the expected value of z is equal to um, the mean. And the um, covariance of z, [NOISE] if you're familiar with multivariate co-variances, uh, this is the formula. Right. Um, and this simplifies, we show in the lecture notes. You can get this in the lecture notes. [NOISE] So you- and uh, following sometimes semi-standard convention, I'm sometimes gonna omit the square brackets. So instead of writing the expected value of z, meaning the mean of z, sometimes I just write to this e, z right? And omit- omit the square brackets to simplify the notation a bit. Okay? And the derivation from this step to this step is given in the lecture notes. Um, and so, well, [NOISE] the probability density function for a Gaussian looks like this. [NOISE] And this is one of those formulas that, I don't know. When you're implementing these algorithms you use it over and over. But what I've seen for a lot of people is al- almost no one- well, very few people start their machine learning and memorize this formula. Just look at it every time you need it. I've used it so many times I seem to have it seared in my brain by now, but most people don't even- when you've used it enough, you- you- you end up memorizing it. But let me show you some pictures of what this looks like since I think that would, um, that might be more useful. So the Multivariate Gaussian density has two parameters; Mu and Sigma. They control the mean and the variance of this density. Okay? So this is a picture of the Gaussian density. Um, this is a two-dimensional Gaussian bump. And for now, I've set the mean parameter to 0. So Mu is a two dimensional parameter, it's uh, it's 0, 0, which is why this Gaussian bump is centered at 0. Um, and the Co-variance matrix Sigma is the identity, um, i- i- i - is the identity matrix. 
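For reference, a minimal sketch of evaluating that density formula in NumPy might look like the following; gaussian_density is a hypothetical helper name, and the check at the bottom just confirms that the standard 2-D Gaussian has height 1/(2 pi) at its mean.

import numpy as np

def gaussian_density(x, mu, Sigma):
    # p(x) = (2*pi)^(-n/2) |Sigma|^(-1/2) exp(-0.5 (x - mu)^T Sigma^{-1} (x - mu))
    n = x.shape[0]
    diff = x - mu
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff)) / norm

# standard 2-D Gaussian: mean zero, identity covariance
print(gaussian_density(np.zeros(2), np.zeros(2), np.eye(2)))   # ~0.159 = 1/(2*pi)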
So uh, so you know, well, so- so you've have this standard- this is also called the standard Gaussian distribution which means 0 and covariance equals to the identity. Now, I'm gonna take the covariance matrix and shrink it, right? So take a covariance matrix and multiply it by a number less than 1. That should shrink the variance- reduce the variability of distributions. If I do that, the density um, the p- probability density function becomes taller. Uh, this- this is a probability density function. So it always integrates to 1, right? The area under the curve, you know, is- is 1. And so by reducing the covariance from the identity to 0.6 times the identity, it reduces the spread of the Gaussian density, um, but it also makes it tall as a result, because, you know, the area under the curve must integrate to 1. Now let's make it fatter. Let's make the covariance two times the identity. Then you end up with a wider distribution where the values of um- I guess the axes here, this would be the z1 and the z2 axis; the two-dimensional Gaussian density, right? Increases the variance of the density. So let's go back to a standard Gaussian, uh, covariance equal 1, 1. Now, let's try fooling around with the off-diagonal entries. Um, I'm gonna- So right now, the off diagonal entries are 0, right? So in this Gaussian density, the off-diagonal elements are 0, 0. Let's increase that to 0.5 and see what happens. So if you do that, then the Gaussian density, uh, hope you can see see the change, right? It goes from this round shape to this slightly narrower thing. Let's increase that further to 0.8, 0.8. Then the density ends up looking like that, um, where now, it's more likely that z1 now- z1 and z2 are positively correlated. Okay? So let's go through all of these plots. But now looking at contours of these Gaussian densities instead of these 3-D bumps. So uh, this is the contours of the Gaussian density when the covariance matrix is the identity matrix and I apologize the aspect ratio. These are supposed to be perfectly round circles but the aspect ratio makes this look a little bit fatter, but this is supposed to be perfectly round circles. Um, and so, uh, when, uh, the covariance matrix is the identity matrix, you know, z1 and z2 are uncorrelated. Um, uh, and the contours of the Gaussian bump, of the Gaussian density look like brown circles. And if you increase the off-diagonal, excuse me, then it looks like that. If you increase it further to 0.8, 0.8, it looks like that, okay? Uh, where now, most of the probability mass- probability ma- most probably density function places value on, um, z1 and z2 being positively correlated. Um, next, let's look at, uh, what happens if we set the off-diagonal elements to negative values, right? So, um, actually what do you think will happen? Let's set the off-diagonals to negative 0.5, 0.5. Right. Oh well. People are seeing, fewer making that hand gesture. Okay, cool. Right. [LAUGHTER] Right. So- so- so as you- you endow the two random variables with negative correlation, so you end up with, um, this type of probability density function, right? Uh, and the contours, it looks like this. Okay? Whe- whereas now slanted the other way. So now z1 and z2 have a negative correlation. And that's 0.8, 0.8. Okay? All right. So- so far we've been keeping the mean vector as 0 and just varying the covariance matrix. Um, oh good. Yeah? [inaudible]. Uh, yes. Every covariance matrix is symmetric. Yeah. 
[inaudible] Uh, the true thing about the covariance matrix has interesting column vectors, that point in interesting directions. Not really. Um, let me think. Maybe you should- yeah- yeah- uh, no I- I- I think the covariance matrix is always symmetric. And so I would usually not look at single columns of the covariance matrix in isolation. Uh, when we talk about Principal components analysis, we talk about the Eigenvectors of the covariance matrix, which are the principle directions in which it points but, uh, yeah we- we- we- we'll get to that later. [inaudible] Uh, yeah. So the Eigenvectors are a covariance matrix, points in the principal axes of the ellipse. That's defined by the contents. Yeah. Cool. Okay. Um, so this standard Gaussian would mean 0. So the Gaussian bump is centered at 0, 0 because mu is 0, 0. Uh, let's move Mu around. So I'm going to move, you know, Mu to 0, 1.5. So that moves the Gaussian, uh, the position of the Gaussian density right. Now let's move it to a different location. Move it to minus 1.5, minus 1. And so by varying the value of Mu, you could also shift the center of the Gaussian density around. Okay? So I hope this gives you a sense of, um, as you vary the parameters, the mean and the covariance matrix of the 2D Gaussian density, um, those are probably- probably density functions you can get as a result of changing Mu and Sigma. Okay? Um, any other questions about this? Raise the screen. [NOISE] All right, cool. Here is a GDA, right, model. Um, and- and, uh, let's see. So, um, remember for GDA, we need to model P of x given y, right? It's up here, y given x. So I'm gonna write this separately in two separate equations P of x given y equals 0. So what's the chance- what's the, uh, probability density of the features if it is a benign tumor? Um, I'm going to assume it's Gaussian. So I'm just going to write down the formula for Gaussian. [NOISE] And then similarly, I'm going to assume that if is a malignant tumor as if y is equal to 1, that the density of the features is also Gaussian, okay? And, um, I wanna point out a couple of things, so the parameters of the GDA model are mu0, mu1, and sigma. Um, and for reasons, we'll go into a little bit, we'll use the same sigma for both class. Um, but we use different means, 0 and 1, okay? Uh, and we can come back to this later. If you want, you could use separate parameters, you know, sigma 0 and sigma 1, but that's not usually done. So we're going to assume that the two Gaussians, for the positive and negative classes, have the same covariance matrix but they, they have different means. Uh, you don't have to make this assumption, but this is the way it's most commonly done. And then we can talk about the reason why we tend to do that in a second. Um, so this is a model for P of y given x. The other thing we need to do is model P of y. Uh, so y is just a Bernoulli random variable, right. It takes on, you know, the values 0 or 1. And so, I'm going to write it like this, phi to the y times 1 minus phi to the 1 minus y, okay? Um, and you saw this kind of notation when we talked about logistic regression, but all this means is that, um, you know, probability of y being equal to 1 is equal to phi, right. Because y is either 0 or 1. And so, um, this is the way of writing, uh, uh, probability of y equals 1 is equal to phi, okay? And, uh, you saw a similar explanation, it's a notation when we're talking about, um, logistic regression, right, one week ago, last Monday. And so, the last parameter is phi. 
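One way to internalize these assumptions is to sample from the generative story: first draw y from Bernoulli(phi), then draw x from the Gaussian with mean mu_y and the shared covariance Sigma. A small sketch with made-up parameter values (two features) might look like this:

import numpy as np

rng = np.random.default_rng(0)

# made-up parameters for two features: shared covariance, different means
phi   = 0.5
mu0   = np.array([0.0, 0.0])
mu1   = np.array([2.0, 2.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

def sample(m):
    # generative story: y ~ Bernoulli(phi), then x | y ~ N(mu_y, Sigma)
    y = rng.random(m) < phi
    means = np.where(y[:, None], mu1, mu0)
    X = means + rng.multivariate_normal(np.zeros(2), Sigma, size=m)
    return X, y.astype(int)

X, y = sample(500)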
So this is Rn, this is also Rn, this is Rn by n and that's just a real number between 0 and 1, okay? So, um, for any- let's see. So if you can fit mu0, mu1, sigma, and phi to your data, then these parameters will define P of x given y and P of y. And so, if at test time you have a new patient walk into your office, and you need to compute this, then you can compute, right, these things in the red and the orange boxes. Each of these is a number, and by plugging all these numbers in the formula, you get a number alpha P of y equals 1 given x and you can then predict, you know, malignant or benign tumor. Right. So let's talk about how to fit the parameters. So you have a training set, um, as usual, I'm gonna write the tre- well, I'm go- let me write the training set like this xi, yi, for i equals 1 through m, right? This is a usual training set. Um, and what we're going to do, in order to fit these parameters is maximize the joint likelihood. And in particular, um, let me define the likelihood of the parameters to be equal to the product from i equals 1 through m, up here, xi, yi, you know, parameterized by the, um, the parameters, okay? Um, and I'm, I'm just like dropped the parameters here, right? To simplify the notation a little bit, okay? And the big difference between, um, a generative learning algorithm like this, compared to a discriminative learning algorithm, is that the cost function you maximize is this joint likelihood which is p of x, y. Whereas for a discriminative learning algorithm, we were maximizing, um, this other thing, right. Uh, which is sometimes also called the conditional likelihood, okay? So the big difference between the- these two cost functions, is that for logistic regression or linear regression and generalized linear models, you were trying to choose parameters theta, that maximize p of y given x. But for generative learning algorithms, we're gonna try to choose parameters that maximize p of x and y or p of x, y, right. Okay? So all right. So if you use, um, maximum likelihood estimation. Um, so you choose the parameters phi, mu0, mu1, and sigma they maximize the log likelihood, right. Where this you define as, you know, log of the likelihood that we defined up there. Um, and so, uh, th- we, we actually ask you to do this as a problem set in the next homework. But so the way you maximize this is, um, look at that formula for the likelihood, take logs, take derivatives of this thing, set the derivative equal to 0 and then solve for the values of the parameters that maximize this whole thing. And I'll, I'll, I'll just tell you the answers you are supposed to get. [LAUGHTER]. But you still have to do the derivation. Right. um, the value of phi that maximizes this is, you know, not that surprisingly. So, so phi is the estimate of probability of y being equal to 1, right? So what's the chance when the next patient walks into your, uh, doctor's office that they have a, a malignant tumor? And so the maximum likelihood estimate for phi is, um, it's just of all of your training examples, what's the fraction with label y equals 1, right. So it's the, the maximum likelihood of the, uh, bias of a coin toss is just, well, count up the fraction of heads you got, okay? So this, this is it. um, and one other way to write this is, um, sum from i equals 1 through m indicator. Okay. Right. Um, let's see. So as you saw the indicator notation on Wednesday, did you? No. Uh, did you so- do, did we talk about the indicator notation on Wednesday? No. Okay. 
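Before moving on, here is a rough sketch of the joint log-likelihood that GDA maximizes, sum over i of log p(x_i | y_i) plus log p(y_i), which is the quantity being contrasted with the conditional likelihood of logistic regression. The function name and the vectorized form are just one possible way to write it:

import numpy as np

def gda_log_likelihood(X, y, phi, mu0, mu1, Sigma):
    # sum_i log p(x_i, y_i) = sum_i [ log p(x_i | y_i) + log p(y_i) ]
    n = X.shape[1]
    Sinv = np.linalg.inv(Sigma)
    logdet = np.log(np.linalg.det(Sigma))
    means = np.where((y == 1)[:, None], mu1, mu0)
    d = X - means
    # log N(x_i; mu_{y_i}, Sigma) for every example, in one vectorized pass
    log_px_given_y = -0.5 * (np.einsum('ij,jk,ik->i', d, Sinv, d)
                             + logdet + n * np.log(2 * np.pi))
    log_py = np.where(y == 1, np.log(phi), np.log(1 - phi))
    return np.sum(log_px_given_y + log_py)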
Um, so, um, uh, this notation is an indicator function, uh, where, um, indicator yi, equals 1 is, uh, uh, return 0 or 1 depending on whether the thing inside is true, right? So there's an indicator notation in which an indicator of a true statement is equal to 1 and indicator of a false statement is equal to 0. So that's another way of writing, writing this formula, right. Um, and then the maximum likelihood estimate for mu0 is this, um, I'll just write out. Okay. Ah, so, well, it- it actually if you, ah, put aside the math for now, what do you think is the maximum likelihood estimate of the mean of all of the, ah, features for the benign tumors, right? Well, what you do is you take all the benign tumors in your training set and just take the average, that seems like a very reasonable way. Just look- look at your training set. Look at all of the- look at all of the benign tumors, all the Os, I guess, and you just take the mean of these, and that, you know, seems like a pretty reasonable way to estimate Mu 0, right? Look all of your negative examples and average their features. So this is a way of writing out that intuition. Um, So the denominator is sum from i equals 1 through m indicates a y_i equals, 0, and so the denominator will count up the number of examples that have benign tumors, right? Because every time y_i equals 0, you get an extra 1 in this sum, um, ah, and so the denominator ends up being the total number of benign tumors in your training set. Okay? Um, and the numerator, ah, sum for m equals 1 through m indicator is a benign tumor times x_i. So the effect of that is, um, whenever, a tumor is benign is 1 times the features, whenever an example is malignant is 0 times the features and so the numerator is summing up all the features, all the feature vectors for all of the examples that are benign. Does that make sense? I- I just write this out, so this is the sum of feature vectors for, um, for all the examples with y equals 0 and the denominator is a number of the examples, where y equals 0, okay? And then if you take this ratio, if you take this fraction, then you're summing up all of the feature vectors for the benign tumors divide by the total number of benign tumors in the training set, and so that's just the mean of the feature vectors of all of the benign examples. Okay? Um, and then, right, maximum likelihood for Mu 1, no surprises, is sort of kind of what you'd expect, sum up all of the positive examples and divide by the total number of positive examples and get the means. So that's maximum likelihood for Mu_1, um, and then I just write this out. If you are familiar with covariance matrices, this formula may not surprise you. But if you're less familiar, then I guess you can see the details in the homework. Okay. Don't worry too much about that. Ah, you can unpack the details in the lecture notes. So we'll know how it works, okay? But the covariance matrix, basically tries to, you know, fit contours to the ellipse, right? Like we saw, ah, so- so try to fill the Gaussian to both of these with these corresponding means but you want one covariance matrix to both of these. Okay? Um, So these are the- so- so- so the way- so the way I motivated this was, you know, I said, well, if you want to estimate the mean of a coin toss, just count up the fraction of coin tosses, they came up heads, ah, and then it seems that the mean for Mu_0 and Mu_1, you just look at these examples and pick the mean, right? So that- that was the intuitive explanation for how you get these formulas. 
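Putting the closed-form estimates together, a minimal sketch of fitting GDA by maximum likelihood might look like the following; fit_gda is a hypothetical name, and it returns the fraction of positive labels for phi, the per-class feature means for mu_0 and mu_1, and the pooled covariance of the residuals for Sigma.

import numpy as np

def fit_gda(X, y):
    # closed-form maximum likelihood estimates
    phi = np.mean(y == 1)                      # fraction of positive labels
    mu0 = X[y == 0].mean(axis=0)               # average features of the negative class
    mu1 = X[y == 1].mean(axis=0)               # average features of the positive class
    resid = X - np.where((y == 1)[:, None], mu1, mu0)
    Sigma = resid.T @ resid / X.shape[0]       # one pooled covariance for both classes
    return phi, mu0, mu1, Sigma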
But the mathematically sound way to get these formulas is not by this intuitive argument that I just gave, it's instead to look at the likelihood, ah, take logs, get the log likelihood, take derivatives, set derivatives equal to 0, solve for all these values and prove more formally that these are the actual values that maximize this thing, right? By- by the same theories as you solved, so you can see that for yourself, um, in the problem sets. Okay? So- All right. Um, finally, having fit these parameters, um, if you want to make a prediction, right? So given the new patient, ah, how do you make a prediction for whether their tumor is malignant or benign? Um, so if you want to predict the most likely class label, ah, you choose max over y, of p of y, given x, right? Um, and by Bayes' rule, this is max over y of p of x given y, p of y divided by p of x. Okay? Now, um, I wanna introduce one esh- well, one- one more piece of notation which is, ah, I wanna introduce, actually, how- how many of you are familiar with the arg max notation? Most of you? Like two- two-thirds? Okay, cool. I- I- I'll go over this quickly. So, um, this is just an example. So the, um, let's see. Ah, boy. All right. So, you know, the Min over z of, uh, z minus 5 squared is equal to 0 because the smallest possible value of z by a 5 squared is 0, right? and the arg min over z of z minus 5 squared is equal to 5. Okay? So the min is the smallest possible value attained by the thing inside and the arg min is the value you need to plug in to achieve that smallest possible value, right? So ah, the prediction you actually want to make, if you want to output a value for y, you don't wanna output a probability, right? You know what I'm saying? Well, what do I think is the value of y? So you might choose a value of y that maximizes this, and- so- so there's the arg max of this and this would be either 0 or 1, right? Um, so that's equal to arg max of that, and you notice that, ah, this denominator is just a constant, right? It doesn't- it doesn't- it's a p of x, it's- y doesn't even appear in there? It's just some positive number. And so this is equal to, just arg max over y, p of x given y times p of y, okay? So when implementing, um, ah, when- when making predictions with Gaussian disc- in a- with the generative learning algorithms, sometimes to save on computation, you don't bother to calculate the denominator, if all you care about is to make a prediction, but if you'd actually need a probability, then you'd have to normalize the probability, okay? Okay. So let's examine what the algorithm is doing. [NOISE]. All right. So let's look at the same dataset and compare and contrast what a discriminative learning algorithm versus a generative learning algorithm will do on this dataset. Right. Um, here's example with two features X1 and X2 and positive and negative examples. So let's start with a discriminative learning algorithm. Um, let say you initialize the parameters randomly. Typically, when you run a logistic regression, I almost always initialize the parameters as 0 but- but this just, you know, it's more interesting to start off for the purposes of visualization, with a random line I guess. And then if you run one iteration of gradient descent on the conditional likelihood, um, one iteration of logistic regression moves the line there. There's two iterations, three iterations, um, four iterations and so on and after about 20 iterations it will converge to that pretty decent discriminative boundary. 
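Going back to the prediction rule, a small sketch of arg max over y of p(x | y) p(y) might look like this; because the two classes share Sigma, the Gaussian normalization constants are identical and can be dropped from the comparison, which is also why the shared denominator p(x) never needs to be computed.

import numpy as np

def predict(x, phi, mu0, mu1, Sigma):
    # arg max_y p(x|y) p(y); the shared Gaussian normalizer and the
    # denominator p(x) are the same for both classes, so they are dropped
    def score(mu, prior):
        d = x - mu
        return -0.5 * d @ np.linalg.solve(Sigma, d) + np.log(prior)
    return int(score(mu1, phi) > score(mu0, 1 - phi))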
So that's logistic regression, really searching for a line that separates positive and negative examples. How about the generative learning algorithm? What it does is the following, which is fit with Gaussian discriminant analysis. What we'll do, is fit Gaussians to the positive and negative examples. Right, and just one- one technical detail, um, I described this as if we look at the two classes separately because we use the same covariance matrix sigma for the positive and negative classes. We actually don't quite look at them totally separately but we do fit two Gaussian densities to the positive and negative examples. And then what we do is, for each point try to decide whether this is class label using Bayes' rule, using that formula and it turns out that this implies the following decision boundary. Right. So points to the upper right of this decision boundary, to that straight line I just drew, you are closer to the negative class. You end up classifying them as negative examples and points to the lower left of that line, you end there classifying as- as a positive examples. And I've- I've also drawn in green here the decision boundary for logistic regression. So- so- so these two algorithms actually come up with slightly different decision boundaries. Okay, but the way you arrive at these two decision boundaries are a little bit different. So, um. All right, let's go back to the- Any questions about this? Yeah. [NOISE] [inaudible]. Oh, sure yes, good question. So why- why- why do we use two separate means, mu 0 and mu 1 and a single covariance matrix sigma? It turns out that, um-. It turns out that if you choose to build the model this way, the decision boundary ends up being linear and so for a lot of problems if you want to linear decision boundary, um, uh, um, yeah. And it turns out you could choose to use two separate, um, covariance matrix sigma 0 and sigma 1, and they'll actually work okay. Right. There's- it is actually very reasonable to do so as well, but you double the number of parameters roughly and you end up with a decision boundary that isn't linear anymore. But it is actually not an unreasonable algorithm to do that as well. Um, now, there's one- [BACKGROUND]. Now, there's one very interesting property, um, about Gaussian discriminant analysis and it turns out that's- ah. Well, let's- let's compare GDA to logistic regression and, um, for a fixed set of parameters. Right. So let's say you've learned some set of parameters. Um, I'm going to do an exercise where we're going to plot, P of Y equals 1 given X, you're parameterized by all these things, right, as a function of x. So I'm gonna do this little exercise in a second, but what this means is, um, well, this formula, this is equal to P of X given Y equals 1, you know, which is parameterized by- right well, the various parameters times p of y equals 1, is parameterized by phi divided by P of X which depends on all the parameters, I guess. Right. So by Bayes rule, you know this formula is equal to this little thing and just as we saw earlier, I guess right. Once you have fixed all the parameters that's just a number you compute by evaluating the Gaussian density. Um, this is the Bernoulli probability, so actually P of Y equals 1 parameterized by phi is just equal to phi is that second term and you similarly calculate the denominator. But so for every value of x, you can compute this ratio and thus get a number for the chance of Y being 1 given X. 
So I'm gonna go through one example of what function you'd get for P of Y equals 1 given X, for what function you get for this if you actually plot this for, um, different values of X. Okay. So, um, let's see. Let's say you have just one feature X, so X is a- a- and let's say that you have a few negative examples there and a few positive examples there. Right. So it's a simple dataset. Okay, and let's see what Gaussian discriminant analysis will do on this dataset. Um, with just one feature so that's why all the data is parsing on 1D. So let me map all this data to an x-axis. I just filled this data and mapped it down. And if you fit a Gaussian to each of these two data sets then you end up with, you know, Gaussians as follows where this bump on the left is P of X given Y equals 0 and this bump on the right is P of X given Y equals 1. Right, and- and again just to check on all details that we set the same variance to the two Gaussians, but you know, you kinda model the Gaussian densities of what does this class 0 look like? What does class 1 look like with two Gaussian bumps like this? Then because the dataset is split 50-50 P of Y equals 1 is 0.5. Right, so one half prior. Okay. Now, let's go through that exercise I described on the left of trying to plot P of Y equals 1 given X for different values of X. So the vertical axis here as P of Y equals 1 given different values of X. So, um, let's pick a point far to the left here. Right. With this model you- if you actually calculate this ratio you find that if you have a point here, it almost certainly came from this Gaussian on the left. If- if you have an unlabeled example here, you're almost certain it came from the class 0 Gaussian because the chance of this Gaussian generating example all the way to left is almost 0. Right, and so chance of P- P of Y equals 1 given X is very small. So for a point-like that, you end up with a point you know, very close to 0, right. Um, let's pick another point. Right, how about this point, the midpoint. Well, if you're getting example right at the midpoint, you- you really have no idea. You really can't tell. Did this come from the negative or the positive Gaussian? Can't tell. Right. So this is really 50-50. So I guess if this is 0.5 for that midpoint you would have P of Y equals 1 given X is 0.5. Um, then if you go to a point away to the variance, if you get an example way here, then you'd be pretty sure this came from the positive examples and so, you know, you get a point like that. Right. Now, it turns out that if you repeat this exercise sweeping from left to right for many many points on the X axis you find that, for points far to the left, the chance of this coming from, um, the Y equals 1 class is very small and as you approach this midpoint, it increases to 0.5 and it surpasses 0.5. And then beyond a certain point, it becomes very very close to 1. Right, and you do this exercise and actually just for every point, you know, for a dense grid on the x-axis evaluate this formula which will give you a number between 0 and 1. Is the probability and go ahead and plot, you know, the values you get a curve like this. It turns out that if you connect up the dots, um, then this is exactly a sigmoid function. The shape of that turns out to be exactly a shaped sigmoid function and you prove this in the problem sets as well. Right. 
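A quick way to convince yourself of this numerically is to sweep x across a grid and evaluate Bayes' rule at every point; with the made-up 1-D parameters below (two unit-variance Gaussians and a 50/50 class prior), the resulting curve is exactly a sigmoid.

import numpy as np

# made-up 1-D fit: two unit-variance Gaussians and a 50/50 class prior
mu0, mu1, sigma2, phi = -1.0, 1.0, 1.0, 0.5

def p_y1_given_x(x):
    # Bayes' rule evaluated point by point along the x-axis
    p1 = np.exp(-(x - mu1) ** 2 / (2 * sigma2)) * phi
    p0 = np.exp(-(x - mu0) ** 2 / (2 * sigma2)) * (1 - phi)
    return p1 / (p0 + p1)

xs = np.linspace(-4, 4, 9)
print(np.round(p_y1_given_x(xs), 3))
# near 0 far to the left, exactly 0.5 at the midpoint x = 0, near 1 far to the right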
Um, so both logistic regression and Gaussian discriminant analysis actually end up using a sigmoid function to calculate P of y equals 1 given x, or the outcome ends up being a sigmoid function; I guess the mechanics is that you actually use this calculation rather than compute a sigmoid function directly. But the specific choices of parameters they end up with are quite different, and you saw when I was projecting the results on the display just now in PowerPoint that the two algorithms actually come up with two different decision boundaries. So let's discuss when a generative algorithm like GDA is superior and when a discriminative algorithm like logistic regression is superior. Let's see if I can get rid of this. [BACKGROUND] All right. So GDA, Gaussian Discriminant Analysis, the generative approach, assumes that x given y equals 0 is Gaussian with mean mu_0 and covariance Sigma, it assumes x given y equals 1 is Gaussian with mean mu_1 and covariance Sigma, and y is Bernoulli with parameter phi. And what logistic regression does- [NOISE] this is a discriminative algorithm- uh, there is some [LAUGHTER] strange wind at the back, is it? Yeah. I see. Okay. Cool. All right. Yeah. Why? You know, there's just a scary UN report on global [LAUGHTER] warming over the weekend, I hope we don't already have storms here. Okay. It's okay. Did you guys see the UN report? It's slightly scary actually, the UN report on global warming, but hopefully- all right. Good. Hurricane stopped. [LAUGHTER] Let's see. So what logistic regression assumes is that p of y equals 1 given x is governed by a logistic function, so this is really 1 over 1 plus e to the negative theta transpose x, with some details about x_0 equals 1 and so on. So in other words, it assumes that p of y equals 1 given x is logistic. And the argument that I just described, plotting p of y equals 1 given x point by point to get the sigmoid curve I drew on the other board - what that illustrates, it doesn't prove it, you prove it yourself in a homework problem, but what that illustrates is that this set of assumptions implies that p of y equals 1 given x is governed by a logistic function. But it turns out that the implication in the opposite direction is not true. So if you assume p of y equals 1 given x is governed by a logistic function, by this shape, this does not in any way, shape or form assume that x given y equals 0 is Gaussian and x given y equals 1 is Gaussian. So what this means is that GDA, the generative learning algorithm in this case, makes a stronger set of assumptions, and logistic regression makes a weaker set of assumptions, because you can prove the second set of assumptions from the first. And so what you see in a lot of learning algorithms is that if you make strong modeling assumptions, and if your modeling assumptions are roughly correct, then your model will do better, because you're telling more information to the algorithm. So if indeed x given y is Gaussian, then GDA will do better because you're telling the algorithm that x given y is Gaussian and so it can be more efficient. And so even with a very small dataset, if these assumptions are roughly correct, then GDA will do better.
And the problem with GDA is, if these assumptions turn out to be wrong. So if x given y is not at all Gaussian, then this might be a very bad set of assumptions to make. You might be trying to fit a Gaussian density to data that is not at all Gaussian and then GDA would do more poorly. Okay. So here's one fun fact. Here's another example, get to your question in a second, which is let's say the following are true; let's say that x given y equals 1 is Poisson with, uh, parameter Lambda_1 and x given y equals 0 is Poisson with mean, uh, Lambda_0, or lambda_1 not 0 and y, as before, is Bernoulli 5x. Right. It turns out that this set of assumptions also imply that p of y equals 1 given x. This is logistic, okay, and you can prove this. And this is actually true for, um, any generalized linear model, actually where, uh, where- where, uh, the difference between these two distributions varies only according to the natural parameter as a generalized name. Excuse me, of the exponential family distribution. Right. And so what this means is that, um, if you don't know if your data is Gaussian or Poisson, um, if you're using logistic regression you don't need to worry about it. It'll work fine either way. Right. So- so, you know, maybe, um, you are fitting data to s- maybe a fitting, uh, uh, a model, binary classification model to some data. And you don't know, is a data Gaussian? Is it Poisson? Is this some other exponential family model? Maybe you just don't know. But if you're fitting logistic regression, it- it'll do fine under all of those scenarios. Right. But if your data was actually Poisson but you assumed it was Gaussian, then your model might do quite poorly. Okay. So the key high level principles when you take away from this is, um, uh, uh, if you make weaker assumptions as in logistic regression, then your algorithm will be more robust to modeling assumptions such as accidentally assuming the data is Gaussian and it is not. Uh, but on the flip side, if you have a very small dataset, then, um, using a model that makes more assumptions will actually allow you to do better because by making more assumptions you're just telling the algorithm more truth about the world which is, you know, "Hey, algorithm, the world is Gaussian," and if it is Gaussian, then it will actually do- do- do better. Okay. Your question at the back or a few questions. Go ahead. Just from that, is there a point do you know like what sort of data it usually has a Gaussian problem? Oh, oh, yeah. Practical sample without data is a Gaussian probably, you know, it's, uh, uh- yeah, you know, it's a matter of degree. Right. Most data on this universe is Gaussian [LAUGHTER] uh, uh, uh, except at this feed data, I guess. Yeah, but- but, um- I think it's actually a- a matter of degree. Right. If- if you plot- actually if you take continuous value data- no, ther- ther- there are exceptions. You could plot it and most data that you plot, you know, will not really be Gaussian but a lot of it you can convince yourself is vaguely Gaussian. So I think a lot of it is amount of degree. I- I- I actually tell you the way I choose to use, um, these two algorithms. So I think that the whole world has moved toward using bigger than three datasets. Right. Digital Civil Society which is a lot of data and so for a lot of problems we have a lot of data, I would probably use logistic regression. Because with more data, you could overcome telling the algorithm less about the world. Right. So- so the algorithm has two sources of knowledge. 
Uh, one source of knowledge is what you told it, what assumptions you told it to make, and the second source of knowledge is what it learns from the data. And in this era of big data, we have a lot of data, so there is a strong trend to use logistic regression, which makes fewer assumptions and just lets the algorithm figure out whatever it wants to figure out from the data. Right. Now, one practical reason why I still use algorithms like GDA, Gaussian discriminant analysis, is that it's actually quite computationally efficient, and there's actually one use case at Landing AI that we're working on where we just need to fit a ton of models and don't have the patience to run logistic regression over and over. It turns out computing means and covariance matrices is very efficient, so apart from the assumptions type of benefit, which is a general philosophical point we'll see again later in this course - this idea about whether you make strong or weak assumptions is a general principle in machine learning that we'll see again in other places - the very concrete other reason I tend to use GDA these days is less that I think it performs better from an accuracy point of view, but that it's actually a very efficient algorithm: we just compute the means and the covariance and we are done, and there's no iterative process needed. So these days when I use these models, it's more motivated by computation and less by performance. But this general principle is one that we'll come back to again later when we develop more sophisticated learning algorithms. Yeah. Uh, if the data is generated from Gaussians but my covariance matrices are different, what happens with the assumption that we just use the same covariance? Oh, right, so what happens when the covariance matrices are different? It turns out that, trying to remember, it still ends up being a logistic function but with a bunch of quadratic terms in it. So it's not a linear decision boundary anymore; you can end up with a decision boundary that looks like this, right, with positive and negative examples separated by some other shape than a linear decision boundary. Actually, if you're curious, I encourage you to fire up Python NumPy, play around with the parameters and plot this for yourself. Questions? Is it recommended that we use some kind of statistical test to make sure that the two distributions have equal variance before we do GDA? Yeah, it's recommended that you do some statistical tests to see if it's Gaussian; I can tell you what's done in practice. I think in practice, if you have enough data to do a statistical test and gain conviction, you probably have enough data to just use logistic regression, uh, I don't know. [LAUGHTER] Well no, that's not really fair, I don't know. If it's very high dimensional data, I think what often happens more is people just plot the data, and if it looks clearly non-Gaussian, then there will be reasons not to use GDA. But what happens often is that sometimes you just have a very small training set and it's just a matter of judgment, right?
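For anyone who does want to fire up NumPy and try this, here is one possible sketch with made-up parameters: with two different covariance matrices the log-odds picks up quadratic terms in x, so the zero level set (the decision boundary) is a conic rather than a straight line.

import numpy as np

# made-up parameters with *different* covariance matrices for the two classes
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
S0 = np.array([[1.0, 0.0], [0.0, 1.0]])
S1 = np.array([[0.3, 0.0], [0.0, 2.0]])
phi = 0.5

def log_odds(x):
    # log [ p(y=1|x) / p(y=0|x) ]; with S0 != S1 the quadratic terms in x no
    # longer cancel, so the boundary {log_odds(x) = 0} is curved, not a line
    def logN(x, mu, S):
        d = x - mu
        return -0.5 * (d @ np.linalg.solve(S, d) + np.log(np.linalg.det(S)))
    return logN(x, mu1, S1) - logN(x, mu0, S0) + np.log(phi / (1 - phi))

# evaluate on a coarse grid and look at where the predicted label flips
# (plotting the zero contour with matplotlib shows the curved boundary)
grid = [np.array([a, b]) for a in (-1.0, 0.0, 1.0, 2.0, 3.0) for b in (-2.0, 0.0, 2.0)]
print([int(log_odds(p) > 0) for p in grid])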
Like if you have, you- if you have, uh, uh, uh, you know, I don't know, 50 examples of healthcare records, then you just have to ask some doctors and ask, "Well, do you think the distribution is rath- rath- relatively Gaussian," and use domain knowledge like that. Right? I think- by the way a- another philosophical point, um, I think that, uh, the machine learning world has, frank- you know, a little bit overhyped big data, right? And- and yes it's true that when we have more data, it's great and I love data and a- um, having more data pretty much never hurts and usually the more data the better, so all that is true. And I think we did a good job telling people that high-level message, you know, more data almost always helps. But, um, uh, I think a lot of the skill in machine learning these days is getting your algorithms to work even when you don't have a million in examples, even you don't have a hundred million examples. So there are lots of machine learning applications where you just don't have a million examples, uh, you have a hundred examples and, um, it's then the skill in designing your learning algorithm matters much more. Um, so if you take something like ImageNet, mi- million in- in- images, there are now dozens of teams, maybe hundreds of teams, I don't know. They can get great results. They give a million examples, right? and so the performance difference between teams, you know, there are now dozens of teams that get great performance, if a million examples, uh, for- for- for image classification, for ImageNet. But if you have only a hundred examples, then the high-skilled teams will actually do much, much, much, much better than the low skilled teams, whereas the performance gap is smaller when you have giant data sets I think, so and I think that it's these types of intuitions, you know, what assumptions you use, generative or discriminative, that actually distinguishes the high-skilled teams and, uh, and, uh, and the less experienced teams and drives a lot of the performance differences when you have small data. Oh, and if someone goes to you and says, "Oh you only have a hundred examples, you'll never do anything." Uh, then I don't know, if- if there's a competitor saying that, then I'll say, "Great, you know, don't do it because I can make it work." Uh, well, I don't know. Uh, but- but I think there are a lot of applications where your skill at designing a machine learning system, really makes a bigger difference when you have a- make- makes a- it makes a difference from big data and small data, but it just- this is a very clear where you don't have much data, is the assumptions you code into the algorithm like, is it Gaussian, is it Poisson? That- that skill allows you to drive much bigger performance than a lower-skill team would be able to. All right. This is- uh- uh- coul- could- I should still take questions from all of you. Yeah, go ahead. Um, what's the implication when [inaudible]. Oh, sure. So does this, uh, yes, so what's the general statement of this? Yes, so if, uh, x given y equals 1, uh, it comes from an exponential family distribution, x given y equals 0 comes to an exponential family distribution, it's the same exponential family distribution and if they vary only by the natural parameter of the exponential family distribution, then this will be logistic. Yeah. Um, I think this was once a midterm homework problem to prove this actually? But, yeah. All right, uh, actually let's take one last question then we move on, go ahead. 
Uh, if performance [inaudible] Oh, uh, does performance improvement happen even as you increase the number of classes? Uh, ye- I think so yes, uh, and the generalization of this would be the Softmax Regression which I didn't talk about. But yes. I think it's a similar thing holds true for, um, GDA for multiple- and we have so far we're going to talk about Binary Classification, whether you have more than two classes. But, uh, but yes, similar- similar things holds true for, uh, like a GDA with three classes and Softmax. Yeah. Oh yes, right. You saw Softmax the other day. Cool. Um, and this- this theme that when you have less data the algorithm needs to rely more on assumptions you code in. This is a recurring theme that we'll come back to it as well. This is one of the important principles of machine learning, that when you have less data your skill at coding and your knowledge matters much more. Uh, this is a theme we'll come back to you when we talk about much more complicated learning algorithms as well. All right. So, uh, I want a fresh board for this. So you've seen GDA in the context of, um, continuous valued, uh, features x. The last thing I want to do today, um, is talk about one more generative learning algorithm called Naive Bayes, um, and I'm gonna use as most of the example; e-mail spam classification, but this- this is- this- I guess this is our first foray into natural language processing, right? But given in a piece of text, like given a piece of email, can you classify this as spam or not spam? Or, uh, other examples, uh, uh, actually several years ago, Ebay, used to have a problem of, you know, the- if someone's trying to sell something and you write a text description, right? "Hey, I have a secondhand, you know, Roomba, I'm trying to sell it on Ebay." How do you take that text that someone wrote over the description and categorize it, is it an electronic thing or are they trying to sell a TV? Are they trying to sell clothing? Uh, but these- these examples of text classification problems, we have a piece of text and you want to classify into one of two categories for spam or not spam or one of maybe thousands of categories, and they're trying to take a product description and classify it into one of the classes. Um, and so the first question we will have is, um, uh, given the e-mail problem, uh, given the e-mail classification problem, how do you represent it as a feature vector? And so, um, in Naive Bayes what we're going to do is take your e-mail, take a piece of e-mail and first map it to a feature vector X. And we'll do so as follows, which is first, um, let's start with a- let's start with the English dictionary and make a list of all the words in the English dictionary, right? So first of all there's the English dictionary as A, second word in the English edition is aardvark. Third word is aardwolf. [BACKGROUND] No, it's easy, look it up. [NOISE] [LAUGHTER] Um, and then, you know, uh, uh, e- e-mail spam lot of people asking to buy stuff so that they would buy, right? And then, um, uh, and then the last word in my dictionary is zymurgy, which is the technological chemistry that refers to the fermentation process in brewing. Um, So- so- I think it is a useful way to think about it, in- in- in- practice, what you do is not, uh, uh, actually look at the dictionary but look at the top 10,000 words, you know, in your training set. Right? 
So maybe you have 10,000, it's easier to think about it as if it was a dictionary but, you know, in practice, well, you- the other thing that's- dictionary has too many words, but where- the other way to do this is to look through your own e-mail co-pairs and just find the top 10,000 occurring words and use that as a feature set, and so I don't know. Right? And your e-mails, I guess you're getting a bunch of e-mail about- from us or maybe others about CS229. So CS229 might appear in your dictionary of building your e-mail spam filter for yourself, even if it doesn't appear in the- in the official, uh, was it like the Oxford dictionary, just yet just- just- just- you wait, we'll- we- we'll get CS229 there someday. All right. Um, and so given an e-mail, what we would like to do is then, um, take this piece of text and represent it as a feature vector. And so one way to do this is, um, you can create a binary feature vector, that puts a 1, if a word appears in the e-mail and puts a 0 if it doesn't. Right? So if you've gotten an e-mail, um, uh, that asks you to, you know, buy some stuff and then the word A appears in e-mail, you put a 1 there. Did not try to sell aardvark or aardwolf, so 0 there, buy and so on. Right? So you take a- take an e-mail and turn it into a binary feature vector. Um, and so here the feature vector is 0, 1 to the n, because there's a n-dimensional binary feature vector, where- where for the purpose of illustration, let's say, n is 10,000 because you're using, you know, take the top 10,000 words, uh, that appear in your e-mail training set as the dictionary that you will use. So, um. So in other words, X_i is indicator word i appears in the e-mail, right? So it's either 0 or 1 depending on whether or not that word i from this list appears in your e-mail. Now, um, in the Naive Bayes algorithm, we're going to build a generative learning algorithm. Um, and so we want to model P of x given y, right? As well as P of y, okay? But there are, uh, 2 to the 10,000 possible values of x, right? Because x is a binary vector of this 10,000 dimensional. So we try to model P of x in the straightforward way as a multinomial distribution over, you know, 2 to the 10,000 possible outcomes. Then you need, right, uh, uh, you need, you know 2 to the 10,000 parameters, right? Which is a lot, or technically, you need 2 to 10,000 minus 1 parameter because that adds up to 1, and you can see one parameter. But so, modeling this without additional assumptions won't- won't work, right, because of the excessive number of parameters. So in the Naive Bayes algorithm, we're going to assume that X_i's are conditionally independent given y, okay? Uh, let me just write out what this means, but so P of x_1 up to x_10,000 given y by the chain rule of probability, this is equal to P of x_1 given y times P of x_2 given, um, x_1 and y times p of x_3 given x_1, x_2 Y up to your p of x_10,000 given, and so on, right? So I haven't made any assumptions yet. This is just a true statement of fact as always true by the- by the chain rule of probability. Um, and what we're going to assume which is what this assumption is, is that this is equal to this first term no change the x_2 given y p of x_3 given y and so on, p of X_10,000 given y, okay? So this assumption is called a conditional independence assumption it's also sometimes called the Naive Bayes assumption. 
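As a rough sketch of the representation just described - building the dictionary from the top 10,000 training-set words and mapping an e-mail to a binary feature vector - something like the following would do; the tokenization by lowercasing and splitting on whitespace is a simplifying assumption, not part of the lecture:

import numpy as np
from collections import Counter

def build_vocab(train_emails, k=10000):
    # keep the k most frequent words in the training e-mails as the "dictionary"
    counts = Counter(w for email in train_emails for w in email.lower().split())
    return [w for w, _ in counts.most_common(k)]

def email_to_features(email, vocab):
    words = set(email.lower().split())
    # x_j = 1 if word j of the dictionary appears in the e-mail, 0 otherwise
    return np.array([1 if w in words else 0 for w in vocab])

Under the Naive Bayes assumption, p of this whole binary vector given y then factors into the product of the per-word terms p of x_j given y, which is exactly the equation on the board.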
But you're assuming that, um, so long as you know y, the chance of seeing the word, um, aardvark in your e-mail does not depend on whether the word "a" appears in your e-mail, right? Um, and this is one of those assumptions that is definitely not a true assumption - it's just not a mathematically true assumption - just like sometimes your data isn't perfectly Gaussian, but if it's roughly Gaussian you can kind of get away with it. So this assumption is not true, um, in a mathematical sense, but it may be not so horrible that you can't get away with it, right? Um, and so- so- so it's like, if you- if any of you are familiar with probabilistic graphical models, if you've taken CS-228, uh, this assumption is summarizing this picture, and if you haven't taken CS-228 this picture won't make sense, but don't worry about it. Um, right, that, uh, once you know the class label is a spam or not spam, whether or not each word appears or does not appear is independent, okay? So this is called conditional independence. So the mechanics of this assumption is really just captured by this equation, um, and you just use this equation, that's all you need to derive Naive Bayes. But the intuition is that if I tell you whether this piece- if I tell you that this piece of e-mail is spam, then whether the word buy appears in it doesn't affect your beliefs about whether the word mortgage or discount or whatever spammy words appear, right? So just to summarize, this is the product from i equals 1 through n of p of x_i given y. All right, so the parameters of this model, um, are, I'm going to write it, phi subscript, um, j given y equals 1 as the probability that x_j equals 1 given y equals 1, phi subscript j given y equals 0, and then phi. And just to distinguish all these phi's from each other, we can just call this phi subscript y, okay? So this parameter says, if a spam e-mail, if y equals 1 is spam and y equals 0 is not spam. If it's a spam e-mail, what's the chance of word j appearing in the e-mail? If it's not a spam e-mail, what's the chance of word j appearing in the e-mail? Then also, what's the class prior, what's the prior probability that the next e-mail you receive in your, uh, in your- in your inbox is a spam e-mail? And so to fit the parameters of this model, you would s- similar to Gaussian discriminant analysis, write down the joint likelihood. So the joint likelihood of these parameters, right? Is a product, you know, given these parameters, right? Similar to what we had for Gaussian discriminant analysis. And the maximum likelihood estimates, um, if you take this, take logs, take derivatives, set derivatives to 0, solve for the values that maximize this, you find that the maximum likelihood estimates of the parameters are, phi_y, this is pretty much what you'd expect, right? It's just the fraction of spam e-mails and, uh, phi of j given y equals 1 is, um, well, I'll write this out in indicator function notation. Oh, shoot, sorry. Okay. So that's the indicator function notation for writing this. Look through your, uh, training set, find all the spam e-mails, and of all the spam e-mails, i.e., examples where y equals 1, count up what fraction of them had word j in it, right? So you estimate that the chance of word j appearing- you estimate the chance of the word buy appearing in a spam e-mail is just: take all the spam e-mails in your training set, what fraction of them contain the word buy? What- what fraction of them had, you know, x_j equals 1 for, say, the word buy, okay?
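The counting version of those maximum likelihood estimates is short enough to sketch directly; here X is assumed to be the (m, n) binary word-appearance matrix from above and y the 0/1 labels, with 1 meaning spam:

import numpy as np

def fit_naive_bayes(X, y):
    # maximum likelihood estimates, all obtained by counting
    phi_y = y.mean()                         # fraction of e-mails that are spam
    phi_j_given_1 = X[y == 1].mean(axis=0)   # fraction of spam e-mails containing word j
    phi_j_given_0 = X[y == 0].mean(axis=0)   # fraction of non-spam e-mails containing word j
    return phi_y, phi_j_given_1, phi_j_given_0

Note that if some word never shows up in any spam e-mail, its estimate comes out exactly 0 - which is the zero-probability problem raised just below and addressed by Laplace smoothing.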
Um, and so it turns out that if you implement this algorithm, it will- it will nearly work, I guess, uh, uh, but this is Naive Bayes for, um, for e-mail spam classification, right? And it turns out that with one fix to this algorithm, which we'll talk about on Wednesday, um, this is actually, it's actually a not too horrible spam classifier. It turns out that if you use logistic regression for spam classification you do better than this almost all the time. But this is a very efficient algorithm, because estimating these parameters is just counting, and then computing probabilities is just multiplying a bunch of numbers. So there's nothing iterative about this. So you can fit this model very efficiently and also keep on updating this model even as you get new data, even as new- new- new users hit mark-as-spam or whatever; even as you get new data, you can update this model very efficiently. Um, but it turns out that, uh, actually, the biggest problem with this algorithm is, what happens if, uh, this is zero, or if- if you get zeros in some of these equations, right? But we'll come back to that when we talk about Laplace smoothing on Wednesday, okay? All right, any quick questions before we wrap up? Okay, okay good. So now you've learned about generative learning algorithms, um, we'll come back on Wednesday and learn about some more fine details of how to make this work even better. So let's break, I'll see you on Wednesday. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_12_Backprop_Improving_Neural_Networks_Stanford_CS229_Machine_Learning_Autumn_2018.txt | Hi everyone. [NOISE] Welcome, welcome to the second lecture on deep learning for CS229. So a quick announcement before we start. There is a Piazza post Number 695 which is the mid-quarter survey for CS229, so fill it in when you have time. Okay. So let's get back to deep learning. So last week together we've seen, uh, what a neural network is and we started by defining the logistic regression from a neural network perspective. We said that logistic regression can be viewed as a one-neuron neural network where there is a linear part and an activation part which was sigmoid in that case. We se- we've seen that sigmoid is a common activation function to be used for classification tasks because it casts a number between minus infinity and plus infinity in 0, into 0, 1 interval which can be interpreted as a probability. And then we introduced the neural network, so we started to stack some neurons inside a layer and then stack layers on top of each other and we said that the more we stack layers the more parameters we have, and the more parameters we have, the more our network is able to copy the complexity of our data because it becomes more flexible. So, uh, we stopped at a point where we did a forward propagation, we had an example during training, we forward propagated through the network, we get the output, then we compute the cost function which compares this output to the ground truth, and we were in the process of backpropagating the error to tell our parameters how they should move in order to detect cats more properly. Does that make sense for this part? So today, we're going to continue that. So we're in the second part, neural networks, we're going to derive the backpropagation with the chain rule and after that, ah, we're going to talk about how to improve our neural networks. Because in practice, it's not because you designed a neural network that it's going to work, there's a lot of hacks and tricks that you need to know in order to make a neural network work. Okay, let's go. So first thing that we talked about is in order to define our optimization problem and find the right parameters, we need to define a cost function, and usually we said we would use the letter j to denote the cost function. So here, when I talk about cost function, I'm talking about the batch of examples. It means I'm forward propagating m examples at a time. You remember why we do that? What's the reason we use a batch instead of a single example? Vectorization. We want to use what our GPU can do and parallelize the computation. So that's what we do. So we have m examples that go- forward propagate in the network. And each of them has a loss function associated with them, the average of the loss functions over the batch give us the cost function. And we had defined these loss function together. L of i. Assuming we're still, and just as a reminder, we're still in this network where, where we had a cat, remember? This one. Remember this guy. x_1 to x_n. The cat was flattened into a vector, RGB matrix into one vector and then there was a neural network with three neurons, then two neurons, then one neuron. Remember? Fully-connected here. Everything. Up, up, and then we add y hat. You remember this one? I think that was this one here. Yeah, okay. 
So now, we're here, we take m images of cats or non-cats, forward propagate everything in the network, compute our loss function for each of them, average it, and get the cost function. So our last function was the binary cross-entropy or also called the loss function- the logistic loss function and it was the following. y_i log of y hat i plus 1 minus y_i log of 1 minus y hat i. So let me circle this one, it's an important one. And what we said is that this network has many parameters. And we said, the first layer has w_1, b_1, the second layer has w_2, b_2, and the third layer has w_3, b_3 where the square brackets dis- denotes the layer. And we have to train all these parameters. One thing we notice is that because we want to make a good use of the chain rule, we're going to start by, by computing the derivative of these guys, w_3 and b_3 and then come back and do w_2 and b_2 and then back again w_1 and b_1. In order to use our formulas of the update of the gradient descent where w would be equal to w minus Alpha derivative of the cost with respect to w and this for any layer l between 1 and 3, same for b. Okay, so let's try to do it. This is the first number we want to compute. And remember, the reason we want to compute derivative of the cost with respect to w_3 is because the relationship between w_3 and the cost is easier than the relationship between w_1 and the cost because w_1 had much more connection going through the network before ending up in the cost computation. So one thing we should notice before starting this calculation is that the derivative is linear. So this, if I take the derivative of j, I can just take the derivative of l, and it's the same thing, I just need to add the summation prior to that because derivative is a linear operation. That makes sense to everyone? So instead of computing this, I'm going to compute that and then I will add the summation, it will just make our notation easier. So I'm taking the derivative of a loss of one example propagated to the network with respect to w_3. So let's do the calculation together. I have a 1, I have a minus y_i derivative with respect to w_3, of what? We remember that y hat was equal to sigmoid of w_3 x plus b or w_3 a_2 plus b because a_2 is the input to the second layer, remember. So I would write it down here, sigmoid of w_3 a_2 plus b_3. Okay? Yeah. It's good like that? It's too small? w_3 a_2 plus b_3. It's good like that, yeah? Okay. So we have this term and then we have the second term which is plus 1 minus y_i times derivative of w_3. Derivative with respect to w_3 of 1. Oh sorry, I forgot the logarithm here. Of log of 1 minus sigmoid of w_3 a_2 plus b_3. And so just a reminder, the reason we have this is because we've written the forward propagation in the previous class. You guys remember the pro- forward propagation? We had z_3, which took a_2 as inputs and computed the linear part, as sigmoid is- is the activation function used in the last neuron over here. Okay. So let's try to- to compute this derivative. y_i, so the derivative of log, [NOISE] log prime equals 1 over log. Remember this- this- this formula, so I will just take 1 over, sorry, 1 over x minus- 1 over x if you put an x here. So log prime of x. So I will take one over sigmoid of w_3 a_2 plus b_3. I know that thing can be written a_3, right? So I will just write a_3 instead of writing the single a again. So we have 1 over a_3 times the derivative of a_3 with respect to w_3. We remember that, I'm going to write it down here. 
If we take the derivative of sigmoid of blah, blah, blah. Let's say, derivative of log of sigmoid over w. What we have is 1 over the sigmoid times the derivative with respect to w_3 of the sigmoid. Does that makes sense? That's what we're using here. So the derivative of sigmoid, sigmoid-prime of x is actually pretty easy to compute. It's sigmoid of x times 1 minus sigmoid of x. Okay. So I'm just going to take the derivative. It's going to give me a- a_3 times 1 minus a_3. There's still one step because there is a composition of three functions here. There is a logarithm, there's a sigmoid, and there is also a linear function, w_x plus b or w a_2 plus b. So I also need to take the derivative of the linear part with respect to w_3. Because I know that sigmoid of w_3, a_2 plus b_3. If I wanna take the derivative of that with respect to w_3, I need to go inside and take the derivative of what's inside, okay? So this will give me the sigmoid or whatever a_3 times 1 minus a_3 times the derivative with respect to w_3 of the linear part. [NOISE] Does this make sense? So I am going to write it here bigger. Here, I need to take the derivative of the linear part with respect to w_3, which is equal to a_2 transpose. So one thing you- you may wanna check, is when we compute- when I'm trying to compute this derivative. [NOISE] I'm trying to compute this derivative. Why is there a transpose that comes out? How do you come up with that? You look at the shape here. What's the shape of w_3? Someone remembers? 1 by 2. 1 by 2. Yeah, why 1 by 2? [BACKGROUND] Yeah, it's connecting two neurons to one neuron. So it has to be 1 by 2. Usually flip it. And in order to come back to that, you can write your forward propagation, make the shape analysis, and find out that it's a 1 by 2 matrix. How about this thing? What's the shape of that? [NOISE]. The scalar. It's a scalar, yeah. So scalar. So it's 1 by 1. How do you know? It's because this thing is basically z_3. It's the linear part of the last neuron and a_3, we know that it's y-hat. So it's a scalar between 0 and 1. So this has to be a scalar as well. Because taking the sigmoid should not change the shape. So now, the question is what's the shape of this entire thing? The shape of this entire thing should be the shape of w_3 because you're taking the derivative of a scalar with respect to a higher-dimensional matrix or vector here called a row vector. Then it means, that the shape of this has to be the same shape of w_3. So 1 by 2. And you know that when you take this simple derivative in- in real life, like in- in, uh, with scalars, not with high-dimensional, you know that this is an easy derivative. It just should- it should give you a_2, right? But in higher dimension, sometimes you have transpose that come up. And how do you know that the answer is a_2 transpose? It's because you know that a_2 is a 2 by 1 matrix. [NOISE] So this is not possible. It's not possible to get a_2, because otherwise it wouldn't match the derivative that you are calculating. So it has to be a_2 transpose. So either you- you learn the formula by heart or you- you learn how to analyze shapes, okay? Any questions on that? Okay. So that's why it's a_2 transpose. Now, l minus y_i. So I'm- I'm on this one now. The second term of the- of the derivative. And I take the derivative of this. So I get 1 over 1 minus a_3. a_3 denotes the sigmoid. 
So I'm just copying this back using the fact that the derivative of the logarithm is 1 over x, and then I will multiply this by the derivative of 1 minus a_3 with respect to w_3. I know that there is a minus that needs to come up. So I will write it down here, minus 1 and I also have the derivative of the sigmoid with respect to what's inside the sigmoid. So a_3 [NOISE] times 1 minus a_3. And what's the last term? The last term is simply the one we just talked about. It's the derivative of what's inside the sigmoid with respect to w_3. So it's a_2 transpose again. Okay. So now, I will just simplify. I know this scalar simplifies with this one. This one simplifies with that one. We're going to copy back all the results minus [NOISE] y_i times 1 minus a_3 a_2 transpose plus 1 minus y_i times the minus- I'm going to put the minus here. So I'm taking the minus putting it on- on the front times a_3 times a_2 transpose. And then, quickly looking at that I see that some of the terms will cancel out, right? Okay. So I have one term here, y-hat- y_i times minus a_3 a_2 transpose would cancel out with plus y_i a_3 a_2 transpose. This makes sense? So like, the term that we multiply this number, we cancel out with the term, we multiply this number. We need to continue. [NOISE] It gives me y_i times a_2 transpose, this part, minus a_3 times a_2 transpose. I, I can factor this because I have the same term a_2 transpose. And it gives me finally, y_i minus a_3 times a_2 transpose. Okay, so it doesn't look that bad actually. I don't know, when- when we take a derivative of something kin- kinda ugly we- we expect something ugly to come out but this doesn't seem too bad. Any questions on that? I let you write it quickly, and then we're going to move through to the rest. So once I get these results, I can just write down the costs of the derivative with respect to w_3. I know it's just one minus. I just need to- to take the summation of this thing. So y_i minus a_3 times [NOISE] y_2 transpose- a_2 transpose. And I have a minus sign coming upfront. So that's my derivative. [NOISE] Okay. So we're done with that. And we can, we can just take this formula, plugging it back in our gradient descent update rule, and update w_3. Yeah. Now, the question is, you can do the same thing as, as we just did but with b_3. It's going to be the similar difficulty. We're going to do it with w_2 now, and think how does that backpropagate to w_2. So now it's w_2 star. We want to compute the derivative of l, the loss, with respect to w of the second layer. The question is how I'm gonna get this one without having too much work. I'm not gonna start over here as we said last time, I'm going to use the chain rule of calculus. So I'm going to try to decompose this derivative into several derivatives. So I know that y hat is the first thing that is connected to the loss function, right. The output neuron is directly connected to the loss function. So I'm going to take the derivative of the loss function with respect to y hat, also called a_3. Right? This is the easiest one I can calculate. I also know that a_3, which is the output activation of the last neuron, is connected with the linear part of the last neuron, which is z_3. So I can take the derivative of a_3 with respect to z_3. Do you remember what this is going to be? Derivative of a_3 with respect to z_3? Derivative of Sigmoid. I know that a_3 equals Sigmoid of z_3. So this derivative is very simple. It's just that. It's just a_3 times 1 minus a_3. All right. 
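Pausing the W2 chain rule for a moment: the dJ/dW3 result derived just above is easy to sanity-check numerically. The sketch below is not from the lecture; it uses the binary cross-entropy loss with its usual leading minus sign, the shapes from the board (W3 is 1 by 2, a2 is 2 by 1), and compares the formula minus (y minus a3) times a2 transpose against a finite-difference gradient:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
a2 = rng.standard_normal((2, 1))            # cached activation of layer 2
W3 = rng.standard_normal((1, 2))
b3 = rng.standard_normal((1, 1))
y = 1.0                                     # label for this single example

def loss(W):
    a3 = sigmoid(W @ a2 + b3)               # forward through the last neuron
    return (-(y * np.log(a3) + (1 - y) * np.log(1 - a3))).item()

a3 = sigmoid(W3 @ a2 + b3)
dW3 = -(y - a3) * a2.T                      # the formula derived above, shape (1, 2)

eps = 1e-6
numeric = np.zeros_like(W3)
for j in range(W3.shape[1]):                # central differences, one entry at a time
    Wp, Wm = W3.copy(), W3.copy()
    Wp[0, j] += eps
    Wm[0, j] -= eps
    numeric[0, j] = (loss(Wp) - loss(Wm)) / (2 * eps)
print(np.allclose(dW3, numeric, atol=1e-5)) # True if the derivation is right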
So I'm going to continue. I know that z_3, z_3 is equal to what? It's equal to w_3, a_2 plus b. Which path did I need- do I need to take in order to backpropagate? I don't wanna take the derivative with respect to w_3 because I will only get stuck. I don't wanna take the derivative with respect to b_3 because I will get stuck. I will take the derivative with respect to a_2. Because a_2 will be connected to z_2, z_2 will be connected to a_1, and I can backpropagate from this path. So I'm going to take derivative of z_3 with respect to a_2 to have my error backpropagate, and so on. I know that a_2 is equal to Sigmoid of z_2. So I'm just going to do that. And I know that this derivative is going to be easy as well. And finally, I also know that z_2 is connected to w_2. So I'm going to take derivative of z_2 with respect to w_2. So just what I want you to get is the thought process of this chain rule. Why don't we take a derivative with respect to w_3 or b_3? It's because we will get stuck. We want the error to back propagate. And in order for the error to backpropagate, we have to go through variables that are connected to each other. Does this makes sense? So now the question is how can we use this? How can we use the derivative we already have in order to, to, to, to compute the derivative with respect to w_2? Can someone tell me how we can use the results from this calculation, in order not to do it again? Cache it. You cache it? Um, so there's another discussion on caching, which is, which is correct that in order to get this result very quickly we will use cache. But, uh, what I want here is to- you to tell me if these results appear somewhere here. Yeah? [inaudible] the first three terms. The first three terms. So this one, this one, and this one? I'm not sure. Yeah. Is it the first two terms or the first three terms? Two. The first two terms. Yeah. But good intuition. Yeah. So this result is actually the first two terms here. We just calculated it. Okay. What- how do we know that? It's not easy to see. One thing we know based on what we've written very big on this board is that the derivative of z_3, because this is z_3, right? Derivative of z_3 with respect to w_3 is a_2 transpose. Right. So I could write here that this thing is derivative of z_3 with respect to w_3. Is it correct? So I know that because I wanted to compute the derivative of the loss to w_3, I know that I could have written derivative of loss with respect to w_3 as derivative of loss with respect to z_3, times derivative of z_3 with respect to w_3. Correct. And I know that this is a_2 transpose. So it means that this thing is the derivative of the loss with respect to z_3. Does that make sense? So I got, I got my decomposition of the derivative we had. If we wanted to use the chain rule from here on, we could have just separated it into two terms, and took the derivative here. Okay. So I know the result of this thing. I know that this thing is basically a_3 minus y, times a_2 transpose. I just flipped it because of the minus sign. Okay. Is it mine? [NOISE]. Okay. [NOISE]. Now, tell me what's this term. What is this term? Let's go there. Yeah. Sigmoid. So Sigmoid. I'm just going to write it a_2 times 1 minus a_2. Does that make sense? Sigmoid times 1 minus Sigmoid. What is this term? Uh, oh sorry my bad. That's not the right one. This one, this one is that. This one is Sigmoid. a_2 is Sigmoid of z_2. So this result comes from this term. Was- what about this term? w_3. Sorry. w_3. w_3. Is it w_3 or no? 
I heard transpose. How do we know if it's w_3 or w_3 transpose? So let's look at the shape of this. What's z_3? One by one. It's one by one. It's a scalar. It's the linear part of the last neuron. What's the shape of that? This is 2, 1. We have two neurons in the layer. w_3. We said that it was a 1 by 2 matrix, so we have to transpose it. So the result of that is w_3 transpose. And how about the last term? Same as here. One layer before. Yeah, someone said they won't transpose. Okay. Yeah? The numbers are [inaudible] that one. This one? Yeah. There is a transpose here. [inaudible] w_5. Oh yeah, yeah. You're correct. You're correct. Thank you. That's what you mean? Yeah. Yeah. This one was from the z_3, to w_2. We didn't end up using that because we will get stuck, so there's no a_2 transpose here. Thanks. Any other questions or remarks? So that's cool. Let's, let's, let's write- let's write down our derivative cleanly on the board. So we have derivative of our loss function with respect to w_2, which seems to be equal to a_3 minus y, from the first term. The second term seems to be equal to, uh, w_3 transpose. Then we have a term which is a_2 times 1 minus a_2. Okay. And finally, finally we have another term that is a_1 transpose. So are we done or not? So actually there is that- the thing is there's two ways to compute derivatives. Either you go very rigorously and do what we did here for w_2, or you try to do a chain rule analysis, and you try to fit the terms. The problem is this result is not completely correct. There is a shape problem. It means when we took our derivatives, we should have flipped some of the terms. We didn't. There is actually- we, we won't have time to go into details in this lecture because we have other things to see, but there is, uh, a section note I think on the website, which details the other method which is more rigorous, which is like that for all the derivatives. What we are going to see is how you can use chain rule plus shape analysis to come up with the results very quickly. Okay. So let's, let's analyze the shape of all that. We know that the first term is a scalar. It is a 1 by 1. We know that the second term is the transpose of 1 by 2. So it's 2 by 1. And we know that this thing here a_2 times 1 minus a_2 is, uh, 2 by 1. It's an element-wise product. And this one is a_1 transpose, so it's 3 by 1 transpose. So it is 1 by 3. So there seems to be a problem here. There is no match between these two operations for example. Right? So the question is, how- how can, we how can we put everything together? If we do it very rigorously, we know how to put it together. If you're used to doing the chain rule, you can quickly sh- quickly do it around. So after experience, you will be able to, to fit all these together. The important thing to know is that here there is an element-wise product, which is here. So every time you will take the derivative of the Sigmoid it's going to end up being an element-wise product. And it's the case whatever the activation that you're using is. So the right result is this one. So here I have my element-wise product of a 2 by 1 [NOISE] by a 2 by 1. So it gives me a 2 by 1 column vector and then I need something that is 1 by 1 and 1 by 3. How do I know, wha- what I need to have, I know that the shape of this thing. W3 needs to be 2 by 3. It's connecting three neurons to two neurons. So W2 has to be 2 by 3. In order to end up with this, I know that this has to come here A3 minus y and A1 transpose comes at the end. 
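Putting those shape-checked pieces together, a minimal sketch of assembling dW2 from cached forward values could look like this; the activations a1, a2, a3 and the weights W3 are assumed to have been stored during forward propagation:

import numpy as np

rng = np.random.default_rng(1)
a1 = rng.random((3, 1))                     # cached activations, shapes as on the board
a2 = rng.random((2, 1))
a3 = rng.random((1, 1))
y = 1.0
W3 = rng.standard_normal((1, 2))

dz3 = a3 - y                                # (1, 1): derivative of the loss w.r.t. z3
da2 = W3.T @ dz3                            # (2, 1): backpropagated through W3
dz2 = da2 * a2 * (1 - a2)                   # (2, 1): element-wise product from the sigmoid
dW2 = dz2 @ a1.T                            # (2, 3): same shape as W2, as required
print(dW2.shape)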
And here I get my correct answer. Don't worry if it's the first time th- the chain rule is going quickly, don't worry. Read the lecture notes with the rigorous parts. Taking the derivative, it will make more sense. But I feel it's, uh, usually in practice, we don't compute these chain rules anymore, uh, because- because programming frameworks do it for us but it's important to know at least how the chain rule decomposes, uh, and also how to make these, compute these derivatives. If you read research papers specifically. Any questions on that? I think I wanna go back to what you mentioned with the cache. So why is cache very important? That was your question as well? [BACKGROUND] Yeah, yeah it has to be. Right. So it means when you take the derivative of Sigmoid, you take derivative with respect to every entry of the matrix which gives you an element-wise product. Um, going back to the cache. So one thing is, it seems that during backpropagation, there is a lot of terms that appear that were computed during forward propagation. Right. All these terms; a1 transpose, a2, a3, all these, we have it from the forward propagation. So if we don't cache anything, we have to recompute them. It means I'm going backwards but then I feel, oh, I need a2 actually. So I have to re- go forward again to get a2. I go backwards, I need a1. I need to forward propagate my x again to get a1. I don't wanna do that. So in order to avoid that, when I do my forward propagation, I would keep in memory almost all the values that I'm getting including the Ws because as you see to compute the derivative of loss with respect to W2 we need W3, but also, the activation or linear variables. So I'm going to save them in my, in my network during the forward propagation in order to use it during the backward propagation. So it makes sense. And again, it's all for computational ef- efficiency. It has some memory costs. Okay. So that was backpropagation. And now I can use my formula of the costs with respect to the loss function. And I know that this is going to be my update. [NOISE] This is going to be used in order to update W2 and I will do the same for W1. Then you guys can do it at home. If you wanna meet, wanna make sure you understood, take the derivative with respect to W1. Okay. So let's move on to the next part, [NOISE] which is improving your neural network. So in practice, when you, when you do this process of training forward propagation, backward propagation updates, you don't end up having a good network mo- most of the time. In order to get a good network, you need to improve it. You need to use a bunch of techniques that will make your network work in practice. The first, the first trick is to use different activation functions. So together, we've seen one activation function which was Sigmoid. And we remember the graph of Sigmoid is getting a number between minus infinity and plus infinity and casting it between 0 and 1. And we know that the formula is Sigmoid of z equals 1 over 1 plus exponent so minus z. We also know that the derivative of Sigmoid is Sigmoid of z times 1 minus Sigmoid of z. Okay. Another very common, uh, activation function is ReLU. We talked quickly about it last time. ReLU of z which is equal to 0 if z is less than 0 and z if z is positive. So the graph of ReLU looks like something like this. And finally, another one we were using commonly as well is tan h. So hyperbolic tangents and tan h of z exponential z minus exponential minus z over exponential z plus exponential minus z. 
The derivative of tan h is 1 minus tan h squared of z. And the graph looks kind of like Sigmoid, but, but it goes between minus 1 and plus 1. So one question. Now that I've given you three activation functions, can you guess why we would use one instead of the other and, and which one has more benefits? So when I talk about activation functions, I talk about the functions that you will put in these neurons after the linear part. What do you think is the main advantage of Sigmoid? Yeah. We use it for classification. Yep. You use it for classification, between it gives you a probability. What's the main disadvantage of Sigmoid? It's easy. It's easy. That should be an advantage, should be a benefit. Yeah? [BACKGROUND] Correct. If you're at high activation, if you are at high z's or low z's, your gradient is very close to 0. So look here. Based on this graph we know that if z is very big. If z is very big our gradient is going to be very small, the slope of this, of this graph is very, very small. It's almost flat. Same for z's that are very low in the negative. Right. What's the problem with having low gradients is when I'm back propagating. If the z I cached was big, the gradient is going to be very small and it will be super hard to update my parameters that are early in the network because the gradient is just going to vanish. Does that makes sense? So Sigmoid is one of these activations which, which works very well in the linear regime, but has trouble working in saturating regimes because the network doesn't update the parameters properly. It goes very, very slowly. We're going to talk about that a little more. How about tan h? Very similar, right? Similar like high z's and low z's lead to saturation of a tan h activation. ReLU on the other hand doesn't have this problem. If z is very big in the positives, there is no saturation. The gradient just passes and the gradient is 1, when we were here. The slope is equal to 1. So it's actually just directing the gradient to some entry. Is not multiplying it by anything when you backpropagate. So you know this term here, this term that I have here. All the a3 minus a3 times 1 minus a3 or 1 minus a2. If we use ReLU activations, we would change this with what's- with- with the derivative of ReLU and the derivative of ReLU can be written indicator function of z being positive. You've seen indicator functions. So this is equal to 1 if z is positive, 0 otherwise. Okay. So we will see why we use ReLU mostly. Yeah? [BACKGROUND] Yeah. You remember the house prediction example? In that case, if you want to, if you want to predict the price of a house based on some features, you would use ReLU. Because you know that the output should be a positive number between 0 and plus infinity, it doesn't make sense to use 1 of tan h or similar. Yep. [BACKGROUND] Doesn't really matter. I think if, if I want my output to be between 0 and 1 I would use Sigmoid, if I want my output to be between minus 1 and 1 I would use tan h. So you know, there is, there are some tasks where the output is kind of a reward or a minus reward that you want to get. Like in reinforcement learning, you would use tan h as an output activation which is because minus 1 looks like a negative reward, plus 1 looks like a positive reward, and you want to decide what should be the reward. Why do we consider these functions? Good question. Why do we consider these functions? We can actually consider any functions apart from the identity function. So let's see why. Thanks for the transition. 
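Before moving to why activations are needed at all, here are the three activations and their derivatives side by side for reference - a small sketch, with the last line showing numerically how the sigmoid's slope collapses for large z:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(z):
    s = sigmoid(z)
    return s * (1 - s)                      # near 0 when |z| is large: saturation

def relu(z):
    return np.maximum(0.0, z)

def d_relu(z):
    return (z > 0).astype(float)            # indicator of z > 0: the gradient passes through

def tanh(z):
    return np.tanh(z)

def d_tanh(z):
    return 1.0 - np.tanh(z) ** 2            # also saturates for large |z|

print(d_sigmoid(np.array([0.0, 10.0])))     # roughly [0.25, 0.000045]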
[LAUGHTER] Like why do we need activation functions? So let's assume that we have a network which is the same as before. So our network is three neurons casting into two neurons casting into one neuron, ah, and we're trying to use activations are equal to identity functions. So it means z is given to z. Let's try to derive the forward propagation, y_hat equals a_3, equals z_3, equals w_3, a_2 plus b_3. I know that a_2, a_2 is equal to z_2 because there is no activation and z_2 is equal to w_2 a_1 plus b_2. So I can cast here w_2, w_2 a_1 plus b_2 plus b_3. I can continue. I know that a_1 is equal to z_1, and I know that z_1 is w_1 x plus b, and b equals w_3 times w_2 times b_1 plus w_3 times b_2 plus b_3. So what's the insight here? Is that we need activation functions. The reason is, if you don't choose activation functions, no matter how deep is your network, it's going to be equivalent to a linear regression. So the complexity of the network comes from the activation function. And the reason we can understand- if we're trying to detect cats, what we're trying to do is to train a network that will mimic the formula of detecting cats. We don't know this formula, so we want to mimic it using a lot of parameters. If we just have a linear regression, we cannot mimic this because we are going to look at pixel by pixel and assign every weight to a certain pixel. If I give you an example, it's not going to work anymore. Yeah, yeah. So I think that's, that, that goes back to your question as well. So this is why we need activation functions. And then the question was, can we use different activation functions and how do we, how do we put them inside a layer or inside neurons? Usually, we would use, there are more activation functions. I think in CS230 we'll go over a few more but not, not, not today. These have been designed with experience, so these are the ones that's, that, that's work better and lets our networks train. There are plenty of other activation functions that have been tested. Usually, you would, you would, uh, use the same activation functions inside every layer. So when you, it's, it's a, it's, it's for, for training. It doesn't have any special reason I think but when you have a network like that, you would call this layer a ReLU layer meaning it's a fully connected layer with ReLU activation. This one a Sigmoid layer, it means it's a fully connected layer with the Sigmoid activation. And the last one is Sigmoid. I, I think people have been trying a lot of putting, activat- different activations in different neurons in a layer, in different layers and the consensus was using one activation in the layer and also using one of these three activations. Yeah. So if someone comes up with a better activation that is obviously helping training our models on different datasets, people would adopt it but right now these are the ones that work better. And you know, last time we talked about hyper-parameters a little bit. These are all hyper-parameters. So in practice, you're not going to choose these randomly, you're going to try a bunch of them and choose some of them that seem to help your model train. There's a lot of experimental results in deep learning and we don't really understand fully why certain activations work better than others. Okay, let's move on. [NOISE] Okay, let's go over initialization techniques. [NOISE] Uh, actually, let me use this board. So another trick that you can use in order to help your network train are initialization methods and normalization methods. 
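Before that board work, the claim just made is quick to check numerically: with identity activations, the stacked layers collapse into a single linear map. A sketch, with layer sizes made up for illustration:

import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((3, 3)), rng.standard_normal((3, 1))
W2, b2 = rng.standard_normal((2, 3)), rng.standard_normal((2, 1))
W3, b3 = rng.standard_normal((1, 2)), rng.standard_normal((1, 1))
x = rng.standard_normal((3, 1))

# "deep" network with identity activations
y_hat = W3 @ (W2 @ (W1 @ x + b1) + b2) + b3

# one equivalent linear model: W_eq x + b_eq, exactly as derived on the board
W_eq = W3 @ W2 @ W1
b_eq = W3 @ W2 @ b1 + W3 @ b2 + b3
print(np.allclose(y_hat, W_eq @ x + b_eq))  # True: the depth bought us nothing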
So, um, earlier we talked about the fact that if z is too big, or z is too low in the negative numbers, it will lead to saturation of the network. So in order to avoid that you can use normalization of the input. So assume that you have a network where the data is two-dimensional, x_1, x_2 is our two-dimensional input. You can assume that x_1, x_2 is distributed like this, let's say. So this is if I plot x_1 against x_2 for a lot of data, I will get that type of graph. Uh, the problem is that if I do my wx plus b, to compute my z_1, if xs are very big, it will lead to very big zs which will lead to saturated activations. In order to avoid that, one method is to compute the mean of this data using Mu equals 1 over the size of the batch of data that you have in the training sets. Sum of xis. So it's just giving you the mean for x_1, and the mean for x_2. You would compute the operation x equals x minus Mu, and you will get that type of plot. If you replot the transform data, let's say x_1 tilde, x_2 tilde. So here is a little better, but it's still not good. In order to solve the problem fully, we are going to compute Sigma squared, which is basically the standard deviation squared, so the variance of the data, and then you will divide by, uh, Sigma squared. So you would do that and you would make the transformation of x being equal to x divided by Sigma, and it will give you a graph that is centered up here. So you, you usually prefer to, to work with a centered data. Yeah? [inaudible] tilde? Sorry, oh yeah, yeah, sorry, sorry, yeah, correct. So if we subtract the mean of x_1 and x_2, it will be [inaudible]. Sorry, it should look like this, but it would be centered. Okay, and then if you stan- if you standardize it, it looks like something like that. So why is it better? Because if you look at you- your loss function now, before the loss function would look like something like this. [NOISE] And after normalizing the inputs, it may look like something, something like this. So what's the difference between these two loss functions? Why is this one easier to train? It's because if you have the starting point that is here let's say, their gradient descent algorithm is going to go to towards approximately the steepest slope. So we're going to go like there, and then this one is going to go there, and then you're going to go there, and then you're going to go there like that and so on, until you end up at the right points. But the steeper slope in this loss contour is always pointing towards the middle. So if you start somewhere, it will directly go towards the minimum of your loss function. So that's why it's helpful usually to normalize. So this is one method, uh, and in practice, the way you initialize your weights is very important. Yeah? [BACKGROUND] Uh, yes. So. [BACKGROUND] Exactly. So here I used a very simple case but you would divide elementwise by, by the Sigma here, okay? So like every entry of your matrix you would divide it by the Sigma. One, one other thing that is important to notice. This Sigma and Mu are computed over the training set. You have a training set, you compute the mean of the training, set the standard deviation, of the training set, and these Sigma and Mu have to be used on the test set as well. It means now that you want to test your algorithm on the test set, you should not compute the mean of the test set, and the standard deviation of the test set and normalize your test inputs through the network. 
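A minimal sketch of this normalization, written so that - as emphasized next - the statistics are computed on the training set once and then reused on the test set; the small epsilon guarding against division by zero is an added detail, not from the lecture:

import numpy as np

def fit_normalizer(X_train):
    mu = X_train.mean(axis=0)               # per-feature mean over the training set
    sigma = X_train.std(axis=0) + 1e-8      # per-feature standard deviation (epsilon added)
    return mu, sigma

def normalize(X, mu, sigma):
    return (X - mu) / sigma                 # subtract the mean, divide by the std, element-wise

# mu, sigma = fit_normalizer(X_train)
# X_train = normalize(X_train, mu, sigma)
# X_test = normalize(X_test, mu, sigma)     # the same train-set mu and sigma applied at test time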
Instead, you should use the Mu and the Sigma that were computed on the train set because your network is used to seeing this type of transformation as an input. So you want the distribution of the inputs at the first neuron to be always the same, no matter if it's a train or the test set. What you do is that [inaudible] Here? Likely, yeah. This leads to fewer iterations. Okay, we have a lot to see so I will, I will skip a few questions. So let's, let's delve a little more into vanishing and exploding gradients. So in order to get an intuition of why we have these vanishing or exploding gradient problem, we can consider a network which is very, very deep and has a two-dimensional input, okay? And so on. So let's say we have, let's say we have ten layers in total. Ten layers plus an output layer. So assume, assume all the activations are identity functions, and assume that these biases are equal to 0. If you compute y hats, the output of the network with respect to the input. You know that y hat will be equal to w of layer L, capital L denotes the last layer, times a l minus 1 plus bL, but bL is 0 so we can remove it. w_l times a_L minus 1. You know that a_L minus 1 is w_l minus 1 times a_L minus 2 because the activation is an identity function and so on. You can back propagate, you can go back and you will get that y hat equals w_L times w_l minus 1 times blah, blah, blah, times w_1 times x. You get something like that, right? So now, let's consider two cases. Let us consider the case where the w_l matrices are a little bigger than the identity function, a little larger than the identity function in terms of values. Let's say w_l, including all these. So all these matrices which are 2 by 2 matrices, right, are these ones. What's the consequence? The consequences that this whole thing here is going to be equal to 1.5 to the power L, 1.5 to the power L, 0, 0. It will make y hat explode. It will make the value of y hat explode, just because this number is a tiny little bit more than 1. Same phenomenon, if we had 0.5 instead of 1.5 here, the value, the multiplicative value of all these matrices will be 0.5 to the power L here, 0.5 to the power L here, and y hat will always be very close to 0. So you see, the issue with vanishing exploding gradients is that all the errors add up like multiply each other. And if you end up with numbers that are smaller than one, you will get a totally vanished gradient. When you go back, if you have values that are a little bigger than 1 you will get exploding gradients. So we did it as a forward propagation equation, we could have done it exactly the same analysis. We did derivatives, assuming the derivatives of the weight matrices are a little lower than the identity, or a little higher than the identity. So we want to avoid that. One way that is not perfect to, to avoid this is to initialize your weights properly, initialize them into the right range of values. So you agree that we would prefer the weights to be around 1, as close as possible to 1. If they're very close to 1, we probably can avoid the vanishing and exploding gradient problem. So let's look at the initialization problem. The first thing to look at is example of the one neuron. [NOISE] If you consider this neuron here, which has a bunch of inputs and outputs and activation a. [NOISE] You know that the equation inside the neuron is a equals whatever function, let's say sigmoid of Z and you know that z is equal to W_1 X_1 plus W_2 X_2 plus blah, blah, blah plus W_n X_n. 
So it is a dot product between the W's and the X's. So the interesting thing to notice is that we have n terms here. So in order for Z to not explode, we would like all of these terms to be small. If W's are too big, then this term will explode with the size of the inputs of the layer. So instead if we have a large n, it means the input is very large, what we want is very small W_i's. So the larger n, the smaller it has to be W_i. So based on this intuition, it seems that it would be a good idea to initialize W_i's with something that is close to 1 over n. We have n terms, the more terms we have, the more likely Z is going to be big. But if our initialization says the more terms you have, the smaller the value of the weights, we should be able to keep Z in a certain range that is appropriate to avoid vanishing and exploding gradients. So this seems to be a possible initialization scheme. So in practice, I'm going to write a few initialization schemes that we're not gonna prove. If you're interested in seeing more proofs of that, you can take CS230, where we prove this initialization scheme. May I take down the board? So there are a few initializations that are commonly used and again, this is, this is very practical and people have been testing a lot of initializations, but they ended up using those. [NOISE] So one is to initialize the weights. I'm writing the code for those of you who know numPy. I'm not gonna compile it here. With whatever shape you are using, elementwise times the square root of 1 over n of L minus 1. So what does that mean? It means that I will look at the number of inputs. I'm writing an L minus 1 here, n to the L minus 1. I'm looking at how many inputs are coming to my layer assuming we're at layer L. How many inputs are coming. I'm going to initialize the weights of this layer proportionally to the number of inputs that are coming in. So the intuition is very similar to what we described there. So this initialization has been shown to work very well for sigmoid activations. So if you use sigmoid. What's interesting is if you use ReLU, it's been, it's been observed that putting a 2 here instead of a 1 would make the network train better. And again, it's very practical. It's one of the fields that, that we need more theory on it, but a lot of observations had been made so far. Do you guys want to just do that as a project to see why is this happening? It would be interesting. Okay. [NOISE] And finally, there is a more common one that is used which is called the Xavier initialization, which proposes to update the weights [NOISE] using, uh, square root of 1 over n_ l minus 1 for tan h. This is another one. And another one that is I believe called Glorot initialization recommends to initialize the weights of a layer using the following formula. So quickly, the, the quick int- intuition behind the last one. The last one is, is very often used. The quick intuition is that we're doing the same thing but also for the backpropagated gradients. So we're saying the weights are going to multiply the backpropagated gradients. So we also need to look at, at how many inputs do we have during the backpropagation. And L is the number of inputs you have during backpropagation and L minus 1 is the number of inputs you have during forward propagation. So taking an average, a geometric average of those. 
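As a sketch of those schemes in NumPy - with the caveat that the exact form of the last formula was written on the board rather than transcribed, so the sqrt of 2 over (n_in plus n_out) used below is the commonly quoted version and should be treated as an assumption:

import numpy as np

def init_layer(n_in, n_out, scheme="xavier"):
    # W[l] has shape (n_out, n_in), following the shape convention used earlier
    if scheme == "sigmoid":
        scale = np.sqrt(1.0 / n_in)            # ~ 1 / n[l-1]
    elif scheme == "relu":
        scale = np.sqrt(2.0 / n_in)            # ~ 2 / n[l-1], observed to work better for ReLU
    elif scheme == "xavier":
        scale = np.sqrt(1.0 / n_in)            # the version recommended for tanh
    else:
        scale = np.sqrt(2.0 / (n_in + n_out))  # assumed form of the last formula on the board
    W = np.random.randn(n_out, n_in) * scale   # random, so different neurons start differently
    b = np.zeros((n_out, 1))
    return W, b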
[NOISE] And the reason we have a random function here is because if you don't initialize your weights randomly, you will end up with some problem called the symmetry problem where every neuron is going to learn kind of the same thing. To avoid that, you will make the neuron starts at different places and let them evolve independently from each other as much as possible. So now we have two choices. Either we go over regularization or optimization. How much have you talked about regularization so far L1, L2, early stopping, all that? Early stopping, everybody remembers what it is? No? Little bit? So let's go over optimization, I guess, and then we will do some regularization depending on the time we have. [NOISE] So I believe so far you've seen gradient descent and stochastic gradient descent as two possible optimization algorithms. In practice, there is a trade-off between these two which is called mini-batch gradient descent. What is the trade-off? The trade-off is that batch gradient descent is cool because you can use vectorization, you can give a batch inputs, forward propagate it all at once doing vec- using a vectorized code. Stochastic gradient descent's advantage is that the updates are very quick. And imagine that you have a dataset with one million images. One million images in the dataset and you wanna do batch gradient descent. Do you know how long it's going to take to do one update? Very long. So we don't want that because maybe we don't need to go over the full dataset in order to have a good update. Maybe the updates based on 1,000 examples might already give us the right direction for the gradient [NOISE] of where to go. It's not gonna be as good as on the median example where it's going to be a very good approximation. So that's why most people would use mini-batch gradient descent, where you have a trade-off between stochasticity and also vectorization. So in terms of notation, [NOISE] I'm going to call X the matrix x_1, x_2, x_m, and capital Y the same matrix with y_m. So we have m training examples. And I'm going to split these into batches. So I'm going to call the first batch x_1 like this until x maybe T like that. And x_1 can contain probably x_1 until x_1,000. Assuming it's a batch of 1,000 examples. X_2 then will contain x_1,001 until x_2,000 and so on. So this is the notation for the batch when I use curly brackets. Same for Y. [NOISE] So in terms of algorithm, how does the Mini-batch gradient descent algorithm work? We're going to iterate. So for iteration t from 1 to blah, blah, blah, to how many iteration you wanna do. We're going to select a batch, select a batch of x_t- x_t, y_t. You will forward propagate the batch, and you will backpropagate the batch. So by forward propagation, I mean, you send all the batch to the network and you compute the loss functions for every example of the batch, you sum them together and you compute the cost function over the entire batch, which is the average of the loss functions. And so assuming- assuming the batch is of size 1,000, this would be the- the formula to compute the batch over 1,000 examples. And after the backpropagation, of course, updates, W_l and D_l for all the l's, for all the layers. This is the- the equation. So in terms of graph, what you're likely to see is that for batch gradient descent, your cost function j would have looked like that, if you plot it against the number of iterations. On the other hand, if you use a Mini-batch gradient descent, you're most likely to see something like this. 
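A minimal sketch of the mini-batch loop just described, with a batch size of 1,000 as in the example; the discussion of the cost-versus-iteration plots continues below, and `forward_backward` and `update_parameters` are hypothetical stand-ins for the forward pass, backpropagation, and parameter update that the lecture assumes you already have:

```python
def minibatch_gradient_descent(X, Y, params, forward_backward, update_parameters,
                               batch_size=1000, num_epochs=10):
    """X has shape (n_features, m), Y has shape (1, m); the curly-bracket
    batches X^{t}, Y^{t} are consecutive column slices of size batch_size."""
    m = X.shape[1]
    for epoch in range(num_epochs):
        for t in range(0, m, batch_size):
            X_t = X[:, t:t + batch_size]                         # batch X^{t}
            Y_t = Y[:, t:t + batch_size]                         # batch Y^{t}
            grads, cost = forward_backward(X_t, Y_t, params)     # forward + backprop on the batch
            params = update_parameters(params, grads)            # gradient step on the W's and b's
    return params
```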
So it is also decreasing as a trend, but because the gradient is approximated and doesn't necessarily go straight to the- to the middle of your loss fun- to the lower point of the loss function, you will see a kind of graph like that. The smaller the batch, the more stochasticity. So the more noise you will have on your cost function graph. And of course, if you- if we plot again- if we plot the loss function and this was gradient descent, so this is the top view of the loss function, assuming we're in two dimensions. Your stochastic gradient descent or batch gradient descent would do something like that. So the difference is- there seem to be less iteration with the red algorithm, but the iterations are much heavier to compute. So each of the green iterations are going to be very- very- very quick, while the red ones are going to be slow to compute. This is a trade off. Now there is another algorithm that I wanna go over which is called the momentum- momentum algorithm. Sometimes called gradient descent plus momentum algorithm. So what's the intuition behind momentum? The intuition is, let's look at this loss contour plot. And I'm doing an extreme case just to illustrate the intuition. Assume you have the loss that is very extended in one direction. So this direction is very extended and the other one is smaller. You're starting at a point like this one. Your gradient descent algorithm itself is going to follow the falling bar, it's going to be orthogonal to the current contour, uh, iso- iso term. Contour loss is going to go there, and then there, and then there, and then there, and so on. So what you would like is to move it faster on the horizontal line and slower to the vertical- on the vertical side. So on this axis you would like to move with smaller updates. And on this axis, you wanna move with larger updates, correct? If this happened, we would probably end up in the minimum much quicker than we currently are. So in order to do that, we're going to use a technique called momentum, which is going to look at the past gradients. So look at the past updates. Assume we're here. Assume we are somewhere here. Gradient descent doesn't look at its past at all. You just will compute the forward propagation, compute the backdrop, look at the direction and go to that direction. What momentum is going to say is look at the past updates that you did and try to consider these past updates in order to find the right way to go. So if you look at the past update and you take an average of the past update. You would take an average of these update going up and the update after it going down. The average on the vertical side is going to be small, because one went up, one went down. But on the horizontal axis, both went to the same direction. So the update will not change too much on the vert- on- on this axis. So you're most likely to do something like that if you use momentum. Does it make sense the intuition behind it? So that's the intuition why we want to use momentum. And for those of you who do physics, sometimes you can think of momentum as friction. You know like- like if you- if you launch a rocket and you wanna move it quickly around. It's not gonna move, because the rocket has a certain weight and has a certain momentum. You cannot change its direction very, very noisily. [NOISE] So let's see the implementation of- of- of momentum gradient descent. Oh, and I believe we- we're almost done, right? Yeah. Okay. [NOISE] So let's look at the- the implementation quickly. 
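A minimal sketch of the momentum update whose intuition was just given (the spoken walkthrough of the same update follows); Beta is the averaging factor for past updates, and the variable names are illustrative:

```python
def momentum_step(w, dw, v, alpha=0.01, beta=0.9):
    # velocity: running average of the previous velocity and the current gradient
    v = beta * v + (1 - beta) * dw
    # use the velocity, rather than the raw gradient, for the parameter update
    w = w - alpha * v
    return w, v
```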
So gradient descent was w equals w minus Alpha, derivative of the loss with respect to w. What we are going to do is we're going to use another variable called velocity, which is going to be the average of the previous velocity and the current weight updates. So we're going to use that, and instead of the update being the derivative directly, we're going to update with the velocity. So the velocity is going to be a variable that tracks the direction that we should take regarding the current update and also the past updates, with a factor Beta that is going to be the weight. The interesting point is that in terms of implementation it's one more line of code, in terms of memory, it's just one additional variable, and it actually has a big impact on the optimization. There are many more optimization algorithms that we're not going to see together today. In CS230, we teach something called RMSProp and Adam. Those are most likely the- the- the ones that are used the most in deep learning. Uh, and the reason is, uh, if you come up with an optimization algorithm, you still have to prove that it works very well on a wide variety of applications before researchers adopt it for their research. So Adam brings momentum to the deep learning optimization algorithms. Okay. Thanks guys. Uh, and that's all for deep learning in CS229 so far. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_17_MDPs_ValuePolicy_Iteration_Stanford_CS229_Machine_Learning_Andrew_Ng_Autumn2018.txt | Welcome back, everyone. I hope you had a good Thanksgiving. Um, I actually didn't ask, I'm not sure why this chair is here. All right. Let's get rid of this. Um, by the way, not sure- um, thanks, Anand. I'm not sure if you guys are following the news, but in, in reinforcement learning, we chat a lot about robotics, right? And one of the, you know, uh, constant problems a lot of people use reinforcement learning to solve is robotics and, um, I think, ah, uh, back in May, um, the InSight Mars lander had launched from, um, here in California and it's about to make an attempt at landing on the planet Mars in the next 2.5 hours or so, so excited about that, uh, I think that is actually one of the grandest, um, applications of robotics because, you know, with a- with 20 minute light-speed from Earth to Mars, you know, once it starts its landing, there is nothing anyone on Earth can do and so I think that's one of the most exciting applications of autonomous robotics. When you launch this thing, it's now about 20, 20 light minutes away from planet Earth, so you actually can't control it in real time, uh, and you just have to hope like crazy that your software works well enough for it to land on this planet, you know. Uh, and then so we, we will find out a little bit afternoon if the landing happened successfully or not. I, I think, um, so I, I just get excited about stuff like this, I, I hope you guys do too. And for those of you that are from California, I mean, take some pride that it launched from the home state of California and, and is now nearing its, er, landing on Mars. Okay, um, all right. So, um, what I wanna do today is, uh, continue our discussion on reinforcement learning. Do a quick recap of the MDP or the Markov decision process framework. Um, and then we'll start to talk about algorithms for solving MDPs. In particular, we need to define, uh, something called the value function which tells you how good it is to be in different states of the MDP and then, um, we'll define the value function and then talk about an algorithm called value iteration for computing the value function and this will help us figure out how to actually find a good controller or find a good policy for an MDP, and then we'll wrap up with our learning state transition probabilities and how to put all these together into an actual reinforcement learning algorithm that you can implement. Um, to recap, um, our motivating example- running example from the last time, from before Thanksgiving was, uh, this 11-state MDP. And we said that an MDP comprises a five tuple, a lists of five things with, er, states. So that example had 11 states. Um, actions, and in this example the actions were the compass directions; North, South, East, and West, I can try to go in each of the four compass directions. The state transition probabilities and in the example, if the robot attempts to go North, it has an 80% chance of heading North and a 0.1% chance of veering off to the left and a 0.1 chance of veering off to the right. Um, Gamma is a number slightly less than 1, um, usually slightly less than 1, there is a discount factor, think of this as 0.99, um and R is the reward function that helps us specify where we want the robot to end up. 
Um, and so what we said last time was that, um, the way an MDP works is you start off in some state S_0, um, this one's much better, you choose an action, uh, a_0, and as a result of that, it transitions to a new state, S_1, which is drawn according to P_s_0 a_0. Um, and then you choose a new action a_1 and as a result the MDP transitions to some new state P_s_1 a_1, um, and the total payoff is the sum of rewards, right? Um, and the goal is to come up with a way, um, and formally the goal is to come up with a policy, Pi, which is a mapping from the states to the actions, uh, that will tell you how to choose actions from whatever stage you are in so that the policy maximizes the expected value of the total payoff, okay? Um, and so I think last time I, I kinda claimed that this is the optimal policy for this MDP, right? Um, and what this means for example is, if you look at this state, um, this policy is telling you that Pi of 3, 1 equals, uh, West, I guess, or you can write West or left, well, what do you call that left arrow, right, where from this state, um, from the state 3,1, you know, the best action to take is to go left, it's to go West. And so if you're executing this policy what that means is that, um, on every step the action you choose would be, you know, Pi, right, of the, the state that you're in, okay? So, um, what I'd like to do is now, uh, to find the value function. So, how, how, how, how did I come up with this, right? Well, what I'd like to do is, have you, um, learn given an MDP, given this five tuple, how do you compute the optimal policy? And one of the challenges with, um, finding the optimal policy is that, you know, there's a- there's an exponentially large number of possible policies, right? If you have 11 states and four actions per state, the number of possible policies is, er, 4 to the power of 11 which is not that big because 11 is a small MDP, right? Because the number of, of policies- possible policies for, for an MDP is combinatorially large, is, uh, number of actions, the power of the number of states. So how do you find the best policy? Okay. So what you learn today is, um, how to compute the optimal policy. Now, in order to develop an algorithm for computing an optimal policy, um, we'll need to define three things. So just as a roadmap. Um, what I'm about to do is define V_Pi, V_star, and Pi_star, okay? Um, and based on these definitions we'll see that- we'll, we'll come to the, uh, definition. We will- uh, derive that Pi_star is the optimal policy, okay? But so let's, let's go through these few definitions. Um, first V_Pi. So for a policy Pi, V_Pi is a function mapping from states to the rules, uh, [NOISE] is such that V_Pi of S is the expected total payoff, um, for starting in state S and executing Pi. And so sometimes we write this as V_Pi of S is the expected total payoff given that you execute the policy Pi and the initial state, S_0 is equal to S, okay? So the definition of V_Pi, this is called the, um, value function for a policy. Well, this is called the value function. [NOISE] For the policy Pi, okay? Um, and so what the value function for a policy Pi denoted v_Pi is? Is it tells you for any state you might start in, there's a function mapping of states to rewards, right? For any state you might start in what's your expected total payoff if you start off your robot in that state, and if you execute the policy Pi? And execute the policy Pi means take actions according to the policy Pi. Right? So here's a, here's a specific example. 
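As a side note before the specific example that follows, here is one way to hold the (S, A, P_sa, Gamma, R) five-tuple for a small discrete MDP like this one in code, as a reference point for the later sketches; the container choices and names are illustrative, and the gridworld's actual transition table is replaced by a uniform placeholder:

```python
import numpy as np

states = list(range(11))                      # the 11 grid cells, indexed 0..10
actions = ["N", "S", "E", "W"]                # the four compass directions
gamma = 0.99                                  # discount factor

# P[(s, a)] is a length-11 probability vector over next states s'
# (in the real gridworld: 0.8 on the intended neighbor, 0.1 on each side neighbor).
P = {(s, a): np.full(len(states), 1.0 / len(states)) for s in states for a in actions}

# R[s] is the reward for being in state s (e.g. +1 and -1 at the absorbing cells,
# and a small negative per-step charge everywhere else).
R = np.full(len(states), -0.02)
```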
Um, this policy. So let's consider the follo- following policy Pi, right. Um, [NOISE] so this is not a great policy. You know, from some of these states, it looks like it's heading for the minus 1 reward or sorry. So if one of the reward was plus 1 that we get here. And secondly, this is called an absorbing state. Meaning that if you ever get to the plus 1 and minus 1, then the world ends and then there are no more rewards or penalties after that. Right? So but so this is actually not a very good policy, so the policy is any function mapping from the states to the actions. So this is one policy that says, uh, in this state, you know, this policy tells you in this state for one go north, which is actually a pretty bad thing to do, right, is take you to the minus 1 reward. So this is not a great policy, um, but, but this is just a policy. And v_Pi for this policy, um, looks like this. Okay. Um, don't worry too much about the specific numbers. But you've- if you look at this policy, you see that from this set of states it's pretty efficient at getting you to the really bad reward, and from this set of states it's pretty efficient at getting you to the good reward right, with some mixing because of the noise in the robot veering off to the side. And so, you know, these numbers are all negative. And those numbers are at least somewhat positive. Right. So but so v_Pi is just, um, if you start from say this state, from the state 1, 1 on expectation, you're expecting some these counts of rewards will be negative 0.88. Okay? Um, so that's what v_Pi is. Right. Now, um, the following equation. Let me think, uh, governs, um, the value function. It's called, it's called Bellman's equation. Um, and this says that your expected payoff at a given state is the reward that you receive plus the discount factor, times the future reward. So let me, let me actually explain, um, the intuition behind this, right? Which is that, um, let's say you start off at some state s_0, right? So and again, let's, let's say s is equal to s_0. So v_Pi of s is equal to, well, just for your robot waking up in that- I'm going to add to that in a second, okay? But just for the sake, just for this- for the fact that your robot woke up, um, in this state s, you get the immediate- you get a reward R of s_0 right away. This is something that's called- this is also called the immediate reward. [NOISE] Right. Uh, because, you know, just for the, for the, uh, good fortune or bad fortune of starting off in this state, the robot gets a reward right away. This is called the immediate reward. And then it will take some action and get to some new state s_1. Where it will receive, you know, Gamma times the reward of s_1. And then [NOISE]. Right. And then it will get some future reward at the next step and so on. Um, and just to flesh out the definition, the value function v_Pi is really this. Given that you execute the policy Pi and our s_0 equals s, right, and you start off in this state as 0. Now, what I'm going to do is rewrite this part of the equation little bit. I'm going to factor out. I'm just going to take the rest of this and factor out one factor of Gamma. So let me put parentheses around this, right, and just take out Gamma there. Okay. So I'm just, you know, taking this previously this was Gamma squared, right? But adding the parenthesis here, I'm just taking out one factor of Gamma, uh, that multiplies in the rest of that equation, okay? Does that make sense? No. 
So that's Gamma R of s_1 plus Gamma squared R of s_2, plus dot, dot, dot, which equals Gamma times, in parentheses, R of s_1 plus Gamma R of s_2 plus dot, dot, dot. Okay. So that's, that's what I did down there, right, just factor out one, one factor of Gamma. And so, um, this is the, the value of state s is the immediate reward, plus Gamma times the expected future rewards. Right? So this, the expected value of this is really v_Pi of s_1. Right. So this- and, and so the second term here, this, this is the expected future rewards, right? So Bellman's equation says that, um, the value of a state, the value- the expected total payoff you get if your robot wakes up in a state s is the immediate reward plus Gamma, times the expected future rewards. Okay. Right. And, and this thing under, you know, above the curly braces is really, um, uh, asking if your robot wakes up at the state s_1, and executes Pi, what is the expected total payoff, right? And this, when your robot wakes up in state s_1, then it'll take an action, gets to s_2, takes an action, gets to s_3, and this sum of discounted rewards starts off with the state s_1. Okay. Makes sense? So, um, uh, this- based on this, you can write out what- justify Bellman's equation, which is, um, and, excuse me. And the mapping from this equation to this equation. [NOISE]. All right. The mapping from the equation on top to the equation at the bottom is that S maps to S_0 and S prime maps to S_1, right? Um, and, what was I going to say, um, and so if we have that V_Pi of S equals, um, makes sense? [BACKGROUND]. So the value of, um, state S is, uh, R of S plus Gamma times V_Pi of S prime, where this is really S_0 and this is S_1. Uh, and and in, in the notation of MDPs, if you want to write a long sequence of states, we tend to use S_0, S_1, S_2, S_3, and S_4, and so on, but if you want to look at just the current state and the state you get to after one time step, we tend to use S and S prime for that. So that's why there's this mapping between these two pieces of notation. Uh, so S prime is the state you get to after one step; well, let's see, what is S prime drawn from, right? This so- the, the, the state S prime or S_1 is the state you get to after one time step. So what is, what is the distribution that S prime is drawn from? S prime is drawn from P of what? S. Okay, P of S, and then? Pi of S. Pi of S, pretty cool. Does that make sense? Because, um, in state S, you will take action a equals Pi of S, right. So we're executing the policy Pi. So that means that when you're in a state S, you're gonna take the action a given by Pi of S, because Pi of S tells you, please take this action a when you're in state S. And so, um, S prime is drawn from P of S a, where a is equal to Pi of S, right? Because they- because that's the action you took, which is why S prime, the state you get to after one time step, is drawn from the distribution P of S, Pi of S, okay? Wow, that pen really left a mark. So putting all that together, that's why- well, I'll just write out again Bellman's equation, which is, um, V_Pi of S equals R of S plus the discount factor times the expected value of V_Pi of S prime. And so this term here is just sum over S prime of P of S, Pi of S, of S prime, times V_Pi of S prime. So that underlined term I guess is just this underlined term here, okay? Um, now, notice that this gives you a linear system of equations for actually solving for the value function. Um, so let's say I give you a policy, right? It could be a good policy, could be a bad policy, and you want to solve for V_Pi of S.
What this, um, does is, if you think of V_Pi of S as the unknown you're trying to solve for, um, given Pi, right, these equations [NOISE] , um, these equa- the Bellman's equations defines a linear system of equations, uh, in terms of V_Pi of S as the ve- values to be solved for. So make sure- here's a, here's a specific example. Um, let's take the state V1, right, so this is the state V1, okay. What this- what Bellman's equation this tells us is, V_Pi of the state 3, 1 is equal to the immediate reward you get at the state 3,1, plus the discount factor times, well, sum of S prime PS Pi of S V_Pi of S prime, right? So, um, when- let's see- le, le- let's say that Pi of 3,1 is north, right? So let's say you try to go north. If you try to go north from this state, then you have a 0.8 chance of getting to 3, 2, plus a 0.1 chance of, uh, veering, uh, left, plus a 0.1 chance of veering right. Um, let me just close out that parenthesis, okay. So that's what Bellman's equation says about these values. All right, and if your goal is to solve for the value function, then these things I'm just circling in purple are the unknown variables [NOISE] okay? And, um, if you have 11 states, uh, like in our MDP, then this gives you a system of 11 linear equations with 11 unknowns. Um, uh, and so using sort of a linear algebra solver, you could solve explicitly for the value of these 11 unknowns. Does that make sense? Okay. So the way you would- so let's say I give you a policy Pi, you know, any policy Pi. Um, the way you can solve for the value function is, create an, an 11 dimensional vector, um, with V_Pi of, you know, 1, 1, V_Pi of 1, 2 and so on, down to the V_Pi of whether is the last thing. You have 11 states, so V_Pi of 3, 3 or whatever, of 4, 3, right? So if you want to, er, solve for those, um, 11 numbers I wrote up just, uh, in terms of defining V_Pi, what you can do is, I'll give you a policy Pi, you can then construct an 11 dimensional vector, you know, 11 dimensional vector of unknown values that you want to solve for. And Bellman's equations for each of the 11 states, um, for each of the 11 states you could plug in on the left-hand side. This gives you one equation for how one of the values is determined as a linear function of a few other of the values in this vector, okay? And so, um, what this does is it sets up a linear system of equations with 11 variables and 11 unknowns, right? And using a linear algebra solver, you, you will be able to solve this linear system of equations. Does that make sense? Okay. Um, all right. And so this works so long as you have a discrete- If you have 11 states, you know, it takes like a, it, it takes almost a- takes almost no time, right, in a computer to solve a linear system of 11 equations. So that's how you would actually get those values, if you're ever called on to solve for V_Pi, okay? [NOISE] Actually, the, the- did what I just say make sense? Raise your hand if what I just explained made sense. Okay, good, awesome, great. All right, good. So moving on our roadmap, um, we've defined V_Pi, let's now define V_star. Um, so [NOISE]. So V star is the optimal value function. And we'll define it as V star of S equals max over all policies Pi of V Pi of S. Okay. Um, one of the I don't know, slightly confusing things about reinforcement learning terminology is that there are two types of value function. There's value function for a given policy Pi and there is the optimal value function V star. 
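Circling back to the linear-system view of Bellman's equations for V_Pi just described, a minimal sketch using a linear-algebra solver; the MDP is passed in as arrays, P_pi[s, s'] is the transition probability under the action pi(s), and the names are illustrative:

```python
import numpy as np

def evaluate_policy(R, P_pi, gamma=0.99):
    """
    R:    (n,) immediate rewards R(s)
    P_pi: (n, n) matrix with P_pi[s, s'] = P(s' | s, pi(s)), i.e. transitions under pi
    Solves V = R + gamma * P_pi @ V, i.e. (I - gamma * P_pi) V = R:
    n linear equations in n unknowns.
    """
    n = len(R)
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R)
```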
So both of these are called value functions, but one is a value function for a specific policy, could be a great policy, could be a terrible policy, can be the optimal policy. The other is V star which is the optimal- optimal value function. So V star is defined as, um, look at the value for, you know, any- lo- lo- look across all of the possible policies you could have all, um, 4-11. Over all the combinatorially large number of possible policies for this MDP. And V star of this is, well let's just take the max, where was of all the possible- of all the policies you know anyone could implement of all the possible policies, let's take the value of the best possible policy for that state, so that's V star. Okay. And that's the optimal- optimal, um, optimal value function. And it turns out that, um, there is a different version of Bellman's equations for this. And again, there's a Bellman's equation for V_Pi, for value of a policy. And then there's a different version of Bellman's equations for the optimal value function, right? So just as the two versions of value functions, there are two versions of Bellman's equations. But let me just write this out and hopefully this will make sense. Um, actually let's think this through. So let's say you start off your robot in a state S, what is the best possible expected sum of discounted rewards? What's the best possible payoff you could get, right? Well, ah, just for the privilege of waking up in state S, the robot will receive an immediate reward R of S, all right? And then it has to take some action and after taking some action, it will get to some other state S prime. Um, you know, and after some other state S prime it will receive, right, future expected rewards V star of S prime, and we have to discount that by Gamma, right? So, sorry. So well, the state S prime was arrived at but [NOISE] you're taking some action a from the initial state. Um, and so whatever the action is you know, for- if, if you take action a, right? Okay, um, so if you take an action a in the state S, then your total payoff will be- expected total payoff will be the immediate reward plus Gamma times the expected value of the future payoff. But what is the action a that we should plug it in here? Right. Well, the optimal action to take in the MDP is whatever action maximizes your expected total payoff, maximizes the expected sum of rewards which is why the action you want to plug in is just whatever action a maximizes that. Okay. So this is Bellman's equations for the optimal value function, which says that, ah, the best possible expected total payoff you could receive starting from state S is the immediate reward R of S, plus max over all possible actions of whatever action allows you to maximize, you know, your expected total payoff- expected future payoff, okay? So this is the expected future payoff, or expected future reward, okay. Um, now based on the argument we just went through, um, this allows us to figure out how to compute Pi star of S as well, right? Which is, um, let's say- let's say we have a way of computing V star of S, but we don't yet. But let's say I tell you what is the V star over S, and then I ask you, you know, what is the action you should take in a given state? So remember, Pi, Pi star, oh Pi star is going to be optimal policy, right? And so, um, what should Pi star of S be, right? Which is le- let's say- let's say we're computing V star. 
Um, and now I'll see you, "Hey, my robot's in state S, what is the best action I should take from the state S, right? Then how do I- how do I decide what actions to take in the state S? What, what optimal? What do you think is the best action to take from the state? And the answer is almost given in the equation above, yeah. [inaudible]. Yeah, cool. Awesome, right. So the best action to take in state S, and best means of maximizing respect to total payoff. But the action that maximizes your expected total payoff is, you know, what- whatever action we were choosing a up here. And so it's just argmax over a of that. And because Gamma is just a constant that, that doesn't affect the argmax, usually we just eliminate that since it's just a positive number, right? So this gives us the strategy we will use for finding, um, the optimal policy for an MDP, which is, um, we're going to find a way to compute V star of S, which we don't have a way of doing yet, right? V star was defined as a max over a combinatorially or exponentially large number policy. So we don't have a way of computing V star yet. But if we can find the way to compute V star, then you know, using this equation, sorry, let me just scratch this out. Using this equation gives you a way for every state of every state S, to pretty efficiently compute this argmax, um, and therefore figure out what is the optimal action for every state, okay? [NOISE]. All right, um. So all right. So just to practice with confusing notation. All right, let's see if you understand this equation. I'm, I'm just claiming this. I'm not proving this. But for every state as V star of S equals V of Pi star of S, is greater than V Pi of S, all right? For every policy Pi in every state S, okay? So ho- hope this equation makes sense. Ah, this is what I'm claiming. I didn't prove this. What I'm claiming is that, um, the optimal value for state S is- this is the optimal value function on the left. This is the value function for Pi star. So this is- this is the optimal value function. This is the value function for a specific policy Pi, where the policy Pi happens to be Pi star. And so what I'm claiming here is that- wh- what I'm writing here is that, um, the optimal value for state S is equal to the value function 4 Pi star applied to the state S, and just as greater than equal to V Pi of S for any other policy Pi, okay? Right. All right. So, um, the strategy you can use for finding for optimal policy is: one, ah, find V star. Two, you know, use the argmax equation to find Pi star, okay? And so what we're going to do is- well, step two, right? We, we know how to do from the argmax equation. So what we're gonna do is talk about an algorithm for actually computing V star because if you can compute V star, then this equation helps- allows you to pretty quickly find the optimal, um, action for every state [NOISE]. So, um. So value iteration is, ah, is an algorithm you can use to, um, to find V star. So let me just write out the algorithm, um. So this is um- Okay? So in the value iteration algorithm, you initialize the estimated value of every state to 0, and then you update these estimated values using Bellman's equation. And this is the, uh, optimal value function, the V star version of Bellman's equations, right? And, um, [NOISE] so to be concrete about how you implement this, you know, if you're implementing this, right? If you are implementing this in Python, um, what you would do is create a 11 dimensional vector to store all the values of V of S. 
So you create a, you know, 11 dimensional vector, right? That, that represents V of 1, 1, V of 1, 2, you know, down to V of 4, 3, right? So this is, um, an 11 dimensional vector corresponding to the 11 states. Um, [NOISE] oh, I'm sorry I shou- wait, did I say 11? We got 10 states in the MDP, don't we? Wait. Yes, we have 10 states. We've been saying 11 all along? Sorry. Okay, 10. Um, uh, yeah, uh, wait. [inaudible]. 11? [inaudible]. Oh, yes. You're right. Sorry. Yes, 11. Okay. Sorry. Yes, 11 states. Okay, it's all right. Right. So 11 states MDP, so you create an initial, ah, create an 11 dimensional vector, um, and initialize all of these values to 0. And then you will repeatedly update, um, the estimated value of every state according to Bellman's equations, right? Um, and so uh, there, there, there are actually two ways to interpret this, um, and sim- similar to, er, similar to gradient descent, right? We've written out, you know, a gradient descent rule for updating the Theta, uh, the, the, vector of parameters Theta. And what you do is, you know, then you have, um- and what you do is you update all of the components of Theta simultaneously, right? And so that's called a synchronous update, er, in gradient descent. So one way to- so the way you would, um, er, update this equation in what's called a synchronous update, would be if you compute the right hand side for all 11 states and then you simultaneously overwrite all 11 values at the same time. And then you compute all 11 values for the right-hand side and then you simultaneously update all 11 values, okay? Um, the alternative would be an asynchronous update. In an asynchronous update, what you do is you compute v of 1, 1, right? And the value of v of 1, 1 depends on some of the, the other values on the right hand side, right? But in the asynchronous update, you compute v of 1, 1 and then you overwrite this value first. And then you use that equation to compute v of 1, 2. And then you update this, and then you sort of update these one at a time. And the difference between synchronous and asynchronous is, um, you know, if you're using the asynchronous update, by the time you're updating V of 4, 3, which depends on some of the earlier values, you'd be using a new and refreshed value of some of the earlier values on your list, okay? Um, it turns out that value iteration works fine with either synchronous updates or asynchronous updates. But, um, for the, er, er, but, um, er, because it vectorizes better, because you can use more efficient matrix operations, most people use the synchronous update, but it turns out that the algorithm will work whether you use a synchronous or an asynchronous update. So I, I, I, I guess unless, unless otherwise, uh, uh, you know, stated, you should usually assume that whe- when I talk about, uh, value iteration, I'm referring to the synchronous update, where you compute all the values, all 11 values, and then update all 11 values at the same time, okay? Was there a question just now, someone had, yeah. [inaudible] Yeah, yes. So I think there, there, uh, uh, yes. So how do you represent the absorbing state? The sink state? We get to plus 1, minus 1, then the world ends. Um, in this framework one way to code that up would be to say that, um, the state transition probabilities from that state to any other state are 0. That is one way to, to, to- that, that will work.
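A sketch of value iteration with the synchronous update just described, following the 11-dimensional-vector framing; P is assumed to be a (num_states by num_actions by num_states) array of transition probabilities, R a vector of rewards, and the names are illustrative:

```python
import numpy as np

def value_iteration(R, P, gamma=0.99, num_iters=200):
    """
    R: (n,) rewards, P: (n, a, n) with P[s, a, s'] = P(s' | s, a).
    Returns the estimated optimal value function V (an n-vector, initialized to 0).
    """
    n = len(R)
    V = np.zeros(n)
    for _ in range(num_iters):
        # Bellman backup: V(s) <- R(s) + gamma * max_a sum_s' P(s, a, s') V(s')
        Q = P @ V                      # (n, a): expected future value of each state-action pair
        V = R + gamma * Q.max(axis=1)  # synchronous: all n entries overwritten together
    return V

def greedy_policy(V, P):
    # pi*(s) = argmax_a sum_s' P(s, a, s') V(s')
    return (P @ V).argmax(axis=1)
```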
Uh, another way would be, um, less- done less often maybe mathematically a bit cleaner but not how people tend to do this, would be to take your, um, 11 state MDP and then create a 12 state, and a 12 state always goes back to itself with no further rewards. So both, both of these will give you the same result. Mathematically, it's pretty more convenient to just set, you know, P of Sa S prime equals 0 for all other states. It's not [inaudible] probably but that, that will give you the right answer as well. Yeah. All right. Cool. Um, so just as a point of notation, if you're using synchronous updates, you can think of this as, um, taking the old value function, er, O estimate, right? And using it to compute the new estimate, right? So this, this, you know, assuming the synchronous update, you have some, uh, previous 11 dimensional vector with your estimates of the value from the previous iteration. And after doing one iteration of this, you have a new set of estimates. So one step of this algorithm is sometimes called the Bellman backup operator. And so where you update V equals B of V, right? Where, uh, where now V is, a 11 dimensional vector. So you have an order 11 dimensional vector, compute the Bellman backup operator with just that equation there and update V according to V of P. Um, and so one thing that you see in the, um, problem set, uh, is prove- is, er, er, showing that, um, this will make a V of S converge to V star, okay? So it turns out that, um, okay, so it turns out that, um, er, you can prove and you'll see more details of this in the problem set, that by repeatedly and forcing Bellman's, er, equations, that this equa- this, this algorithm will cause your vector of 11 values or cause V to converge to your optimal value function of V star, okay? Um, and more details. You- you'll see in the homework and a little bit in the lecture notes. And it turns out this algorithm actually converges quite quickly, right? Um, to, to, to give you a flavor, I think that, uh, with the discount factor, the discount factor is 0.99, it turns out that you can show that the error, er, reduces, you know, by a factor of 0.99 on every iteration, um, and so V actually converges quite, quickly geometrically quickly or exponentially quickly, um, to the optimal value function, V star. And so if it's, you know, if the discount factor is 0.99, then we've like a few, we've 100 iterations or a few hundred iterations, V would be very close to V star, okay? And, and the discount factor is 0.9, then we've just, you know, 10 or a few dozens of iterations that'll be very close to V star. So these algorithm actually converges quite quickly to V star, okay? Um, so let's see. [NOISE]. All right. So just to put everything together, um, if you- if you run value iteration on that MDP, you end up with this. Um, er, so this is V star, okay? So it's a list of 11 numbers telling you what is the optimal, um, expected pay off for starting off in each of the 11 possible states. And so, um, I had previously said, I think I said last week, uh, o- of the week before Thanksgiving, that this is the optimal policy, right? So, you know, let's just use as a case study how you compute the optimal action for that state, um, given this V star, all right? Well, what you do is you, you actually just use this equation. And so, um, if you were to go west, then if you were to compute, I guess this term, um, sum of S prime west or left I guess, right? 
P of S, A, S prime, times V star of S prime, is equal to, um, if you were to go west, you have a, um- Right. Um, right. So if you're in this state, and if you attempt to go left, then there's a 0.8 chance you end up there, with, ah, ah, a V star of 0.75. There's a 0.1 chance- you know, if you try to go left, there's a 0.1 chance you veer off to the north and end up in a state with a V star of 0.69. And then there's a 0.1 chance that you actually go south and bounce off the wall and end up with a 0.71. And so the expected future reward, the expected future payoff given this equation, is that if you attempt to go west, you end up with 0.8 times 0.75, plus 0.1 times 0.69, plus 0.1 times 0.71, which is 0.740 as the expected future reward. Whereas if you were to go north, and we do a similar computation, [NOISE] you know, so 0.8 times 0.69, plus 0.1 times 0.75, plus 0.1 times 0.49, is the appropriate weighted average, you find that this is equal to 0.676. Um, which is why the expected future reward if you go west, if you go no- ah, left, is 0.740, which is quite a bit higher than if you go north, which is why we can conclude, based on this little calculation, um, that the optimal policy is to go left at that state, okay? And- and really, technically you check north, south, east, and west and make sure that going west gives the highest expected future reward. And that's how you can conclude that going west is actually the better action at this state, okay? So that's value iteration. And based on this, if you, um, ah, are given an MDP you can implement this, ah, solve for V star and, ah, ah, be able to, ah, compute Pi star, okay? All right. Few more things to go over. But before I move on, ah, let me check if there are any questions, yeah. [inaudible] Oh, sure, yep. Is the number of states always finite? So in what we're discussing so far, yes. But what we'll see on Wednesday is how to generalize this framework. I'll, I'll do this a little bit later, but it turns out if you have a continuous state MDP, ah, one of the things that's often done I guess is to discretize it into a finite number of states. Ah, but then there are also some other versions of, um, ah, you know, value iteration that apply directly to continuous states as well. Okay, cool. All right. So [NOISE]. Um, what I described is an algorithm called value iteration. The other, um, you know, common, ah, sort of textbook algorithm for solving an MDP is, is called policy iteration. And let me just- I'll just write out what the algorithm is. So here's the algorithm, which is, um, you know, initialize Pi randomly, right? [NOISE]. Okay, so let's see what this algorithm does. So we'll talk about pros and cons of value iteration versus policy iteration in a little bit. Um, in policy iteration, ah, instead of solving for the optimal value function V star- so in- in value iteration our focus of attention was V star, right? Where, um, you know, you do a lot of work to try to find the value function. And then once you solve for V star, you then figure out the best policy. In policy iteration, the focus of attention is on the policy Pi rather than the value function. And so initialize Pi randomly. So that means for- for each of the 11 states pick a random action, right? So a random initial Pi. And then we're going to repeatedly carry out these two steps. Um, the first step is, um, solve for the value function for the policy Pi, right? And remember, um, for V_Pi, this was a linear system of equations, right? With 11 variables, with 11 unknowns- there is a linear system of 11 equations with 11 unknowns.
And so using a sort of linear algebra solver or linear equation solver, given a fixed policy Pi, you could just, you know, at the cost of inverting a matrix roughly, right? You can solve for- you can solve for all of these 11 values. And so in policy iteration, um, you would, you know, use a linear solver to solve for the optimal value function for this policy Pi that we just randomly initialized. And then set V to be the value function for that policy. Okay, um, and so this is done quite efficiently with the linear solver. And then the second step of policy iteration is pretend that V is the optimal value function, and update Pi of S, you know, using the Bellman's equations for the optimal value function, right, or updated, um, as you saw right how you update Pi of S. And then you iterate, and then give it a new policy, you then solve that linear system equations for your new policy Pi. So you get a new V_Pi and you keep on iterating these two steps, um, until convergence, okay? Yeah. [inaudible] Yeah, yep. Yes, that's right. So in, in, in value, ah, yeah, yeah, yeah, yeah. So in, in value iteration, um, ah, actu- in value iteration think about value iterations as waiting to the end to compute Pi of S, right? Solve for v star first, and then compute Pi of S. Whereas in policy iteration, we're coming up with a new policy on every single iteration, right? Okay? So, um, pros and cons of poly- and, and it turns out that this algorithm will also converge to the optimal policy. Um, pros and cons of policy iteration versus value iteration. Policy iteration requires solving this linear system of equations in order to, um, get V_Pi. And so it turns out that if you have a relatively small state space, um, like if you have 11 states, it's really easy to solve a linear system of equations, ah, you know, of 11 equations in order to get V_Pi. And so in a relatively small set of states like 11 states or really anything, you know, like a few hundred states, um, policy iteration would work quite quickly. Ah, but if you have a [NOISE] relatively large set of states, you know, like 10,000 states or, or, or a million states. Um, then this step would be much slower. At least if you do it right by solving linear system of equations and then I would favor a value iteration over policy iterations. So for larger problems, usually value iteration will, um, ah, ah, usually I would use value iteration because solving this linear system of equations, you know, is, is pretty expensive if it's- it's like a million Pi. Is a million equations and a million unknowns, that's quite expensive. But even 11 states 11 unknowns is a very small system of equations. Um, and then one, one other pros and cons, one of the, ah, ah, differences that- that's maybe, maybe more academic and practical. But it turns out that if you use value iteration, um, V will converge towards V star, but it won't ever get to exactly V star, right? So just as, if you apply gradient descent for linear regression, gradient descent gets closer and closer and closer to the global optimum, but it never, you know, gets exactly the global optimum. It just gets really, really close, really, really fast. Actually gradient descent, actually turns out asymptotically converges geometrically quickly or exponentially quickly, right? But they've been never quite gets, you know, definitively to the optimal, to the one optimal value. Whereas, you, you saw using normal equations it just jumped straight to the optimal value and there's no, you know, converging slowly. 
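A sketch of the policy iteration loop just described, before the convergence comparison continues: repeatedly solve the linear system for V_Pi with a linear solver, then act greedily with respect to it. Array shapes follow the value-iteration sketch above and the names are illustrative:

```python
import numpy as np

def policy_iteration(R, P, gamma=0.99, max_iters=100):
    """R: (n,) rewards, P: (n, a, n) transition probabilities P[s, a, s']."""
    n, num_actions, _ = P.shape
    pi = np.random.randint(num_actions, size=n)         # initialize pi randomly
    V = np.zeros(n)
    for _ in range(max_iters):
        # 1) Policy evaluation: solve (I - gamma * P_pi) V = R for V = V_pi
        P_pi = P[np.arange(n), pi]                       # (n, n): row s is P(. | s, pi(s))
        V = np.linalg.solve(np.eye(n) - gamma * P_pi, R)
        # 2) Policy improvement: pi(s) <- argmax_a sum_s' P(s, a, s') V(s')
        new_pi = (P @ V).argmax(axis=1)
        if np.array_equal(new_pi, pi):                   # the policy stopped changing: done
            break
        pi = new_pi
    return pi, V
```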
And so value iteration converges to a V star, but it doesn't ever end up at exactly the value of V star. Ah, this difference may be a bit academic because in practice it, it doesn't have, ah, right? Ah, ah, but in policy iteration, um, if you iterate this algorithm then after a finite number of iterations, ah, this algorithm will stop changing meaning that after a certain number of iterations Pi of S will just not change anymore, right? So you find Pi of S update the value function, and then after another integration. When you take these argmax's, you end up with exactly the same policy. And so, ah, just- just to solve for the optimal value and the optimal policy, and then just, you know, ah, ah, it doesn't converge- it doesn't just converge to what the optimal value. It just gets the optimal value when it- when it converges, okay? Um, so I think in practice I actually see value iteration used much more, ah, ah, ah, because, um, solving these linear system equations gets expensive, you know, if you have a larger state space but, um, value iteration, excuse me, val- I see value iteration used much more. But if you have a small problem, you know, I think you could also use policy iteration which may converge a little bit faster. If, if you have a small problem, okay? [NOISE] All right, good. So the last thing is, um, kinda putting it together, right? And what if you don't know [NOISE]. So it turns out that when you apply this to a practical problem, you know, in- in- in robotics right. Um, one common scenario you run into is if you do not know what is P of S, A. If you don't know the state transition priorities right. So when we built the MDP we said, well, let's say the robot if you're going off you know, has a 0.8 chance of going off and a 0.1 chance of veering off to the left or right. If you actually- again it's a very simplified robot. But, if you build a actual robot or build a helicopter or whatever, play- play- play chess against an opponent. Uh, the state transition probabilities are often not known in advance. And so in many MDP implementations you need to estimate this from data. And so the workflow of many reinforcement learning projects will be that, um, you will have some policy and have the robot run around, you know, just have a robot run around a maze and count up of all the times you had to take the action north, how often did it actually go north and how often do they veer off to the left or right, right? And so you use those statistics to estimate the state transition probabilities. So let me just write this out. So you estimate. So after you're taking maybe a random policy it takes some policy, executes some policy in the MDP for a while. And then you would estimate this from data. And so, the obvious formula would be, estimate P of Sa S prime to be number of times took action a, in the state S and got to S prime and divide that by the number of times you took action a in state S, right. So P of Sa S prime estimates- does actually a maximum likelihood estimate. When you look at the number of times, you took action a in state S, and of that was a fraction of times you got to the state S prime right. Or one over S and the above is 0, 0 right. [NOISE] And a common heuristic is, if you've never taken this action in this state before, if the number of times you try action A in state S is 0. So you've never tried this action in this state. So you have no idea what it's going to do. They just assume that the state transition probability is 1 over 11, right? 
That it randomly takes you to another state. So this would be common heuristics that people use when implementing reinforcement learning algorithms, okay? And it turns out that you can use Laplace smoothing for this if you wish, but you don't have to. Because, so you're in Laplace smoothing right. So it would be, you know, adds 1 to the numerator and add 11 to the denominator would be, if you were to use Laplace smoothing, which avoids the problems of 0 over 0s as well. But it turns out that unlike the Naive Bayes algorithm, these solvers of MDPs are not that sensitive to 0 values. So if- if one of your estimates were probably a 0, you know, unlike Naive Bayes' where having a 0 probability was very problematic for the classifications made by Naive Bayes, it turns out that MDP solvers, including evaluation of policy iteration, they do not give sort of nonsensical/horrible results just because of a few probabilities that are exactly 0. And so in practice, you can use Laplace smoothing if you wish. But because the reinforcement learning algorithms don't- don't perform that badly if these estimates often will be a zero in practice, Laplace moving is not commonly unison. What I just wrote is- is more common. Okay. So to put it together. All right, if I give you a robot and asked you to implement a MDP Solver to find the good policy for this robot, what you will do is the following. Take actions with respect to some policy pi. To get the experience in the MDP. Right. So go ahead and let your robot lose and have it execute some policy for awhile. And then update estimates of P of Sa. Based on the observations of whether robot goes and takes different states, update- update the estimates of P of Sa. Solve, um, Bellman's equation using value iteration to get V and then update. So this is the value iteration we are putting together. If you want to plug in policy innovation instead in this step that's also okay. But so if you actually get the robot, um, you know, yeah right- right. If you actually get a robot, uh, where you do not know in advance the state transition probabilities, then this is what you would do in order to, um, iterate a few times I guess. Repeatedly find a- find a- find a policy given your current estimate of the state transition probabilities. Get some experience, update your estimates, find a new policy and kind of repeat this process until hopefully it converges to a good policy. Okay. Now just to add more color and more richness to this, we usually think of- we usually think of the reward function as being given, right, as part of the problem specification. But sometimes you see that the reward function may be unknown. And so for example, if you're building a stock trading application and the reward is the returns on a certain day, it may not be a function of the state and it may be a little bit random. Um, or if your robot is running around but depending on where it goes, it may hit different bumps in the road and you want to give it a penalty every time it hits the bump. We're going to build a self-driving car right, every time it hits a bump, hits a pothole, you give it a negative reward, then sometimes the rewards are a random function of the environments. And so sometimes you can also estimate the expected value of a reward. But- but in- in some applications, if the reward is a random function of the state, then this process allows you to also estimate the expected value of the reward from every state and then running this will help you to converge. Okay yeah. 
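Putting it together, a sketch of the loop just described: run the current policy to get experience, form the count-based maximum-likelihood estimate of P(s, a) with the 1 over |S| fallback for never-tried pairs, solve Bellman's equations with value iteration, and repeat. Here `run_policy` is a hypothetical stand-in for executing the policy in the environment, `value_iteration` refers to the sketch above, and the reward vector R is taken as known:

```python
import numpy as np

def estimate_transitions(transitions, num_states, num_actions):
    """Count-based MLE: #(s, a -> s') / #(s, a), with a uniform 1/|S| fallback."""
    counts = np.zeros((num_states, num_actions, num_states))
    for s, a, s_next in transitions:
        counts[s, a, s_next] += 1
    totals = counts.sum(axis=2, keepdims=True)            # times action a was taken in state s
    P_hat = np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
    P_hat[(totals == 0).squeeze(axis=2)] = 1.0 / num_states   # never tried (s, a): assume uniform
    return P_hat

def model_based_rl(R, num_actions, run_policy, gamma=0.99, num_rounds=20):
    num_states = len(R)
    pi = np.random.randint(num_actions, size=num_states)  # some initial policy
    transitions = []
    for _ in range(num_rounds):
        transitions += run_policy(pi)                      # 1) take actions, observe (s, a, s')
        P_hat = estimate_transitions(transitions, num_states, num_actions)  # 2) update P(s, a)
        V = value_iteration(R, P_hat, gamma)               # 3) solve Bellman's equations for V
        pi = (P_hat @ V).argmax(axis=1)                    # 4) update pi, then repeat
    return pi
```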
[inaudible] Yeah, cool. [NOISE]. [inaudible] Yeah, cool. Great question. So let me, let me talk about exploration, right. So it turns out that, um, this one [NOISE] so it turns out this algorithm will work okay for some problems but the- the- there's one other, ah, again to add richness to this, there's one other, um, issue that this is not solving which is the exploration problem. And [NOISE] in, in reinforcement learning sometimes you hear the term exploration versus exploitation, [NOISE] right? Which is, um, let me use a different MDP example, right. Which is, um, if your robot, you know, starts off here and if there is a, um, plus 1 reward here, right and maybe a plus 10 reward here. If just by chance during the first time you run the robot it happens to find its way to the plus 1 then if you run this algorithm, it may figure out that going to the plus 1 is a good way, right? We were giving it a discount factor and there is a fuel surcharge of minus 0.02 on every step. So if just by chance your robot happens to find its way to the plus 1 the first few times you run this algorithm then this algorithm is, um, is uh, locally greedy, right. Ah, it may figure out that this is a great way to get to plus 1 reward and then the world ends, it stops giving these minus 0.02 surcharges for fuel. And so this particular algorithm may converge to a bad, you know, kind of local optima where it's always heading to the plus 1. And as it hits the plus 1, it sometimes will veer off randomly right and get a little bit more experience in the right half of the state space and end up with pretty good estimates of, ah, what happens in the right half of this state space. And, um, and it may never find this hard-to-define plus 10 pot of gold over on the lower left, okay? So this problem is sometimes called actually, well, it is called the exploration versus exploitation problem which is, um, when you're acting in an MDP, you know, how aggressively or how greedy should you be at just taking actions to maximize your rewards? And so the algorithm we describe is relatively greedy, right? Meaning that, um, is taking your best estimate of the state transition probabilities and rewards and is just taking whatever actions and this is really saying, you know, pick the policy that maximizes your current estimate of the expected rewards and it's just acting greedily, meaning on every step it's just executing the policy that it thinks allows it to maximize the expected payoff, right? And what this algorithm does not do at all is explore which is the process of taking actions that may appear less optimal at the outset, um, such as if the robot hasn't seen this plus 10 reward, it doesn't know how to get there, maybe it should, you know, just try going left a couple of times just for the heck of it, right, to see what happens. Because even if it seems less, even if going left from the perspective of the current state of the knowledge of the robot, um, maybe if it tries some new things it's never tried before maybe it will find a new pot of gold, okay. So this is called the exploration versus exploitation trade-off, um, and this is actually not just an academic problem. It turns out that some of the large online web advertising platforms, ah, have the same problem as well. And again, I, I, I, I have, have mixed feelings about the advertising business. 
It's very lucrative but it causes other problems, um, as well but, but it turns out that for some of the large online ad platforms, um, ah, you know, when a, when an advertiser, um, starts selling a new ad or your posts and you add on one of the large online ad platforms, the ad platform does not know who is most likely to click on this ad, right? And so pure explo- pure exploitation, boy exploitation has such horrible connotations especially [LAUGHTER] for online ad platforms. Ah, it's the technical term, not a, not a social term when used in this context. But the pure, you know, reinforcement learning sends exploitation policy not, not the other even more horrible sense of exploitation. Um, would be to always just show you, show, show users the ads that, you know, they are most likely to click on to drive short-term revenues because we want to just show people the ad they're most likely to click on to drive short-term revenue. Whereas an exploration policy for large, you know, some of these large online ad platforms, is to show people some ads that may not be what we think you are most likely to click on in this moment in time but by showing you that ad or by showing the pool of users an ad that you might be less likely to click on, maybe we'll learn more about your interests. And that, um, increases the effectiveness of these large or these ad platforms at finding more relevant ads, right? And for example, I don't know, um, probably not- I, I, I guess, ah there are probably no advertisements for ah, Mars landers as I know. But if the large online ad platforms don't know that I'm actually pretty interested in Mars landers if it shows me an ad for a Mars lander which I don't think such a thing exists, right? If I did I click on it and they may learn that showing me ads for Mars landers is a great thing, right, ah, or, or some other thing that you may not know you're interested in. So this is actually a real problem. There are, um, some of the large online ad platforms, ah, um, actually do explicitly consider exploration versus exploitation and make sure that sometimes it shows ads that may not be the most likely you'll click on but, you know, allows us to gather information to then be better situated to figure out where the future rewards to be better positioned to, ah, learn how to match ads not just to you but to other users like you, right? Um, sorry. Okay but so in order to make sure their reinforcement learning algorithm, um, ah, explores as was exploits a, um, ah, a common a, a modification to this would be tak- instead of taking actions with respect to Pi, you may have a, um, a 0.9 chance. [NOISE] Respect to Pi and 0.1 chance, [NOISE] take an action randomly, okay. And so, um, this particular, [NOISE] exploration policy is called Epsilon-greedy where on every time step and on every time step you toss a biased coin. But on every time step, let's say 90% of the chance you execute whatever you think is the current best policy and with 10% chance you just take a random action. And this type of exploration policy, um, increases the odds that you know, every now and then maybe just by chance, right, it'll find it's way to the plus 10 pot of gold, and learn state transition probabilities and, and, and then eventually, um, end up exploring the state-space more thoroughly, okay. Um, this is called Epsilon-greedy exploration and, um, it's a little bit of a misnomer I think. 
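A rough sketch of the 0.9 / 0.1 action-selection rule just described, assuming the policy `pi` is stored as an array mapping each state index to an action index:

```python
import numpy as np

def epsilon_greedy_action(pi, s, n_actions, epsilon=0.1, rng=None):
    """With probability 1 - epsilon follow the current policy pi (exploit);
    with probability epsilon take a uniformly random action (explore)."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))   # explore: random action
    return int(pi[s])                          # exploit: current best guess
```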
So in, in, in the way we think of Epsilon-greedy, Epsilon is, um, say 0.1, is the chance of taking a random action instead of the greedy action. Um, this algorithm is, has always been a little bit strangely named because, ah, if 0, 0.1 is actually the chance of you acting randomly, right. So Epsilon-greedy sounds like you're being greedy 0.1 of the time but, but you're actually taking actions randomly 0.1 of the time, so Epsilon-greedy is actually maybe 1-minus-Epsilon greedy. So th- this name has always been a little bit, um, off but that's what, that's, that's how people use this term. Epsilon-greedy exploration means Epsilon of the time, which is the hyperparameter, which is the parameter of the algorithm, you act randomly ins- instead of going with what you think is the best policy, okay. And it turns out that, um, if you implement this algorithm with, um, Epsilon-greedy exploration then this, ah, ah, this algorithm, ah, will converge to the optimal policy for any discrete state MDP, right. Ah, sometimes it takes a long time because, you know, if there's a, if it takes a long time to randomly find the plus 10, it, it, it could take a long time before it randomly stumbles upon the plus 10 pot of gold. But, um, this algorithm with an, with an exploration policy will converge to the optimal, um, will, will converge to the optimal policy for any MDP. What is your question? [inaudible] Yeah, yeah, so, right, should you always keep epsilon constant or should you use a dynamic epsilon. So yes, ah, there are, there, there are. There are many heuristics for how to explore, ah. One reasonable thing to do would be we start with a large value of epsilon and we slowly shrink it. Um, another common heuristic would be, um, there is a different, ah, type of exploration called Boltzmann exploration, which you can look up if you want, which is, ah, if you think that the value of going north is, um, you know, 10 and the value of going south is 1, then there is such a huge difference that you should bias your actions toward the bigger result, the, the bigger reward and, ah, you could have the probability of each action be proportional to e to the value, basically, ah, divided by a temperature scaling factor, right? So that's called Boltzmann exploration, sketched below, where instead of having a 10% chance of taking an action completely at random, ah, you could just, you know, have a very strong bias toward heading toward the higher values but also have some probability of going toward the lower values, but where the exact probability depends on how different the estimated values are. So probably, I think Epsilon-greedy, I feel like I see this used the most often for these types of MDPs, and then Boltzmann exploration, which is why I just described this also. Two more questions before we wrap up, go ahead. [inaudible] Yes, can you get a reward for reaching states you've never seen before? Yes, there is a fascinating line of research called intrinsic reinforcement learning. Ah, and you can really find it by searching. If you Google for intrinsic, intrinsic motivation, you find some research papers on it. Um, and then there was some recent followup work I think by DeepMind or some other groups, but intrinsic motivation is the term to Google where you reward a reinforcement learning algorithm for finding new things about the world. Just one last question. How many actions should you take with respect to Pi? Sorry, say that again? How many actions should you take with respect to Pi before updating Pi? I see, right.
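A sketch of the Boltzmann (softmax) exploration rule mentioned above, where `q_values` are assumed to be your current estimates of the value of each action from the current state and `temperature` is an assumed scaling parameter:

```python
import numpy as np

def boltzmann_action(q_values, temperature=1.0, rng=None):
    """Sample an action with probability proportional to exp(value / temperature)."""
    if rng is None:
        rng = np.random.default_rng()
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()                  # subtract the max for numerical stability
    p = np.exp(z)
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

# e.g. values of [north, south] = [10, 1]: almost always north, occasionally south
print(boltzmann_action([10.0, 1.0], temperature=2.0))
```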
How often, how many actions should you take before updating Pi? Um, there's no harm in doing it as frequently as possible. Ah, if you're doing this with a real robot, what, you know, I've seen is, um, there's sometimes a cost to going to the physical robot, and so, you know, I don't know, when we were flying helicopters you'd go out to the field for the day, collect a lot of data, and then go back to the lab in the evening and rerun the algorithms. Ah, but if there's no barrier to running this all the time, then it doesn't hurt the performance, just run it as frequently as you can. All right, that's it for the basics of MDPs. Um, on Wednesday, we'll continue with generalizing all of these to continuous state MDPs. Okay, let's break, I'll see you on Wednesday. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_16_Independent_Component_Analysis_RL_Stanford_CS229_Machine_Learning_Autumn_2018.txt | Hey everyone. Um, let's get started. So, um, let's see, the plan for the [NOISE] day is, uh, we'll go over the rest of ICA, independent component analysis. In particular, talking about CDFs, cumulative distribution functions [NOISE]. And then, um, actually, uh, let's do that later. [NOISE]. All right. So the plan is we'll go over, uh, the rest of ICA, independent component analysis, and we'll talk a bit about CDFs, um, cumulative distribution functions [NOISE], and then derive the ICA model. And uh, in the second half of today, we'll start on the final of the, um, interesting, four major topics of the class, which is reinforcement learning. We'll talk about MDPs, or Markov decision processes, okay? So to recap briefly, um, we had- you remember the overlapping voices demo. So we said that in the ICA problem, independent component analysis problem, we'll assume we have sources S, which are RN if you have N speakers. So for example, if this is speaker one's audio, then at time T [NOISE], um, S, you know, superscript parentheses T subscript 1 is the [NOISE] sound emitted by speaker one at time T. Sorry, I don't have the- that's interesting. All right [NOISE]. Just let me go back over a little bit. Um, and, uh, yeah, we're, we're using sometimes I to index training examples, and so the training examples sweep over time, um, and sometimes usually I use I, sometimes I use T, I guess in the case where, um, the, uh, uh, the different examples come from different points in time in your recording. And what your microphones record is XI equals A of SI. So just for now, let's say you have two speakers and two microphones, in which case, A will be a 2 by 2 matrix, and in the homework problem, we have five speakers and five microphones in which case A will be a 5 by 5 matrix. We'll talk later, um, about what happens if the number of speakers and microphones is not the same [NOISE]. And the goal is to find the matrix W, uh, which should hopefully be A inverse, um, so that SI is W times X recovered the original sources. Uh, and we're going to use these W1 up through WN to represent the rows of this matrix W. Yeah. [inaudible]. Uh, oh yes you're right. Thank you. Right. Okay. Thank you. Okay. [NOISE] So, um, [NOISE] last time we had [NOISE]. All right. So remember this is a picture of the Cocktail party problem. And, uh, last time I showed these pictures about, you know, why, why, why is ICA even possible, right? Given two overlapping, um, voices, how is it even possible to separate them out? How is there enough information to know, um, uh, you know, what are the two overlapping voices? And so one picture [NOISE] we saw was this one, where if S1 and S2 are uniform between minus 1 and plus 1, then the distribution of data will look like this [NOISE]. If you pass this data through the mixing matrix A, then your observations, now the axes have changed to X1 and X2, may look like this, and your job is to find an unmixing matrix W that maps this data back to the square, okay? Now, this example is possible because the examples- because the, uh, sources S1 and S2, were distributed uniformly between minus 1 and plus 1. Um, it turns out human voices, you know, the recordings per moment in time are not distributed uniform between minus 1 and plus 1. And it turns out that, um, uh, if the data was Gaussian, then ICA is actually not possible. 
Here's what I mean [NOISE]. Let's say that, uh- so, so the uniform distribution is a highly non-Gaussian [NOISE] distribution, right? Uniform between minus 1 and plus 1, you know, this is non-Gaussian and that, that makes ICA possible [NOISE]. Um, what if [NOISE] S1 and S2 came from Gaussian densities, right? Um, if that were the case, then this distribution of S1 and S2 would be rotationally symmetric. And so, um, there would be a rotational ambiguity, right? Any axis could be S1 and S2 [NOISE]. You can't map this back to the square the way you could with that parallelogram, right? So, so [NOISE] with the parallelogram, um, you can sort of read off, you know, that maybe one axis should look like that. So I'm drawing with a mouse, not doing very well. Well, the second axis should maybe look like that, right? And by, by inverting that you can get the data back to the square. But in the case where the data looks like this, then [NOISE] you actually don't know, um, because [NOISE] maybe this should be S1 [NOISE] and that should be S2, right? But so this is rotational ambiguity, because the Gaussian distribution is, um, rotationally symmetric; if S1 and S2 are standard Gaussians, then, then, [NOISE] then this distribution is rotationally symmetric, and you don't have enough information to recover the directions that correspond to the original sources, okay? So it turns out that, um, there is some ambiguity in the output of ICA. In particular, last time we talked about, uh, two sources of ambiguity. Um, you don't know which is speaker one and which is speaker two, right? You don't know which one to number speaker one and which one to number speaker two, and you might take this data and flip it horizontally, uh, reflect this, you know, on, on, on the neg- S1 goes to negative S1 [NOISE], or reflect this, uh, on a vertical axis. We don't know if it's positive S2 or negative S2. And in the case of this example, where S1 is, uh, uniform between minus 1 and plus 1, those are the only sources of ambiguity. Um, but if the data was Gaussian there would be additional rotational ambiguity, which makes [NOISE] it in part- whi- which actually makes it impossible to separate out the sources, okay? So [NOISE] it turns out that, um, all right, cool [NOISE]. So it turns out that the Gaussian density is the only distribution, um, that is rotationally symmetric. Uh, if, if, if S1 and S2 are independent and if the distribution is rotationally symmetric [NOISE], meaning that the distribution has sort of circular contours, uh, then it, then it, then it must be a Gaussian density [NOISE]. And so, there is a theorem, uh, which I'll just state informally, that ICA is possible only if your data is non-Gaussian, right? But, but so long as your data is non-Gaussian, then it is possible to recover the independent sources, okay? I'm just stating [NOISE] that informally. Um, so let's [NOISE] let's see [NOISE]. So what I would like to do is, um, develop [NOISE] the ICA algorithm assuming that the data is non-Gaussian, okay? Now, um, [NOISE] in order to, uh, develop the ICA model, we need to figure out what is the density of S, right? And I'm going to use P subscript S, you know, of, uh, the, the, uh, of the random variable S to represent the, um, density of S. Um, an equivalent way to represent the density of a continuous random variable [NOISE] is via the CDF, which stands for the cumulative, uh, distribution function [NOISE].
And the, uh, the cumulative distribution function of a, uh, random variable F of S in probability, is defined as the chance that the random [NOISE] variable is less than that value. So I guess, um, notations have been inconsistent, sorry, but this is capital S I'm using to denote the random variable, and this is some constant. Right, and, uh, it's that same constant as that lowercase s, okay? Um, and so for example, if this is the PDF, of a random variable S, maybe of a Gaussian, right? The CDF is a function that um, [NOISE] increases from 0 to 1 where, um, the height of a CDF at a certain point is the probability. So if you take the curves at the same point, right? So the height of a CDF at a certain point lowercase s, is the probability that the random variable takes [NOISE] on a value equal to this value or lower, which means that the height of this function is equal to, um, you know, the probability mass, the area under the curve of your PDF, um, [NOISE] over to the left of that point, okay? So that's, uh, I don't know, sometimes this- some probability and statistics courses teach this concept and some don't I guess, but there's- so there's a mapping between the PDFs and the CDFs of a function, of a, of a continuous random variable. Um, and the relation between the PDF and the CDF is that the density [NOISE] is equal to the first derivative, right? Uh, F prime. So if you take the derivative of the CDF, then you should recover the PDF, [NOISE] okay? But so I think, um, in order to specify, you know, some random variable, we could either specify the PDF, right? The probability density function, or you can specify the CDF which is just, you know, let's tell me what's the chance of the random variable taking on any value less than any particular value S. And by taking the derivative of this, you can always recover the PDF, and by integrating this you can always go to the CDF, okay. And so, um, what we're going to do in, um, ICA is instead of specifying a PDF for how speakers' voices sound, we're instead going to specify a CDF, and, uh, we'll have to choose a CDF that is not the Gaussian density CDF, because we have to assume that the data is non-Gaussian. Uh, um, uh, and the CDF, you know, is a function that always goes from, right, 0 to 1, okay? So, um, [NOISE]. All right. So we'll specify [NOISE]. So in a little bit, we'll specify some CDF for the density of the sources of what human voices sound like let's say. And if you differentiate this, uh, you will get the PDF of the density of s, right? Was equal to that. Now, we're um, going to derive a maximum likelihood estimate mission algorithm in a minute. But our model is [NOISE] that, X is equal to A_s, um, which is equal to, I guess w inverse of s, and s is equal to w_x, right? So that- that's, that's the model. And in order to derive a maximum likelihood estimate for the parameters, um, when you have- [NOISE] so this is going to be the density of x. Okay? So this is a relationship between, um, ah, this is the relationship between x and s. X is equal to A_s, equals W inverse s and s equals W_x, right? So this is the model. And what I'd like to do is, let's say you know what's the density of s. Um, what is the density of x if x is computed as the matrix A times s? Right? So one step that's tempting to take is to just say, well, s is equal to W times X. So the probability of x is just equal to the probability of s taking on the certain value, right? So, so I mean this is s, right? 
And so the probability of seeing a certain value of x is equal to the probability of s taking on that corresponding value, because, assuming W is an invertible matrix, this is a bijection. There's a one-to-one mapping between x and s. So to find the probability of x, just find the probability of s and compute the corresponding probability. Um, it turns out this is- this is incorrect, and this works with probability mass functions, for discrete probability distributions, um, that take on discrete values. But this is actually incorrect for continuous probability densities. So let me- let me, um, uh, show an illustration, and we'll go back to derive what is the correct way of computing the density of x. Oh, and we want, uh, the density of x because, um, when you get the training set, you only get to observe x, and so for, uh, finding the maximum likelihood estimate of the parameters, you need to know, um, what's the density of x so you can, you know, choose the parameters, choose the parameters W that maximize the likelihood. Okay? [NOISE] So that's why we want to compute the density of x. But, um, let's, let's use a simple example. [NOISE] Let's say the density of s is an indicator, s is between 0 and 1. Okay? So this is, um, s is distributed uniform from [NOISE] 0 to 1. Um, and let's say [NOISE] x is equal to 2 times s. Okay? So now notation, A is equal to 2, [NOISE] W is equal to one-half. Ah, this is, uh, n equals 1, a 1-dimensional example. So, um, this is the density of s, right? Uniform distribution from 0 to 1. And if x is equal to 2 times s, then this seems like X should be equal- X is distributed uniformly from 0 to 2, right? Because if s is uniform from 0 to 1, you multiply it by 2, X is distributed uniformly from 0 to 2. And so the density for X is equal to this, 1, 2 [NOISE]. Right? And it's now half as tall because, uh, probability density functions have to integrate to 1, right? So this is a uniform from 0 to 2 probability density function. And so the correct formula, um, is P of x, x equals one-half, times indicator 0 less equal to x less than equal to 2. Right? Um, [NOISE]. Okay? And, uh, more generally, the correct formula for this is actually this times, um, this is the determinant [NOISE] of the matrix W. Uh, and in the case of a real number, the determinant of a real number is just its absolute value, which is why, um, we have the density of x equals one-half. You know, that's the absolute value of the determinant of W, um, times, times your- times the indicator of whether W times x is within 0, 0 to 1. Okay? Um, yeah, right. So I guess this, uh, this is indicator 0 less equal to one-half x less or equal to 1, right, since that's s. Okay? So this is an illustration showing why this is the right way, with the determinant of W multiplied here, to compute the density of X. Um, and, er, for the- for those of you familiar with, um, determinants and- oh, and the determinant is a function you can call, you know, in NumPy to compute, um, ah, but also, uh, the intuition of a determinant is it measures how much it stretches out a, um, local region of volume, and so you need to, uh, uh, er, sort of divide by the determinant of A or multiply by the determinant of W, um, in order to make sure your distribution still normalizes to 1. Right? So that's where that comes from. [NOISE] So, um, we're nearly done. Just one more decision, and then we can derive the maximum likelihood estimation, uh, to derive a maximum likelihood estimate of this, of the parameters.
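A small NumPy check of that 1-dimensional example, confirming that the extra factor of |W| = 1/2 is exactly what makes the density of x come out right; the bin count and test point below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(0.0, 1.0, size=100_000)   # s ~ Uniform(0, 1), so p_s = 1 on [0, 1]
A, W = 2.0, 0.5                            # x = A * s, and W = 1 / A
x = A * s

# empirical density of x: histogram heights should all be about 0.5 on [0, 2]
hist, _ = np.histogram(x, bins=10, range=(0.0, 2.0), density=True)
print(hist.round(2))

# the formula p_x(x) = p_s(W * x) * |W| gives exactly 1 * 0.5 on that interval
def p_x(x_val):
    return float(0.0 <= W * x_val <= 1.0) * abs(W)

print(p_x(1.3))   # 0.5
```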
The last thing we need to do is, um, [NOISE] choose the density of what your speakers' voices sound like. [NOISE] And as I said just now, um, what we're going to do, is, uh, choose a non-Gaussian distribution. Right? And so [NOISE] while F of s is equal to the chance of this person's voice. Right? Random variable s being less than a certain value. And we need a smooth function that goes between, you know, 0 and 1, um, where, we need a smooth function that has vaguely that shape. And so well, what functions we know that are vaguely that shape? Right? Let's pick the sigmoid function. Um, and it turns out this will- this will work. Okay. There are many choices that actually work fine. Um, it turns out that if you choose a sigmoid function to be the CDF, then if you look at the PDF this induces, if you take the derivatives of this. Right? So take P of x [NOISE] equals the derivative of the CDF. Um, it turns out that if this is the Gaussian, then the PDF that this choice induces, is, uh, something with fatter tails, right? Um, by which I mean that it goes to 0, you know. [NOISE] So Gaussian density goes to 0 very quickly, right, it's like e to the negative x squared, right? That's the Gaussian is a square in the exponent of the density. And it turns out that this particular density, uh, taken by compute derivative of a sigmoid it goes to 0 more slowly and this captures human voice and many natural phenomena better than a Gaussian density because there are a larger number of extreme outliers, that are more than one or two standard deviations away, um, but there are actually multiple distributions that work. You could- if you use a double, double exponential distribution. So this is an exponential distribution- exponential density. If you take a symmetric with two sided exponential density for P of s, it will also work quite well for ICA. But I think, um, early history of ICA, you know, researchers, I think it was, um, might been Terry Sejnowski, uh, down at the Salk Institute, just needed a function with these properties. He picked the sigmoid and plugged it in and it works just fine. It's been a good enough default that, um, it's still- it's still widely used, right? But, but, but, but, but I've used this um, double-sided exponential or sometimes also called the Laplacian distribution. This, this works fine as well as a choice of a P of s. Okay? [NOISE] So the final step. Um, the density of s is equal to [NOISE]. Right? The product of the, uh, uh, um, um, let's see. Ah, it's a product from i equals 1 through your n sources of the probability of each of the speakers emitting that sound, right? Ah, because the n speakers [NOISE] are speaking independently, right? Yeah. [inaudible]. Say, say that again? [inaudible] . Oh, yes. You're right. Sorry about that. Yes, this should have been [NOISE]. Sorry, yes this should have been a P_s. Yeah. Thank you. [NOISE] Right, go from a CDF to PDF by taking derivatives. All right. Cool. So, um, er, S is the vector of all, you know, two speakers' or all five speakers' voices at one moment in time. So the density of S, right? S as an RN is the product of the individual speakers' probabilities, and, um, this is the key assumption of ICA that, you know, your two speakers or your five speakers are having independent conversations, and so at every moment in time, they choose independently of each other what sound to emit. All right. Um, and so using the formulas we worked out just now. 
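Writing the choice out in symbols: the sigmoid is taken as the CDF of each source, its derivative gives the density of each source, and independence makes the density of the whole source vector a product over the speakers.

```latex
F(s) = g(s) = \frac{1}{1 + e^{-s}},
\qquad
p_S(s) = g'(s) = g(s)\,\bigl(1 - g(s)\bigr),
\qquad
p(s) = \prod_{j=1}^{n} p_S(s_j).
```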
The density of x is equal to, um, well, as we did, the density of, uh, W_x times the determinant of W. [NOISE] Right? Uh, so- and this is equal to [NOISE] Okay. Um, and this notation, uh, W_I transpose x, this is, um [NOISE] right? Because W_I is the I th row of the matrix W and so, um, you know, I guess S- S_j is equal to, um, W_j transpose X, right? So t- take the corresponding row and multiply it by x to get the corresponding source. Actually, sorry. I think this is right, yeah, let me use j there to make this clearer. [NOISE] Okay. [NOISE] And so, um, this writes out- so this shows what is the density of x, um, expressed as a function of, um, P_s, for which we've assumed the sigmoid as the CDF, so the derivative of the sigmoid as the density, and as a function of the parameter W. Right? So this is a model that, given a setting of the parameter W, which is a square matrix, um, allows us to write down what's the density of X. [NOISE] So the final step is, um, we could use [NOISE] maximum likelihood estimation to estimate the parameters W. Um, so the log-likelihood of W is equal to the sum over the training examples of the log of, you know, [NOISE] this density, parameterized by W. Right. And, um, you can use stochastic gradient ascent. [NOISE] All right. Take the derivative of the log-likelihood with respect to W. Um, and it turns out- this is derived in the lecture notes. I'll just write it out here. [NOISE] Times x i. [NOISE] I hope I got that right. Yeah. Okay. Right. [NOISE] Um, yeah. And it turns out that, um, if you use this formula, don- don't worry about the formula for the derivatives, there are full derivations given in the lecture notes. But it turns out that, um, if you use the derivative of the log-likelihood with respect to the parameter matrix W and use stochastic gradient ascent to maximize the log likelihood, uh, run this for a while, then you can get, um, ICA to find a pretty good matrix W, um, for unmixing the sources, okay? So just to recap the whole algorithm, right? You would have a training set of X_1 [NOISE] up through X_m, where each of your training examples is the, um, er, microphone recordings at one moment in time, [NOISE] and so the time goes from 1 through M. What you do is initialize the matrix W, say, randomly, and use gradient ascent with this formula for the derivative in order to maximize the log-likelihood of the data, and after gradient ascent converges, you then have a matrix W and you can then recover the sources as S equals W_x. And then now you have the sources, you can take, um, say, S_1_1 through S_1_m and play that through your, um, your laptop speaker in order to hear what source one sounds like. Right? And so that's how you would take, you know, overlapping voices and [NOISE] try to unmix them. Okay. Oh, yeah. [inaudible] Oh, why is this choice of density not rotationally symmetric? Uh, er, boy, how to visualize that. Try plotting it in, um, NumPy, matplotlib I guess. If you plot the contours of the- so it turns out that if this is S_1 and S_2, what you do not want is a den- density whose contours look like that. Um, I haven't done this for a while. I believe if you take this distribution, the contours will look like that. [NOISE] It's been a while since I looked at this, but I think it'll look like that. So this is not rotationally symmetric. [inaudible] Well, for Laplace, yeah. Okay. Yeah. Oh, yes. Laplace definitely looks like that. I think the sigmoid one looks a bit like that too. Yeah, a little like that. Plot it and see if I'm right, or post on Piazza, if one of you plots it.
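Since he suggests trying things yourself, here is a minimal NumPy sketch of the whole recipe just recapped: initialize W, run stochastic gradient ascent with the update from the lecture notes, W := W + alpha([1 - 2 g(W x)] x^T + (W^T)^(-1)), and then recover the sources as s = W x. The toy Laplacian sources, the 2-by-2 mixing matrix, the learning rate, and the number of passes are all made-up choices for illustration; real recordings would typically need tuning and possibly pre-whitening.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ica(X, lr=0.01, n_passes=10, seed=0):
    """Stochastic gradient ascent for ICA: X has one recording x^(i) per row."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # start near the identity
    for _ in range(n_passes):
        for i in rng.permutation(m):
            x = X[i]                                      # one training example, shape (n,)
            g = sigmoid(W @ x)
            W += lr * (np.outer(1.0 - 2.0 * g, x) + np.linalg.inv(W.T))
    return W

# toy usage: mix two non-Gaussian (Laplacian) sources and try to unmix them
rng = np.random.default_rng(1)
S = rng.laplace(size=(5000, 2))            # the "speakers", one row per time step
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])                 # unknown mixing matrix
X = S @ A.T                                # microphone recordings, x^(i) = A s^(i)
W = ica(X)
S_hat = X @ W.T                            # recovered sources, s^(i) = W x^(i)
```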
So you can see it, I haven't done that for a long time. Yeah, at the back. [inaudible] Oh, um, um, why don't you interact with the derivative of the log? The- th- actually, yes, the log should be like this, I think. Yes. [BACKGROUND] Oh, sorry, uh, g is the sigmoid function. Yes, so g of z. Yeah, thank you. Right, more questions? [inaudible] Sure. What's the, you know, um, what's the closest non-linear extension of this? Um, I don't- we don't a have a great answer to that right now frankly, um, uh, so a bunch of people including, you know, my former students and me, have done research to try to extend this to nonlinear versions and there's some stuff that kind of works, but I don't think there's like, uh, tried and true algorithm that I'm ready to say this is a right way to do it. Um, uh, yeah, actually maybe I should [NOISE] think I could say a little bit more about that if you're interested. Well, yeah, actually, uh, let me- let me try to- [NOISE] All right. Let's see. So, so for several- several years ago and- and still kind of ongoing, there's been research, um, some done by my collaborators and me, some done by others on trying to build nonlinear versions of ICA, and so some of you might have seen this slightly infamous, um, Google cat result, right? Uh, so this one was in the Google Brain project, one of the first projects we did. This is a few years ago now where, um, we trained a neural network, uh, uh, on, um, was it many, many hours of YouTube videos, uh, and, and eventually it learnt to detect cats because apparently there are a lot of cats in YouTube videos. Um, uh, and so it turns out that the algorithm we used was a, um, was sparse coding which is actually very closely related to ICA. Um, and so this rough algorithm was attempting to build a nonlinear version of ICA, where you train one version one- train- train train one layer of sparse coding let's say, to extract low level features and then recursively apply this on top, to learn not just edge detectors, but object part detectors, and then eventually, you know, the somewhat infamous, um, uh, this somewhat infamous Google cat. Um, but I think that this is actually still ongoing research. Um, I think the most interesting research, uh, some of the most interesting research has been on hierarchical versions of sparse coding, sparse coding is a different algorithm that turns out to be very closely related to ICA, and then you can show that they're optimizing very similar things. So, so I say sparse coding is very similar to ICA, uh, but they're hierarchical versions of this, they tried to turn this as a multilayered neural network and it kinda worked, wherever that shows it can learn interesting features. But what happened was, uh, supervised learning then really took off and the whole world shifted a lot of this attention to supervised learning and building deeper supervised learning neural networks. And so, the hierarchical sparse coding running ICA over and over to learn nonlinear versions. There- there's very less, uh, attention from research on the- on that topic than it- than it really deserves. So may- maybe you or someone in a class could go back and do more research on that. I, I still think is a promising area. All right. Um, so let me wrap up with, uh, some ICA examples, um, so this is actually a former TA from the class, um, Catie Chang. Um, and so it turns out that, uh, ICAs are routinely used to clean up EEG data today, so what's an EEG, right? 
Um, place many electrodes on your scalp, uh, to measure low electrical recordings, uh, on the surface of your scalp. So, you know, wha- what does the human brain do, right? Human brain, your neurons in your brain right now, uh, fire, generate little pulses of electricity, and if you put- place electrodes on your scalp, you can get very weak measurements of the, um, of the voltage of the electrical activity, in a, you know, at a certain point in your scalp. So the analogy to- um, oh, excuse me. Uh, oh, what's wrong. All right. So the analogy to the cocktail party problem, the, um, overlapping speakers' voices is that, you know, your- your brain [NOISE] does a lot of things at the same time, right? Your brain helps regulate your heartbeat, um, part of your brain does that, another part of your brain, you know, makes your eyes blink every now and then, another part of your brain- part of your brain is also responsible for making sure that you breathe, another part of your brain is responsible to thinking about machine learning and stuff like that, right? [LAUGHTER] So, so your brain actually handles many, many tasks at the same time. And as your brain, um, sorry, not sure what's wrong with this. Okay. And as your brain, um, uh, carries out these different tasks in parallel, uh, different parts of your brain generate different electrical impulses. So think of there as, um, imagine that you have a, you know, cocktail party in your head, right? So many overlapping voices, so this is now voices in your head, uh, just going back, but one- one- one part of your brain is saying, all right heart, go and beat, heart go and beat, heart go and beat, and another part of the brain is saying, hey, breathe in and breathe out, breathe in and breathe out, another part of the brain is ooh, you know. What's wrong with this PowerPoint? [LAUGHTER] That's what my brain is saying, right? Um, and uh, what each electrode on the surface of your scalp does is it measures an overlapping combination of all of these voices because different parts of your brain are sending these electrical impulses, they add up and so any one point on the surface of your brain, reflects a sum or a mixture, re- really a sum of these different voices, of these different things your brain is doing. Um, and so, uh, if you- just- just zooming in to the EEG plot, um, each line is a voltage measured at a single electrode, right? On say your scalp and, um, these, uh, signals are quite correlated, you see that when there's a massive voice in your brain shouting, you know, like, uh, uh, uh, uh, right, beat your heart or blink your eyes, that signal can go through all of the different electrodes, which is why you can see these artifacts reflected in all of these electrodes, um, uh, sorry. All right. Turns out a pretty good way to clean up this data is to take all of these time series pre- pretty much exactly as we learned about it with the ICA algorithm [NOISE] and separate it out into the independent components, and so, um, it turns out in this example, there are two components corresponding to driving the heartbeat, um, that's actually the eye blink component, and so one way to clean up this data- sorry, I should really wonder what's wrong with this. All right. Let me try something, [NOISE] um, maybe if I, [NOISE] uh, oh, that's interesting. All right. Okay, well, all right. Um, if you, uh, uh, right, it says heartbeat, there's eye blink, and, uh, you don't get, all right. 
And, um, if you run the ICA and then have a person say, "Oh, this is the heartbeat, this is the eye blink," you can remove, subtract out those components, and then you can end up with a, um, much more cleaned up EEG signal, which you can then use for downstream processing. So actually there's been a lot of research on taking an EEG reading to try to guess at a high level what you're thinking, right? It turns out that, uh, uh, if you train a, train a, train a, you know, supervised learning algorithm, uh, to try to decide, are you thinking of a noun or a verb, are you thinking of, uh, something edible, or are you thinking of, uh, uh, something inedible. There's been very interesting research, uh, trying to use an EEG to figure out just at a very coarse level, um, not- not- not- not quite mindreading every thought you're thinking, but, but, uh, uh, uh, but, uh, can we categorize very coarse level thoughts? Like, are you thinking of a person, are you thinking of an object? And you can actually do that to some extent using EEG readings. But cleaning up the data to get rid of the eye blink and the heartbeat artifacts is a very useful, um, pre-processing step to get cleaner data to feed into the learning algorithm, to try to figure out, try to categorize, you know, some coarse category of what you're thinking. Okay. Um, and then more research here, it turns out that- uh, we're kind of- I, I mentioned the Google cat thing just now. It turns out that, um, if you, um, uh, train ICA, uh, oh, the font is messed up. Um, if you train ICA on, uh, natural images, um, ICA will say that the natu- the independent components of natural images are these edges. Uh, and by that I mean that, you know, when you see a little image patch in the world, when you, you know, look, look, look somewhere in the world, look at just a tiny little piece of the image, right? Like 10 pixels by 10 pixels. Um, and if you take that data and model it with ICA, ICA will say that, uh, the world is made up of edges, or made up of patches like these, and that, uh, the way you end up with images in the world is by each of these patches, you know, independently saying, is there a vertical edge, is there a horizontal edge, is there this type of, um, uh, light on the left, dark on the right? Is there this type of, uh, lighter on top, darker on the bottom, and so on. And just by adding up all of these voices you get a typical image patch of the world. So there are, there are interesting theories in neuroscience about whether this is how, you know, the human brain learns to see as well. So, so there's been some work on, um, ICA and sparse coding to try to use these mechanisms to explain how, you know, the human brain, um, uh, tries, tries to learn to perceive images, for example. Okay? Um, so all right. So [NOISE] that's it for, um, uh, the algorithms of ICA, um, just the final comments. Um, I think on Monday someone asked, "Do the number of speakers and number of microphones need to be equal?" So it turns out that, um, if the number of, uh, um, microphones is larger than the number of speakers, that's actually fine, right? If you- if the number of microphones is larger than the number of speakers, then if you run ICA or, or a slightly modified version of it, you'll find that some of the speakers are just silent speakers.
Um, uh, and so, you know, if you have, uh, 10 microphones and five speakers, if you run this algorithm on 10 microphones, you can find that, well, maybe five of the sources are just silent or there are ways to just not model those five sources as well, right? If, if you think that, uh, they're just some sources of silence. So, so, this, so, so a slightly modified version of this works quite well if, um, uh, the number of speakers is larger than the number of microphones. Um, if the- excuse me, if the number of microphones is larger than the number of speakers, this, this, this works quite well. If the number of microphones is smaller than the number of speakers, then that's still, um, uh, very much a cutting edge research problem. Uh, so, so for example, uh, if you have two speakers and one microphone, um, uh, it turns out that if you have one male and one female speaker, so one relatively higher pitch and one much lower pitch, then you can sometimes have some algorithms that separate out two voices with one microphone. Um, but it doesn't work that reliably, it's a little bit finicky but there have been research papers published showing that, you know, you could make a reasonable attempt at separating out, um, two voices with mi- one microphone if the pitches are quite different such as this one male one female voice. Um, uh, uh, but separating out two male voices or two female voices is still very hard, um, uh, and, and then there's ongoing research in, in those settings. Right? So that's ICA, um, and I guess you get to play more of it in your, um, homework problem as well. Okay? Any last questions about ICA? [inaudible] Oh sorry, say it again? [inaudible] [NOISE] Wait, sorry, was because a- So I'm just wondering why is it that hard [inaudible] Oh, [NOISE] yeah so, um, uh, I think- actually if you go through a lot of the math it, it, it, it just breaks down, I think. Um, because there- you can have two independent sources but W is now no longer a square matrix, right? Of your, what is it? Um, uh, uh, so- uh, uh, right. Is that x is equal to AS, right? And so if, um, x is a real number and S was two-dimensional, so I guess this would be, um, uh, uh, A would be 2 by 1, S would be- uh, S- uh, A would be 2 by 1, S would be 2- excuse me, A would be 1 by 2 and S would be a 2 by 1, and this is 1 by 1, then, you know, A inverse kind of doesn't exist, right? So you need to come up with a way to form the maximum likelihood model. And, and when you have one microphone, it's just how do you separate out two overlapping voices,right? Does that make sense? So it takes much higher level knowledge, um, uh, yeah, to separate out two voices. Does this make sense? Um, so go ahead. [inaudible] Oh I see, right, uh, let's see. So right, so if you don't know how many speakers there are, you have all these microphones where you have all- the number of electrodes you have is fixed, so that's just your data set. And it turns out that, uh, um, if you run ICA with a large number of speakers, you find there are many speakers are silent. There are also some versions of ICA that you- so if you think that there are, um, uh, let's see, boy- those transfer some of this. But it turns out that, um, if you think that there is a relatively small number of speakers, then you don't need to explicitly model all the speakers. Instead, what you would model wou- so again, um, uh, suppose it's a maximum likelihood estimation problem. Um, let's say that, uh, x is an R10, right? So you have 10 recordings. 
But you suspect that you only have five speakers. Then in this case, I guess the ma- matrix A would be um, what is it? Uh, was it? It would be 10 by 5, is it? Right? To mix the 5 sources into 10 speakers. And you could, um, form the maximum likelihood estimation problem assuming the existence of only five speakers without modeling a lot of speakers and then finding later that they're all silent. Does that make sense? So if you form the- so if- if you parameterize the model like this using A instead of W, um, uh, then you could form the maximum likelihood estimation problem where you just assume that there are five speakers and S is generated by five speakers mixing through a linear thing plus noise. But I just think that if you don't know how many speakers you have or even what you are- what speakers you are working on, how would you know if you probably had enough microphones? Oh I see, sure, right. How do you know if you have- how do you know how many speakers you have? So I, I think it's one of those things that's a little bit like k-means, I guess, where you try it and see what works. And if you find that, uh, the first few, you know, speakers will capture most of the variance, you find that digital speakers are quite silent and they're quite small, you could just cut it off at that time. I don't wanna go too much into the different numbers of speakers and, and, and, uh, microphones, ICA algorithms. Uh, uh, but let me just take a couple of last questions and move on. You have a question? Yeah. Do you ever see a problem with W? Say it again? Do you ever see a problem with W? Oh, do you ever see a problem with W? Um, I'm sure you can. It's not usually done in this version of the algorithm, but I would not be surprised if there are some other versions where you do. I've, I've not seen that a lot myself actually. All right, cool. All right, cool. Um, let's see. All right, good, we're far enough along. Okay, good. Um, so- [NOISE] Circumstantial- All right. All right, yeah, let's do these- [NOISE] All right. Um- [NOISE] All right. So that wraps up, um, our chapter on unsupervised learning. So, um, you learned about I, guess, k-means clustering, um, the EM algorithm for mixture of Gaussians, uh, or really mixture of Gaussians model, um, factor analysis model, and also PCA. And then, you know, today the ICA or independent components analysis algorithm. And all of these are algorithms that could take as input an unlabeled training set, just the xi's and no labels. And we'll find various interesting structures in the data such as clusters or subspaces or in the case of ICA, the voices of the independent speakers. And, and you implement ICA and play with it yourself in the homework problem, where you get to separate out many five overlapping, um, voices. The last of the four major topics, I want to cover in this class. We've talked about supervised learning, kind of device machine learning, unsupervised learning, and the fourth and the final major topic we'll cover in this course will be on reinforcement learning [NOISE]. Okay. So, um, so to motivate reinforcement learning. Um, let's say you want to have a computer, uh, learn to fly a helicopter, right? I think I showed you some of the videos that are in the first lecture, and so I just skipped that here. 
But it turns out that, um, if you are at every point in time given the position of a helicopter, called the state of a helicopter, and you're asked to take an action on how to move the control sticks, you know, to make the helicopter fly in a certain trajectory. It turns out that it's very difficult to know what's the one right answer for how to move the control sticks of a helicopter. Right. So if you don't have a mapping from X to Y because you can't quite specify the one true way to fly a helicopter, um, it's hard to use supervised learning for that, right. And what reinforcement learning does is, is, is an, an algorithm that doesn't ask you to tell it the right answer at every step, it doesn't ask you to tell it exactly what's the one true way to move the controls of a helicopter at any moment in time. Instead, your responsibility as a designer or machine learning engineer or AI engineer is to specify a reward function that just tells the helicopter when it's flying well and when it's flying poorly. So your job as a designer is to write the cost function or a reward function that gives a helicopter a high reward whenever it's doing well. Flying accurately, flying the trajectory you want it to, and it gives the helicopter a larger negative reward, um, whenever it crashes or does something bad, right? And I think I, I, I remember, I think, you know, think of it as like training a dog, right? When do you say good dog, when do you say bad dog? And the dog figures out when to do more of the good dog things. And your job is not to tell the dog, when you can't actually talk to the dog, and tell it what to do. I guess that doesn't work. But you can tell it good dog and bad dog, and hopefully it learns from those positive and negative rewards how to do more of the good things. Okay. Um, another example. Um, let's say you want to write a program to play chess or I guess most, you know, somewhat famously and, uh, uh, arguably somewhat slightly overhyped Go, AlphaGo, right. Um, so it's very difficult to know in given a certain chess board position or checkers or Go board position, what is the one true move, what's the one best move. So it's very difficult to formulate, um, you know, playing chess, uh, uh, as a supervised learning problem. And instead, um, the mechanisms used to play chess are much more like reinforcement learning, where you can, um, let your program play chess or Go or whatever. And whenever it wins you go, "Oh good computer." And when it loses you go, "Oh bad computer." So that's a reward function. And the learning algorithm's job is to figure out by itself how to get more of the positive rewards, right? And actually common rewards for, uh, learning to play, uh, chess or checkers or Othello or Go is, uh, plus a reward of plus 1 for a win, minus 1 for a lose, and a 0 for a tie, right? So as you write your chess-playing programs, there has to be a common choice for a reward. Um, where R is the reward function and S is the state. Okay. And I will go into the notation, um, in a little bit. And so as you can imagine, um, given only this type of information so say a chess-playing program, it places much more burden on the program to figure out what to do. Right. In fact, one of the challenges of reinforcement learning is, uh- so this is called a reward, and that's called the state. And the state means, um, the status of the chessboard. Where are the P's in the chessboard? Or the status of the helicopter. Where exactly is the helicopter? 
And you're either right-side up or you're upside down, and where are you, right? Um, and it turns out one of the challenges, one of the things that makes, um, reinforcement learning hard is, uh, the credit assignment problem. And that means that if, uh, your program is playing a game of chess, and let's say it loses on move 50. You know, so it plays a game, and then on move 50, right, is checkmated and loses to its opponent. So it gets a reward of negative 1. But how can the program actually figure out what it did well and what it did poorly, right? If you lose a game on move 50, it might be that the program made a really bad move, made a blunder at move 20. And then, you know, but it just took another 30 moves before its fate was sealed, right. So in a game of chess, you made a bad mistake early on, you can still take many, many games- many, many moves in the game of chess before, before the final outcome of, of losing or winning or losing is reached. Or, um, in a, uh, initiate another- it turns out that, uh, if you are trying to build a self-driving car, um, if ever car crashes, right, chances are the thing the car was doing right before it crashes was brake, but it's not braking that caused the crash. It's probably something else that caused it many, many seconds ago that led to the bad outcome. So there's a bad outcome. How does the algorithm know of all the things that it did before, how does it know what it did well? What it should do more of and what they should- did poorly, what it should do less of. And, and conversely, if there's a good outcome, you know, like it wins a game of chess. Well, how do you know what you did well, right? So that's called the credit assignment problem, which is when your algorithm gets some reward, how, how do you actually figure out what you did well and what you did poorly? So you know what to do you more of and what to do less of, right? So, um, as we develop reinforcing learning algorithms, we'll see that the algorithms we use have to at least indirectly, um, try to solve the credit assignment problem. Okay. So, um, reinforcement learning problems like playing chess or flying helicopters or, um, uh, you know, building these various robots is modeled using the, um, MDP or the Markov decision process formalism. [NOISE] Um, and this is a way- this is a notation and the formalism for modeling how the world works, and then reinforcement learning algorithms will solve problems using this formalism. So what's an MDP? So an MDP is a five tuple. And let me explain what each of these are. Um, so S is the set of states. So for example, uh, in chess this would be the set of all possible chess positions or in, uh, flying a helicopter. This would be the set of all the possible positions, and orientations, and velocities of your helicopter. A is the set of actions, um, where, uh, in the helicopter this would be all the positions you could move your controls sticks or in chess this would be all the moves you can make, you know, in a, in a game of chess. [NOISE]. Uh, P subscript sa is a-a state transition probabilities and so, um, we'll see later these-these state transition probabilities tell you, if you take a certain, uh, action a and a certain state s, what is the chance of you ending up at a particular different state s prime? Great. Um, gamma is a discount factor, that's a number between 0 and 1. Uh, don't worry about this for now, we'll come back to this in a minute, and R is that all important reward function. 
Okay, so, um, in order to develop a reinforcement learning algorithm, um, I'm going to use, as a running example, a simplified MDP that we can draw on the whiteboard. Right, so helicopters and chess and Go and so on are really complicated MDPs. So just to illustrate the algorithms, I want to use a simpler MDP, uh, and this is, um, an example we've drawn from the textbook by Russell and Norvig. Um, I'm going to use a simplified MDP in which you have a robot navigating this simple maze, ah, and there's an obstacle. So this is a grid world, right. So a robot, you know- well, the R2D2-like robot. Yes, right, um, and it's navigating this very simple maze, uh, and this is a pillar or this is a wall, so you can't walk into that wall, [NOISE] and let me just use, um, indexing on the states as follows. Um, so for this MDP- let's- let's go through the five-tuple and talk about what, uh, the- the- each of the five things are. So this MDP has 11 states corresponding to the 11 possible positions that the robot could be in, right, each of these blank squares. So there are 11 possible states, and the actions, um, are North, South, East and West, right? You can command your robot to move in any of these directions. Um, and I don't know if- if you've worked with robots before, you know that, um, when you command a robot, uh, you know, to head straight, um, it doesn't always go exactly straight. Sometimes the wheel slips and it veers off at a slight angle, and so just simplifying the example, we're going to model it as, um, if you command the robot to go North from a certain state, there is a 0.8 chance of it successfully going the way you told it to and a 0.1 chance each that it will accidentally veer off to the left or accidentally veer off to the right, okay? Um, if you're working on real robots, right, with a real robot, uh, it is actually important to model the noisy dynamics of a robot wheel slipping slightly, or the orientation being slightly off. Now, um, in a real robot, you have a much bigger state space than the 11 states, right, so- so this is simplified. So this is not a realistic model for how robots actually slip, but because we're using such a small state space, I think just for illustration purposes, we'll- we'll- we'll use this. Um, and so for example, the state transition probability would specify these. You'd say that if you're in the state 3, 1. So this state 3, 1, and you command it to go North, then the chance of getting to the state 3, 2 is, uh, 0.8, and the chance of getting to the state 4, 1 is 0.1, the chance of getting to 2, 1 is 0.1, um, and the chance of getting to other states, like 3, 3 and the others, is equal to 0, okay? So the state transition probabilities will capture that, if you're here and decide to go North, there is a 0.8 chance you are going here, 0.1 chance you are going here, 0.1 chance you are going here, and you know, you've got 0.0 chance of, right, hopping two steps. Okay. Um, and- and again just simplifying the MDP example, we'll just assume that if the- the robot, you know, hits a wall, it just bounces off the wall and stays where it is. So if you told it to go East and it slips off into the wall, it just bounces off the wall and stays exactly where it is. Now, let's specify the reward function, uh, we'll come back to the discount factor later.
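A small Python sketch of the state transition probabilities just described for this grid world; the (x, y) indexing follows the board, with the wall at cell (2, 2).

```python
# 4x3 grid world from the board: 11 reachable states, wall at (2, 2), 1-indexed (x, y)
states = [(x, y) for x in range(1, 5) for y in range(1, 4) if (x, y) != (2, 2)]
actions = ['N', 'S', 'E', 'W']
moves = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}
# commanding a direction: 0.8 chance of that move, 0.1 chance of veering to each side
veer = {'N': ('W', 'E'), 'S': ('E', 'W'), 'E': ('N', 'S'), 'W': ('S', 'N')}

def attempt(s, d):
    """Where you end up if the wheels actually move in direction d (bounce off walls)."""
    nx, ny = s[0] + moves[d][0], s[1] + moves[d][1]
    return (nx, ny) if (nx, ny) in states else s

def P_sa(s, a):
    """State transition distribution P_sa: dict mapping s' -> probability."""
    p = {}
    for d, prob in [(a, 0.8), (veer[a][0], 0.1), (veer[a][1], 0.1)]:
        s_next = attempt(s, d)
        p[s_next] = p.get(s_next, 0.0) + prob
    return p

print(P_sa((3, 1), 'N'))   # {(3, 2): 0.8, (2, 1): 0.1, (4, 1): 0.1}
```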
But let's say you want the robot to navigate to this cell in the upper right-hand corner, um, and so to incentivize the robot- incentivize the robot to get to this square, you know, that's the prize or that's the goal anyway, let's put a plus 1 reward there, and, um, let's say you really don't want the robot to go to this cell, you could put a negative 1 reward there. Alright. So, um, the way you specify the task for a robot to do is in designing the reward function. So in our example, um, well, let me just copy that again, plus 1, minus 1. Um, we have that the reward at the cell 4, 3 is plus 1, and the reward at the cell 4, 2 is negative 1. Um, and then, you know, if you want the robot to get to the +1 reward cell as quickly as possible, then, um, again there- there are many ways of designing reward functions. Well, one common choice would be to, um, put a negative penalty, a very small negative penalty, right, such as setting the reward to negative 0.02 for all other states. And the effect of a small negative reward like this is to charge it, right, every- every step it's just loitering around. So charge it a little bit for using up electricity and wandering around, uh, because this incentivizes the robot to hurry up and get to the plus 1 reward. So you give a small penalty, you know, for- for loitering and wasting electricity. So this is how an MDP works. Um, your robot wakes up at some state s0, um, at time 0, you know, as you turn on the robot and the robot says, "Oh, I'm at that state." And based on what state it is in, um, it will get to choose some action, a0, so decide to go North, South, East or West and choose some action. Based on the action, the consequence of the choice is it will get to some state S1. Uh, the state at the next time step, which is distributed according to the state transition probabilities governed by the previous state and the action it chose, right. So depending on what action it chooses, there are different chances of moving North, South, East or West. Now that there's an S1, it then has to choose a new action a1, and as a consequence of the action a1, it will get to some new state S2, which is governed by, um, the state transition probabilities, you know, for s1, a1, and so on, okay? And- and the robot just keeps on running. And so the robot will go through a sequence of states S_0, S_1, S_2 and so on, depending on the actions it chooses. And the total payoff is written as follows, with one more detail, which is that term Gamma. So think of Gamma as a number like 0.99. So Gamma is usually chosen to be just slightly less than one, and what the- so the total payoff is the sum of rewards, or more technically is a sum of discounted rewards, and what this does is it adds up all the rewards that the robot receives over time, but the further a reward is into the future, um, you know, the- the- the more heavily it is discounted, because each reward is multiplied by Gamma raised to the power of the time at which you receive it. Okay. So any reward you get at the first time step, you get all of that. The reward you get at the next time step is multiplied by 0.99. And the rewards you get at the steps after that are multiplied by 0.99 squared, 0.99 cubed and so on. And so what the, um, discount factor, ah, does, is it has the effect of giving a smaller weight to rewards in the distant future, um, and this means that this encourages the robot to get the positive rewards faster, um, or postpone the negative rewards, right?
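A one-line version of the total discounted payoff he wrote down, R(s0) + gamma R(s1) + gamma^2 R(s2) + ..., with a made-up example trajectory:

```python
def total_payoff(rewards, gamma=0.99):
    """Sum of discounted rewards: R(s_0) + gamma*R(s_1) + gamma^2*R(s_2) + ..."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# e.g. loitering for five steps at -0.02 each and then reaching the +1 cell
print(total_payoff([-0.02] * 5 + [1.0], gamma=0.99))
```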
And so in financial applications, the discount factor has a natural interpretation as the time value of money, because if you have a dollar today, you're better off having a dollar today than having a dollar a year from now, right? Because you can put the dollar in the bank and earn interest on it for a year, and so a dollar today is strictly better than a dollar in the future. And conversely, having to pay a dollar a year from now is better than having to pay a dollar today, right? Because if you can save your money and earn interest and then issue the payment to someone else a year from now rather than now, then you're actually slightly wealthier. And so Gamma in financial applications has the interpretation of the time value of money, or the interest rate, I guess. But more generally, even for non-financial applications — there are some financial applications of reinforcement learning, but there are lots of non-financial applications as well — this mechanism of using a discount factor has the effect of encouraging the system to get to the positive rewards as quickly as possible, and conversely to try to push the negative rewards as far into the future as possible, right? And I think, to be pragmatic, there are two reasons why people use Gamma. The story I just told — time value of money, get the positive rewards sooner, postpone the negative rewards — that's the story you tend to hear people say in terms of why we have a discount factor. The other reason we have the discount factor is actually a much more pragmatic one, which is that all the reinforcement learning algorithms you'll see converge much faster, or work much better, if you're willing to have a discount factor. Whereas it turns out that if Gamma is equal to 1 — if Gamma is not strictly less than 1 — there are many reinforcement learning algorithms for which it's much harder to prove convergence, or which may not converge. So just as a pragmatic thing, this makes the job much easier for your algorithms. So I see some of you shaking your heads in disapproval. [LAUGHTER]. [inaudible]. Yeah. Yes, yeah, that's a good point. So one of the things, if there's no Gamma, is that the sum of the rewards can increase or decrease without bound. So having Gamma guarantees that the total payoff is a finite value, or is a bounded value. And that's one of the facts that goes into some of the proofs, some of the reasoning, behind why reinforcement learning algorithms converge. So, cool, that's a good insight. Okay. So the goal of reinforcement learning is to choose actions over time to maximize the expected total payoff. Okay. And in particular, what most reinforcement learning algorithms will come up with is a policy that maps from states to actions, right? So the output of most reinforcement learning algorithms will be a policy, or a controller — in the RL world we tend to use the term policy, but a policy just means a controller that maps states to actions. So it turns out that, for the MDP that we have, this is the optimal policy.
So for example, if you take this cell here, this cell over here, the policy is saying pi applied to the state (3, 1) is equal to West. [NOISE] Excuse me. So I worked out separately what the optimal policy is, and this turns out to be the optimal policy in the sense that, if you, as we say, execute this policy — and to execute a policy means that whenever you're in a state s, you take the action given by pi of s; that's what it means to execute a certain policy — it turns out that this policy... I worked this all out separately, offline on my laptop, and this is the optimal policy for this MDP, and it turns out that if you execute this policy, meaning whenever you're in a certain state you take the action indicated by the arrow, this is the policy that maximizes the expected total payoff. Okay. And the problem in reinforcement learning is: given a definition of an MDP — or given a problem, to pose the problem as an MDP: figure out what's the set of states, what's the set of actions, what are the state transition probabilities, specify the discount factor and specify the reward function — and then have a reinforcement learning algorithm find the policy pi that maximizes the expected payoff. And then when you want your robot to act, or when you want your chess-playing program to act, whenever you're in some state s, take the action given by pi of s, and hopefully this will result in a robot that efficiently navigates to the plus 1 state. Okay? So it turns out that MDPs are quite good at making fine distinctions. For example, it's actually not totally obvious whether here you're better off going North or going West, right? It turns out that there is a trade-off. If you go West here, then you're going to take a longer route to get to the plus one. So you take longer, the plus one is discounted more heavily, and you're taking these small penalties along the way. [NOISE] Excuse me. But on the flip side, if you were to try to go North, you could get there faster, but on this step there's a 0.1 chance that you accidentally slip off into the minus 1 state. So what is the optimal action? It's actually quite hard to just look at it with your eyes and make a decision. But it turns out that if you solve for the optimal action in this MDP, in this example it is to just take the longer and safer route. Question? [inaudible] So, if the optimal set of actions is to cycle around, then it should find that out — I mean, for example, if there are only penalties everywhere and you just run in a circle, then the algorithm will actually choose to do that. But in this case, you want to get to the plus one as quickly as possible, right? And so, what we'll see — there's one more question, go ahead. [inaudible] So, for chess and checkers and Go and so on, one complication is what a move is. So, to refine the description for chess: what happens in playing chess is that the state is your board, right? You see a board — that's a state — and you make a move, and then the opponent makes a move, and that's the new state. So the state transition is when you and your opponent both take turns and then it comes back to you. Right?
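As a sketch of what "executing a policy" means in code: repeatedly take the action pi(s) from whatever state you are in, sample the next state from P(s' | s, pi(s)), and accumulate discounted reward. This reuses the hypothetical `transition_probs` and `reward` helpers sketched earlier; the `policy` dict format and the absorbing-state handling are my own assumptions, not something specified in the lecture.

```python
import random

def execute_policy(policy, start_state, transition_probs, reward,
                   gamma=0.99, horizon=100):
    """Roll out a policy: in every state s, take the action policy[s]."""
    state, total, discount = start_state, 0.0, 1.0
    for _ in range(horizon):
        total += discount * reward(state)
        if state in ((4, 3), (4, 2)):        # stop at the +1 / -1 cells
            break
        probs = transition_probs(state, policy[state])   # P(s' | s, pi(s))
        next_states, weights = zip(*probs.items())
        state = random.choices(next_states, weights=weights)[0]
        discount *= gamma
    return total

# With a (hypothetical) policy table such as policy[(3, 1)] = "W", averaging
# execute_policy(...) over many rollouts estimates the expected total payoff.
```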
Um, and because you don't know exactly what your opponent will do, there is a probability distribution over if I make a move or what's the other person gonna do? Uh, I guess one last question. Yeah. Go ahead. [inaudible] [NOISE] Yeah. Right. The probabilities are assigned per node, the 0.8, 0.1, 0.1 where does that come from? Um, so we'll talk about that later, uh, in some applications does this learn? So if you build a robot, you might not know is it 0.8, 0.1, 0.1 or, you know, 0.7, 0.115, 0.115. So it's quite common to use data to learn those state transition probabilities as well. We'll we'll, we'll, see a specific example of that nature. Okay. So all right. So where we are just to summarize, this is how you formulate the problem as an MDP, um, and then the, the, the, the job reinforcing learning algorithm is ready to go from the MDP. to telling you what is a good policy. Okay. So let's break, um, and have a good Thanksgiving everyone, [NOISE] I won't see you for like a week and a half, uh, uh, enjoy yourselves and we'll, we'll reconvene after Thanksgiving with, uh, with this. |
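The answer above mentions that the 0.8 / 0.1 / 0.1 transition probabilities are often learned from data rather than assumed. One minimal, count-based sketch of that idea follows; the trajectory format and the function name are my own assumptions.

```python
from collections import Counter, defaultdict

def estimate_transitions(trajectories):
    """Maximum-likelihood estimate of P(s' | s, a) from observed transitions.

    `trajectories` is a list of episodes, each a list of (state, action,
    next_state) triples; the estimate is just normalized counts.
    """
    counts = defaultdict(Counter)
    for episode in trajectories:
        for s, a, s_next in episode:
            counts[(s, a)][s_next] += 1
    probs = {}
    for (s, a), c in counts.items():
        total = sum(c.values())
        probs[(s, a)] = {s_next: n / total for s_next, n in c.items()}
    return probs
```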
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_15_EM_Algorithm_Factor_Analysis_Stanford_CS229_Machine_Learning_Andrew_Ng_Autumn2018.txt | All right. Hi everyone, welcome back. Um, so what we'll see today is, um, additional, uh, elaborations on the EM, um, on the expectation maximization algorithm. And so, um, what you see today is, um, go over, you know, quick recap of what we talked about EM on Monday, and then describe how you can monitor if EM is converging. Um, and, um, uh, on, on Monday we talked about the mixture of Gaussians model, and started deriving EM for that. I want to just take these two equations and map it back to specifically the E and M steps that you saw for the mixture of Gaussians models, uh, to see exactly how these map to, um, uh, you know, updating the weights of the i and so on, um, how you actually derive the M step. Um, and then mostly what I want to spend today talking about is the model called the factor analysis model. Um, and this model useful for, um, for, for data, um, that can be very high-dimensional even when you have very few training examples. So what I wanna do is talk a bit about properties of Gaussian distributions, and then um, describe the factor analysis model, uh, some more about Gaussian distributions and then we'll derive EM for the factor analysis model. And, uh, I want to talk about factor analysis for two reasons, because one is, it's actually a useful algorithm in and of its own right. And second the derivation for EM for factor analysis is actually one of the trickier ones, and, uh, there are key steps in how you actually derive the E and M steps that I think you learn better or you better- master better by going through the factor analysis example. Okay. Um, so just to recap, last Monday or on Monday we had talked about the EM algorithm, uh, and we wound up figuring out this E-step and this M-step, right? And remember that if this is the log likelihood that you're trying to maximize, what the E-step does is it constructs a lower bound uh, that- this is a function of Theta. So this thing on the right hand side, this is a function of the parameters Theta. And what we proved last time was that, um, uh, that function is a lower bound of the log likelihood, right? And depending on what you choose for Q, you get different lower bound. So one choice of Q you may get this lower bound, for a different choice of Q you may get that lower bound. For a different choice of Q you may get that lower bound, and what the E-step does is it chooses Q to get the lower bound this tight, that just touches the log likelihood here at the current value of Theta, and what the M-step does is it chooses the parameters Theta that maximizes that lower bound, right? So that was the EM algorithm that we saw. Now um, I wanna step through how you would take this, you know, slightly abstract mathematical definition of EM and derive a concrete algorithm that you would implement, right? In, in, in, in, you know, um, in Python. And so let's, let's just step through this for the mixture of Gaussians model. Um, so for the mixture of Gaussians model we had a model for P of x i, z i which is P of x i given z i times p of z i, right? Um, and our model was that z is multinomial with some set of parameters phi, and so, you know, the probability of z i to be equal to j is equal to phi j, right? So phi is just a vector of numbers that sum to 1 specifying what is the chance of z being each of the, um, k possible discrete values. 
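For reference, the lower bound that the E-step makes tight and the M-step maximizes — the standard Jensen's-inequality bound from the lecture notes, with Q_i a distribution over the latent z^(i) — can be written as:

```latex
\ell(\theta) \;=\; \sum_i \log p\big(x^{(i)};\theta\big)
\;\ge\; \sum_i \sum_{z^{(i)}} Q_i\big(z^{(i)}\big)
\log \frac{p\big(x^{(i)}, z^{(i)};\theta\big)}{Q_i\big(z^{(i)}\big)},
\qquad
\text{E-step: } Q_i\big(z^{(i)}\big) := p\big(z^{(i)} \mid x^{(i)};\theta\big),
\qquad
\text{M-step: } \theta := \arg\max_{\theta} \sum_i \sum_{z^{(i)}} Q_i\big(z^{(i)}\big)
\log \frac{p\big(x^{(i)}, z^{(i)};\theta\big)}{Q_i\big(z^{(i)}\big)}.
```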
And then we have that x i given z i equals j, that, that is Gaussian with some mean and some covariance, right? And what we said last time was that, um, this is a lot like the Gaussian discriminant analysis model, uh and uh the, the, the trivial- one trivial difference is this is Sigma j instead of Sigma, right? GDA, Gaussian discriminant analysis, had the same Sigma every class but that's not the key difference. The key difference is that in, um, this density estimation problem, z is not observed or z is a latent random variable, right? Which is why we have all this machinery of, um, of EM. So now that you have this, um, uh, model, um, let's see. So now that you have this model, um, this is how you would derive the E and the M steps, right? So the E-step is, you know, you have Q i of z i, right? But let, let me just write this as Q i of z i equals j. Thi- this is sort of the probability of z i equals j. I know this notation's a little bit strange but under the Q i distribution, whether you want the chance of z being equal to j, right? And so, um, in the E-step you would say that the p of z i equals j given x i parameterized by all of the parameters. And we actually saw with Bayes' rule, right, how you would flesh this out, okay? And what we do in the E-step is, um, store this number, right? In, uh, what we wrote as w i j last time, okay? So you remember, um, if you have a mixture of two Gaussians, maybe that's the first Gaussian, that's the second Gaussian, you have an example x i here so it looks like it's more likely to come from the first than the second Gaussian and so this would be reflected in w i j. That, that example is assigned more to the first Gaussian than to the second Gaussian. So what you implement in code is, you know, you write code to compute this number and store it in wij. Um, and then for the M-step, you will want to maximize over the parameters of the model, right? Phi, mu, and Sigma, these are the param- uh pa-parameters of the mixture of Gaussians of sum over i, sum over z i, right? Um, and so the way you would actually derive this is you write this as sum of i. Um, so z i, you know, takes on a certain distribution of values. So z i we tu- turn, turn z i into j, right? So z I can be I guess one or two, if you have a mixture of two Gaussians. So you sum over all the indices of the different clusters of w i j times log of, uh, the numerator is, um, going to be into the negative one half times phi j, um, that's the numerator. And so, you know, this term is equal to, um, this first Gaussian term times that second term right, because this term is p of xi um, given z i, right? And the parameters, and this is just p of z i. Does that make sense? Okay. Um, and then we ta- take this and divide it by w i j, okay? So I'm, I'm going to step you through the, the, the steps you would go through if you're deriving EM using that, you know, E-step and M-step we wrote up above. But if you're deriving this for the mixture of Gaussians model then these are the, um, steps of algebra, right? You would, you would take, okay? Sorry I'm just realizing that. So in order to perform this maximization, what you will do is, um, you want to maximize this formula, right? This big double summation with respect to each of the parameters phi, mu, and Sigma. And so what you would do is, you know, take this big formula, right? And take the derivatives with respect to each of the parameters. 
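A minimal sketch of the E-step just described for the mixture of Gaussians — computing w_ij = p(z_i = j | x_i; phi, mu, Sigma) by Bayes' rule. The array shapes and the function name are my assumptions, not course code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(X, phi, mus, sigmas):
    """Responsibilities w[i, j] = p(z_i = j | x_i; phi, mu, Sigma) via Bayes' rule.

    X: (m, n) data; phi: (k,) mixing weights; mus: list of k mean vectors;
    sigmas: list of k covariance matrices.
    """
    m, k = X.shape[0], len(phi)
    w = np.zeros((m, k))
    for j in range(k):
        # numerator of Bayes' rule: p(x_i | z_i = j) * p(z_i = j)
        w[:, j] = phi[j] * multivariate_normal.pdf(X, mean=mus[j], cov=sigmas[j])
    w /= w.sum(axis=1, keepdims=True)   # normalize over j (Bayes' rule denominator)
    return w
```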
So you take the derivative with respect to mu_j — dot, dot, dot, there's that big formula on the left — set it to 0, right? And it turns out that if you do this, you will derive that mu_j should be equal to the sum over i of w_ij times x_i, divided by the sum over i of w_ij. And this is what we said was how you update the means mu_j, right? The w_ij's are, informally, the strength with which x_i is assigned to Gaussian j, and more formally, w_ij is really p of z_i equals j given x_i and the parameters, right? And so you end up with this formula. But the rigorous way to show this is the right formula for updating mu_j is looking at this objective, taking derivatives, setting them to zero to maximize, and therefore deriving that equation for mu_j by solving for the value of mu_j that maximizes this expression, right? And similarly, you take derivatives of this thing with respect to phi and set them to 0, and take derivatives with respect to Sigma and set them to 0, and that's how you derive the update equations in the M-step for phi and for Sigma as well. Okay? And so, for example, when you do this, you find that the optimal value for phi is — let's see — yeah, we had this near the start of Monday's lecture as well, okay? So this is the process of how you take the abstract E-step and M-step and apply them to a specific model, such as the mixture of Gaussians model, and that's how you solve the maximization in the M-step, okay? And so what I'd like to do today is describe the application of EM to a more complex model, called the factor analysis model, and so it's important that you understand the mechanics of how you do this, because we're going to do this today for a different model, okay? Any questions about this before I move on? Okay. Cool. Oh, so, just to foreshadow a little bit what we'll see when it comes to the factor analysis model, which is what we'll spend most of the day talking about: in the factor analysis model, instead of z_i being discrete, z_i will be continuous, right? And in particular, z_i will be distributed Gaussian. So in the mixture of Gaussians model we had a joint distribution for x and z, where z was a discrete random variable. In the factor analysis model, we'll describe a different model for p of x and z, where z is continuous. And so instead of a sum over z_i, this would be an integral over z_i, dz_i, right? So the sum becomes an integral. And it turns out that if you go through the derivation of the EM algorithm that we worked out on Monday — all of the steps with Jensen's inequality — all of those steps work exactly as before. Meaning, if you check every single step, when z_i is continuous it works the same as before once you've changed the sum to an integral, okay? All right. So let's see. I want to mention one other view of EM, that's equivalent to everything we've seen up until now, which is: let me define J of theta, Q as this, okay? That's a formula you've seen a few times now. What we proved on Monday was that L of theta is greater than or equal to J of theta, Q, right?
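And a matching sketch of the M-step. The mu_j update is the one written on the board; the phi_j and Sigma_j updates are the standard closed-form maximizers obtained the same way (they are in the lecture notes), so treat this as a sketch rather than a transcription of the board. The log-likelihood helper at the end supports the convergence-monitoring idea discussed just below: plot it after each EM iteration and check that it increases monotonically and then plateaus.

```python
import numpy as np
from scipy.stats import multivariate_normal

def m_step(X, w):
    """M-step for the mixture of Gaussians, given responsibilities w (shape (m, k)).

    phi_j   = (1/m) sum_i w_ij
    mu_j    = sum_i w_ij x_i / sum_i w_ij                      (the board formula)
    Sigma_j = sum_i w_ij (x_i - mu_j)(x_i - mu_j)^T / sum_i w_ij
    """
    k = w.shape[1]
    phi = w.mean(axis=0)
    mus, sigmas = [], []
    for j in range(k):
        wj = w[:, j]
        mu_j = (wj[:, None] * X).sum(axis=0) / wj.sum()
        diff = X - mu_j
        sigma_j = (wj[:, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(axis=0) / wj.sum()
        mus.append(mu_j)
        sigmas.append(sigma_j)
    return phi, mus, sigmas

def log_likelihood(X, phi, mus, sigmas):
    """sum_i log sum_j phi_j N(x_i; mu_j, Sigma_j): track this per iteration
    to monitor whether EM is converging (it should increase monotonically)."""
    dens = sum(phi[j] * multivariate_normal.pdf(X, mean=mus[j], cov=sigmas[j])
               for j in range(len(phi)))
    return np.log(dens).sum()
```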
And this is true for any theta and any choice of Q, okay? So using Jensen's inequality, you can show that J, for any choice of theta and Q, is a lower bound for the log likelihood of theta. So it turns out that an equivalent view of EM to everything we've seen before is that, in the E-step, what you're doing is maximize J with respect to Q, and in the M-step, maximize J with respect to theta, right? So in the E-step you're picking the choice of Q that maximizes this, and it turns out that the choice of Q we made sets J equal to L, and then the M-step maximizes this with respect to theta and pushes the value of L even higher. So this algorithm is sometimes called coordinate ascent. If you have a function of two variables and you optimize with respect to this one, and then with respect to that one, and go back and forth, optimizing with respect to one at a time — that's the procedure we sometimes call coordinate ascent, because you're maximizing with respect to one coordinate at a time. And so EM is a coordinate ascent algorithm with respect to this function J, right? And in every iteration J ends up being set equal to L, which is why you know that, as the algorithm increases J, the log-likelihood is increasing with every iteration. And if you want to track whether the EM algorithm is converging, or how it is converging, you can plot the value of J or the value of L on successive iterations and check that it's increasing monotonically, and then when it plateaus and isn't improving anymore, you have a sense that the algorithm is converging, okay? All right. Okay. So that's, basically, the EM algorithm and the mixture of Gaussians. What I want to do now is talk about the factor analysis algorithm. Actually, sorry — first I want to compare and contrast the mixture of Gaussians with factor analysis a little bit, which is: for the mixture of Gaussians, let's say n equals 2 and m equals 100, right? So you have a data set with two features, x_1 and x_2, so n is equal to 2, and maybe you have a data set that looks like this — you know, there's a mixture of two Gaussians. We have a pretty good model for this data set, right? You can fit one Gaussian there, fit the second Gaussian here; you can capture a distribution like this with a mixture of two Gaussians. And in this picture, which illustrates when you would apply a mixture of Gaussians, m is much bigger than n, right? You have a lot more examples than you have dimensions. Where I would not use a mixture of Gaussians — and where, as you'll see in a minute, factor analysis will apply — is maybe when m is about similar to n, or even when m is much less than n, okay? And so, just for purposes of illustration, let's say m equals 30 and n equals 100, right? So let's say you have 100-dimensional data but only 30 examples. And to make this more concrete: many years ago there was a Stanford PhD student who was placing temperature sensors all around different Stanford buildings. And so what you do is measure the temperature at many different places, right, around campus. But if you have 100 sensors, you know, taking 100 temperature readings around campus.
But only 30 days of data or maybe 30 examples, then you would have 100 dimensional data because each example is a vector of 100 temperature readings, you know, at different points around this building say. But you may have only 30 examples of - of - if you have say 30 - 30 such vectors. And so the application that the Stanford PhD student at the time was working on, was she wanted to model p of x, right? So this is x as a vector of 100 sensors, 100 temperature readings. Because if something goes wrong or for example because a bad case would be if there's a fire in one of the rooms, then there'll be a very anomalous temperature reading in one place. And if you can model p of x and if you observe a value of p of x that is very small. You would say oh it looks like this anomaly there, right? And we're actually less worried about fires on Stanford. The use case was actually a - a- was it energy conservation. If someone unexpectedly leaves the window open in the building you are studying, you know, and it was hot and was it, and it's winter and it's warm inside the building and cool air blows in, and the temperature of one room drops in an anomalous way, you want to realize if something was going wrong with the windows, or the - or the temperature in part of the building, okay? So for an application like that, you need to model p of x as a joint distribution over, you know, all of the different senses, right? If you imagine maybe just in this room, let say we have 30 sensors in this room, then the temperatures at the 30 different points in this room will be highly correlated with each other. But how do you model this vector of a 100 - 100 dimensional vector with a relatively small training set? So it turns out that the problem with applying a Gaussian model. Well, all right. So one thing you could do is model this as a single Gaussian. And say that x is distributed, right? And if you look in your training set of 30 examples and find the maximum likelihood estimate parameters, you find that the maximum likelihood estimate of mu is just the average. And the maximum likelihood estimate of sigma is this, but it turns out that if m is less than equal to n, then sigma, this covariance matrix will be singular. And singular just means, uh - non-invertible, okay? I'll set for another illustration in a second. But, er, if you look at the formula for the Gaussian density, right? So the Gaussian density kind of looks like this, right? Abstracting away some details. And when a covariance matrix is singular, then this term, this determinant term will be 0. Um, so you end up with one over 0. Um, and then sigma inverse is also undefined or, er, blows up to infinity it will depending on how you think about it. Right but so, you know the inverse of a matrix like, um, 1, 10, right? Would be I guess one, 1 over 10, right? And, er, an example of a non-invertible matrix or singular matrix would be this, and you can't actually calculate the inverse of that matrix, right? So it turns out that, um, if your number of training examples is less than the dimension of the data, if you use the usual formula to derive the maximum likelihood estimate of Sigma, you'll end up with a covariance matrix that is singular. Uh, singular just means non-invertible, which means our covariance martix looks like this. And so, you know, the Gaussian density, if we try to compute p of x you get- you kind of get infinity over 0. You see, right? Oh, sorry not infinity, actually 0 over 0. Sorry, right. It doesn't matter, it's all bad. 
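A quick numerical illustration of the singularity point, assuming the usual maximum-likelihood covariance estimate: with m = 30 examples in n = 100 dimensions, the estimated Sigma has rank at most m - 1, so it is non-invertible and the Gaussian density is undefined.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 100                       # 30 examples, 100-dimensional data
X = rng.normal(size=(m, n))

mu = X.mean(axis=0)
Sigma = (X - mu).T @ (X - mu) / m    # maximum likelihood estimate of the covariance

# The rank is at most m - 1 < n, so Sigma is singular (non-invertible):
print(np.linalg.matrix_rank(Sigma))  # 29
print(np.linalg.det(Sigma))          # 0.0 (up to floating point)
```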
Um, and I think- let me just illustrate what this looks like. Which is, um, let's say m equals 2, and n equals 2, right? So you have two-dimensional data x1 and x2, and, um, uh, so n equals 2, and the number of training examples is equal to 2. So it turns out that, um- let's see, so you see me draw contours of Gaussian densities like this, right? Like ellipses like that. It turns out that if you have two examples, a two-dimensional space, and you compute the most likely- maximum likelihood estimate of the parameters of the Gaussian to fit to this data, then it turns out that these contours will look like that, right? Um, except that, instead of being very thin, as I'm drawing it, it will be, it will be infinitely skinny. So you end up with a Gaussian density where I can't draw lines, you know, of 0 width on the whiteboard, right? Um, uh, but it turns out that the contours will be squished infinitely thin. So you end up with a Gaussian density all- all of whose mass is on the straight line over there with infinitely thin contours that, that they're just, you know, we squish the centers on the, on the plane that goes on the line, um, connecting these two points. And so this is- so first there are, uh, practical numerical problems, right? As you end up with 0 over 0 if you try to compute p of x for any example. And second, um, this is- this very poorly conditioned Gaussian density puts all the probability mass on this line segment and so any example, right? Over there, just a little bit off, has no probability mass because, oh, has a probability mass of 0, a probability density of 0 because the Gaussian is squished infinitely thin, you know, on that, on that line, okay [NOISE]. But, but you can tell, this is just not a very good just- this is not a very good model, right? For, for this data. Um, So what we're gonna do is, ah, come up with a model that will work even for, um, for these applications, even, even for a dataset like this, right? Um, there's actually a, uh- I think the, the origins of the factor analysis model, uh, one of the very early applications was actually a psychological testing. Um, where, uh, if you have a, you know, administer a psychology, um, ah, exam to people to measure different personality attributes, right? So you might measure- you might have 100 questions or measure 100, uh, psychological attributes, um, but have a dataset of 30 persons, right? And again, you know, doing, doing psych research, collecting, you know, assembling survey data is hard. We assume you have a sample of 30 people and each person answers 100 quiz questions. Um, and so each person is one- gives you one example, right? X, and the dimension of this is, um, 100 dimensional, we have only 30 of these. And so if you want to model p of x, try to model how correlated are the different psychological attributes of people, right? Oh, is intelligence correlated with math ability, is that correlated with language ability, is that correlated with other things, uh, then how do you build a model for p of x, okay? All right. So, um, if the standard Gaussian model doesn't work, let's look at some alternatives. Um, one thing you could do is, uh, constrain Sigma to be diagonal, right? So Sigma is a covariance matrix, is an n by n covariance matrix. So in this case, it would be a 100 by 100 matrix. Um, but let's say we constrain it to just have diagonal entries and 0s on the off diagonals, right? 
So these giant 0s, I mean, the diagonal entries of this square matrix are these values in all of the entries of the diagonals you set to 0. So that's one thing you could do. And this turns out to be, um- this turns out to correspond to constraining your Gaussian to have axes align contours. So this is a Gaussian with 0 off-diagonals. Um, this would be another one, right? This would be another one. So these are examples of Gaussian- of, of contours of Gaussian densities with, um, 0 off diagonals. So the axes here are the X1 and X2, right? Whereas you cannot model something like this if your off diagonals are, are 0. Um, and so you do this, the maximum likelihood estimate of the parameters Sigma j, is pretty much what you'd expect actually. Right. The maximum likelihood estimate of the mean vector mu is the same as before. And this is maximum likelihood estimate of Sigma j, right? This kind of knowledge should be no surprise, it's kind of what you'd expect. Uh, and it turns out that, uh, right- and, and so the covariance matrix here has n parameters, instead of n squared or about n squared over two parameters, the covariance matrix Sigma now just has n parameters, which is the n diagonal entries. Now, the problem with this is that, this modeling assumption assumes that all of your features are uncorrelated, right? So you know, this just assumes that any two features they kind of share are, are completely uncorrelated. And, um, if you have temperature sensors in this room, it's just not a good assumption to assume the temperature at all points of this room are completely uncorrelated, completely independent of each other, or if you measure, you know, psychological attributes of people, it's just not a great assumption to assume that, you know, the different- different psychological measures you might have are completely, um, uh, independent. So while this model would take care of the problem, the, the technical problem of the covariance matrix being singular you can fit this model [NOISE], um you know, on a, on 100 dimensional dataset with 30 samples. You can fit this, you won't get this- you could build this model, you won't run into numerical singular, um, covariance matrix type problems, it's just not a very good model where you're just assuming nothing is correlated to anything else [NOISE]. Something else that you can do is, um, uh, make an even stronger assumption. So this is an even worse model, but I want to go through it because it will be a building block for what we'll actually do later, which is constrain Sigma to be Sigma equals, um, lowercase Sigma squared times i, right? And so, um, constrain Sigma to be dia- not only diagonal but to have the same entry in every single element. So now you've gone from, um, I guess n parameters to just one parameter, right? Uh, and this means that you are constraining the covariance matrix to- you are constraining the Gaussian you use to have circular contours. So this is an example where you can model. Uh, and this would be another example, right? And this is- I guess this is another example, okay? So you can model things like this, where every feature, not only is every feature uncorrelated but every feature further has the same variance as every other feature. Um, and the maximum likelihood is this, okay? And again, not, not, not a huge surprise, just the average over, uh, the previous values. So what we'd like to do is, um, not quite use either of these options, right? 
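A short sketch of the two constrained estimates just described, assuming the standard closed-form maximum-likelihood solutions: a diagonal covariance (one variance per coordinate, option one) and a spherical covariance sigma^2 I (one shared variance, option two). Function names are mine.

```python
import numpy as np

def diagonal_mle(X):
    """MLE of (mu, Sigma) when Sigma is constrained to be diagonal."""
    mu = X.mean(axis=0)
    var = ((X - mu) ** 2).mean(axis=0)   # Sigma_jj = (1/m) sum_i (x_ij - mu_j)^2
    return mu, np.diag(var)

def spherical_mle(X):
    """MLE of (mu, sigma^2 I): one shared variance for every coordinate."""
    mu = X.mean(axis=0)
    sigma2 = ((X - mu) ** 2).mean()      # average over all m * n entries
    return mu, sigma2 * np.eye(X.shape[1])
```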
Which assumes- really, the biggest problem is it assumes the features are uncorrelated. Um, and what I'd like to do is build the model that you can fit even when you have very high dimensional data and a relatively small number of examples, um, but that allows you to capture some of the correlations, right? So if you have 30 temperature sensors in this room, you know, probably there are some correlations, right? Probably, this side of the room temperature is gonna be correlated, and that side of the room temperature is gonna be correlated and maybe the ambient temperature in this whole building. The, the temperature of this room really goes up and down as a whole, but maybe some of the lamps on the side heat up that side of the room a bit more, so different, the different. There are correlations but maybe you don't need a full covariance matrix either. So what [NOISE], what factor analysis will do is, um, give us a model that you can fit even when you have, you know, [NOISE] 100 dimensional data and 30 examples. They capture some of the correlations but that doesn't run into the a, a- uninvertible, um, covariance matrixes is that the naive Gaussian model does, okay? All right. So let me- just check any- let me, let me describe the model, let me just check, any questions before I move on? Okay. [BACKGROUND] Oh, sure. Yes. Um, yes. There is one thing you can do. A common thing to do is apply Wishart prior and what that boils down to is, um, add a small diagonal value to that- to the maximum likelihood estimate. Um, it- it kind of, uh, in a technical sense it takes away the, uh, non-invertible matrix problem. Uh, it's actually not the best algorithm for a lot of the types of data. Um, uh, the- the- the- the Wishart or inverse Wishart prior, yeah. Others, you know- basically, take the maximum likelihood for Sigma, and add, you know, some constant to the diagonal. Um, it takes care of the problem in a technical way, but it is not- it's not the best model for a lot of datasets, I see. Why do we even think about option two [inaudible] Oh, yes. Why do you think about option two, when it's likesemi even worse than option one. Um, yes. Option two is not a good option, but I need to use this as a building block for factor analysis. So you see this is a small component of, uh, of, uh, see I actually planned these things out, you know. [LAUGHTER]. Cool, yeah. And- and- and maybe- actually to- to- to give [inaudible] just- just to mention, you know, um, just mention some things I see. Yeah. Actually the- the machine learning work evolves all the time, which I find fascinating. But you look at all the big tech companies, um, a lot of the large tech companies, they're all like working on exactly the same problems, right? Every large tech company, you know, software, AI company, is working on machine translation, every one of them works on speech recognition, every one of them works on face recognition, and I- I- I've been part of these teams myself. Right? And I think it's great that we have so much progress in machine translation, because there are so many people, and so many large companies that work on machine translation. It's actually really happy to see so much progress in these problems that every single large tech company, large software, AI-ish tech company works on. 
Um, one of the fascinating things I see is that, um, uh, because of all this work into large tech companies working on very similar problems, one of the really overlooked parts of the machine learning world is small data problems, right? So there's a lot work in big data if you are Brazilian, English, and French, and Chinese, and Spanish sentences is the semi-close models that work. Um, and I think, uh, uh, there's actually a lack of attention, like a disproportionately small amount of attention, on, you know, small data problems, where instead of, uh, 100 million images, you maybe have 100 images. Um, and so, uh, some of the teams I work with these days, actually like Landing AI. Um, I actually spent a lot of my time thinking about small data problems, because a lot of the practical applications of machine learning, including a lot of things you see in your class projects, are actually small data problems. Right? And I think, um, when- when Annan, uh, worked with a healthcare system, works at Stanford Hospital, for some of the problems, you only have 100 examples, or even 1,000, or even 10,000. You don't have a million patients with the same medical condition. And so I think that, um, uh, a lot of these models- So- and again, uh, earlier this week, I was using a slightly modified version of factor analysis on a manufacturing problem at Landing AI. Right? And I think a lot of these small data problems are actually where a lot of the exciting work is to be done in machine learning, and is somehow- it- it feels like a blind spot of- or if like a- like a- like a gap of, uh, a lot of the work done in the AI world today. Go ahead. [inaudible]. Uh, yeah. Why don't we use the same algorithms with this big data? It turns out that, um, uh, you know, it turns out- if- if- if you look at the computer vision world, right? There's a data set that everyone is working on. Now- now we're past it, we don't really use it any more, called ImageNet, which had a million images, and so there are tons of computer vision architectures that have been heavily designed for the use case of if you have exactly one million training examples. Uh, and it turns out that the algorithms that work best if you have 100 training examples is, you know, looks like it's different than the best learning algorithm. I think, um, uh, uh, and so I think right now, we actually- I think the machine learning world, we are not very good at understanding the scaling. Uh, the best algorithm for one training example, you know, as far as we are able to invent algorithms as a community, is different than best algorithm for 1000, best for, for a million, it's actually different than, um, uh, uh- actually, and Facebook published a paper recently, with 3.5 billion images. The result was cool, it was very large, right? So I was saying, we don't actually have a good understanding of how to modify our algorithms, to have one algorithm work on every single point of this spectrum, going from one example to, like, a billion examples. Uh, and so there's a lot of work optimizing for different points of the spectrum, uh, and I think there's been, um, a lot of work optimizing for big data, which is great, you know, build some of these large systems that handle, like, whatever, petabytes of data a day, uh, that's great. But, um, uh, I feel like relative to the number of, um, application opportunities, there- there's a lot of work on small data well, that- that I find very exciting, that- that, uh, and I think of this as an example. 
Uh, the reason I was using this, literally, well, modified version of this, earlier this week on the manufacturing problem, um, is because, um, uh, there isn't that much data in those scenarios, right? Cool. All right. That's, um, off-topic. But let's- let's- let's go and describe- well, hopefully, maybe so, so this stuff does get used, right? Uh, so let's- let's talk about the model. Um, so similar to, uh, the mixture of Gaussians, I'm gonna define a model with, um, P of X, Z equals P of X, given Z times P of Z, uh, and Z is hidden. Okay? So that's the framework, same as, um, mixture of Gaussian. So let me just define the factor analysis model. So first, um, Z will be drawn- distributed according to the Gaussian density, where Z is going to be an RD, where D is less than N. And again, to think about it, um, maybe you can think of it as, uh, D equals 3, uh, uh, N equals 100, M equals 30. Okay? Um, and- and- but I guess, ju- just make sure this is a concrete example to think about it. And what we're going to assume is that X is equal to Mu, plus, um, Lambda Z. This is, uh, the capital Greek alphabet Lambda, plus Epsilon, where Epsilon is just using Gaussian with mean 0, and covariance Psi. Um, so the parameters of this model are Mu which is N dimensional, um, Lambda which is N by D, and Psi which is N by N, and we're going to assume that Psi is a diagonal. Okay? Um, and so- let's see. The second equation, an equivalent way to write that, equivalently, is that given the value of Z, the conditional distribution of X, right, X given Z, this is Gaussian with mean given by Mu plus, um, Lambda Z, and covariance Psi. Okay? So once you've given Z- once you sample Z- so this is P of Z and this is P of- P of Z and this is P of X, Z- X given Z. Right? So given Z, X is computed as Mu plus Lambda Z. So this is just some constant, and then you add Gaussian noise to it. And so this equation, an equivalent way to define this equation, is to say that the mean of X, uh, conditioned on Z, is this first term. Right? Since that's the mean. And the covariance of X given Z, is given by this, you know, additional term Psi, by that noise term that you add to it. Okay? So let me go through a few examples. And- and I think the intuition behind this model is, um, if- if you think that there are three powerful forces driving temperatures across this room, maybe one powerful force is just what is the temperature, you know, here in Palo Alto, what's the temperature here at Stanford. And another powerful force is how bright are the lights on the left side of the room, and how hot does it heat up this side of room, and another is how hot does is it heat up the right side of the room. Right? So, you know, let's say there are three main driving factors affecting the temperature of this room, then that's when D would be equal to 3. Then you assume that, you know, there are three things in the world that drive the temperature of this room that's three-dimensional, which is the temperature in Palo Alto, kind of, around this area, um, how bright that the light is there, and how bright that the light is there, and you try to capture that with three numbers. Given those three numbers, right? Given Z, the actual temperature for the 100 sensors we scatter around this room, will be determined by each sensor, right? So we plug 30 temperature sensors all over this room. 
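A minimal sketch of sampling from the generative model just defined: z ~ N(0, I_d), then x = mu + Lambda z + epsilon with epsilon ~ N(0, Psi), Psi diagonal. The Lambda = (2, 1) and mu = 0 values are the ones used in the worked example a little further on; the Psi = diag(1, 2) choice is just my illustrative assumption for concreteness.

```python
import numpy as np

def sample_factor_analysis(mu, Lam, Psi_diag, m, rng=np.random.default_rng(0)):
    """Draw m samples from the factor analysis generative model:
       z ~ N(0, I_d),  x = mu + Lambda z + eps,  eps ~ N(0, diag(Psi_diag))."""
    n, d = Lam.shape
    Z = rng.normal(size=(m, d))                          # latent factors
    eps = rng.normal(size=(m, n)) * np.sqrt(Psi_diag)    # per-coordinate noise
    X = mu + Z @ Lam.T + eps
    return X, Z

# The 1-D latent example from the lecture: Lambda = [2, 1]^T, mu = 0, m = 7 points.
X, Z = sample_factor_analysis(mu=np.zeros(2),
                              Lam=np.array([[2.0], [1.0]]),
                              Psi_diag=np.array([1.0, 2.0]),
                              m=7)
```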
Each sensor we plant will measure an actual temperature, that's a linear function of those three powerful forces, um, and if a sensor is on that side of the room, it'll be affected more by how bright that the lights are on that side of the room. Um, uh, if there's a sensor near the door, it will be more affected by the temperature outside- temperature here in Palo Alto. Right? But so X will be a linear function, but this first time I underlined. Um, but rather than just that term, there is [inaudible] noise. Right? So each sensor has its own noise term, which is governed by this additional noise term Epsilon. And, um, the assumption that this matrix Psi is diagonal, it's saying that after you compute the mean, the noise that you observe at each sensor is independent of the noise at every other sensor. Does that make sense? Right? That maybe- maybe the sensor, you know, up there, right? Maybe it's just noisy or something, just a gust of wind. But you assume that the noise of, you observe at different sensors is independent. The- the additional Epsilon error term has a- has a diagonal covariance matrix given by Psi. Okay? So you can- so you can think of that as what, um, uh, factor analysis is trying to model. Okay? So let me, um, just go through a couple of examples of the types of data factor analysis can model. All right, and again by the constraints of the whiteboard, I'm going to have to go low-dimensional here, right? Um, so actually let me- let me go through a couple examples. So let's say Z is R_1 and X is R_2. So in this example I guess d is equal to 1, n is equal to 2 and let's say m is 7, right? just- just. So what will be a typical example, generated by- what will be an example of a type of data that this can model? So this, let me erase this here. All right, so this would be a typical sample of Z_i right? which is you know- so this is z is just drawn from a standard Gaussian. So I guess z is just Gaussian, would mean 0 and unit variance. So that's the number line and you draw seven points from a Gaussian, you know, maybe you get a sample like that. Okay? and now let's say lambda is 2, 1 and let's just say mu is 0, 0, okay? So now let's compute lambda x plus mu, right? so given a typical sample like that um, if you compute lambda x plus mu, this will now be the R_2, right? so here is X_1, here is X_2. We're gonna take those examples and map them to a line as follows. Where these examples on R_1. So- excuse me, lambda z plus mu, okay. So this is just a real number and so lambda z plus mu is now two-dimensional, right? Because lambda is a 2 by 1 matrix. Okay? so you end up with- So this would be a typical sample- typical random sample of lambda z plus mu and it's a two-dimensional data-set but all of the examples lie perfectly on a straight line. Okay? Then finally let's say that psi, the covariance matrix is equal to this as a diagonal covariance matrix and so this covariance matrix corresponds to X_2 having a bigger variance than X_1, right? And so you know this, this- I guess the density of epsilon has ellipses that look a little bit like this, it's taller than wide. The aspect ratio should technically be 1 over root 2 to 1, right? Because the standard deviations will be root 2, I guess. And so in the last step of what we are going to do, x equals lambda z plus mu plus epsilon. We're going to take each of these points we have and put a little Gaussian contour. You know there's that shape. 
There's this — I'm just drawing one contour of the shape — and we put it on top of each of these points, and if you sample one point from each of these Gaussians, then maybe you get this example, this example, this example, this example, okay? So what I just did was look at each of the Gaussian contours and sample a point from that Gaussian. And so the red crosses here are a typical sample drawn from this model. Okay? And so, suppose you have data that looks like this — that looks like the red crosses. The Zs are latent random variables, right? When you get the dataset, you don't get to see the Zs. What you actually see is just, you know, the red crosses — that's your training set — and if you apply the factor analysis model, then with EM and so on, hopefully you can find parameters that model this dataset pretty well. But hopefully this gives you a sense of the type of dataset this model can generate. And one way to think of this data is: you have two-dimensional data, but most of the data lies on a 1D subspace. So this is how to think about it: you have two-dimensional data, since n is two, but most of the data lies roughly on a one-dimensional subspace, meaning it lies roughly on a line, and then there's a little bit of noise off that line, okay? All right, let me quickly do one more example, because these are high-dimensional spaces and I think it's useful to build intuition. All right, so let's go through an example where z is in R^2, x is in R^3, and let's use m equals 5. So d equals 2, n equals 3, okay? So we have a different set of parameters. Let's look at the type of data you can generate with factor analysis, which is: here is Z_1 and Z_2. Z is distributed Gaussian — a standard Gaussian in 2D, so it would be a circular Gaussian. So maybe this is what a typical sample looks like, if you sample Z_1 and Z_2 from a standard Gaussian — that would be a typical sample in Z_1 and Z_2. So now — all right, I'm going to do a demo. Let me take these five examples and just copy them to this piece of paper, okay? So, all right, there, right? Transferred it from the whiteboard to this piece of paper, to this brown cardboard. So now you have Z_1 and Z_2 in a two-dimensional space. What we're going to do is compute lambda z plus mu, and this will be 3 by 2, and this will be 3 by 1. So what this computation does, as you map from z in two dimensions to lambda z plus mu, is map from two-dimensional data to three-dimensional data. In other words, you take the two-dimensional data lying on the plane of the whiteboard, and map it — check out this cool animation — into the three-dimensional space of our classroom [LAUGHTER]. And then the last step is, for each of these points in this three-dimensional space, X_1, X_2, X_3, right, we'll have a little Gaussian bump that is axis-aligned, because the components of epsilon are uncorrelated, and take each of these five points and add a little bit of fuzziness, a little bit of Gaussian noise, to it. And so what you end up with is a set of red crosses — you end up with a few examples with a little bit of noise added, except that they would have a bit of noise off this plane as well, right? But so what the factor analysis model can capture is, if you have data in 3D, right?
In this 3D space, but most of the dataset lies on this maybe roughly two-dimensional pancake but there's a little bit of fuzziness off the pancake, right, so this would be an example of the type of data that factor analysis can model. Okay? All right cool. Um, and the intuition is really think of factor analysis can take very high dimensional data, say, 100 dimensional data and model the data as roughly lying on a three-dimensional, five dimensional subspace with a little bit of fuzz, with a little bit of noise off that low dimensional subspace. Great. So- [NOISE] All right. So let's talk about- yeah. [BACKGROUND] Oh, right. It does not work as well if the data's not lying on low dimensional subspace. Um, let's see. So even in 2D, if you have, um, this data set, right? [NOISE] You actually have the freedom to choose Gaussian noises like that, in which case you can actually model things that are quite far off a subspace. Uh, but, um, uh, yeah, I, I, I, you know, I think when you have a very high dimensional data set, it's actually very difficult to know what's going on because you can't visualize these very high dimensional data sets, uh, and you also don't have enough data to build very sophisticated models. So, so I feel like yes, if you have- if the data actually does not roughly lie in a subspace, then this model, you know, may not be the best model, but when you have such high dimensional data in such a small data set, um, you- is- you can't fit very complex models through it anyway, so this might be pretty reasonable. Right. Cool. All right. So, um- [NOISE] all right. So it turns out that the derivation of EM for factor analysis is actually, it's actually one of the trickiest EM derivations, in terms of how you calculate the e-step, and how you calculate the m-step. Um, the whole algorithm is, you know, describe the- every, every, every single step, the- step three in great detail in the lecture notes. But what I want to do is give you the flavor of how to do the derivation, and to especially draw attention to the trickiest step, so that if you need to derive an algorithm like this yourself for maybe a different Gaussian model, then you know how to do it, but I won't do every step of the algebra here. All right? Um, so in order to set ourselves up to derive factor analysis, uh, EN- ENM for factor analysis, I wanna describe a few properties of, uh, multivariate Gaussians. So [NOISE] let's say that X is a vector, and I'm gonna write this as a partition vector, right? In which, um, uh, [NOISE] if there are R components there, and S components there. So [NOISE] X_1 is in R_r, X_2 is in R_S, and X is in R_ r plus S. Okay? So if X is Gaussian with mean Mu and covariance Sigma, then, uh, let- similarly, let Mu be written as this sort of partition vector. Right? Just break it up into two sub-vectors, corresponding to the first R components in the second S components. And similarly, let the covariance matrix be partitioned into, um, you know, these four diagonal blocks, where, I guess, this is R components, this is S components, this is R components, this is S components. Um, so all this means is, uh, you take the covariance matrix, and take the top leftmost R-by-R elements, and call that Sigma 1, 1. Right? And, and, uh, and, and, and then similarly for the other sub-blocks of this, um, covariance matrix. So in order to derive factor analysis, one of the things you need to do is compute marginal and, um, uh, conditional distributions of Gaussians. 
So the marginal is, [NOISE] you know, what is P of X_1. Right? Um, and so the, the- if you, you know, were to derive this, uh, the way you compute the marginal is to take the joint density [NOISE] of P of X, right? And you can write this as P of X_1 X_2, because X can be partitioned into X_1 and X_2, and integrate out X_2 under P of X_1 X_2, right? Dx_2, and this will give you P of X_1. Right? And if you plug in the Gaussian density, the formula for the Gaussian density, if you plug in, I guess, you know, 1 over 2 Pi to the N over 2, is equals to one-half, right? E to the, you know, minus one-half, X1 minus Mu 1, X_2 minus Mu 2, uh, right? If you plug this into P of X_1, X_2, and actually do the integral, um, then you will find that, um, the marginal distribution of X_1 [NOISE] is given by; X_1 is usually a Gaussian, with mean Mu 1, and covariance sigma 1, 1. So it- it's, kind of, not a shocking result, that the marginal distribution is given just by that and that. Right? And, and again, the way to show it vigorously is to do this calculation, [NOISE] but it's actually not shocking, I guess, that that's what you would get. Okay? Um, and then the other property you will [NOISE] need to use is a conditional, which is, um, [NOISE] given the value of X_2, what is the conditional value of X_1? Um, and so the way to do that would be, you know, [NOISE] in theory, you would take P of X_1, X_2 divide by P of X_2, right? And then simplify. And it turns out you can show that, um, [NOISE] X_1 given X_2, is itself Gaussian, [NOISE] with some mean and some covariance, we're just gonna write this Mu of 1 given 2 and Sigma of 1 given 2, where Mu of 1 given 2 is, uh- and, and- but this is one of those formulas that I actually don't- I actually don't manage to remember, but every time I need it I just look it up. It's written in the lecture notes as well. So um, [NOISE] X_2 minus 2 to- oops. Okay? [NOISE] So that's how you compute, um, marginals and conditionals of a Gaussian distribution. Okay? So [NOISE] using these properties of, uh, the multivariate Gaussian density, let's go through the high-level steps of how you derive the EM algorithm for this. [NOISE] All right. [NOISE] Um, step one is, uh, let's compute- actually, let's, um- excuse me. [NOISE] Let's derive what is the joint distribution of P of X and Z. Right? And in particular, it turns out [NOISE] that if you take Z and X and stack them up into a vector like so, um, Z and X viewed as a vector would be Gaussian with mean, um- [NOISE] with some mean and some covariance, uh, because X and Z jointly will have a Gaussian density. And let's try to quickly figure out what are this mean and that covariance matrix. [NOISE] So that was a definition of these terms. Um, and so the expected value of Z is equal to 0 because 0 is- Z is Gaussian with mean 0 and covariance identity, [NOISE] and the expected value of X is equal to the expected value of Mu plus Lambda Z, plus epsilon, um, but Z has 0 expected value, epsilon has 0 expected value, so that just leaves you with Mu. And so this mean vector Mu XZ, is going to equal to 0 Mu. Right. And so this is D-dimensional, [NOISE] and this is, uh, N-dimensional. Okay? Um, and it turns out that, uh, [NOISE] let's see- and it turns out that you can [NOISE] similarly compute the covariance matrix Sigma, right? Where this is, um, D dimensions and this is N dimensions. 
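For completeness, the standard marginalization and conditioning formulas being referred to here (the ones written out in the lecture notes) are:

```latex
x_1 \sim \mathcal{N}\big(\mu_1,\; \Sigma_{11}\big),
\qquad
x_1 \mid x_2 \sim \mathcal{N}\big(\mu_{1|2},\; \Sigma_{1|2}\big),
\qquad
\mu_{1|2} = \mu_1 + \Sigma_{12}\,\Sigma_{22}^{-1}\,(x_2 - \mu_2),
\qquad
\Sigma_{1|2} = \Sigma_{11} - \Sigma_{12}\,\Sigma_{22}^{-1}\,\Sigma_{21}.
```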
Um, [NOISE] it turns out that if you take this partitioned vector and compute its covariance matrix, [NOISE] the four blocks of the covariance matrix can be written as follows. Okay. And you can, one at a time, derive what each of these different blocks looks like. Let me just do one of these — let me derive what Sigma 2,2, the lower-right block, is — and the rest are derived similarly, and are also fleshed out in the lecture notes. So the way you derive this block is you say Sigma 2,2 is the expected value of x minus E[x], times x minus E[x] transpose. And so if I plug in the definition of x, that would be Lambda z plus Mu plus Epsilon, minus Mu, times the same thing transposed, right? So there's x minus E[x], and there's x minus E[x] again — because the expected value of x is Mu, so the Mus cancel out. And then if you do the quadratic expansion — it's sort of (a plus b) times (a plus b) transpose, so you get four terms as a result — the first term is Lambda z times Lambda z transpose, plus Lambda z Epsilon transpose, plus Epsilon times Lambda z transpose, plus Epsilon Epsilon transpose, right? And the two cross terms have 0 expected value, because Epsilon and z both have zero expected value and are uncorrelated. So those are zero in expectation, and you're just left with the expected value of Lambda z z transpose Lambda transpose, plus the expected value of Epsilon Epsilon transpose, right? And by the linearity of expectation you can move the expectation inside the matrix multiplication, so this is Lambda times the expected value of z z transpose times Lambda transpose, plus the covariance of Epsilon, which is Psi. And then, because z is drawn from a standard Gaussian with identity covariance, that expectation in the middle is just the identity. So that's Lambda Lambda transpose plus Psi. Okay. So that's how you work out what this lower-right block of the covariance matrix is. I know I did that a little bit quickly, but every step is written out more slowly in the lecture notes as well. Okay. And it turns out that if you go through a similar process, one at a time, to figure out the other blocks of this covariance matrix, you find that the other blocks are the identity, Lambda transpose, Lambda, and the one we just worked out. Okay — that's the one we just worked out. So that is the covariance matrix Sigma. [NOISE] So where we are is that we've figured out that the joint distribution, the joint density, of z, x is Gaussian, with mean given by that vector and covariance given by that matrix, okay? And so what you could do is write down P of x_i — P of x_i will be this Gaussian density — and take derivatives of the log likelihood with respect to the parameters, set the derivatives to 0, and solve. And you find that there is no known closed-form solution — there is actually no closed-form solution for finding the values of Lambda and Psi and Mu that maximize this log-likelihood. So in order to fit the parameters of the model, we're instead going to resort to EM, okay? And so, the E-step. Right. Let's first derive the E-step, and in the E-step you need to compute this, right?
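Collecting the blocks just derived, the joint density of z and x can be summarized as:

```latex
\begin{bmatrix} z \\ x \end{bmatrix}
\sim \mathcal{N}\!\left(
\begin{bmatrix} \vec{0} \\ \mu \end{bmatrix},\;
\begin{bmatrix} I & \Lambda^{\top} \\ \Lambda & \Lambda\Lambda^{\top} + \Psi \end{bmatrix}
\right).
```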
Now, z_i here is a continuous random variable. When we were fitting a mixture of Gaussians, z_i was discrete, so you could represent Q_i with a list of numbers w_ij storing the probability of each discrete value of z_i. But in this case z_i has a continuous density, so how do you represent Q_i(z_i) in a computer? It turns out that using the formula for the conditional distribution of a Gaussian, if you compute this right-hand side you find that z_i given x_i is Gaussian with some mean and some covariance: mu_{z_i|x_i} = Lambda^T (Lambda Lambda^T + Psi)^(-1) (x_i - mu), and Sigma_{z_i|x_i} = I - Lambda^T (Lambda Lambda^T + Psi)^(-1) Lambda. These are exactly the conditional-Gaussian formulas from before, applied to the big joint Gaussian we just derived, with mu_1 = 0, mu_2 = mu, Sigma_11 = I, Sigma_12 = Lambda^T, and Sigma_22 = Lambda Lambda^T + Psi. So what you would do in the E-step is compute this vector and this matrix and store them as variables; your representation of Q_i is that Q_i is a Gaussian density with this mean and this covariance. That is what you actually compute to represent Q_i; a short sketch of this computation is given in code below. Okay. So step two was to derive the E-step, and step three is to derive the M-step. The derivation of the M-step is quite long and complicated, but I want to mention one key algebraic trick you need when deriving it. We know from the E-step that Q_i(z_i) is that Gaussian density: 1 / ((2 pi)^(d/2) |Sigma_{z_i|x_i}|^(1/2)) times exp(-1/2 ...). It turns out that in the M-step there are a few places in the derivation where you need to compute something like the integral of Q_i(z_i) z_i dz_i. One way to approach this would be to plug in the density for Q_i, so you'd end up with that full 1 over (2 pi)^(d/2) times the determinant factor times e to the minus one-half expression, times z_i dz_i, and then try to compute that integral directly. It turns out there's a much simpler way. Anyone know what it is? Right, the expected value. This integral is just the expected value of z_i when z_i is drawn from Q_i: the definition of the expected value of a random variable is E[z] = integral of p(z) z dz. And we know Q_i is Gaussian with a certain mean and covariance, so this expected value is just mu_{z_i|x_i}, the thing you already computed in the E-step. Makes sense? So when students derive the M-step for EM implementations like this one, one of the key things is to notice when you're actually taking an expected value with respect to a random variable, in which case it's just a value you've already computed, and when you would genuinely have to expand a big, complicated, possibly intractable integral. Okay.
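Here is a minimal sketch of what the E-step actually computes, assuming parameters Lambda (n by d), a diagonal Psi, a mean mu, and a data matrix X of shape (m, n) as in the simulation sketch above. The function name e_step is my own label, not from the lecture notes. It stores, per example, the conditional mean, the shared conditional covariance, and the second moment E[z z^T] that the M-step formulas reuse.

```python
import numpy as np

def e_step(X, mu, Lambda, Psi):
    """Conditional distribution of z given each x under the current parameters."""
    n, d = Lambda.shape
    S = Lambda @ Lambda.T + Psi       # covariance of x, shape (n, n)
    S_inv = np.linalg.inv(S)
    # mu_{z|x} = Lambda^T (Lambda Lambda^T + Psi)^{-1} (x - mu), one row per example
    mu_z_given_x = (X - mu) @ S_inv @ Lambda          # shape (m, d)
    # Sigma_{z|x} = I - Lambda^T (Lambda Lambda^T + Psi)^{-1} Lambda (same for every i)
    Sigma_z_given_x = np.eye(d) - Lambda.T @ S_inv @ Lambda
    # Expectations the M-step needs: E[z_i] and E[z_i z_i^T] under Q_i
    E_z = mu_z_given_x
    E_zzT = Sigma_z_given_x + mu_z_given_x[:, :, None] * mu_z_given_x[:, None, :]
    return E_z, E_zzT
```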
So whenever you see an expression like this, think about whether you actually need to expand a big, complicated integral, or whether it can be interpreted as an expected value. For the M-step, what you're maximizing with respect to the parameters is the sum over i of the integral of Q_i(z_i) log [p(x_i, z_i; mu, Lambda, Psi) / Q_i(z_i)] dz_i, and you can rewrite this term as the sum over i of the expectation, with z_i drawn from Q_i, of that log ratio. It turns out that if you go ahead and plug in the Gaussian density here, things simplify. One rule of thumb, and this is just a rule of thumb after doing this type of math for a long time, for whether you should plug in the Gaussian density or leave it as an integral, is to look for a log in front. The Gaussian density is basically 1 over something times e to the something, so whenever there's a log in front, the log and the exponentiation cancel out and the expression simplifies. So one trick as you're doing these derivations is to check whether there's a log in front of a Gaussian density, and when there is, go ahead and plug in the formula for the Gaussian density: the log will simplify it, and the log of a Gaussian density ends up being a quadratic function of the parameters. And if you take the expected value of that quadratic function with respect to the Gaussian Q_i, the whole thing remains a quadratic function of the parameters, involving only the expectations you already computed in the E-step. You can then take derivatives of that expression with respect to the parameters, with respect to mu and so on, set them to zero, and solve, and it's roughly the level of complexity of maximizing a quadratic function. Hope that makes sense. The actual formulas are a bit complicated, so I'll leave you to look at them in the lecture notes, but the take-away is: don't expand this integral directly; when you're deriving this, plug in the Gaussian densities, because the log will simplify them. The details are in the lecture notes. So let's break for today. Best of luck with the midterm; seriously, I hope you all do well. I'll see you in a few days. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Stanford_CS229_Machine_Learning_Course_Lecture_1_Andrew_Ng_Autumn_2018.txt | Welcome to CS229 Machine Learning. Uh, some of you know that this class has been taught at Stanford for a long time. And this is often the course that, um, I most look forward to teaching each year because this is where we've helped I think, several generations of Stanford students become experts in machine learning, go on to build many of their products and services and startups that I'm sure many of you are pre- or all of you are using, uh, uh, today. Um, so what I want to do today was spend some time talking over, uh, logistics, and then, uh, spend some time, you know, giving you a beginning of a intro, talk a little bit about machine learning. So about 229. Um, you know, all of you have been reading about AI in the news, uh, about machine learning in the news. Um, and you probably heard me or others say, AI is the new electricity. Uh, the emergence and rise of electricity about 100 years ago, it transformed every major industry. I think AI already we call machine learning for the rest of the world seems to call AI. [NOISE] Um, machine learning and, and AI and deep learning will change the world. And I hope that through 229, we'll give you the tools you need so that you can be many of these future titans of industries that you can be one to go out and build, you know, help the large tech companies do the amazing things they do, or build your own start-up, or go into some other industry. Go, go transform healthcare or go transform transportation or go build a self-driving car. Um, and do all of these things that, um, after this class, I think you'll be able to do. You know, um, the majority of students supplying- the, the demand for AI skills- the demand for machine learning skills is so vast. I think you all know that. Um, and I think it's because machine learning has advanced so rapidly in the last few years that there are so many opportunities, um, to apply the learning algorithms, right? Both in industry as well as in academia. I think today, we have, um, the English department professors trying to apply learning algorithms to understand history better. Uh, we have lawyers trying to apply machine learning into process legal documents and off-campus, every company, both the tech companies as well as lots of other companies that you wouldn't consider tech companies, everything from manufacturing companies, to healthcare companies, to logistics companies are also trying to apply machine learning. So I think that, um, uh, uh, if you look at it on a- on a factual basis, the number of people doing very valuable machine learning projects today is much greater than it was six months ago. And six months ago is much greater than it was 12 months ago. And the amount of value, the amount of exciting meaningful work being done in machine learning is, is, is very strongly going up. Um, and I think that given the rise of, you know, the, the amounts of data we have as well as the new machine learning tools that we have, um, it will be a long time before we run out of opportunities. You know, before, before society as a whole has enough people with the machine learning skill set. Um, so just as maybe, I don't know, 20 years ago was a good time to start working on this Internet thing and all people that started working on the Internet like 20 years ago have fantastic careers. 
I think today is a wonderful time to jump into machine learning, uh, and, and, and the number of- and the opportunities for you to do unique things that no one has- no one else is doing, right? The opportunity for you to go to a logistics company and find that exciting way to apply machine learning, uh, will be very high because chances are that logistic company has no one else even working on this. Because, you know, they probably can't- they, they may not be able to hire a fantastic Stanford student that's a graduate of CS229, right? Because there just aren't a lot of CS229 graduates around. Um, so what I want to do today is, um, do a quick intro talking about logistics. Um, and then uh, we'll, we'll spend the second half of the day, you know, giving an overview and, and talk a little bit more about machine learning. Okay? And uh- oh, and I apologize. I- I think that, uh, this room, according to that sign there, seats, what, 300 and something students. Uh, I think- we have, uh, uh, like not quite 800 people enrolled in this class. [LAUGHTER] Uh, so there are people outside, and, and all of the classes, uh, are recorded and broadcast in SCPD. Uh, they usually- the videos are usually made available the same day. So for those who they can't get into the room, my apologies. Um, the- there were some years, um, where even I had trouble getting into the room but I'm glad [LAUGHTER] they let me in. But, but I'm- but, but hopefully, you can watch. You, you can watch all of these things online shortly. [inaudible]. Oh, I see. Yes. Yeah. [LAUGHTER] I don't know, it's a bit complicated. [LAUGHTER] Yeah. Thank you. I think it's okay. Yeah. I- I- okay, yeah. Yeah. Maybe for the next few classes you can squeeze in and use the NTC. So for now, it might be too complicated. Okay. So quick intros, um, uh, I'm sorry, I should have introduced myself. My name is Andrew Ng. [LAUGHTER] Uh, uh, uh, and I wanted to introduce some of the rest of the teaching team as well. There's a class coordinator. Um, she has been playing this role for many years now and helps keep the trains run on time and make sure that everything in class happens when it's supposed to. Uh, uh, so, so, so should be uh- and then, uh, we're thrilled to have- Do you guys want to stand up? Uh, we have the co-head TAs, uh, respectively are PhD students working with me. Uh, and so bringing a lot of, um, uh, technical experience, uh, technical experience in machine learning as well as practical know-how on how to actually make these things work. And with the large class that we have, we have a large TA team. Um, maybe I won't introduce all of the TAs here today but you'll meet many of them throughout this course here. But the TAs expertise span everything from computer vision and natural language processing, to computational biology, to robotics. And so, um, through this quarter, as you work on your class projects, I hope that you get a lot of, uh, help and advice and mentoring from the TAs, uh, all of which- all of whom have deep expertise not just in machine learning but often in a specific vertical application area, um, of machine learning. So depending on what your projects, we tried to match you to a TA that can give you advice, um, eh, the most relevant, uh, whatever project you end up working on. Um, so yeah, goal of this class, I hope that after the next 10 weeks, uh, you will be an expert in machine learning. 
Um, it turns out that, uh, uh, you know, um, and- and I hope that after this class, you'll be able to go out and build very meaningful machine learning applications, uh, either in an academic setting where, uh, hopefully you can apply it to your problems in mechanical engineering, electrical engineering, and, uh, English, and law and, um, uh, and- and- and education and all of this wonderful work that happens on campus, uh, as well as after you graduate from Stanford to be able to apply it to whatever jobs you find. Um, one of the things I find very exciting about machine learning is that it's no longer a sort of pure tech company only kind of thing, right? I think that many years ago, um, machine learning, it was like a thing that, you know, the computer science department would do and that the elite AI companies like Google and Facebook and Baidu and Microsoft would do. Uh, but now, it is so pervasive that even companies that are not traditional because there are tech companies see a huge need to apply these tools, and I find a lot of the most exciting work, uh, these days. Um, and, and, and maybe some of you guys know my history some would be biased, right? I- I led the Google Brain team which helped Google transform from what was already a great company 10 years ago to today which is, you know, a great AI company. And then I also led the AI group at Baidu, and, you know, led the company's technology strategy to help Baidu. Also, it transformed from what was already a great company many years ago to today arguably China's greatest AI company. So having led the, you know, built the teams that led the AI transformations of two large tech companies, I, I, I feel like that's a great thing to do. Uh, but even beyond tech, I think that, um, there's a lot of exciting work to do as well to help other industries, to help other sectors, uh, embrace machine learning and use these tools effectively. Um, but after this class, I hope that each one of you will be well qualified to get a job at a shiny tech company and do machine learning there, or go into one of these other industries and do very valuable machine learning projects there. Um, and in addition, if any of you, um, are taking this class with the primary goal of, uh, being able to do research, uh, in machine learning, so, so, actua- so some of you I know are PhD students. Um, I hope that this class will also leave you well-equipped to, um, be able to read and understand research papers, uh, as well as, uh, you know, be qualified to start pushing forward, um, the state of the art, right. Um, so let's see. Um, so today, uh, so, so just as machine learning is evolving rapidly, um, the whole teaching team, we've been constantly updating CS229 as well. So, um, it's actually very interesting. I feel like the pace of progress in machine learning has accelerated, so it, it actually feels like that, uh, the amounts we changed the class year over year has been increasing over time. So- so for your friends who took the class last year, you know, things are a little bit different this year because we're, we're constantly updating the class to keep up with what feels like still accelerating progress in the whole field of machine learning. Um, so, so, so, so there are some logistical changes. For example, uh, uh, we've gone from- uh, what we used to hand out paper copies of handouts, uh, that we're, we're trying to make this class digital only. 
Uh, but let me talk a little bit about, uh, prerequisites as well as in case your friends have taken this class before, some of the differences for this year, right? Um, so prerequisites. Um, we are going to assume that, um, all of you have a knowledge of basic computer skills and principles. Uh, so, you know, Big O notation, queues, stacks, binary trees. Hopefully, you understand what all of those concepts are. And, uh, assume that all of you have a basic familiarity with, um, uh, probability, right? Hopefully, you know what's a random variable, what's the expected value of a random variable, what's the variance of a random variable. Um, and if- for some of you, maybe especially the SCPD students taking this remotely, it has been, you know, some number of years since you last had a probability and statistics class. Uh, we will have review sessions, uh, on, on, on Fridays, uh, where we'll go over some of this prerequisite material as well. But it's okay. Hopefully, you know what a random variable is, what expected value is. But if you're a little bit fuzzy on those concepts, we'll go over them again, um, at a- at a discussion section, uh, on Friday. Also, seem to be familiar with basic linear algebra. So hopefully that you know what's a matrix, what's a vector, how to multiply two matrices and multiplying matrices and a vector. Um, if you know what an eigenvector then that's even better. Uh, if you're not quite sure what an eigenvector is, we'll go over it that- that you, you, you better- uh, yeah, we'll, we'll, we'll go over it I guess. And then, um, a large part of this class, uh, uh, is, um, having you practice these ideas through the homeworks, uh, as well as I mention later a, uh, open-ended project. And so, um, one, uh, there are- we've actually, uh, until now we used to use uh MATLAB, uh and Octave for the programming assignments, uh, but this year, we're trying to shift the programming assignments to, uh, Python, um, and so, um, I think for a long time, uh, even today, you know, I sometimes use Octave to prototype because the syntax in Octave is so nice and just run, you know, very simple experiments very quickly. But I think the machine learning world, um, is, you know, really migrating I think from a MATLAB Python world to increasing- excuse me, MATLAB Octave world to increasingly a Python maybe and- and then eventually for production Java or C++, kind of world. And so, uh, we're rewriting a lot of the assignments for this class this school year. Having, having driving that process, uh, so that- so that this course, uh, you could do more of the assignments. Uh, uh, maybe most- maybe all of the assignments in, um, Python, uh, NumPy instead. Um, now, a note on the honor codes, um, we ask that, you know, we, we actually encourage you to form study groups. Uh, so, so you know I've been um, fascinated by education, a long time. I spent a long time studying education and pedagogy and how instructors like us can help support youth to learn more efficiently. And one of the lessons I've learned from the educational research literature is that the highly technical classes like this, if you form study groups, uh, you will probably have an easier time, right? So, so CS229, we go for the highly technical material. There's a lot of math, some of the problems are hard and if you have a group of friends to study with, uh, you probably have an easier time uh, uh, because you can now ask each other questions and work together and help each other. 
Um, where we ask you to draw the line or what we ask you to, to, to do relative to Stanford's, uh, Honor Code is, um, we ask that you do the homework problems by yourself, right? Uh, and, and, and most specifically, um, it's okay to discuss the homework problems with friends, but if you, um, but after discussing homework problems with friends, we ask you to go back and write out the solutions by yourself, uh, without referring to notes that, you know, you and your friends had developed together, okay? Um, the class's honor code is written clearly on the class handouts posted digitally on the website. So if you ever have any questions about what is allowed collaboration and what isn't allowed, uh, please refer to that written document on the course website where we describe this more clearly, but, um, out of respect for the Stanford honor code as well as for, uh, uh, you know, for, for, for students kind of doing their own work, we asked you to basically do your own work, uh, for the- it's okay to discuss it, but after discussing homework problems with friends, ultimately, we ask you to write up your problems by yourself so that the homework submissions reflect your own work, right? Um, and I care about this because it turns out that having CS 229, you know, CS 229 is one of those classes that employers recognize. Uh- uh, I don't know if you guys know, but there have been, um, companies that have put up job ads that say stuff like, "So long as you've got- so long as you completed CS 229 we guarantee you get an interview," right? [LAUGHTER] I've- I've seen stuff like that. And so I think you know in order to, to maintain that sanctity of what it means to be a CS 229 completer, I think, um, and I'll ask all of you so that- really do your homework. Um, or stay within the bounds of acceptable, acceptable collaboration relative to the honor code. Um, let's see. And I think that um, uh, if- uh, you know what? This is, um, [NOISE] yeah. And I think that, uh, one of the best parts of CS 229, it turns out is, um, excuse me. So I'm trying, sorry, I'm going to try looking for my mouse cursor. Uh, all right. Sorry about that. My- my- my displays are not mirrored. So this is a little bit awkward. Um, so one of the best parts of the class is- oh, shoot. Sorry about that. [LAUGHTER] All right, never mind. I won't do this. Um, you could do, you could do it yourself online later. Um, yeah, I started using- I started using Firefox recently in addition to Chrome here. It's a mix up. Um, one of the best parts of, um, the class is, um, the class project. Um, and so, you know, one of the goals of the class is to leave you well-qualified to do a meaningful machine learning project. And so, uh, one of the best ways to make sure you have that skill set is through this class and hopefully with the help of some of our TAs. Uh, we wanna support you to work on a small group to complete a meaningful machine learning project. Um, and so one thing I hope you start doing, you know, later today, uh, is to start brainstorming maybe with your friends. Um, some of the- some of the class projects you might work on. Uh, and the most common class project that, you know, people do in CS 229 is to pick an area or pick an application that excites you and to apply machine learning to it and see if you can build a good machine learning system for some application area. 
And so, um, if you go to the course website, you know, cs229.stanford.edu and look at previous year's projects, you ha- you, you see machine learning projects applied to pretty much, you know, pretty much every imaginable application under the sun. Everything from I don't know, diagnosing cancer to creating art to, uh, lots of, um, uh, projects applied to other areas of engineering, uh, applying to application areas in EE, or Mechanical engineering, or Civil engineering, or Earthquake engineering, and so on, uh, to applying it to understand literature, to applying it to um, uh, I don't know. And, and, and, and, and so, uh, if you look at the previous year's projects of many of which are posted on the course website. You could use that as inspiration to see the types of projects students complete, completing this class are able to do and also encourage you to, um, uh, you can look at that for inspiration to get a sense of what you'll be able to do at the end- conclusion of this class and also see if looking at previous year's projects gives you inspiration for what, um, you might do yourself. Uh, so we asked you to- we, we invite you I guess to do class projects in small groups and so, um, after class today, also encourage you to start making friends in the class both for the purpose of forming study groups as well as with the purpose of maybe finding a small group to do a class project with. Um, uh, we asked you to form project groups of, um, up to size three. Uh, uh, most project groups end up being size two or three. Um, if you insist on doing it by yourself, right without any partners that's actually okay too. You're welcome to do that. But, uh, but- but I think often, you know, having one or two others to work with may give you an easier time. And for projects of exceptional scope, if you have a very very large project, that just cannot be done by three people. Um, uh, sometimes, you know, let us know and we're open to- with, with to some project groups of size four, but our expectation- but we do hold projects, you know, with a group of four to a higher standard than projects with size one to three, okay. So- so what that means is that if your project team size is one, two or three persons, the grading is one criteria. If your project group is bigger than three persons, we use a stricter criteria when it comes to grading class projects. Okay. Um, and that, that reminds me um, uh, I know that uh- let's see. So for most of you since this- since this started 9:30 AM on the first day of the quarter, uh, for many of you, this may be- this may be your very first class at Stanford. How many of you, this is your very first class at Stanford? Wow. Cool. Okay. Awesome. Great. Welcome to Stanford. [LAUGHTER] Uh, and if someone next to you just raise their hand- uh actually, rai- raise your hand again. So I hope that, you know, maybe after class today, if someone next to you raised a hand, uh, help welcome them to Stanford, and then, say hi and introduce yourself and make friends on the way. Yeah. Cool. Nice, nice to see so many of you here. Um. [NOISE] All right. So um, just a bit more on logistics, uh- So, um, let's see, in addition to the main lectures that we'll have here, uh, on Mondays and Wednesdays, um, CS229 also has discussion sections, uh, on- held on Fridays that are- and everything we do including the- all the, all the lectures and discussion sections are recorded and broadcast through SCPD, uh, through the online websites. 
Um, and one of- and, uh, discussion sections are taught, uh, usually by the TAs on Fridays and attendance at discussion sections is optional. Uh, and what I mean is that, um, you- you know, you- 100% promise, there won't be material on the midterm that will sneak in from this kind of section. So it's 100% optional. Uh, and you will be able to do all the homework and the projects without attending the discussion section. But what we'll use the discussion section for, uh, for the first three discussion sections. So, you know, this week, next week, uh, the week after that, we'll use the discussion sections to go over prerequisite material in greater depth. So, uh, go over, uh, linear algebra, basic probability statistics, teach a little bit about Python NumPy in case you're less familiar with those frameworks. Uh, so we'll do that for the first few weeks. And then for the discussion sections that are held later this quarter, we'll usually use them to go over more advanced optional material. Uh, for example, um, CS229, most of the learning algorithms, you- you hear about in a class rely on convex optimization algorithms, but we want to focus the class on the learning algorithms and spend less time on convex optimization. So you want to come and hear about more advanced concepts in convex optimization. We'll defer that to the discussion section. Uh, and then, there, there are few other advanced topics, uh, Hidden Markov Models, time series, uh, that we're planning to defer to the, um, Friday discussion sections. Okay. Um, so, uh, let's see. Um, cool, and, uh, and, um, a final bit of logistics, um, uh, for- there are digital tools that some of you have seen, but, um, for this class, we'll drive a lot of the discussion through the, uh, online website Piazza. How- how many of you have used Piazza before? Okay, cool, mostly. Wow, all of you? That's very amazing. Uh, good. So, so, uh, online discussion board for those of you that haven't seen it before, but, um, I definitely encourage you to participate actively on Piazza and also to answer other student's questions. I think that one of the best ways to learn as well as contribute, you know, back to the class as a whole is if you see someone else ask a question on Piazza, if you jump in and help answer that, uh, that, that often helps you and helps your classmates. I strongly encourage you to do that. For those of you that have a private question, you know, sometimes we have students, um, uh, reaching out to us to- with a personal matter or something that, you know, is not appropriate to share in a public forum in which case you're welcome to email us at the class email address as well. Uh, and we answer in, in the class email address- the cla- teaching staff's email address on the course website, you can find it there and contact us. But for anything technical, anything reasonable to share with the class, uh, which includes most technical questions and most logistical questions, right? Questions like, you know, can you confirm what date is midterm, or, or, you know, what happens? Uh, can you confirm when is the handout for this going out and so on? For questions that are not personal or private in nature, I strongly encourage you to post on Piazza rather than emailing us because statistically, you actually get a faster answer, uh, posting it on- post- posting on Piazza than- than, you know, if you wait for one of us to respond to you, um, and we'll be using Gradescope as well, um, to- for, for online grading. 
And then, if, if you don't know what Gradescope is, don't worry about it. We'll, we'll, we'll send you links and show you how to use it later. Um, oh, and, uh, again, relative to- one last logistical thing to plan for, um, unlike previous, um, uh, years where we taught CS229, uh, so we're constantly updating the syllabus, right? The technical content to try to show you the latest machine learning algorithms, uh, and the two big logistical changes we're making this year, I guess one is, uh, Python instead of MATLAB, and the other one is, um, instead of having a midterm exam, you know, there's a timed midterm, uh, we're planning to have a take-home midterm, uh, this course, instead. So I, I know some people just breathed in sharply when I said that. [LAUGHTER] I don't know what that means. [LAUGHTER] Was that shock or happiness? I don't know. Okay. Don't worry, midterms are fun. You- you'll, you'll love it. [LAUGHTER] All right. So that's it for the- that's it for the logistical aspects. Um, let me check with the- so let- let me check if there are any questions. Oh, yeah, go ahead. On campus, are those courses offered every quarter [inaudible]. Yeah. So that's interesting. Uh, let's see. I think it's offered in spring. And one other person. Oh, yes, is teaching it. So someone else is teaching it in spring quarter. Um, uh, I actually did not know it was gonna be offered in winter. [inaudible] Yeah. [inaudible]. Yeah, right, yeah. So- so I think a free guide and teaching it in- sorry, in their [inaudible] and you are right, are teaching it in, uh, spring, uh, and I don't think it is offered in winter. [inaudible]. Will the session be recorded? Yes, they will be. Oh, and by the way, if, if, if you wonder why I'm recording that I'm repeating the question, I know it feels weird, I'm recording with a microphone, so that- so that people watching this at home can hear the question. But, uh, both the lectures and the discussion sections, uh, will be- will be recorded and put on the website. Uh, maybe the one thing we do that's not recorded and broadcast are the office hours. Great. Isn't that right? [LAUGHTER] Oh, oh, but, uh, I think, uh, this year, uh, we have a 60-hour, how many hour? Well, 60 office hours. Uh, 60 office hours per week. Right, yeah. [LAUGHTER] So- so- so hopefully, I- I just again, we- we're constantly trying to improve the course. In previous years, one of the feedback we got was that the office hours were really crowded. So- so we have 60 hour- 60 hours, about 60 office hour slots per week this year. That- that seems like a lot. So hopefully, if you need to track down one of us, track down a TA to get help, hopefully, that- that'll make it easier for you to do so. Go ahead. [inaudible]. Say that again. Well- [inaudible]. Oh, well logistical things like when homeworks are due, would be covered in lectures. Uh, we have uh, yes, so we have uh, four planned homeworks. Oh sorry. [inaudible] Yeah, and if you go to the- if you go to the course website and you click on the syllabus link uh, that has a calendar with when each homework assignments go out and when they'll be due. Uh, so four homeworks and uh, project proposals due a few weeks from now and uh, final projects due at the end of the quarter. But all the, all the exact days are listed on the course website I think. [inaudible] Uh, sure yes, difference between this class and 229a. Um, let me think how to answer that. Yes. 
Uh, so yeah I know, I was debating earlier this morning how to answer that because I've been asked that a few times. Um, so I think that what has happened at Stanford is that the volume of demand for machine learning education is just, right skyrocketing because anything everyone sees, everyone wants to learn this stuff and so um, uh, so within- so the computer science department has been trying to grow the number of machine learning offerings we have. Um, uh, we actually kept the enrollments to CS229a at a relatively low number at 100 students. So I actually don't want to encourage too many of you to sign up because uh, I think we might be hitting the enrollment cap already so, so please don't all sign up for CS229a because um, we- CS229a, does not have the capacity this quarter but since CS229a is uh, um, much less mathematical and much more applied, uh, uh, a relatively more applied version of machine learning and uh, so I, I guess I'm teaching CS229a and CS230 and CS229, this quarter. Of the three, CS229, is the most mathematical. Um, it is a little bit less applied than CS229a which is more applied machine learning and CS230 which is deep learning. My advice to students is that um, CS229, uh, CS229a, excuse me, let me write this down. I think I'm- so CS229a, uh, is taught in a flipped classroom format which means that, uh, since taking it, we'll mainly watch videos um, on the Coursera website and do a lot of uh, programming exercises and then, meet for weekly discussion sections. Uh, but there's a smaller class with [inaudible] . Um, I, I would advise you that um, if you feel ready for CS229 and CS230 to do those uh, but CS229, you know, because of the math we do, this is a, this is a very heavy workload and pretty challenging class and so, if you're not sure you're ready for CS229 and CS229a, it may be a good thing to, to, to take first, uh, and then uh, CS229, CS229a cover a broader range of machine learning algorithms uh, and CS230 is more focused on deep learning algorithms specifically, right. Which is a much narrow set of algorithms but it is, you know, one of the hardest areas of deep learning. Uh, there is not that much overlap in content between the three classes. So if you actually take all three, you'll learn relatively different things from all of them uh, in the past, we've had students simultaneously take 229 and 229a and there is a little bit of overlap. You know, they, they do kind of cover related algorithms but from different points of view. So, so some people actually take multiple of these courses at the same time. Uh, but 229a is more applied, a bit more, you know practical know-how hands-on and so on and, and uh, much less mathematical. Uh, and, and CS230 is also less mathematical more applied more about kind of getting it to work where CS229a, um, we do much more mathematical derivations in CS229. Cool, any questions? Yes, someone had their hand up. [inaudible] So uh, once you say that what- I would generally prefer students not do that in the interest of time but what, what do you want? [inaudible] Oh, I see, sure go for it. Who is enrolled in 229 and 230? Oh not that many of you, interesting. Oh, that's actually interesting. Cool. Yeah. Thank you, yeah, I just didn't want to set the presence of students using this as a forum to run surveys. [LAUGHTER] That was, that was, that was, that that was an interesting question. So thank you. [LAUGHTER] Um, cool. 
All right, and, and by the way I think uh, you know, just one thing about Stanford is the AI world and machine learning world, AI is bigger than machine learning right and machine learning is bigger than deep learning. Um, one of the great things about being a Stanford student is, you can and I think should take multiple classes, right. I think that your CS229, has for many years been the core of the machine learning world at Stanford. Uh, but even beyond CS229, it's worth your while to take multiple classes and getting multiple perspectives. So, so if you want to uh, be really effective, you know, after you graduate from Stanford, you do wanna be an expert in machine learning. You do wanna be an expert in deep learning. Uh, and you probably wanna know probability statistics. Maybe you wanna know a bit of convex optimization, and maybe you wanna know a bit more about reinforcement learning, know a ittle bit about planning, know a bit about lots of things. So, so I actually encourage you to take multiple classes I guess. Cool. All right. Good. Um, if there are no more questions, let's go on to talk a bit about some machine learning. So um, all right, so the remainder of this class, what I'd like to do is um, give a quick overview of uh, you know, the major uh, areas of machine learning and also um, and, and also give you a sort of overview of the things you learn uh, in the next 10 weeks. So, you know, what is machine learning? Right. It seems to be everywhere these days and it's useful for so many spaces, and, and I think that um, and uh, you know, and uh, uh, and I, I feel like they uh- just to share with you my personal bias, right. You, you read the news about these people who are making so much money building learning algorithms. I think that's great. I hope, I hope all of you go make a lot of money but the thing I find even more exciting is, is the meaningful work we could do. I think that, you know, I think that every time there's a major technological disruption which there is now, through machine learning um, it gives us an opportunity to remake large parts of the world and if we behave ethically in a principled way and use the superpowers of machine learning to do things that, you know, helps people's lives, right. Maybe we could um, uh, maybe you can improve the healthcare system, maybe you can improve give every child a personalized tutor. Uh, maybe we can make our democracy run better rather than make it run worse. But I think that um, the meaning I find in machine learning is that there's so many people that are so eager for us to go in and help them with these tools that um, if, if you become good at these tools, it gives you an opportunity to really remake some piece, some meaningful piece of the world. Uh, hopefully in a way that helps other people and makes the world kind of, makes the world a better place is very cliche in Silicon Valley. But, but I think, you know, with these tools, you actually have the power to do that and if you go make a ton of money, that's great too. But I find uh, much greater meaning of the work we could do. Um, it gives us a unique opportunity to do these things, right. But um, despite all the excitement of machine learning. What is machine learning? So let me give you a couple um, definitions of machine learning. Um, Arthur Samuel whose claim to fame was uh, building a checkers playing program, uh, defined it as follows. So field of study gives computers the ability to learn without being explicitly programmed. 
Um, and you know interesting- when, when Arthur Samuel many, many decades ago, built the checkers playing program. Uh, the debates of the day was can a computer ever do something that it wasn't explicitly told to do? And Arthur Samuel uh, wrote a checkers playing program, that through self play learns whether the patterns of uh, the checkerboard that are more likely to lead to win versus more likely to lead to a loss and learned uh, to be even better than Arthur Samuel the author himself at playing checkers. So back then, this was viewed as a remarkable result that a computer programmer, you know that could write a piece of software to do something that the computer program himself could not do, right, because this program became better than Arthur Samuel um, at, uh, uh, at, at, at the task of playing checkers. Um, and I think today we um, are used to computers or machine learning algorithms outperforming humans on so many tasks. Uh, but it turns out that when you choose a narrow task like, speech recognition on a certain type of task, you can maybe surpass human level performance. If you choose a narrow task like, playing the game of Go, than by throwing really, tons of computational power at it and self play. Uh, uh, uh you can have a computer, you know become very good at, at these narrow tasks. But this is maybe one of the first such examples in the history of computing. Um. Uh, and I think this is the one of the most widely cited, um, definitions right. Gives computers the ability learn without being explicitly programmed. Um, my friend Tom Mitchell in his textbook, defined this as a Well-posed Learning Problem. Uh, a program is said to learn from experience E with respect to task T and some performance measure P, if its performance on T, as measured by P, improves with experience E. And I- I asked Tom this. I asked Tom if, um, he wrote this definition just because he wanted it to rhyme and [LAUGHTER] he, he, he, he, he did not say yes, but I, I, I don't know. Um, but in this definition, the experienced E. For- for the case of playing checkers, the experience E would be the experience of having a checkers play- program played tons of games against itself. Uh, so computers lots of patients and sit there for days playing games or checkers against itself. So that's experience E. The task T is the task of playing checkers, the performance measure P maybe, um, what's the chance of this program winning the next game of checkers it plays against the next opponent. Right. So- so we say that, ah, this is a well-posed learning problem, learning the game of checkers. Now, within this, um, set of ideas with machine learning, there are many different tools we use in machine learning. And so in the next 10 weeks, you'll learn about a variety of these different tools. Um, and so the first of them and the most widely used one is supervised learning. Um, let's see. I wanna switch to the white board. Do you guys know how to raise the screen? [NOISE] So what I wanna do today is really go over some of the major categories of, uh, Machine Learning Tools, and, uh, and so what you learn in the next, um, ah, by the end of this quarter. So the most widely used machine learning tool is, uh, today is supervised learning. Actually, let me check, how, how many of you know what supervised learning is? Ah, like two-thirds, half of you maybe. Okay cool. Let me, let me just briefly define it. Um, here's one example. 
Let's say, you have a database of housing prices and so I'm gonna plot your dataset where on the horizontal axis, I'm- I'm gonna plot the size of the house in square feet. And on the vertical axis, we'll plot the price of the house. Right. And, um, maybe your dataset looks like that. Um, and so horizontal axis, I guess we'd call this X and vertical axis we'll call that Y. So, um, the supervised learning problem is given a dataset like this to find the relationship mapping from X to Y. And so, um, for example, let's say- let's say- let's say you have- let's say you're fortunate enough to own a house in Palo Alto. Right. Ah, and you're trying to sell it, and you want to know how the price of the house. So maybe your house has a size, you know, of that amount on the horizontal axis. I don't know, maybe this is 500 square feet, 1,000 square feet, 1,500 square feet. So your house is, ah, 1,250 square feet. Right. And you want to know, you know, how do you price this house. So given this dataset, one thing you can do is, um, fit a straight line to it. Right. And then you could estimate or predicts the price to be whatever value you read off on the, um, vertical axis. So in supervised learning, you are given a dataset with, ah, inputs X and labels Y, and your goal is to learn a mapping from X to Y. Right. Now, um, fitting a straight line to data is maybe the simplest possible. Maybe the simplest possible learning algorithm, maybe one of the simplest poss- learning algorithms. Um, given a dataset like this, there are many possible ways to learn a mapping, to learn the function mapping from the input size to the estimated price. And so, um, maybe you wanna fit a quadratic function instead, maybe that actually fits the data a little bit better. And so how do you choose among different models will be, ah, either automatically or manual intervention will be- will be something we'll spend a lot time talking about. Now to give a little bit more. Um, to define a few more things. This example is a problem called a regression problem. And the term regression refers to that the value y you're trying to predict is continuous. Right. Um, in contrast, here's a- here's a different type of problem. Um, so problem that some of my friends were working on, and- and I'll simplify it was- was a healthcare problem, where, ah, they were looking at, uh, breast cancer or breast tumors, um, and trying to decide if a tumor is benign or malignant. Right. So a tumor is a lump in a- in a woman's breast, um, is- can be ma- malign, or cancerous, um, or benign, meaning you know, roughly it's not that harmful. And so if on the horizontal axis, you plot the size of a tumor. Um, and on the vertical axis, you plot is it malignant or not. Malignant means harmful, right. Um, and some tumors are harmful some are not. And so whether it is malignant or not, takes only two values, 1 or 0. And so you may have a dataset, um, like that. Right. Ah, and given this, can you learn a mapping from X to Y, so that if a new patient walks into your office, uh, walks in the doctor's office and the tumor size is, you know say, this, can the learning algorithm figure out from this data that it was probably, well, based on this dataset, looks like there's- there's a high chance that that tumor is, um, malignant. Um, so, ah, so this is an example of a classification problem and the term classification refers to that Y here takes on a discrete number of variables. So for a regression problem, Y is a real number. 
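As a quick aside before the discussion of regression versus classification continues, here is a minimal NumPy sketch of fitting a straight line (and a quadratic) to a housing dataset like the one just described. The numbers are invented for illustration, and plain least squares is used here only as one simple way to fit the line; the course develops linear regression properly in the coming lectures.

```python
import numpy as np

# Hypothetical training set: house sizes (sq ft) and prices (in $1000s); made-up numbers.
sizes  = np.array([500., 750., 1000., 1250., 1500., 2000.])
prices = np.array([150., 200.,  260.,  310.,  360.,  450.])

# Fit a straight line price ~ theta0 + theta1 * size by least squares.
X = np.column_stack([np.ones_like(sizes), sizes])      # add an intercept column
theta, *_ = np.linalg.lstsq(X, prices, rcond=None)

# Predict the price of a 1,250 sq ft house by reading the value off the fitted line.
print(theta[0] + theta[1] * 1250.0)

# A quadratic fit is the same idea with one more feature column (size squared).
Xq = np.column_stack([np.ones_like(sizes), sizes, sizes**2])
theta_q, *_ = np.linalg.lstsq(Xq, prices, rcond=None)
```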
I guess technically prices can be rounded off to the nearest dollar and cents, so prices aren't really real numbers. Um, you know that- because you'd probably not price it, how's it like Pi times 1 million or whatever. Ah, but, so, so- but for all practical purposes prices are continuous so we call them housing price prediction to be a regression problem, whereas if you have, ah, two values of possible output, 0 and 1, call it a classification problem. Um, if you have K discrete outputs so, uh, if the tumor can be, uh, malignant or if there are five types of cancer, right, so you have one of five possible outputs, then that's also a classification problem. If the output is discrete. Now, um, I wanna find a different way to visualize this dataset which is, um, let me draw a line on top. And I'm just going to, you know, map all this data on the horizontal axis upward onto a line. But let me show you what I'm gonna do. I'm going to use a symbol O to denote. Right. Um, I hope what I did was clear. So I took the two sets of examples, uh, the positive and negative examples. Positive example was this 1, negative example was 0. And I took all of these examples and- and kinda pushed them up onto a straight line, and I use two symbols, I use O's to denote negative examples and I use crosses to denote positive examples. Okay. So this is just a different way of visualizing the same data, um, by drawing it on the line and using, you know, two symbols to denote the two discrete values 0 and 1, right? So, um, it turns out that, uh, uh, in both of these examples, the input X was one-dimensional, it was a single real number. For most of the, um, machine learning applications you work with, the input X will be multi-dimensional. You won't be given just one number and asked to predict another number. Instead, you'll often be given, uh, multiple features and multiple numbers to predict another number. So for example, instead of just using a tumor size to predict- to estimate malignancy- malignant versus benign tumors, um, you may instead have two features where one is tumor size and the second is age of the patient, and be given a dataset, [NOISE] right? And be given a dataset that looks like that, right? Where now your task is, um, given two input features, so X is tumor size and age, you know, like a two-dimensional vector, um, and your task is given, uh, these two input features, um, to predict whether a given tumor is malignant or benign. So if a new patient walks in a doctor's office and that the tumor size is here and the age is here, so that point there, then hopefully you can conclude that, you know, this patient's tumor is probably benign, right? Corresponding the O, that negative example. Um, and so what thing- one thing you'll learn, uh, next week is a learning algorithm that can fit a straight line to the data as follows, kinda like that, to separate out the positive and negative examples. Separate out the O's and the crosses. And so next week, you'll learn about the logistic regression algorithm which, um, which can do that. Okay? So, um, one of the most interesting things you'll learn about is, uh, let's see. So in this example, I drew a dataset with two input features, um, when- so I have friends that actually worked on the breast cancer, uh, prediction problem, and in practice you usually have a lot more than one or two features, and usually you have so many features you can't plot them on the board, right? 
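Before moving on to higher-dimensional feature sets, here is a minimal sketch of the kind of two-feature classifier just described, using logistic regression trained by batch gradient ascent. The tumor-size and age numbers are invented for illustration, and the learning rate and iteration count are arbitrary choices, not recommendations; the algorithm itself is derived properly next week.

```python
import numpy as np

# Hypothetical two-feature tumor data (size, patient age), labels 1 = malignant.
X = np.array([[1.0, 40.], [1.5, 50.], [2.0, 35.], [3.0, 60.], [3.5, 55.], [4.0, 70.]])
y = np.array([0., 0., 0., 1., 1., 1.])

# Standardize the features so plain gradient ascent behaves well.
mean, std = X.mean(axis=0), X.std(axis=0)
Xb = np.column_stack([np.ones(len(X)), (X - mean) / std])   # add intercept

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Batch gradient ascent on the logistic regression log-likelihood.
theta, alpha = np.zeros(Xb.shape[1]), 0.1
for _ in range(5000):
    theta += alpha * Xb.T @ (y - sigmoid(Xb @ theta))

# Estimated probability that a new tumor (size 2.8, age 58) is malignant.
x_new = np.concatenate([[1.0], (np.array([2.8, 58.]) - mean) / std])
print(sigmoid(x_new @ theta))
```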
And so for an actual breast cancer prediction problem, my friends who worked on this were using many other features, and don't worry about what these mean: clump thickness, uniformity of cell size, uniformity of cell shape, adhesion, meaning how well the cells stick together. The point is that if you're doing this in an actual medical application, there's a good chance you'll be using a lot more features than just two. And that means you can't actually plot the data anymore; it's too high-dimensional. You can't really plot things beyond three or maybe four dimensions, so when we have lots of features it becomes difficult to visualize the data. I'll come back to this later when we talk about learning theory. As we develop learning algorithms, you'll learn how to build regression and classification algorithms that can deal with these relatively large numbers of features. One of the most fascinating results you'll learn about is an algorithm called the support vector machine, which uses not one or two or ten or a hundred or a million input features, but an infinite number of input features. Just to be clear: in the first example the state of a patient was represented as one number, the tumor size; in the two-feature example, as two numbers, the tumor size and the age; with the feature list above, maybe as five or six numbers. But the support vector machine allows you to use an infinite-dimensional vector to represent a patient. And how do you deal with that? How can a computer even store an infinite-dimensional vector? You can store one real number, two real numbers, but you can't store an infinite list of real numbers in a computer without running out of memory. So how do you do that? When we talk about support vector machines, and specifically the technique called kernels, you'll learn how to build learning algorithms that work with these infinitely long feature lists. You can imagine that an infinitely long list of numbers describing a patient could carry a lot of information, and this turns out to be one of the more effective learning algorithms in practice; a tiny sketch of the kernel idea appears below. Okay, so that's supervised learning. Let me play a fun, slightly older example of supervised learning to give you a sense of what this means. At the heart of supervised learning is the idea that during training you are given the inputs X together with the labels Y, both at the same time, and the job of your learning algorithm is to find a mapping so that, given a new X, it can map it to the most appropriate output Y. This is a very old video, made by Dean Pomerleau, using supervised learning for autonomous driving. It's not state of the art for autonomous driving anymore, but it actually does remarkably well.
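A quick aside before the driving example continues: the kernel idea mentioned above can be hinted at in a few lines. The Gaussian (RBF) kernel below computes the inner product between implicitly infinite-dimensional feature vectors without ever forming them. This is only a sketch of the kernel computation itself, not of SVM training, and the bandwidth sigma and the toy patient vectors are arbitrary illustrations.

```python
import numpy as np

def rbf_kernel(x, z, sigma=1.0):
    """Gaussian (RBF) kernel: K(x, z) = exp(-||x - z||^2 / (2 sigma^2)).

    This equals the inner product <phi(x), phi(z)> for an infinite-dimensional
    feature map phi, yet costs only O(dim) to evaluate: the kernel trick.
    """
    diff = np.asarray(x) - np.asarray(z)
    return np.exp(-(diff @ diff) / (2.0 * sigma**2))

# Gram matrix over a small hypothetical set of patients (each row = feature vector).
patients = np.array([[2.0, 45., 1.], [3.5, 60., 2.], [1.0, 38., 1.]])
K = np.array([[rbf_kernel(a, b) for b in patients] for a in patients])
print(K)  # kernelized algorithms such as the SVM only ever need these values
```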
Oh, and, uh, um, as you, uh, you hear a few technical terms like back-propagation, you'll learn all those techniques in this class, uh, and by the end of class, you'll either build a learning algorithm much more effective than what you see here. But let's- let's- let's see this application. Uh, could you turn up the volume maybe have that? Are you guys getting volume audio? [BACKGROUND] Oh, I see. All right, I'll narrate this. [LAUGHTER] So I'll be using artificial neural network to drive this vehicle that, uh, was built at Carnegie Mellon University, uh, many years ago. And what happens is, uh, during training, it watches the human, um, drive the vehicle and I think 10 times per second, uh, it digitizes the image in front of the vehicle. And, um, so that's a picture taken by a front-facing camera. Um, and what it does is in order to collect labeled data, the car while the human is driving it, records both the image such as it's seeing here, as well as, the steering direction that was chosen by human. So at the bottom here is the image turned to grayscale and lower res, and, uh, on top, let me pause this for a second. Um, this is the driver direction, the font's kinda blurry but this text says driver direction. So this is the Y label, the label Y that the human driver chose. Um, and so the position of this white bar of this white blob shows how the human is choosing to steer the car. So in this, in this image, the white blob is a little bit to the left of center so the human is, you know, steering just a little bit to the left. Um, this second line here is the output of the neural network and initially, the neural network doesn't know how to drive, and so it's just outputting this white smear everywhere and it's saying, "No, I don't know, do I drive left, right, center? I don't know." So it's outputting this gray blur everywhere. Um, and as the algorithm learns using the back-propagation learning algorithm or gradient descents which you'll learn about, uh, you'll actually learn about gradient descent this Wednesday. Um, you see that the neural network's outputs becomes less and less of this white smear, this white blur but starts to, uh, become sharper, um, and starts to mimic more accurately the human selected driving direction. Right. So this, um, is an example of supervised learning because the human driver demonstrates inputs X and outputs Y, uh, meaning, uh, if you see this in front of the car steer like that so that's X and Y. And, uh, after the learning algorithm has learned, um, you can then, uh, well, he pushes a button, takes the hands off the steering wheel, um, [NOISE] and then it's using this neural network to drive itself, right? Digitizing the image in front of the road, taking this image and passing it through the learning algorithm, through the trained neural network, letting the neural networks select a steering direction, uh, and then using a little motor to turn the wheel. Um, this is slightly more advanced version which has trained two separate models; one for, I think, a two-lane road, one for a four-lane road. Uh, so that's the, um, uh, so the second and third lines this is for a two-lane road, this is a four-lane road. And the arbitrator is, is another algorithm that tries to decide whether the two-lane or the four-lane road model is the more, more appropriate one for a particular given situation. Um, and so as Alvin is, excuse me a one-lane road or, uh, a two-lane road. 
So, so, so it's driving from a one-lane road here, uh, to another intersection, um, the, uh the the algorithm realizes it should swi- switch over from, um, I think I forget, I think the one-lane neural network to the- to the two-lane neural network [NOISE] one of these, right? All right. Um. Okay. Oh, oh, right. Fine. We'll just see the final dramatic moment of switching from a one-lane road to a two-lane road. [LAUGHTER] All right. Um, uh, and I think, you know, so this is just using supervised learning to- take as input, what's in front of the car to decide on the steering direction. This is not state of the art for how self-driving cars are built today, but you know, you could do some things in some limited contexts. Uh, uh, and I think, uh, in, in several weeks, you'll actually be able to build something that is more sophisticated than this. Right. Um, so after supervised learning, uh, we wi- will- in this class we'll spend a bit of time talking about machine learning strategy. Also, well, I think on the class notes we annotate this as learning theory. But what that means is, um, where I give you the tools to go out and apply learning algorithms effectively. And I think I've been fortunate to have, uh, you know, to know a lot of, uh, uh, I, I think that, um, I've been fortunate to have, you know, over the years constantly visited lots of great tech companies. Uh, not just the ones that I've been, uh, personally associated with, right? But often, just to help friends out, I visit various tech companies, uh, whose products I'm sure are installed on your cell phone. Uh, but I often visit tech companies and you know, talk to the machine learning teams and see what they're doing, and see if I can help them out. And what I see is that there's a huge difference in the effectiveness of how two different teams could apply the exact same learning algorithm. All right? Uh, and I think that, um, what I've seen sadly is that sometimes there will be a team, even in some of the best tech companies, right? The, the best AI companies, right? And, and, and multiple of them, where you go talk to a team and they'll tell you about something that they've been working on for six months. And then, you can quickly take a look at the data and, and hear that the algorithm isn't quite working and sometimes you can look at what they're doing, and go yeah, you know, I could have told you six months ago that this approach is never gonna work, right? Um, and, um, what I find is that the most skilled machine learning practitioners are very strategic. By which I mean that your skill at deciding- um, when you work on a machine learning project, you have- you- you have a lot of decisions to make. Right? Do you collect more data? Do you try a different learning algorithm? Uh, do you rent faster GPUs to train your learning algorithm for longer? Or if you collect more data, what type of data do you collect? Or for all of these architectural choices, using neural networks or support vector machines or logistic regression, which ones do you pick? Um, but there are a lot of decisions you need to make when building these learning algorithms. So one thing that's quite unique to the way we teach is, uh, we want to help you become more systematic, in treating machine learning as a, as a systematic engineering discipline, so that one day when you are working on a machine learning project, you can efficiently figure out what to do next. Right? 
Um, and I sometimes make an analogy to how, um, to, uh, uh, to, to software engineering. Um, you know, like many years ago, I had a friend, um, that would debug code by compiling it and then, um, uh, this friend would look for all of the syntax errors, right? That, you know, the C++ compiler outputs. And they thought that the best way to eliminate the errors is to delete all the lines of code with syntax errors and that was [LAUGHTER] their first heuristic. So that did not go well, right? Um, uh, it took me a while to persuade them to stop doing that. Uh, but, but, but so it turns out that, um, when you run a learning algorithm, you know, it almost never works the first time. All right? That's just life. Uh, uh, and, and the way you go about debugging the learning algorithm will have a huge impact on your efficiency o- on, on how quickly you can build effective learning systems. And I think until now, too much of the- of this process of, uh, making your learning algorithms work well has been a black magic kind of process where, you know, you go to someone who has worked on this for decades. So when you run something and you don't know why it does not work, you ask, "Hey, what do I do?" and they say, "Oh, yeah, do that." And then, and then, because they're so experienced, it works, but I think, um, what we're trying to do with the discipline of machine learning is to evolve it from a black magic, tribal knowledge, experience-based thing to a systematic engineering process. All right. And so um, later this quarter, as we talk about machine learning strategy, we'll talk about learning theory. We'll try to systematically give you tools on how to, um, uh, go about strategizing. Uh, so- so that you can be very efficient in, um, how you- how you yourself, how you can lead a team to build an effective learning system, because I don't want you to be one of those people that, you know, wastes six months on some direction that you maybe could have relatively quickly figured out was not promising. Or maybe one last analogy, if you- um, if you're used to optimizing code, right? Making code run faster, I'm not sure if you have done that. Uh, uh, uh, less experienced software engineers will just dive in and optimize the code, they try to make it run faster, right? Like, take the C++ and recode it in assembly or something. But more experienced people will run a profiler to try to figure out what part of the code is actually the bottleneck and then just focus on changing that. So, uh, one of the things we hope to do this quarter is, uh, uh, convey to you some of these more systematic engineering principles. All right. And yeah. Oh, and actually this is very interesting. This is a, uh, uh, yeah. Actually, I've been- I've been invited, so actually- so how many of you have heard of Machine Learning Yearning? Oh, just a few of you, interesting. Oh, so actually, so, so- if any of you are interested, um, just in my, uh, spare time, uh, I've been writing a book, um, uh, to try to codify systematic engineering principles for machine learning and so, uh, if you want a, you know, free draft copy of the book, sign up for the mailing list here. I tend to just write stuff and put it on the Internet for free, yeah. So if you want a free draft copy of the book, uh, uh, you know, go to this website, uh, enter your e-mail address and the website will send you a copy of that book. And I'll talk a little bit about these engineering principles as well. Okay. All right. So, uh, so first subject, supervised learning. 
Second subject, learning theory. Um, and, uh, the third major subject we'll talk about is, uh, deep learning, right? And so you have a lot of tools in machine learning and many of them are worth learning about and I use many different tools in machine learning, you know, for many different applications. There's one subset of machine learning that's really hot right now because it's just advancing very rapidly, which is deep learning. And so we'll spend a little time talking about deep learning so that you can understand the basics of how to train a neural network as well. But I think that- whereas CS229 covers a much broader set of algorithms which are all useful, CS230, more narrowly, covers just deep learning, right? Um. So, uh, other than deep learning slash after- after deep learning slash neu- neural networks, the fourth of the five major topics we'll cover will be on unsupervised learning. Um, so what is unsupervised learning? [NOISE] So you saw me draw a picture like this just now, right? And this would be a classification problem like the tumor malignant-benign problem, this is a classification problem. And that was a supervised learning problem because you have to learn a function mapping from X to Y. Um, unsupervised learning would be if I give you a dataset like this with no labels. So you're just given inputs X and no Y, and you're asked to find me something interesting in this data, figure out, you know, interesting structure in this data. Um, and so in this dataset, it looks like there are two clusters, and an unsupervised learning algorithm which we'll learn about, called K-means clustering, will discover this, um, this structure in the data. Um, other examples of unsupervised learning, you know- Google News is actually a very interesting website. Sometimes I use it to look up, right, the latest news, this is just an old example. But Google News everyday crawls or reads, uh, uh, I don't know, uh, uh, many many thousands or tens of thousands of news articles on the Internet and groups them together, right? For example, there's a set of articles on the BP Oil Well spill, and it has, uh, taken a lot of the articles written by different reporters and grouped them together. So you can, you know, figure out that, you know, the BP, uh, Macondo oil well, right? That this is a CNN article about the oil well spill, there's a Guardian article about the oil well spill and this is an example of a clustering algorithm where it's taking these different news sources and figuring out that these are all stories kind of about the same thing, right? Um, and other examples of clustering, just getting data and figuring out what groups belong together. Um, a lot of work on, um, genetic data. This is a visualization of- of genetic microarray data. Where given data like this, you can group individuals into different types of- into individuals of different, uh, characteristics, um, or clustering algorithms grouping this type of data together is used to, um, organize computing clusters, you know, figure out which machines' workloads are more related to each other and organize computing clusters appropriately. Or take a social network like LinkedIn or Facebook or other social networks and figure out which are the groups of friends and which are the cohesive communities within a social network, um, or market segmentation. Um, actually many companies I've worked with look at the customer database and cluster the users together. 
So you can say that it looks like there are four types of users, you know, it looks like, um, there are the, uh, young professionals looking to develop themselves, there are the, you know, soccer moms and soccer dads, there are this category and these categories. You can then market to the different market segments, um, separately. Uh, and- and actually, many years ago my friend Andrew Moore, uh, uh, was using this type of data for astronomical data analysis to group together galaxies. You have a question? [inaudible]. Is unsupervised learning the same as clustering? No, it's not. So unsupervised learning broadly is the concept of using unlabeled data. So just X and finding interesting things about it, right? Um, so, um, for example, uh, actually here's- shoot. This won't work right now- we'll do this later in the cla- in the class, I guess. Um, maybe I'll say we'll do this later. The cocktail party problem, um, uh, is another unsupervised learning problem. We need the audio for this to explain this though, um, let me think how to explain this. Um, the cocktail party problem- and I'll try to do the demo when we can get the audio working on this laptop- is a problem where, um, if you have a noisy room and you stick mult- multiple microphones in the room and record overlapping voices, um, so there are no labels, just multiple microphones, an array of microphones, in a room with lots of people talking. Uh, how can you have the algorithm separate out the people's voices? So that's an unsupervised learning problem because, um, there are no labels. So you just stick microphones in the room and have it record different people's voices, overlapping voices, you hear multiple speakers at the same time, and then have it try to separate out people's voices. And one of the programming exercises you do later is, if we have, you know, five people talking. So each microphone records five people's overlapping voices, right? Because, you know, each microphone hears five people at the same time. How can you have an algorithm separate out these voices so you get clean recordings of just one voice at a time? So that's called the cocktail party problem and the algorithm you use to do this is called ICA, Independent Components Analysis. And that's something you implement in one of these later homework exercises, right? Um, and there are other examples of unsupervised learning as well. Uh, the Internet has tons of unlabeled text data. You just suck down data from the Internet. There are no labels necessarily but can you learn interesting things about language, figure out what- figure out on, I don't know, one of the most cited results recently was learning analogies, like yeah, man is to woman as king is to queen, right? Or, uh, Tokyo is to Japan as Washington DC is to the United States, right? To learn analogies like that. It turns out you can learn analogies like that from unlabeled data, just from texts on the Internet. So that's also unsupervised learning. Okay? Um, so after unsupervised learning- oh, and on unsupervised learning, you know, machine learning is very useful today. It turns out that most of the recent wave of economic value created by machine learning is through supervised learning. Uh, but there are important use cases for unsupervised learning as well. So I use them in my work occasionally. Uh, and it's also the bleeding edge for a lot of exciting research. And then the final topic, the fifth of the five topics we'll cover. 
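Two minimal sketches of the unsupervised methods mentioned above, assuming scikit-learn and synthetic data (the blobs, signals, and mixing matrix are illustrative placeholders, not the lecture's demo): K-means discovers clusters in unlabeled points, and FastICA separates mixed recordings as in the cocktail party problem.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)

    # K-means: two unlabeled blobs of points; the algorithm recovers the two groups by itself.
    X = np.vstack([rng.normal(0, 1, size=(100, 2)), rng.normal(5, 1, size=(100, 2))])
    cluster_ids = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # ICA (cocktail party): two source signals mixed into two "microphone" recordings.
    t = np.linspace(0, 8, 2000)
    S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # two independent sources
    A = np.array([[1.0, 0.5], [0.5, 2.0]])             # unknown mixing matrix
    X_mix = S @ A.T                                     # what the microphones would record
    S_est = FastICA(n_components=2, random_state=0).fit_transform(X_mix)  # recovered sources, up to scale and order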
So talk about supervised learning, machine learning strategy, deep learning, unsupervised learning, and then the fifth one is reinforcement learning, which is, um, let's say, I give you the keys to the Stanford autonomous helicopter. This helicopter is actually sitting in my office, and I'm trying to figure out how to get rid of it. Um, and I'll ask you to write a program to- to make it fly, right? So how do you do that? Um, so this is a video of a helicopter flying. The audio is just a lot of helicopter noise. So that's not important. But we'll zoom out the video. You see it's flying in the sky, right? There. So, um, you can use learning alg- that's kinda cool, right? [LAUGHTER] I was- I was the camera man that day. Um, but so you can use learning algorithms to get, you know, robots to do pretty interesting things like this. Um, and it turns out that a good way to do this is through reinforcement learning. So what's reinforcement learning? Um, it turns out that no one knows what's the optimal way to fly a helicopter, right? If you fly a helicopter, you have two control sticks that you're moving. Um, but no one knows what's the optimal way to move the control stick. So the way you can get a helicopter to fly itself is, um, let the helicopter do whatever- think of this as training a dog, right? You can't teach a dog the optimal way to behave, but- actually, how many of you have had a pet dog or pet cat before? Oh, not that many of you. This is fascinating. Okay. So I had a pet dog when I was a kid and my family made it my job to train the dog. So how do you train a dog? You let the dog do whatever it wants, and then whenever it behaves well, you go, "Oh, good dog". And when it misbehaves you go, "bad dog". [LAUGHTER] Um, and then over time, the dog learns to do more of the good dog things and fewer of the bad dog things, and so reinforcement learning is a bit like that, right? I don't know what's the optimal way to fly a helicopter. So you let the helicopter do whatever it wants and then whenever it flies well, you know, does some maneuver you want, or flies accurately without jerking around too much, you go, "Oh, good helicopter". [LAUGHTER] And when it crashes you go, "bad helicopter" and it's the job of the reinforcement learning algorithms to figure out how to control it over time so as to get more of the good helicopter things and fewer of the bad helicopter things. Um, and I think, um, well, just one more video. Um, oh, yeah, that's interesting. All right. And so again given a robot like this, I actually don't know how to program a- actually a robot like this has a lot of joints, right? So how do you get a robot like this to climb over obstacles? So well, this is actually a robot dog, so you can actually say, "Good dog" or "Bad dog". [LAUGHTER] By giving those signals, called a reward signal, uh, you can have a learning algorithm figure out by itself, how to optimize the reward, and therefore, [LAUGHTER] climb over these types of obstacles. Um, and I think recently, the most famous applications of reinforcement learning happened for game-playing, playing Atari games or playing, you know, the Game of Go, like AlphaGo. I think that's a- I think that is a- game playing has made for some remarkable stunts and remarkable PR but I'm also equally excited or maybe even more excited about the inroads that reinforcement learning is making into robotics applications, right? 
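A minimal sketch of the reward idea described above, under simplifying assumptions (a toy three-action bandit rather than a real helicopter or robot): the agent tries actions, receives a +1 "good" or -1 "bad" signal, and gradually prefers the actions that earned more reward.

    import numpy as np

    rng = np.random.default_rng(0)
    true_quality = np.array([0.2, 0.8, 0.5])   # hidden probability each action earns "good" (illustrative)
    value = np.zeros(3)                        # estimated value of each action
    counts = np.zeros(3)

    for step in range(2000):
        # epsilon-greedy: mostly pick the action that has looked best so far, sometimes explore
        a = rng.integers(3) if rng.random() < 0.1 else int(np.argmax(value))
        reward = 1.0 if rng.random() < true_quality[a] else -1.0   # "good dog" / "bad dog" signal
        counts[a] += 1
        value[a] += (reward - value[a]) / counts[a]                # running average of reward per action

    print("learned preferred action:", int(np.argmax(value)))      # converges to the best action (index 1)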
So I think, um, I think- yeah, reinforcement learning has been proven to be fantastic at playing games, it's also getting- making real traction in optimizing robots and optimizing sort of logistic system and things like that. Um, so you learned about all these things. Um, last thing for today, uh, I hope that you will start to, to meet people in the class, make friends, find project partners and study groups, and if you have any questions, [NOISE] you know, dive on the Piazza, asking questions as you help others answer their questions. So let's break for today, and I look forward to seeing you on Wednesday. Welcome to 229. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_13_Debugging_ML_Models_and_Error_Analysis_Stanford_CS229_Machine_Learning_Autumn_2018.txt | Okay. Happy Halloween. Um, what I want to do today is share with you advice for applying machine learning. And, and you've heard me allude to this before, but, um, uh, yeah, I think over the last several weeks, you've learned a lot about the mechanics of how to build different learning algorithms. Everything from linear regression, logistic regression, SVMs, uh, uh, Random Forest, is it, uh, uh, neural networks. And what I want to do today is share with you some principles for helping you become efficient at how you apply all of these things to solve whatever application problem you might want to work on. Um, and so, uh, a lot of today's material is actually not that mathematical. There's also some of the hardest materials as we're in, in, in, in this class to understand. Um, it turns out that when you give advice on how to apply a learning algorithm such as, you know, "Don't waste lots of time collecting data unless you, you, you have confidence it's useful to actually spend all that time." It turns out when I say things like that, people, you know, easily agree. They say, "Of course, you shouldn't waste time collecting lots of data unless you have some confidence it's actually a good use of your time." That's a very easy thing to agree with. Um, but the hard thing is when you go home today and you're actually working on your class project, right, uh, uh, to apply the principles we talked about today. When you're actually on the ground talking to a teammate saying, "All right, do we collect more data for our class project now or not? " To make the right judgment call for that. To map the concepts you learned today. To when you're actually in the hot seat, you know making a decision to be going, spending another two days scraping data off the Internet, or are you gonna tune this algorithm, tune these parameters to the algorithm and actually make those decisions is actually, um, uh, it, it, it often takes a lot of, um, careful thinking to make the mapping. From the principles we're talking about today, and then probably all of you go, "Yep, that makes sense." But to actually do that when you're in the hot seat making the decisions. That, that, that's something that, um, will often take, take some careful thought, I guess. Um, and I think, uh, uh, you know, for a long time, um, the concepts of machine learning have been an art, right, where, you know, we'll, we'll go to these people that have been doing it for 30 years. And you say, "Hey, my learning algorithm doesn't work," you know, uh, uh, uh, what do we do now? And then they will have some judgment or you go. And people asked me and for some reason because we've done it for a long time, we will say, "Oh yeah, I get more data, or I'll tune that parameter, or try a neural network with big hidden units, and for some reason that'll work. And what I hope to do today is, uh, turn that black magic, that, that, that art into much more refined, so that you can much more systematically make these decisions yourself, rather than, uh, talk to someone, um, who's done this for 30 years, then for, for some reason is able to give you the good recommendations even if, you know, that to turn from, um, more of a black art into more of a systematic engineering discipline. Um, and, and just, uh, uh, one note. 
Uh, some of the work we are gonna do today is not the best approach for, uh, developing novel machine learning research, or if you're- if your main goal is to write research papers, uh, some of what I'll say will apply, some of what I'll say will not apply, but I'll come back to that later. So most of today's focused on how to help you build stuff that works, right, to build, build applications that work. Um, so the three key ideas, um, you'll see today are: first is, uh, diagnostics for debugging learning algorithms. Um, one thing you might not know, or actually if you're working on the class project maybe you know this already is that, uh, when you implement a learning algorithm for the first time, it almost never works, right? At least not the first time. Uh, uh, uh, and so, um, what is it- I still remember was- there was a weekend, um, uh, about a year ago where I implemented Softmax regression on my laptop, and it worked the first time. And even to this day, I, I still remember that feeling of surprise, like, no, there's got to be a bug and I went in to try to find the bug, and there wasn't a bug. But, but it's so rare. [LAUGHTER] So the learning algorithm worked the first time. I still remember it over a year later. Uh, and so a lot of the workflow of developing learning algorithms, it, it actually feels like a debugging workflow, right? Um, and so I want to help you become systematic at that. Um, and, uh, uh, two key ideas here are about error analysis, and ablative analysis. [NOISE] So how to analyze the errors in your learning algorithm. And also how to, how to understand what's not working, which is error analysis, and how to understand what's working, which is ablative analysis. And then, and then finishing with some philosophies on how to get started on a machine learning project, su- such as your class project, okay? So let's start with, uh, discussing debugging learning algorithms. Um, so what happens all the time is you have an idea for a machine learning application. You implement something, uh, and then it won't work as well as you hoped. And the key question is, what do you do next, right? When I work on a machine learning algorithm, that's actually most of my workflow. We usually have something implemented. It's just not working that well. And your ability to decide what to do next has a huge impact on, on, on your efficiency. Um, uh, I, I think, uh, when, when, um, when I was, uh, when I was an undergrad, uh, at Carnegie Mellon University, I had a friend, um, that would, uh, debug their code by, um, you know, they write a piece of code. And then as always, when you write a piece of code, initially there's always a bunch of syntax errors, right? And so their debugging strategy was to delete every single line of code that generated a syntax error because this is a good way to get rid of errors. So that wasn't a good strategy. So in, in, in machine learning as well, there are good and less good debugging strategies, right? Um, so let's start with a motivating example. Uh, let's say we're building an anti-spam classifier. And, um, let's say you've carefully chosen a small set of a hundred words to use as features. So instead of using, you know, 10,000 or 50,000 words, you've chosen a hundred words that you think could be most relevant to, um, anti-spam. And let's say you start off implementing logistic regression with regularization. 
Uh, I think when we talk about this, this is also, you know, there's a frequentist and Bayesian school, but you can think of this as Bayesian logistic regression, where, uh, you have the maximum likelihood term on the left, and then that second term is the regularization term, right? Um, so that's, so that's Bayesian logistic regression if you're Bayesian, or, uh, logistic regression with regularization if you are, uh, you know, using frequentist statistics. [NOISE] And let's say that, um, logistic regression with regularization or Bayesian logistic regression, it gets 20% test error which is unacceptably high, right? Making one in five mistakes on, on your spam filter. Um, and so what do you do next? Um, now, for this scenario, I, I wanna, uh, and, and so, um, for, uh, when you implement an algorithm like this, uh, what many teams will do is, um, try improving the algorithm in different ways. So what many teams would do is say, "Oh yeah, I remember, you know, well, we like big data, more data always helps." So let's get some more data and hope that solves the problem. So one or some teams would say, "Let's get more training examples." And, and, and it's actually true, you know, more data pretty much never hurts. It almost always helps, but the key question is how much. Um, or you could try using a smaller set of features. With a hundred features probably some weren't that relevant. So let's get rid of some features. Um, or you could try having a larger set of features, a hundred features is too small, right? So let's add more features. Um, uh, or you might want other designs of the features, you know, instead of, uh, uh, just using features in an e-mail body, uh, you can use features from the e-mail header. Uh, the e-mail header has, um, uh, not just the From, To, and Subject fields, but also routing information about what's the set of servers of the Internet that the e-mail took to get to you. Um, uh, or you could try running gradient descent for more iterations. That, that, you know, that never hurts, right, usually. Or, uh, from gradient descent, let's switch to Newton's method. Uh, or let's try a different value for Lambda. Um, or, or we say, you know, forget about Bayesian logistic regression or, or logistic regression with regularization. Let's, let's use a totally different algorithm, like an SVM or neural networks or something, right? So what happens in a lot of teams is, um, uh, someone will pick one of these ideas, kind of at random. Um, it depends on, you know, what they happen to read the night before, right, about something. Uh, or, or their experience on the last project. And sometimes you or the project leader will say, uh, you know, we'll pick one of these and just say, "Let's try that." And then spend, spend a few days or few weeks trying that, and it may or may not be the best thing to do. So, um, uh, I think that in, in, in my team's machine learning workflow, so first, if you actually, you and a few others, sit down and brainstorm a list of the things you could try, you actually are, are already ahead of a lot of teams because a lot of teams will just go by gut feeling, right? Um, uh, or the most opinionated person will pick one of these things at random and do that, but if you brainstorm a list of things and then, and then try to evaluate the different options, you're already ahead of many teams. 
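A minimal NumPy sketch of the regularized (Bayesian) logistic regression objective described above: the log-likelihood of the training labels minus an L2 penalty, maximized by gradient ascent. The feature matrix, labels, and learning rate here are placeholders, purely for illustration.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def J(theta, X, y, lam):
        """Log-likelihood of y given X under theta, minus an L2 regularization term."""
        p = np.clip(sigmoid(X @ theta), 1e-12, 1 - 1e-12)        # predicted P(y=1 | x)
        log_lik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        return log_lik - lam * np.sum(theta ** 2)                # this is what we try to maximize

    def ascent_step(theta, X, y, lam, lr=0.01):
        """One gradient-ascent step on J; the log-likelihood gradient is X^T (y - p)."""
        p = sigmoid(X @ theta)
        grad = X.T @ (y - p) - 2 * lam * theta
        return theta + lr * grad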
Um, oh, sorry, and I think, uh, uh, yeah and I think, right, you know, unless you analyze these different options, um, uh, uh, it's hard to know which of these is actually the best option. So, um, the most common diagnostic I end up using in developing learning algorithms, is a, um, bias versus variance diagnostic, right? And I think I, um, talked about bias and variance already: if a classifier is highly biased, then it tends to underfit the data. So high bias is, well, actually. You guys remember this, right? If, um, if you have a dataset just like this, a highly biased classifier may be much too simple, and high variance classifiers may be much too complex, and some- something in-between, you know, will trade off bias and variance in an appropriate way, right? So that's bias and variance. Um, and so, uh, it turns out that one of the most common diagnostics I end up using in pretty much every single machine learning project is a bias versus variance diagnostic. So understand how much of your learning algorithm's problem comes from bias and how much of it comes from variance. Um, and, uh, uh, and, you know, I- I've had, I don't know, like former PhD students, right, that- that learned about bias and variance when they're doing their PhD and then sometimes even a couple of years after they've graduated from Stanford and worked, you know, on more practical problems. They actually tell me that, that, that their understanding of bias and variance continues to deepen, right, for, for, for many years. So this is one of those concepts where, um, if you can systematically apply it, you'll be much more efficient and this is really the, maybe the single most useful tool I've found, understanding bias and variance, for debugging learning algorithms. Um, and so what I'm gonna describe, is a workflow where you would run some diagnostics to figure out what is the problem, uh, and then try to fix what the problem is. And so, um, just to summarize this no- this example. Um, uh, this logistic regression error is unacceptably high and you want to- and you suspect problems due to high variance or high bias. And so, um, it turns out that there's a diagnostic that lets you look at your algorithm's performance and try to figure out, um, how much of the problem is variance and how much of the problem is bias. Oh, and I'm going to say test error, but really I should be doing this with a dev set or development set rather than a test set, right? But so let me, let me explain this, um, uh, diagnostic in greater detail. Uh, so it turns out that, um, if you have a classifier with very high variance, then the performance on the test set- or actually it would be better, better practice to use the hold-out cross validation set, the, the development set- you see that your classifier has, um, much, uh, uh, much lower error on the training set than on the development set. But in contrast, if you have high bias, then the training error and the test set error, or the dev set error, will both be high. So let me sh- let me illustrate this with a picture. Um, so this is a learning curve and what that means is, um, on the horizontal axis, you are going to vary the number of training examples, right? Uh, and when I talked about bias and variance, I had a plot where the horizontal axis was the degree of polynomial, right? You fit a first order, second order, third order, fourth order polynomial. In this plot, the horizontal axis is different, it's the number of training examples. 
And so it turns out that, um, whenever you train a learning algorithm, you know, the more data you have usually, the better your development set error, the better your your test set error, right? This error usually goes down, when you increase the number of training examples. The other thing, the other- and, and let's say that you're hoping to achieve a certain level of desired performance, you know, for business reasons, you'd like your spam classifier to achieve a certain level of design performance and often- sometimes, desired level of performance is, um, to do about as well as a human can. That's a common business objective depending on your application, uh, but sometimes it can be different, right. So you have some- your product manager, you know, tells you that well you, if you're leading the project, you think that you need to hit a certain level of target performance in order for it to be a very useful spam filter. So the other plot, uh, uh, to add to this which will help you analyze bias versus variance is to plot the training error. Um, now one thing to help you with training error is that it increases, um, uh, as the training set size increases because, if you have only one example, right? Let's say you're building a spam classifier and you have only one training example, then any algorithm, you know, can fit one training example perfectly. And so if your training set size is very small. The training set error is usually 0, right? If you have like 5, 10 examples, you probably can fit all 5 examples perfectly. And it's only if you have a bigger training set that it becomes harder for the learning algorithm to fit your training data that well, right? Or in the the linear regression case, here you have you have one example, yeah you can fit a straight line to data, if you have two examples, you can fit any model, pretty much to the data, and have zero training error. There's only a very, very large training set that a classifier like logistic regression or linear regression may have a harder time fitting all of your training examples. So that's why training error or average training error, average over your training set, uh, generally increases, um, as you increase the training set size. So, um, now there are two characteristics of this plot, that suggest that, um, if you plot the learning curves if you see the- this, this pattern, this suggests that, um, the algorithm has a large bias problem, right? And the two properties written at the bottom, one, the weaker signal, the one that's harder to rely on, is that, um, the development set error, or the test set error is still decreasing, as you increase the training set size. So the green curve is still, you know, still looks like it's going down, and so this suggests that if you increase the training set size and extrapolate further to the right, that the curve would keep on going down. Um, this turns out to be a weaker signal because sometimes we look at a curve like that, it's actually quite hard to tell, you know, to extrapolate to the right. Uh, uh, if you double the training set size, how much further would the green curve go down? It's actually kind of hard to tell. So I find this a useful signal, but sometimes it's a bit hard to judge, you know, exactly where the curve will go if you extrapolate to the right. Um, the stronger signal is actually the second one, the fact that there's a huge gap between your training error and your test set error, or your training or your dev set error would be the better thing to look at. 
It's actually a stronger signal that, um, this particular learning algorithm has, um, has high variance right, um, uh, because, uh, as you increase the training set size, you find that the gap between, um, training and test error usually closes, usually reduces. And so there's still a lot of room, for, um, uh, making your test set error become closer to your training error. And so if you see a learning curve like this, this is a strong sign that, um, you have a variance problem, okay? Now let's look at what the curve- what the learning curve will look like, um, if you have a bias problem. Um, so this is a typical learning curve for high bias which is, uh, that's your dev set error or your development set cross-validation error, uh, test error, and you're hoping to hit a level of performance like that, and your training error looks like that. And, um, so one sign that you have a high bias problem is that this algorithm is not even doing that well on the training set, right? Even on the training set, you know, you're not achieving your desired level of performance, and it's like, look learn, i- i- imagine you know, you're, you're looking at learning algorithms and say, it's like this algorithm has seen these examples and even for examples it's seen, it's not doing as well as you were hoping. So clearly the algorithm's not fitting the data well enough. So this is a sign that you have a high bias problem, not enough features, your learning algorithm is too simple. And the other signal is that, um, uh, this is very a small gap between the training and, uh, the test error, right? And you can imagine when you see a plot like this, no matter how much more data you get, right, go ahead and extrapolate to the right, as far as you want, you know. No matter how much more data you get, um, no matter how far you extrapolate to the right of this plot, the gree- the blue curve, the training error, is never going to come back down, to hit the desired level of performance. Uh, and because the test set error is you know generally higher than your training set error, no matter how much more data you have, no matter how far you extrapolate to the right, the error is never going to come down to, to your desired level of performance. So if you get a, um, training error and test error curve that looks like this, you kind of know that, you know, while getting more training data may help, right? The green curve could come down, like a little bit. If you get more training data, uh, the act of getting more training data by itself will never get you to where you want to go. Okay? Um, so let's work through this example. So for each of the four bullets here, um, each of the four- first four ideas fixes either a high variance or a high bias problem, right? So let's, let's go through them and, and ask, uh, for the first one, do you think it, do you think it helps you fix high bias or high variance? [BACKGROUND] High variance, right? Okay. Right. Cool. All right, high variance, right? A- anyone want to say, say- well, great. Anyone want to say why? Yeah, okay. [inaudible] All right, cool, yes, uh, right. Yeah, right. I guess if you're fitting a very high order polynomial that wiggles like this, if you have more data, it will make it- then you won't have these oscillates, so crazy even if you have a higher order polynomial. Right. And, um, if you look at a high variance curve, um, this was- wow, there's a lot of latency, you know. That's all for some reason. Huh. Right, sSo this is a high variance plot. 
Um, and, uh, uh, and if you have a learning algorithm of high variance, you can, hopefully, you know, if you extrapolate to the right, there is some hope that the green curve will keep on coming down. So, so getting more training data if you have high variance, which is if you're in this situation, looks like it could help you- help- it's, it's worth trying, right? I can't guarantee it'll work, but it's worth trying. [inaudible] when you think about these functions, like for certain algorithms [inaudible] uniformly distributed. Oh, I see. Yes. Sorry. That's a good one. So let's see. Um, the curves will look like this assuming that your training data is IID, right? Um, the training and dev and test sets are all drawn from the same distribution. Uh, uh, uh, there is learning theory that suggests that in most cases, the green curve should decay as 1 over square root of m. That's the rate at which it should decay, uh, until, until it reaches some Bayes error. That's what the learning theory says. Does that make sense? Um, and sometime- and, and learning algorithms' errors don't always go to 0, right? Because sometimes, uh, uh, there- sometimes, um, the data is just ambiguous. I don't know, like, uh, I guess, you know, my PhD students, including Annan, we do a lot of work in healthcare. And sometimes when you look at an x-ray, it's just blurry, and you could try to make a diagnosis, right? Is there, is there, uh- or I actually, Annan is working on predicting patients' mortality. What's the chance of someone dying in the next year or so? And sometimes you look at a patient's medical record, and you just can't tell when- what's, you know, will, will they pass away in the next year or so. Or you're looking at an x-ray, you just can't tell is there, is there a tumor or not? Because it's just blurry, and so a learning algorithm's error doesn't always decay to zero, but the theory says that as, as M increases, it will decay at roughly a rate of 1 over square root of M, um, toward that baseline error, which is, which is called Bayes error, which is the best that you could possibly hope anything could do given how blurry the images are, given how noisy the data is, right? All right. Um, sorry, I gave the answer away. [LAUGHTER] Okay. So uh, try a smaller set of features, uh, that fixes a high variance problem. Right? Uh, and one concrete example would be, um, if you have this dataset and you're fitting a, you know, 10th order polynomial and the curve oscillates all over the place, that's high variance. You can say, well, maybe I don't need a 10th order polynomial, maybe I should use, you know, only- Wow, I don't know where my- I'm sorry. I don't know what's going on? [NOISE] Okay. All right. So maybe you say maybe I don't need my features to be all of these things, 10th order polynomial, maybe if this is too high variance, I'm going to get rid of a lot of features and just use, you know, a much smaller number of features. Right? So that fixes, um, uh, high variance. Um, and then if you use a larger set of features [NOISE] [inaudible] , right? Cool. So that's if you're fitting a straight line to the data and it's not doing that well, you can go, "Gee, maybe I should add a quadratic term," just add more features, right? So that fixes bias. And adding e-mail header features. [BACKGROUND] Cool. Yeah. Generally, I would try this if- um, ah, to try to reduce bias. 
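A minimal sketch of the learning-curve diagnostic described above, assuming scikit-learn: train on increasing subsets of the data, then compare training error and dev error at each size. A large, slowly closing gap between dev and train error suggests high variance; both errors sitting well above the target and close together suggests high bias. The model, data arrays, and sizes here are placeholders.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def learning_curve(X_train, y_train, X_dev, y_dev, sizes):
        """Print train/dev error for models fit on the first m training examples."""
        for m in sizes:
            clf = LogisticRegression(C=1.0, max_iter=1000).fit(X_train[:m], y_train[:m])
            train_err = 1.0 - clf.score(X_train[:m], y_train[:m])
            dev_err = 1.0 - clf.score(X_dev, y_dev)
            print(f"m={m:6d}  train error={train_err:.3f}  dev error={dev_err:.3f}")
        # Large dev-minus-train gap -> high variance: more data or fewer features may help.
        # Both errors high and close  -> high bias: more/better features or a richer model may help.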
And so in the workflow of, um, how you develop a learning algorithm, ah, I would recommend that, um, you, ah- so, so one of the things about, um, building learning algorithms, is that, for a new application problem, uh, it's difficult to know in advance, uh, if you're gonna run into a high bias or high variance problem, right? It, it is actually very difficult to know in advance what's gonna go wrong with your learning algorithm. And so the advice I tend to give is, uh, if you're working on a new application, uh, implement a quick and dirty learning algorithm. It, it will have like a quick and dirty implementation of something. So you can run your learning algorithm, uh, just say- start with logistic regression, right? Let's start with something simple. Um, and then run this bias-variance type of analysis, uh, to see, sort of, what went wrong and then use that to decide what to do next. You go to a more complex algorithm, do you try adding more data? Um, the, the one exception to this is if you're working on a domain in which you have a lot of experience, right? Uh, and, and so for example, you know, I've done a lot of work on speech recognition. So because I've done that work, I kinda have a sense of how much data is needed for the application, then, then I might just build something more complicated from the get go. Or, or if you're doing- or if you're working on, say, face recognition and because you've read a lot of research papers, you have a sense of how much data is needed. Then maybe it's worth trying something because you're building on a body of knowledge. Uh, but, but if you're working on something, on a brand new application that you and maybe, you know, no one in the published academic literature has worked on or, or you don't totally trust the published results to be representative of your problem, then I will usually recommend that, um, you implement a- build a quick and dirty implementation, look at the bias and variance of the algorithm, uh, and then use that to better decide what to try next. Right? Um, so I think, uh, bias and variance is, uh, I think, is actua- is really like the single most powerful tool I know, you know, for analyzing the performance of learning algorithms. And I do this pretty much in every single machine learning application. Um, there's one other pattern that I see quite often, which is, um, uh- which, which addresses the second set, which is, um, uh, which is a- which is the optimization algorithm, ah, working. So, so let me, let me explain this with, um, a motivating example, right? So, um, it turns out that when you implement a learning algorithm, uh, you often have a few guesses for what's wrong. And if you can systematically test if that hypothesis is right before you spend a lot of work to try to fix it, then you could be much more efficient. So, uh, let's explain that with a concrete example. So, so you understand those words I just said, maybe they're a little bit abstract, which is, um, let's say that, you know, you tuned your logistic regression algorithm for a while. And lets say logistic regression gets 2%t error on spam e-mail and a 2% error on non-spam, right? And it's okay to have 2% error on spam e-mail, maybe, right? You know, so you, you have to read a little bit of spam e-mail. It's like, that's okay. Uh, but 2% error on non-spam is just not really acceptable because you're losing 1 in 50 important e-mails. 
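A minimal sketch of how the two error rates in this spam scenario could be measured separately, assuming scikit-learn's confusion matrix; the toy labels and predictions are purely illustrative. The point is to track error on spam and error on non-spam as distinct numbers, since the two kinds of mistake have very different costs.

    import numpy as np
    from sklearn.metrics import confusion_matrix

    # Toy dev-set labels and predictions (1 = spam, 0 = non-spam), purely for illustration.
    y_true = np.array([1, 1, 0, 0, 0, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0])

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    error_on_spam = fn / (fn + tp)         # spam that slipped through
    error_on_nonspam = fp / (fp + tn)      # important mail wrongly flagged -- the costly mistake
    print(f"error on spam: {error_on_spam:.2f}, error on non-spam: {error_on_nonspam:.2f}")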
Um, and let's say that, uh, you know, your teammate, right, also try- trains an SVM and they find an SVM using a linear kernel gets 10% error on spam, uh, but 0.01% error on non-spam. All right. And maybe not great, but for this- for purposes of illustration, let's say this is acceptable. Um, but because it turns out logistic regression is more computationally efficient and, and it may be easier to update, right? As you get more examples, you run a few more iterations of gradient descent. Uh, and let's say you want to ship a logistic regression implementation rather than an SVM implementation. Um, so what do you do next? It turns out that, um, one common question you have when training your learning algorithm is, you often wonder, uh, is your, um, optimization algorithm converging? Right? So you know, it's, it's gradient ascent, is it converging? And so one thing you might do is, uh, draw a plot of the training optimization objective, of J of Theta, whatever you are maximizing, or the log likelihood, or whatever, versus the number of iterations. And, um, often the plot will look like that, right? And, you know, the curve is, kind of, going up, but not that fast. And if you train it twice as long or even 10 times as long, will that help? Right? And again, training, training the algorithm for more iterations, it, you know, pretty much never hurts. If, if you regularize the algorithm properly, training the algorithm longer, you know, almo- almost always helps, right? Pretty much never hurts, uh, but is it the right thing to do to go and burn another 48 hours of, you know, CPU or GPU cycles to just train this thing longer and hope it works better? Right? Maybe. Maybe not. Um, so is there a, is there a systematic way to tell- is there a better way, uh, to tell if you should invest a lot more time, um, in running the optimization algorithm? Sometimes it's just hard to tell, right? So, um, now, the other question that you sometimes wonder- so, so a lot of- um, where a lot of this iteration of debugging learning algorithms is looking at what your learning algorithm is doing and just asking yourself what are my guesses for what could be wrong. Uh, and maybe one of your guesses is, well, maybe I'm optimizing the wrong cost function. Right? So, so here is what I mean. Um, what you care about is this, um, weighted accuracy criterion, uh, you know, where, uh, you sum over your dev set or test set, you know, weights on different examples of whether it gets it right, uh, where the weights are higher for non-spam than spam. Because you really want to make sure you label non-spam e-mail correctly, right? So, so maybe that's the weighted accuracy criterion you care about. Uh, but for logistic regression, uh, you are maximizing this cost function, right? Log likelihood minus this regularization term. So you're optimizing J of Theta, when what you actually care about is A of Theta. So maybe you're optimizing the wrong cost function. And then one way to change the cost function would be to fiddle with the parameter Lambda, right? That's one way to change the definition of J of Theta. Um, another way to change J of Theta is to just totally change the cost function you are maximizing, like change it to the SVM objective, right? Or, or- and then part of that also means choosing the appropriate value for C. Okay? And so, um, there is a second diagnostic which, um, I end up using, which I hope can help you tell: is the problem your optimization algorithm? 
Uh, in other words is gradient ascent not converging? Or is the problem that you're just optimizing the wrong function? Right? And, and we'll see two examples of this thing. So this is the first example. Okay? Um, and so here's the diagnostic that can help you figure that out. So just to summarize this scenario- this, um, this, uh, example - this running example we're using, um, the SVM outperforms logistic regression. If you want to deploy logistic regression. Uh, let's say that theta SVM for the parameters learned by SVM. And, and instead of writing the SVM parameters as w and b, I'm just gonna write the linear SVM. SVM linear kernel. You know, using the logistic regression parameterization. Right? So if you have a linear set of parameters. Um, and let's say that theta BLR will be the parameters learned by logistic regression. Right? So I'll, I'll just- yeah, regularized logistic regression or Bayesian logistic regression. So you care about weighted accuracy and, uh, uh, um, uh, and the, the SVM outperforms Bayesian logistic regression. Okay? So this is one- a one-slide summary of where we are in this example. So how can you tell if the problem is your optimization algorithm, uh, meaning that you need to run gradient ascent longer to actually maximize J of Theta. Um, or this- oh, sorry. And then- right. And this is the- what BLR tries to maximize. Right? So, so how do you tell, we have, we've two possible hypotheses you wanna distinguish between. One is that, um, the learning algorithm is not actually finding the value of Theta that maximizes J of Theta. All right? For some reason gradient ascent is not converging. So that would be a problem with the optimization algorithm. That j of Theta that, that, that, you know, uh, for, for the property of the- for the problem to be with the optimization algorithm it means that, if only we could have an algorithm that maximizes j of Theta we would do great. But for some reason gradient ascent isn't doing well. That's one hypothesis. The second hypothesis is that J of Theta is just the wrong function to be optimizing. It is just a bad choice of cost function, that j of Theta is too different from A of Theta, that maximizing J of theta doesn't give you, you know a classifier that does well on A of theta which is what you actually care about. Okay? Any que- so this is a problem setup. Is there any, any que- I wanna make sure people understand this. This is- raise, raise your hand if this makes sense. Most people? Okay. Cool. Almost everyone, okay. Good. Any questions about this problem setup? Why don't you, why don't you [inaudible]. Oh. Uh, thank you. Why not maximize A of Theta directly? Because A of Theta is non-differentiable. So we don't actually have, um, you know there's this indicator function. So it's- we actually don't- we, uh, - it turns out maximizing A of Theta explicitly is NP-hard. Uh, uh, but just- we just don't have great algorithms to try and do, do that. Okay. So it turns out there's a diagnostic you could use to distinguish between these of two- these two different problems. Um, and here's the diagnostic. Which is, check the cost function that logistic regression is trying to maximize. So J. And compute that cost function on the parameters found by the SVM and compute that cost function on the parameters found by Bayesian logistic regression. And just see which, which value is higher. Okay? Um, so there are two cases. Either, this is greater, or this is less than or equal to. Right? They're just two possible cases. 
So what I'm gonna do is go over case one and case two corresponding to this greater than or is less than equal than. Uh, and let's, let's see what that implies. So on the next slide, I'm gonna copy over this equation. Right? That's, that's just a fact that the SVM does better than Bayesian logistic regression on a problem. So on the next I'm gonna copy over this first equation. Um, and then we're gonna consider, you know, these two cases separately. So greater than will be case one and less than or equal to will be case two. Okay? So let me copy over these two equations in the next slide. Right? So that's the first equation that I just copied over here. And that's- this is the greater than, this is case one. Okay? So let's see how to interpret this. Um, in case one, J of theta SVM is greater than J of Theta BLR. Right? Meaning that whatever the SVM was doing, um, it found a value for Theta which we have written as, Theta SVM. And theta SVM has a higher value on the cost function J than theta BLR. But Bayesian logistic regression was trying to maximize J of theta. Right? I mean Bayesian logistic regression is just using gradient ascent to try to maximize J of theta. And so under case one, this shows that whatever the SVM was doing, whatever your buddy implementing SVM did. They managed to find a value for Theta that actually achieves a higher value of J of Theta, than your implementation of Bayesian logistic regression. So this means that Theta BLR fails to maximize the cost function J. And, uh, and the problem is with the optimization algorithm. Okay? So this is case one. Case two, um- again I'm just copying over the first equation. Right? Because this is just part of our analysis. This is part of the problem set up. Uh, then case two is now the second line. It's now a less than or equal sign. Okay? So let's see how to interpret this. Um, so under- if you look at the second equation right? The less than equal to sign. It looks like J did a better job than the SVM maximizing J- excuse me. It looks like Bayesian logistic regression did a better job than the SVM, um, maximizing J of Theta. Right? So, you know, you tell Bayesian logistic regression to maximize J of Theta. And by golly, it found the- it found the value of Theta. That's that- it found a value that achieves a higher value of J of Theta than, than whatever your buddy did using an SVM implementation. So it actually did a good job trying to find a value of Theta that drives up J of Theta as much as possible. But if you look at these two equations in combination what we have is that, um, the SVM does worse on the cost function J. But it does better on the thing you actually care about. A of Theta. So what these two equations in combination tell you is that having the best value- the highest value for J of Theta does not correspond to having the best possible value for A of Theta. So it tells you that maximizing J of Theta doesn't mean you're doing a good job on A of Theta. And therefore, maybe J of Theta is not such a good thing to be maximizing. Because maximizing it, doesn't actually give you the result you ultimately care about. So under case two, um, you can be convinced that j of Theta is just a- i- i- is not the best function to be maximizing. Because getting a high value of J of theta doesn't get you a high value for what you actually care about. And so the problem is with the objective function of the maximization problem. And maybe we should just find a different function to maximize. Okay? So, um, any questions about this? 
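A minimal sketch of the diagnostic just described, under the assumption that you already have the two parameter vectors and a function that evaluates the logistic regression objective J (for instance the J sketched earlier); the names theta_svm and theta_blr are illustrative, and the SVM is assumed to already win on the weighted accuracy you actually care about.

    def optimization_vs_objective(J, theta_svm, theta_blr):
        """Lecture's diagnostic: compare the objective J at the SVM's parameters
        (rewritten in the logistic-regression parameterization) and at BLR's own parameters,
        given that the SVM already outperforms BLR on weighted accuracy a(theta)."""
        if J(theta_svm) > J(theta_blr):
            # Case 1: something else found a higher value of J than gradient ascent did,
            # so gradient ascent is failing to maximize J -- the optimization algorithm is the problem.
            return "optimization algorithm (gradient ascent is not maximizing J)"
        else:
            # Case 2: BLR maximizes J at least as well, yet still loses on weighted accuracy,
            # so maximizing J does not improve what you care about -- the objective is the problem.
            return "objective function (J is the wrong thing to be maximizing)"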
Right, go ahead. If you want to change the cost function in case two, you saw it was the right one. [inaudible] Yeah. Uh, let me come back to that. Yeah. It's a g- a complicated answer. Yeah. All right. Actually, let, let- let's do this first. Um, so, uh, all right. For these four bullets, does it fix the optimization algorithm or does it fix the optimization objective? First one. Does it fix the optimization algorithm or does it fix the optimization objective? Cool. Second one. Ah, I don't know what's wrong with this thing. This is so strange. Okay. All right. Does it fix the optimization algorithm or fix the optimization objective? Optimization algorithm, right? So Newton's method still looks at the same cost function J of Theta but in some cases it just optimizes it much more efficiently. Um, this is a funny one. Usually, you fiddle with lambda, um, to, uh, uh, trade off bias and variance things. Right. That, that this is one way to change the optimization objective. Although uh, uh, uh, usually you change lambda to just bias and variance rather than this. Right? Uh, and then trying to use an SVM, right? Would be one way to totally change the optimization objective. Okay? So, uh, to, to address the question just now. Sometimes we find you have the wrong optimization objective, is that there, there isn't always an obvious thing to do. Right? Sometimes you have to, uh, brainstorm a few ideas. Is that there, there isn't, uh, um, always one obvious thing to try. But at least it tells you that, that category of things of trying out different optimization objectives is what you want. Right? Um, all right. So, um, let's go through a more complex example. They're, they're, you know, incorporate some of these- wow, I don't know what's wrong. I sprayed my laptop. I wonder if my- this is so strange. Let me see what I can do. Yeah. All right. Well. Okay. Let's go for a more complex example, uh, that, that, that will illustrate some of these concepts, uh, that, that we've been going through and, and just let you see another example of these things. Um, uh, oh, and- and I find that, um, one- one thing I've learned as a teacher, you know, one of the ways for you to become good at this, right? Is to go, you know, work in a good AI group for five years, right? Because when you work in a good AI group for some several years, then you have seen, you know, 10 projects, and that lets you gain that experience. But it turns out that it takes, I don't know, depending on what AI group you work on, it- it takes- if you work on a different project every year, then in five years that I guess you work on five projects or something. I- I actually don't know. Or maybe 10 projects or something. But, er, one of the reasons that, um, in, uh, the way I try to explain this, I'm try to go- give specific scenarios with you so that, um, you know, my Ph.D students and I, we spent- actually, we spent like many years working with Stanford Autonomous Helicopter, but I'm trying to distill the key lessons down for you so that you don't need to work on a project for, you know, few years to gain this experience but to give you some approximation to this knowledge in maybe 20 minutes, right? The 20 minutes won't give you the depth of three years of experience but we try to summarize the key lessons so that we can learn from experience that others took years to develop. Um, all right. So, uh, this helicopter actually sits in my office. 
Uh, uh, uh but if you go to my office, uh, uh, and, you know, grab this helicopter, uh, uh, and- and- and we ask you to write a piece of code to make this fly by itself, use the learning algorithm to make this fly by itself. How do you go about doing so? So it turns out a good way to, um, make a helicopter fly by itself is to use, uh, is to do the following. Uh, step one is build a, uh, computer simulator for a helicopter. So, you know, that's actually a simulator, right? Like a video game simulator of a helicopter. Um, the advantage of using, you know, say a video game simulator of a helicopter, is you could try a lot of things, crash a lot in simulation, you know, which is cheap, whereas crashing a helicopter in real life is- is- is- is slightly dangerous and- and- and also, uh, more expensive. Um, uh, but so step one build a simulator of a helicopter. Step two, uh, choose a cost function. And for today, I'm just using a relatively simple cost function which is squared error. So you want the helicopter to fly the position x desired, and your helicopter is there, you know, wandered off to some other place x. So let's use a squared error to penalize it, right? Um, when we talk about reinforcement learning towards the end of this quarter, we'll- we'll actually go through the same example again by using, uh, the reinforcement learning terminology, understand this slightly- this at a slightly deeper level. And we'll go over this exact same example, after you learn about reinforcement learning. But we'll just go over a slightly simplified- very slightly simplified version today. Um, and so, uh, run a reinforcement learning algorithm and what the reinforcement learning algorithm does, is it tries to minimize that cost function J of Theta. Um, and so, uh, you know, and so you learn some set of parameters Theta sub through RL for controlling the helicopter, right? And we'll talk about reinforcement learning, you know, the- the- we'll- you- you'll see all this redone with proper reinforcement learning notation where J is a reward function, Theta Rs is the control policy and so on. But don't worry about that for now. Um, so let's say you do this, and the resulting controller, right? The way you fly the helicopter, it gets much worse performance than a human pilot, you know, so the helicopter wobbles all over the place and doesn't quite stay where you are hoping it will. So what do you do next, right? Well, here are some options, uh, corresponding to the three steps above. You could work on improving your simulator. Um, it turns out even today, you know, we- we- we've had helicopters for what? I don't know- like, uh, uh, I think, uh, we started having a lot of commercial helicopters around the 1950s. You see we have been co- conc- helicopter for many decades now. But airflow around the helicopter is very complicated. And even today, there are actually some, uh, uh, details of how air flows around the helicopter. The- the aerodynamics textbook, you know, that- that even, um, AeroAstro people, right? The experts in AeroAstro cannot fully explain. So helicopters are incredibly complicated. And there's almost unlimited headroom, uh, for building better and more accurate simulations of helicopters. So maybe you wanna do that or maybe you think that cost function is messed up, you know, maybe a squared error isn't the best metric, right? Uh, and- and it turns out, um, the way helicopter- a helicopter has a tail rotor that blows wind to one side, right? 
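Going back to step two for a second, here is a minimal sketch of the squared-error cost just described; the trajectory arrays and their shapes are illustrative assumptions, not the actual helicopter code.
```python
import numpy as np

def squared_error_cost(x_actual, x_desired):
    # J(theta): squared distance between where the helicopter is and where
    # we want it to be, averaged over a trajectory.
    # x_actual, x_desired: arrays of shape (T, 3) of positions over time.
    return np.mean(np.sum((x_actual - x_desired) ** 2, axis=1))
```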
So I guess, uh, because the- the- the main rotor spins in one direction, if it only had a main rotor, then the body will spin in the opposite direction. Er, an equal and opposite reaction within torque, right? So the main rotor spins in one direction. If it only had a main rotor, the rotor on top, and it just spun that, then the body of the helicopter would spin the opposite direction. So that's why you need a tail rotor to blow air down off to one side, to not make it, um, uh, uh, spin in the opposite direction. Uh, but because of that, it turns out the helicopter's staying in place, it's actually tilted slightly to a side. Because a tail rotor blows air in one direction. So it's pushing you off to one side, so you have to tilt your helicopter in the opposite direction. So- so the main rotor blows air to one side, the tail rotor blows air to the other side. So you actually stay in place, right? So a helicopter is actually asymmetric. Lift in birds is not the same. So- so- so because of this comp- complication, maybe squared error isn't the best, um, uh, uh, error because, you know, your- your orientation- your optimal orientation is actually not zero, right? Um, so- so- so maybe you should modify the cost function. Um, or maybe you wanna modify the, um, reinforcement learning algorithm because you secretly suspect that your algorithm is not doing a great job of minimizing that cost function, right? That it's not actually finding the value of Theta that absolutely minimizes J of Theta. So it turns out that, um, uh, each one of these topics can easily be a PhD thesis, right? You can definitely work for six years on any one of these topics. Um, and the problem is, uh, uh, you know, so I- I actually- I actually know someone that wrote their PhD thesis is on, right? Uh, improving helicopter simulator, right? Um, uh, but the problem is maybe a helicopter simulator is good enough. You can spend six years improving your helicopter simulator but will that actually get you the result? And you can write- and you can write a PhD thesis, and you get a PhD doing that maybe. But if your goal is not just to write a PhD thesis, it's actually to make a helicopter fly better. It's actually not- not totally clear, right? If- if that's the key thing for you to spend time on. Um, so what I'd like to do is, uh, describe to you a set of diagnostics that allows you to use this sort of logical step-by-step reasoning to debug which of these three things is what you should actually be spending time on, right? Um, so is it possible for us to come up with a debugging process to logically reason, uh, so as to select one of these things to work on and- and have conviction, and then be relatively confident that this is a useful thing to work on, right? Um, so here's how we're gonna do it. Um, so just to summarize a scenario, right? Um, the controller given by Theta RL performs poorly, right? So, uh, this is how I would reason through a learning algorithm, right? So suppose, uh, suppose all of these things were true, um, suppose that- okay, corresponding to the three steps in the previous slide, suppose the helicopter simulator was accurate and suppose, um, uh, you know, the learning algorithm, uh, correctly, you know, minimizes the cost function and suppose J of Theta is a good cost function, right? If- if all of these things were true, then the learned parameters should fly well on the actual helicopter, right? Um, but it doesn't fly well on a helicopter, so one of these three things is false. 
And our job is to figure out, is- is to identify at least one of these three statements: one, two or three that's false because that- that- that lets you sink your teeth into something that to- to- to work on, right? Um, and I think, uh, uh, um, to make an analogy to more conventional software debugging, if a big complicated program, and for some reason, your program crashes, you're like the core down to whatever, um, if you can isolate this big complicated program into one component that crashes, then you can focus your attention on that component that you know crashes for some reason and try to find the bug there, right? And so instead of trying to look over a huge code base, if you could do binary search or try to isolate the problem in a smaller part of your code base, then you can focus your debugging efforts on that part of your code base, try to figure why it crashes, and then fix that first. And after you fix that, it might still crash, then there may be a second problem to work on but at least you know that, um, trying to fix the first bug seems like, uh, seems with a worthwhile thing to do, okay? So what we're gonna do is, um, uh, come up with a oh, sorry, that's gradient descent, come up with a set of diagnostics to isolate the problem to one of these three components, okay? So the first step is, uh, let's look at, um, how well the algorithm flies in simulation, right? So what I said just now was, uh, you ran the algorithm and it resulted in a set of parameters that doesn't do well on your actual helicopter. So the first thing I will do is just check how well does this thing even do in simulation, right? And, uh, uh, there are two possible cases. Um, if it flies well in simulation but doesn't do well in real life, then it means something's wrong with the simulator, right? It- it means it's actually work- working on the simulator because, you know, if it's already working well in the simulator, I mean what else could you expect to learn the reinforcement learning algorithms to do, right? You know, you told the reinforcement learning algorithm to go and fly well in the simulator because this is just training simulation. It's already doing well in the simulator, so there's not much to improve on there, right? At least, it's hard to improve on that. Uh, but- but- but if- if- if you found a learning algori- if your learning algorithm does well in the simulator but not in real life, then this means that the simulator, um, isn't matching real life well. And so dish- that- that's strong evidence. That's strong grounds for you to spend some time to improve your simulator. Yeah? [inaudible]. Oh, yeah. Uh, right. So to just repeat for the camera, is it ever the case that it flies bad in the simulator but well in real life? I wish that happened. [LAUGHTER] You know, I actually, um, very rarely, I- I think, uh, if that happens I will, I will still work on improving the simulator. Um, uh, so there, there is one scenario where that happens, it turns out that, uh, uh, when we train this helicopter in the simulator or really, any robot in the simulator, we often add a lot of noise to he simulator because one lesson we've learned is that if your simulators is noisy, because simulators are always wrong, right? Any- any digital simulation is only an approximation in the real world. So it turns out we have a lot of noise in all of our simulators, because we think if that the learning algorithm is robust to all this noise you've thrown at it in simulation. 
Then, whatever noise the real world throws at it, it has a bigger chance of being robust to that as well. Um, uh, and so we tend to throw a lot of noise into, into simulators. And so one case where that does happen is when we find we threw too much noise into the simulation, and tha- that might be a sign we should dial back the noise a bit. Um, right, cool. Uh, so, um, yeah, right. So this first diagnostic tells you if you should work on improving the simulation. If there's a big mismatch between simulation performance and real world performance, that's a good sign that, you know, that you should improve the simulation. Second, um, this is actually very similar to the diagnostic we used on the spam, you know, Bayesian logistic regression and SVM example. So what we're gonna do is, um, we're going to measure this equation. And this is, this again, this is very similar to our previous equation, which is, take the cost function, similar to the previous example. Take the cost function J that reinforcement learning was told to minimize, right? That's J, and J of theta was a squared error, right? So take the cost function that, uh, uh, reinforcement learning was told to minimize, and see if the human pilot achieves a better squared error than the reinforcement learning algorithm. So let's measure the human performance on this squared error cost function, um, and see which one does better. So there are two cases: that equation will be either less than, or it will be greater than or equal to, right, so less than, or greater than or equal to. So case one is, um, J of theta human is less than J of theta RL. That would be this case. Then, that tells you that the problem is with the reinforcement learning algorithm, right? That somehow the human achieves a lower squared error, uh, and so, uh, the learning algorithm is not finding the best possible squared error; there is some other controller, as evidenced by whatever the human is doing, that actually achieves a lower cost, right? So in this case, um, we think the learning algorithm or, or reinforcement learning algorithm is not doing a good job minimizing that, and we'll work on the reinforcement learning algorithm. The other case would be if the sign of the inequality is the other way around. Right? Now in this case, um, you can infer that the problem is in the cost function. Because what happens here is, um, the human is flying better than the reinforcement learning algorithm. But the human is achieving what looks like a worse cost than the reinforcement learning algorithm. So what this tells you is that minimizing J of theta does not correspond to flying well. Right? Your learning algorithm achieves a better value for J of theta, you know, J of theta RL is actually smaller than what the human is doing. So the reinforcement learning algorithm, as far as it knows, is doing a great job, because it's finding a value of theta where J of theta is really, really small. But in this last case, um, you know that finding such a small value of J of theta doesn't correspond to flying well, because the human doesn't achieve such a good value on the cost function but the helicopter actually looks better, is flying in a more satisfactory way. And that tells you that this squared error cost function is not the right cost function for, for, for what flying well means, right? And so um, through this set of diagnostics, um, uh, you could decide which one of these three things you should work on.
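Summarizing this second diagnostic as a minimal sketch: J_human and J_rl are assumed to be the same squared-error cost evaluated on the human pilot's trajectory and on the learned controller's trajectory; how those trajectories get logged is left out here.
```python
def human_vs_rl_diagnostic(J_human, J_rl):
    # Premise of the scenario: the human pilot flies better than the RL controller.
    if J_human < J_rl:
        # Some controller (the human's) achieves a lower cost, so the RL
        # algorithm is failing to minimize J -> work on the RL algorithm.
        return "Problem is the reinforcement learning algorithm"
    # The human flies better yet has a higher cost: minimizing J does not
    # correspond to flying well -> work on the cost function.
    return "Problem is the cost function J"
```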
Uh, improving the simulator, improving the RL algorithm, reinforcement learning algorithm or improving the cost function is the thing you should work on. And what happens in- in this particular project and what often happens in machine learning applications is, you run this set of diagnostics and this actually happened when we were working on this helicopter. We ran this set of diagnostics and then one week we were saying, "Yep simulator's got a problem, let's work on that." And then we'd improve the simulator, improve the simulator and after a couple of weeks of work we will run these diagnostics and say, "Oh, looks like the simulator is not good enough." And maybe there's a problem with the RL algorithm, then we'll work on that, work on that and improve that. And after that, after awhile we'll say, "Oh, they'll say that's also good enough and the problem is in the cost function." And sometimes the, the location of the most acute problems shifts right after you've cleared out one set of problems. It might be the case that now the bottleneck is the simulator, right? And so, um, I often use this, uh, workflow to constantly drive prioritization for what to work on next, right? And, and to answer your question just now about how do you find the new cost function? It turns out finding a new cost function is actually not that easy. Uh, so actually one, one of my former PhD students Adam Coates um, through this type of process realized that finding a good cost function is actually really difficult. Uh, because if you want a helicopter to fly and maneuver, you know, like fly at speed and then make a bank turn, right? Like how do you mathematically define what is an accurate bank turn? It's actually really difficult to write down an equation to specify what is a good way of, I will fly in that and do a turn. Or is this, how do you specify what is a good turn? So um, he wound up writing a research paper, uh, one of the best application paper, it won at ICML. Uh-uh on, on how to define a good cost function, it's actually pretty complicated, but the reason he did it and it was a good use of his time was running diagnostics like these which gave us confidence that this was actually a worthwhile problem uh, and the, that resulted in, you know making real progress in optimization, right. Um, any questions about this? All right, cool. Actually, I think I- all right, anyway, all right, fun helicopter videos, I always want to show this, but it's fine. And you guys saw this earlier. All right, so, um, only one time, all right, let's go through this. So, um, uh, in addition to, um, these specific diagnoses of bias versus variance and optimization algorithms versus optimization objective. Um, oh sorry- and when we do RL, I wanted to just go through that example one more time, so you see everything you just saw again, after you learned about reinforcement learning, they tend to squeeze up. Okay. Now, in addition to these type of diagnostics, um, uh, how to debug learning algorithms, um, there's one other set of tools you'll find very useful, which is, uh, error analysis tools, uh, which lets you figure out, which is another way for you to figure out what's working, what's not working, or really what's not working in the learning algorithm. [NOISE] So let's let's go through a motivating example. 
Um, so let's say you're building a, um, uh, you know, uh, like a security system, so when someone walks in front of a door, you unlock the door knob based on whether or not, you know, that person is authorized to enter right that, that place. Um, and so let's say that, uh, uh, so there are a lot of machine learning applications where it's not just one learning algorithm, right? But instead you have a pipeline, you string together many different steps. So how do you actually build a face recognition algorithm? To decide if someone approaching your front door is authorized to unlock the door, all right. Well, here's something you could do which is, uh, you start with a camera image like this, and then, um, you could do preprocessing to remove the background. So all that co- co- complicated color in the background, let's get rid of that. And it turns out that, um, when you have a camera against a static background, right? You could actually do this, you know, with a little bit of noise relatively easily because if you have a fixed camera that's just like mounted, you know, on your door frame, it always sees the same background, and so you can just look at what pixels have changed and- and just keep the pixels that have changed compared to- I mean re- because, you know, this camera always sees that gray background and that, um, brown bench in the back, and so you just look at what pixels have changed a lot and, and this background doesn't really move, right. So this is- this- this is- this is actually feasible by just looking at what pixels have changed and keeping pixels that have changed relative to that. Um, and so, after getting to the background, you could run the face detection algorithm, uh, and then, uh, after detecting the face, it turns out that, uh, actually, you know, I've actually worked with a bunch of face detection, worked with a bunch of face- face recognition systems. It turns out that, um, for some of the leading face recognition systems, so- depends on details, but some of them. Uh, it turns out that, um, the appearance of the eyes is a very important cue for recognizing people, that's why, if you cover your eyes you actually have a much harder time recognizing people, as eyes are very distinct through people. Just segment out the eyes, um, segment out the nose, and the other thing you- segment out the mouth. [LAUGHTER] It's Halloween. [LAUGHTER] All right. And then- and then feed these features into some other algorithms, say logistic regression, that then, you know, finally outputs a label that says, is this the person, right? That- that- that, you know- you know, you're authorized to open the door for. Um, so it- so in many learning algorithms, you have a complicated pipeline like this of different components that, that have to be strung together, and, uh, you know, if you read the newspaper articles about- or if you read research papers in machine learning, often, uh, uh, the, the research papers will say, oh, we built a machine translation system, we've trained a gazillion, you know, of sentences found on the Internet and that's great and a pure end-to-end system, so that's like one learning algorithm that sucks in an input, by sucking an English sentence and spit out the French sentence or something, right? So that's, that's like one learning algorithm. 
It turns out that for a lot of practical applications, if you don't have a gazillion examples, uh, you end up designing much more complex machine learning pipelines like this, where it's not just one monolithic learning algorithm, but instead there are many different smaller components. Um, and I think in, in- uh, uh, I think that, you know, the, the, the, the, um, I think that, uh, having a lot of data's great, all right? I love having more data, but big data has also been a little bit over-hyped, uh, and to model things you could do with small data sets as well. And in the teams [NOISE] I've worked with, we find that if, if, if you have a relatively small dataset, often you can still get great results. You know, my teams often get great results at 100 images, 100 training examples or something. But when you have small data, it often takes more, uh, insightful design of machine learning pipelines like this, right? Um, now, [NOISE] when you have a machine learning pipeline like this, uh, the things you want to do- what you want to do is, uh, so, so you build a pipeline like this and it doesn't work, right? And there's this common workflow. You build a pipe, you build something, it doesn't work, so you want to debug it. So in order to decide which part of the pipeline to work on, um, it's very useful if you can look at your- the error of your system and try to attribute the error to the different components so that you can decide which component to work on next, right? And, and, there's actually a- I'll tell you a true story, you know, remember preprocess background removal step, right? Since you're getting rid of the background, um, it turns out that, uh, there are a lot of details of how to do background removal, uh, for example, um, the simple way to do it is to look at every pixel and just see which pixels have changed, uh, but it turns out that if there's a tree in the background that, you know, waves a little bit because the wind moves the tree and blows the leaves and branches around a little bit, then sometimes the background pixels do change a little bit. And so they're actually really complicated background removal algorithms, they try to model basically the trees and the bushes moving around a little bit in the background, so you know, that even though the pixels of the tree moves around is part of the background, you just get rid of it. So background removal, there's simple versions where you just look at each pixel and see how much it's changed and there's incredibly complicated versions. Um, so I actually know someone, uh, that, uh, uh, was trying to work on a problem like this and they decided to improve their background removal algorithm. Uh, and they actually, er, this real person actually literally wrote a PhD thesis on background removal. Uh, and so I'm glad he got a PhD, but it turn- but, you know, when I look at the problem he was actually trying to solve, I don't think it actually moved the needle, right? So- so, um. Uh, this is one of the nice things about academia, right, guys, so long as, you know, you can- you can still publish a paper. [LAUGHTER] And- and- and that was technically innovative. It was actually a very good technical work. But- but- but- but if- so if your goal is to publish a paper, great, do that, uh, but then if your goal is to build a better face recognition system, then I would carefully ask which components should you actually spend your time to work on, all right? 
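A minimal sketch of the simple version of background removal described above, where you look at each pixel and keep the ones that changed a lot relative to the static reference frame; the threshold value and image shapes are illustrative assumptions, not the actual system.
```python
import numpy as np

def remove_background(frame, reference, threshold=30):
    # frame, reference: uint8 arrays of shape (H, W, 3); the reference is the
    # static background the fixed camera always sees.
    diff = np.abs(frame.astype(int) - reference.astype(int)).sum(axis=2)
    mask = diff > threshold                 # pixels that changed a lot
    return frame * mask[:, :, None]         # zero out the unchanged background
```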
Um, so here's what you can do with error analysis, which is, say your overall system has 85% accuracy. Here's what I would do. I would go in and in your, uh, dev set, in your development set, also called the cross validation set, right, uh, go in and for every one of your examples in the dev set, I would plug in the ground truth for the background. Meaning that, uh, rather than using some, you know, approximate heuristic algorithm for roughly cleaning out the background, which may or may not work out well, I would just use Photoshop. And for every example in the dev set, I would give it the perfect background removal, right? So imagine if instead of some noisy algorithm trying to remove the background, this step of the algorithm just had perfect performance, right? And then you can give it perfect performance on your dev set, on your test set, just by using Photoshop to just tell it this is the background, this is the foreground, right? And let's say that when you plug in this perfect background removal, the accuracy improves to 85.1%. And then you can keep on going from left to right in this pi- pipeline, which is, um, now, instead of using some learning algorithm to do face detection, let's just go in and for the test set, you know, modify, kind of have the face detection algorithm cheat, right? Have it just memorize the right location for the face in the test set and just give it a perfect result in the test set. So when- when I shaded these things, um, that means I'm giving it the perfect result, right? Uh, so let's just go in and on the test set give it the perfect face detection for every single example, an- and then look at the final output and see how that changes the accuracy of the final output, right? And then, same for these components, um, eye segmentation, nose segmentation, mouth segmentation. Uh, and then- and you do these one at a time. And then finally for logistic regression, if you give it the perfect output, your- your- your- your accuracy should be 100%, right? Uh, so now, what you can do is look at the sequence of, um, uh, of steps and see which one gave you the biggest gain. And it looks like, um, in this example, it looks like, um, when you gave it perfect face detection, the accuracy improved from 85.1 to 91%. So, you know, roughly a 6% improvement. And that tells you that, if only you could improve your face detection algorithm, maybe your overall system could get better by as much as 6%. So this gives you faith that, you know, maybe it's worth improving on your face detection component. And in contrast, this tells you that even if you had perfect background removal, it's only 0.1% better, so maybe don't- don't- don't spend too much time on that. Um, and it looks like that, uh, when you gave it perfect eye segmentation, it went up another 4%. So maybe that's another good project to prioritize, right? Um, and if you're in a team, one common structure would be to do this sort of analysis, and then have some people work on face detection, some people work on eye segmentation. You could usually do a few things in parallel if you have a large engineering team. But at least this should give you a sense of the relative prioritization of the different things. Question? [inaudible] Yeah, right. So, do you do this cumulatively, such as give it perfect eye segmentation, then add on top of it perfect nose segmentation, or do you give it perfect eye segmentation and then take that away, and then give it perfect nose segmentation?
Um, the way I presented it here is done cumulatively. Uh, um, and- and it turns out that, uh, let's see. If you give it- once you give it a perfect face, uh, uh, uh, once you give it, you know, perfect things in the later stages, maybe the- the earlier stages doesn't matter that much anymore. So that's one pattern. It turns out that, uh, uh, you could do it either way, right? For the uh, eyes-nose-mouth, you can do it cumulatively or one at a time and you'll probably get relatively similar results. Um, uh, no guarantee, you might get different results in terms of conclusions. But, uh, but I think, to the extent that you are wondering if doing it cumulatively versus non-cumulatively might give you different results, I will just do it both ways. And then- an- and then- and- and I think this, um, error analysis is not a hard mathematical rule, if- if that makes sense. It is not that you do this and then there's a formula that tells you, okay, work on, uh, uh, face detection, right? I think that this should be, um, married with judgments on, you know, how- how hard do you think it is to improve face detection versus eye segmentation, right? But this at least gives you a sense of- of- of- it gives you a sense of prioritization. Um, and it's worth doing this in- in- in multiple ways if- if you think of- if- if- if you're concerned in the discrepancy in the cumulative and non-cumulative versions, all right? Um, so when we have a complex machine learning pipeline, this type of error analysis helps you break down the error, so attribute the error to different components, which lets you focus your attention on what to work on. So if you [inaudible]? Oh, right. Yeah. If you do face detection accurately and then your error drops, what does that entail? Uh, it's not impossible for that to happen, uh, it would be quite rare. Uh, I would, uh, uh, uh- so at a high-level, what I would do is go in and try to figure out what's going on actually. I- I wouldn't ignore that. Uh, uh, so this is something I see. Sometimes a team gets a- discovers a weird phenomenon like that and usually ignore it and move on. I wouldn't do that, I would- it's actually go. Whenever you find one of these weird things, uh, I wouldn't gloss over and ignore it, I would go in and figure what's going on. Does it make sense? It's- it is like debugging a software [NOISE]. You know, if- if you're- if you're trying to debug a piece of software, and if- whenever you move your mouse over, you know, some button, some random pixel color changes, you go, huh, that's weird. And then some people just ignore it and say, "Oh well, the user won't see this." [LAUGHTER] But I'll say no, let's go figure it out. [LAUGHTER] So what you're saying is quite rare but not impossible. But I would- I would, uh, I don't have an easy solution for how to figure out what's going on but I would- I would- wanna figure out what's going on. Um, all right. So one last thing before we break. So error analysis, um, helps figure out the difference between where you are now, 85% overall system accuracy and 100%, right? So it tries to explain difference between where you are and, you know, perfect performance. There's a different type of analysis called ablative analysis which figures out the difference between where you are and something much worse. So- so here's what I mean. Um, er, so let's say that you built, um, let's say you built a good anti-spam classifier by adding lots of clever features in logistic regression, right? 
So, a spelling correction feature, because spam is trying to misspell words to mess up the tokenizer, uh, uh, to make, you know, spammy words not look like spammy words. Uh, sender host features. So, what machine did the e-mail come from? You know, email header features, uh, you could have a parser from NLP to parse the text, uh, use a JavaScript parser to understand the JavaScript, right? Or you can even, uh, uh, uh, fetch the web pages that the e-mail refers to and parse those. Um, and the question is, um, how much do these- these components really help? And it turns out, if you're writing a research paper, you know, sometimes you're writing a research paper and you can say, "Hey. Look, I built a great spam classifier," and that's okay. That's, like, a nice result to have. But if you can explain to your reader, either in a research paper or in a class project report like a term project, what ac- what actually made the difference, that conveys a lot of insight as well. So, um, so simple logistic regression without all these clever features got 94% performance, uh, and with the addition of all these clever features, you got 99%, uh, uh, accuracy. So an ablative analysis, which we'll do, is, um, we remove the components one at a time to see how it breaks, right? So just- so just now, we were adding to the system by making components perfect with error analysis, and seeing how it improves. Here, we're gonna remove things one at a time. I did not mean to remove that [LAUGHTER]. So let me figure out what's going to pop on. All right. We remove things one at a time to see how it breaks. So let's see, we remove spelling correction from the set of features, and see how much the accuracy drops. Then let's remove the sender host features, we remove email header features and so on until, uh, when you remove all of these features, you end up back down there. And again, you could do this cumulatively, or remove one and put it back, remove one and put it back. Uh, uh, you know, or- or you could do it both ways and see if they give you slightly different insights. Uh, and so the conclusion from this particular analysis is that the biggest gap is from the, uh, text parser features, because when you removed those, the accuracy went down by about 4%. And so, you know, there is strong evidence. If you wanna publish a paper, you can say that text parser features significantly improve spam filter accuracy, and that's a useful level of insight. An- and then if you're working on a spam filter for many years, right, you know, there- there are- there are really important applications where sometimes the same team will work on them for many years. So this type of analysis gives you intuition about what's important and what's not, uh, and helps you decide to maybe even double down on text parser features, or maybe, if, uh, the sender host features are too computationally expensive to compute, it tells you maybe you can just get rid of those without too much harm. And also, if you're publishing a paper or writing a report, this gives much more insight to your report. Okay? All right. Um, so that's it for error analysis and ablative analysis. I hope this was useful for your class projects as well. I'll take one last question over there. Uh, how did you choose the order in which to remove the features? Oh. Yeah. Uh, uh, how would you choose the order in which you- no systematic way. If you don't want to do it cumulatively, you do it the other way, the non-cumulative way, where you remove one [NOISE], put it back, remove one, put it back.
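A minimal sketch of ablative analysis as just described, removing feature groups cumulatively and retraining; `train_and_eval` is a hypothetical function standing in for whatever retrains the spam classifier on a given feature set and returns dev-set accuracy.
```python
def ablative_analysis(train_and_eval, all_feature_groups):
    # train_and_eval(feature_groups) -> accuracy on the dev set.
    remaining = list(all_feature_groups)
    baseline = train_and_eval(remaining)        # e.g. ~99% with everything
    drops = []
    for group in all_feature_groups:            # remove cumulatively, one at a time
        remaining.remove(group)
        acc = train_and_eval(remaining)
        drops.append((group, baseline - acc))   # how much accuracy was lost
        baseline = acc
    return drops                                # biggest drop = most valuable component

# e.g. ablative_analysis(train_and_eval,
#        ["spelling correction", "sender host", "email header",
#         "text parser", "javascript parser", "web fetch"])
```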
So either way it works. All right, let's break. Um, uh, and, uh, problem set two is- is due tonight. A friendly reminder, and problem set three will be posted, uh, in the next, like, several tens of minutes. Okay. Thanks everyone. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_4_Perceptron_Generalized_Linear_Model_Stanford_CS229_Machine_Learning_Autumn_2018.txt | Couple of announcements, uh, before we get started. So, uh, first of all, PS1 is out. Uh, problem set 1, um, it is due on the 17th. That's two weeks from today. You have, um, exactly two weeks to work on it. You can take up to, um, two or three late days. I think you can take up to, uh, three late days, um. There is, uh, there's a good amount of programming and a good amount of math you, uh, you need to do. So PS1 needs to be uploaded. Uh, the solutions need to be uploaded to Gradescope. Um, you'll have to make two submissions. One submission will be a PDF file, uh, which you can either, uh, which you can either use a LaTeX template that we provide or you can handwrite it as well but you're strongly encouraged to use the- the LaTeX template. Um, and there is a separate coding assignment, uh, for which you'll have to submit code as a separate, uh, Gradescope assignment. So they're gonna- you're gonna see two assignments in Gradescope. One is for the written part. The other is for the, uh, is for the programming part. Uh, with that, let's- let's jump right into today's topics. So, uh, today, we're gonna cover, uh- briefly we're gonna cover, uh, the perceptron, uh, algorithm. Um, and then, you know, a good chunk of today is gonna be exponential family and, uh, generalized linear models. And, uh, we'll- we'll end it with, uh, softmax regression for multi-class classification. So, uh, perceptron, um, we saw in logistic regression, um. So first of all, the perceptron algorithm, um, I should mention is not something that is widely used in practice. Uh, we study it mostly for, um, historical reasons. And also because it is- it's nice and simple and, you know, it's easy to analyze and, uh, we also have homework questions on it. So, uh, logistic regression. Uh, we saw logistic regression uses, uh, the sigmoid function. Right. So, uh, the logistic regression, uh, using the sigmoid function which, uh, which essentially squeezes the entire real line from minus infinity to infinity between 0 and 1. Um, and - and the 0 and 1 kind of represents, uh, the probability right? Um, you could also think of, uh, a variant of that, uh, which will be, um, like the perceptron where, um. So in- in- in the sigmoid function at, um, at z equals 0- at z equals 0- g of z is a half. And as z tends to minus infinity, g tends to 0 and as z tends to plus infinity, g tends to 1. The perceptron, um, algorithm uses, uh, uh, a somewhat similar but, uh, different, uh, function which, uh, let's say this is z. Right. So, uh, g of z in this case is 1 if z is greater than or equal to 0 and 0 if z is less than 0, right? So you ca- you can think of this as the hard version of- of- of the sigmoid function, right? And this leads to, um, um, this leads to the hypothesis function, uh, here being, uh, h Theta of x is equal to, um, g of Theta transpose x. So, uh, Theta transpose x, um, your Theta has the parameter, x is the, um, x is the input and h Theta of x will be 0 or 1, depending on whether Theta transpose x was less than 0 or- or, uh, greater than 0. And you tol- and, um, similarly in, uh, logistic regression we had a state of x is equal to, um, 1 over 1 plus e to the minus Theta transpose x. Yeah. That's essentially, uh, g of- g of z where g is s, uh, the sigma- sigmoid function. 
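A minimal numpy sketch of the two functions just contrasted, the hard threshold used by the perceptron and the sigmoid used by logistic regression, together with the shared form of the hypothesis h_theta(x) = g(theta^T x); the function names are just for illustration.
```python
import numpy as np

def g_perceptron(z):
    # hard threshold: 1 if z >= 0, else 0
    return np.where(z >= 0, 1.0, 0.0)

def g_sigmoid(z):
    # squeezes the real line into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x, g):
    # hypothesis: h_theta(x) = g(theta^T x)
    return g(np.dot(theta, x))
```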
Um, both of them have a common update rule, uh, which, you know, on the surface looks similar. So Theta j equal to Theta j plus Alpha times y_i minus h Theta of- of x_i times x_ij, right? So the update rules for, um, the perceptron and logistic regression, they look the same except h Theta of x means different things in- in- in- in the two different, um, uh, scenarios. We also saw that it was similar for linear regression as well. And we're gonna see why this- this is, um, you know, that this is actually a- a more common- common theme. So, uh, what's happening here? So, uh, if you inspect this equation, um, to get a better sense of what's happening in- in the perceptron algorithm, this quantity over here is a scalar, right? It's the difference between y_i which can be either 0 and 1 and h Theta of x_i which can either be 0 or 1, right? So when the algorithm makes a prediction of h Theta of- h Theta of x_i for a given x_i, this quantity will either be zero if- if, uh, the algorithm got it right already, right? And it would be either plus 1 or minus 1 if- if y_i- if- if the actual, uh, if the ground truth was plus 1 and the algorithm predicted 0, then it, uh, uh, this will evaluate to 1 if wrong and y_i equals 1 and similarly it is, uh, minus 1 if wrong and y_i is 0. So what's happening here? Um, to see what's- what's- what's happening, uh, it's useful to see this picture, right? So this is the input space, right? And, uh, let's imagine there are two, uh, two classes, boxes and, let's say, circles, right? And you want to learn, I wanna learn an algorithm that can separate these two classes, right? And, uh, if you imagine that the, uh, uh, what- what the algorithm has learned so far is a Theta that represents this decision boundary, right? So this represents, uh, Theta transpose x equals 0. And, uh, anything above is Theta transpose, uh, x is greater than 0. And anything below is Theta transpose x less than 0, all right? And let's say, um, the algorithm is learning one example at a time, and a new example comes in. Uh, and this time it happens to be- the new example happens to be a square, uh, or a box. And, uh, but the algorithm has mis- misclassified it, right? Now, um, this line, the separating boundary, um, if- if- if the vector equivalent of that would be a vector that's normal to the line. So, uh, this was- would be Theta, all right? And this is our new x, right? This is the new x. So this got misclassified, this, uh, uh, this is lying to, you know, lying on the bottom of the decision boundary. So what- what- what's gonna happen here? Um, y_i, let's call this the one class and this is- this is the zero class, right? So y_i minus- h state of i will be plus 1, right? And what the algorithm is doing is, uh, it sets Theta to be Theta plus Alpha times x, right? So this is the old Theta, this is x. Alpha is some small learning rate. So it adds- let me use a different color here. It adds, right, Alpha times x to Theta and now say this is- let's call it Theta prime, is the new vector. That's- that's the updated value, right? And the- and the separating, um, uh, hyperplane corresponding to this is something that is normal to it, right? Yeah. So- so it updated the, um, decision boundary such that x is now included in the positive class, right? The- the, um, idea here- here is that, um, Theta, we want Theta to be similar to x in general, where such- where y is 1. And we want Theta to be not similar to x when y equals 0. 
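Putting the update rule into a complete online training loop, as a minimal sketch; the learning rate, epoch count, and stopping rule here are arbitrary assumptions.
```python
import numpy as np

def perceptron_train(X, y, alpha=0.1, epochs=10):
    # X: (m, n) inputs, y: (m,) labels in {0, 1}; one update per example, online.
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(epochs):
        for i in range(m):
            pred = 1.0 if np.dot(theta, X[i]) >= 0 else 0.0
            # theta_j := theta_j + alpha * (y_i - h_theta(x_i)) * x_ij
            # correctly classified -> (y_i - pred) = 0, theta unchanged
            # misclassified        -> theta moves toward x (+) or away from x (-)
            theta += alpha * (y[i] - pred) * X[i]
    return theta
```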
The reason is, uh, when two vectors are similar, the dot product is positive and they are not similar, the dot product is negative. Uh, what does that mean? If, uh, let's say this is, um, x and let's say you have Theta. If they're kind of, um, pointed outwards, their dot product would be, um, negative. And when- and if you have a Theta that looks like this, we call it Theta prime, then the dot product will be positive if the angle is- is less than r. So, um, this essentially means that as Theta is rotating, the, um, decision boundary is kind of perpendicular to Theta. And you wanna get all the positive x's on one side of the decision boundary. And what's the- what's the, uh, most naive way of- of- of taking Theta and given x, try to make Theta more kind of closer to x? A simple thing is to just add a component of x in that direction. You know, add it here and kind of make Theta. And so this- this is a very common technique used in lots of algorithms where if you add a vector to another vector, you make the second one kind of closer to the first one, essentially. So this is- this is, uh, the perceptron algorithm. Um, you go example by example in an online manner, and if the al- if the, um, example is already classified, you do nothing. You get a 0 over here. If it is misclassified, you either add the- add a small component of, uh, as, uh, you add the vector itself, the example itself to your Theta or you subtract it, depending on the class of the vector. This is about it. Any- any- any questions about the perceptron? Cool. So let's move on to the next topic, um, exponential families. Um, so, um, exponential family is essentially a class of- yeah. Why don't we use them in practice? Um, it's, um, not used in practice because, um, it- it does not have a probabilistic interpretation of what's- what's happening. You kinda have a geometrical feel of what's happening with- with the hyperplane but it- it doesn't have a probabilistic interpretation. Also, um, it's, um, it- it was- and I think the perceptron was, uh, pretty famous in, I think, the 1950s or the '60s where people thought this is a good model of how the brain works. And, uh, I think it was, uh, Marvin Minsky who wrote a paper saying, you know, the perceptron is- is kind of limited because it- it could never classify, uh, points like this. And there is no possible separating boundary that can, you know, do- do something as simple as this. And kind of people lost interest in it, but, um, yeah. And in fact, what- what we see is- is, uh, in logistic regression, it's like a software version of, uh, the perceptron itself in a way. Yeah. [inaudible] It's- it's, uh, it's up to, you know, it's- it's a design choice that you make. What you could do is you can- you can kind of, um, anneal your learning rate with every step, every time, uh, you see a new example decrease your learning rate until something, um, um, until you stop changing, uh, Theta by a lot. You can- you're not guaranteed that you'll- you'll be able to get every example right. For example here, no matter how long you learn you're- you're never gonna, you know, um, uh, find, uh, a learning boundary. So it's- it's up to you when you wanna stop training. Uh, a common thing is to just decrease the learning rate, uh, with every time step until you stop making changes. All right. Let's move on to exponential families. So, uh, exponential families is, uh, is a class of probability distributions, which are somewhat nice mathematically, right? 
Um, they're also very closely related to GLMs, which we will be going over next, right? But first we kind of take a deeper look at, uh, exponential families and, uh, and- and what they're about. So, uh, an exponential family is one, um, whose PDF, right? So whose PDF can be written in the form- by PDF I mean probability density function, but for a discrete, uh, distribution, then it would be the probability mass function, right? Whose PDF can be written in the form, um. All right. This looks pretty scary. Let's- let's- let's kind of, uh, break it down into, you know, what- what- what the pieces actually mean. So y over here is the data, right? And there's a reason why we call it y because- yeah. Can you write a bit larger? A bit larger, sure. Is this better? Yeah. So y is the data. And the reason- there's a reason why we call it y and not x. And that- and that's because we're gonna use exponential families to model the output of your- of- of your data, you know, in a, uh, in a supervised learning setting. Um, and- and you're gonna see x when we move on to GLMs. Until, you know, until then we're just gonna deal with y's for now. Uh, so y is the data. Um, Eta is- is called the natural parameter. T of y is called a sufficient statistic. If you have a statistics background and you've learn- if you've come across the word sufficient statistic before, it's the exact same thing. But you don't need to know much about this because for all the distributions that we're gonna be seeing today, uh, or in this class, t of y will be equal to just y. So you can, you can just replace t of y with y for, um, for all the examples today and in the rest of the class. Uh, b of y is called the base measure. Right, and finally a of Eta is called the log-partition function. And we're gonna be seeing a lot of this function, the log-partition function. Right, so, um, again, y is the data that, uh, this probability distribution is trying to model. Eta is the parameter of the distribution. Um, t of y, which will mostly be just y, um, but technically you know, t of y is more, more correct. Um, um, b of y, which means it is a function of only y. This function cannot involve Eta. All right. And similarly t of y cannot involve Eta. It should be purely a function of y. Um, b of y is called the base measure, and a of Eta, which has to be a function of only Eta and, and constants. No y can, can, uh, can be part of a of, uh, Eta. This is called the log-partition function. Right. And, uh, the reason why this is called the log-partition function is pretty easy to see, because this can be written as b of y, times exp of Eta transpose t of y, over exp of a of Eta. So these two are exactly the same. Um, just take this out and, um, um. Sorry, this should be the log. I think it's fine. These two are exactly the same. And, uh. It should be the [inaudible] and that should be positive. Oh, yeah, you're right. This should be positive, um. Thank you. So, uh, this denominator, um, you can think of this as a normalizing constant of the distribution, such that the, um, the whole thing integrates to 1, right? Um, and, uh, therefore the log of this will be a of Eta; that's why it's just called the log of the partition function. So the partition function is a technical term to indicate the normalizing constant of, uh, probability distributions. Now, um, you can plug in any definition of b, a, and t. Yeah. Sure. So y here, uh, for most of, uh, most of our examples, is going to be a scalar. Eta can be a vector.
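For reference, the form on the board written out cleanly (this is just a restatement of the definition described above; the integral becomes a sum for discrete distributions):
```latex
p(y;\eta) \;=\; b(y)\,\exp\!\big(\eta^{\top} T(y) - a(\eta)\big)
          \;=\; \frac{b(y)\,\exp\!\big(\eta^{\top} T(y)\big)}{\exp\!\big(a(\eta)\big)},
\qquad
a(\eta) \;=\; \log \int b(y)\,\exp\!\big(\eta^{\top} T(y)\big)\,dy .
```
So exp of a of Eta is the normalizing (partition) constant, and a of Eta is its log.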
But we will also be focusing, uh, except maybe in Softmax, um, this would be, uh, a scalar. T of y has to match, so these- the dimension of these two has to match [NOISE]. And these are scalars, right? So for any choice of a, b and t, that you've- that, that, that can be your choice completely. As long as the expression integrates to 1, you have a family in the exponential family, right? Uh, what does that mean? For a specific choice of, say, for, for, for some choice of a, b, and t. This can actually- this will be equal to say the, uh, PDF of the Gaussian, in which case you, you got for that choice of t, a, and, and b, you got the Gaussian distribution. A family of Gaussian distribution such that for any value of the parameter, you get a member of the Gaussian family. All right. And this is mostly, uh, to show that, uh, a distribution is in the exponential family. Um, the most straightforward way to do it is to write out the PDF of the distribution in a form that you know, and just do some algebraic massaging to bring it into this form, right? And then you do a pattern match to, to and, and, you know, conclude that it's a member of the exponential family. So let's do it for a couple of examples. So, uh, we have [NOISE]. So, uh, a Bernoulli distribution is one you use to, uh, model binary data. Right. And it has a parameter, uh, let's call it Phi, which is, you know, the probability of the event happening or not. Right, right. Now, the, uh, what's the PDF of a Bernoulli distribution? One way to, um, write this is Phi of y, times 1 minus Phi, 1 minus y. I think this makes sense. This, this pattern is like, uh, uh, a way of writing a programming- programmatic if else in, in, in math. All right. So whenever y is 1, this term cancels out, so the answer would be Phi. And whenever y is 0 this term cancels out and the answer is 1 minus Phi. So this is just a mathematical way to, to represent an if else that you would do in programming, right. So this is the PDF of, um, a Bernoulli. And our goal is to take this form and massage it into that form, right, and see what, what the individual t, b, and a turn out to be, right. So, uh, whenever you, you, uh, see your distribution in this form, a common, um, technique is to wrap this with a log and then Exp. Right, um, because these two cancel out so, uh, this is actually exactly equal to this [NOISE]. And, uh, if you, uh, do some more algebra and this, uh, we will see that, this turns out to be Exp of log Phi over 1 minus Phi times y, plus log of 1 minus Phi, right? It's pretty straightforward to go from here to here. Um, I'll, I'll let you guys,uh, uh, verify it yourself. But once we have it in this form, um, it's easy to kind of start doing some pattern matching, from this expression to, uh, that expression. So what, what we see, um, here is, uh, the base measure b of y is equal to. If you match this with that, b of y will be just 1. Uh, because there's no b of y term here. All right. And, um, so this would be b of y. This would be Eta. This would be t of y. This would be a of Eta, right? So that could be, uh, um, you can see that the kind of matching pattern. So b of y would be 1. T of y is just y, as, um, as expected. Um, so Eta is equal to log Phi over 1 minus Phi. And, uh, this is an equivalent statement is to invert this operation and say Phi is equal to 1 over 1 plus e to the minus Eta. I'm just flipping the operation from, uh, this went from Phi to Eta here. It's, it's, it's the equivalent. Now, here it goes from Eta to Phi, right? 
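The algebra being skipped on the board, written out (this just restates the steps above):
```latex
p(y;\phi) \;=\; \phi^{\,y}(1-\phi)^{1-y}
          \;=\; \exp\!\big(y\log\phi + (1-y)\log(1-\phi)\big)
          \;=\; \exp\!\Big(\underbrace{\log\tfrac{\phi}{1-\phi}}_{\eta}\, y \;+\; \log(1-\phi)\Big),
\qquad\text{so}\quad b(y)=1,\;\; T(y)=y,\;\; \eta=\log\tfrac{\phi}{1-\phi},\;\; \phi=\frac{1}{1+e^{-\eta}} .
```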
And a of Eta is going to be, um- so here we have it as a function of Phi, but we got an expression for Phi in terms of Eta, so you can plug this expression in here, with, uh, a change of minus sign. So, so, let, let me work out this: minus log of 1 minus Phi. This is, uh, just, uh, the pattern matching there. And minus log of 1 minus, this thing here, 1 over 1 plus e to the minus Eta. The reason is because we want an expression in terms of Eta. Here we got it in terms of Phi, but we need to, uh, plug in, um, plug in Eta over here. Uh, and this will just be, uh, log of 1 plus e to the Eta. Right. So there you go. So this, this kind of, uh, verifies that the Bernoulli distribution is a member of the exponential family. Any questions here? So note that this may look familiar. It looks like the, uh, sigmoid function, somewhat like the sigmoid function, and that's actually no accident. We'll see, uh, why, why it's, uh, actually the sigmoid- how, how it kind of relates to, uh, logistic regression in a minute. So another example, um [NOISE]. So, uh, a Gaussian with fixed variance. Right, so, um, a Gaussian distribution, um, has two parameters, the mean and the variance. Uh, for our purposes we're gonna assume a constant variance. Um, you- you can, uh, have, um, you can also consider Gaussians where the variance is also a variable, but for- for, uh, our course we are go- we are only interested in, um, Gaussians with fixed variance, and we are going to assume, assume that variance is equal to 1. So, this gives the PDF of a Gaussian to look like this, p of y parameterized by mu. So note here, when we start writing out, we start with the, uh, parameters that we are, um, commonly used to, and we- they are also called the canonical parameters. And then we set up a link between the canonical parameters and the natural parameters; that's part of the massaging exercise that we do. So we're going to start with the canonical parameters: p of y is equal to 1 over root 2 pi, e to the minus y minus mu squared over 2. So this is the Gaussian PDF with, um, with- with a variance equal to 1, right, and this can be rewritten as- again, I'm skipping a few algebra steps, you know, straightforward, no tricks there, uh, any question? Yep? [BACKGROUND]. Fixed variance. 1 over root 2 pi, e to the minus y squared over 2, times exp of mu y minus mu squared over 2. Again, we go through the same exercise, you know, pattern match: this is b of y, this is eta, this is t of y, and this would be a of eta, right? So, uh, we have, uh, b of y equals 1 over root 2 pi, e to the minus y squared over 2. Note that this is a function of only y, there's no eta here. Um, t of y is just y, and in this case, the natural parameter is- is mu, eta is mu, and the log partition function is equal to mu squared over 2, and when we- and we repeat the same exercise we did here, we start with a log partition function that is parameterized by the canonical parameters, and we use the, the link between the canonical and, and, uh, the natural parameters, invert it, and, um, um, so in this case it's- it's the- it's just the same thing, eta squared over 2. So, a of eta is a function of only eta, again here a of eta was a function of only eta, and, um, t of y is a function of only y, and b of y is a function of only, um, y as well. Any questions on this? Yeah. If the variance is unknown [inaudible].
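And the fixed-variance Gaussian written out the same way (again just a restatement of the steps above):
```latex
p(y;\mu) \;=\; \frac{1}{\sqrt{2\pi}}\exp\!\Big(-\tfrac{(y-\mu)^2}{2}\Big)
         \;=\; \underbrace{\tfrac{1}{\sqrt{2\pi}}\exp\!\big(-\tfrac{y^2}{2}\big)}_{b(y)}\,
               \exp\!\Big(\underbrace{\mu}_{\eta}\, y \;-\; \underbrace{\tfrac{\mu^2}{2}}_{a(\eta)\,=\,\eta^2/2}\Big),
```
while for the Bernoulli above, a of Eta works out to minus log of 1 minus Phi, which is log of 1 plus e to the Eta.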
Yeah, you- if, if the variance is unknown you can write it as an exponential family in which case eta will now be a vector, it won't be a scalar anymore, it'll be- it'll have two, uh, like eta1 and eta2, and you will also have, um, you will have a mapping between each of the canonical parameters and each of the natural parameters, you, you can do it, uh, you know, it's pretty straightforward. Right, so this is- this is exponential- these are exponential families, right? Uh, the reason why we are, uh, why we use exponential families is because it has some nice mathematical properties, right? So, uh, so one property is now, uh, if we perform maximum likelihood on, um, on the exponential family, um, as, as, uh, when, when the exponential family is parameterized in the natural parameters, then, uh, the optimization problem is concave. So MLE with respect to eta is concave. Similarly, if you, uh, flip this sign and use the, the, uh, what's called the negative log-likelihood, so you take the log of the expression negate it and in this case, the negative log-likelihood is like the cost function equivalent of doing maximum likelihood, so you're just flipping the sign, instead of maximizing, you minimize the negative log likelihood, so-and, and you know, uh, the NLL is therefore convex, okay. Um, the expectation of y. What does this mean? Um, each of the distribution, uh, we start with, uh, a of eta, differentiate this with respect to eta, the log partition function with respect to eta, and you get another function with respect to eta, and that function will- is, is the mean of the distribution as parameterized by eta, and similarly the variance of y parameterized by eta, is just the second derivative, this was the first derivative, this is the second derivative, this is eta. So, um, the reason why this is nice is because in general for probability distributions to calculate the mean and the variance, you generally need to integrate something, but over here you just need to differentiate, which is a lot easier operation, all right? And, um, and you will be proving these properties in your first homework. You're provided hints so it should be [LAUGHTER]. All right, so, um, now we're going to move on to, uh, generalized linear models, uh, this- this is all we wanna talk about exponential families, any questions? Yep. [inaudible]. Exactly, so, ah, if you're-if you're, um, if you're- if it's a multi-variate Gaussian, then this eta would be a vector, and this would be the Hessian. All right, let's move on to, uh, GLM's. So the GLM is, is, um, somewhat like a natural extension of the exponential families to include, um, include covariates or include your input features in some way, right. So over here, uh, we are only dealing with, uh, in, in the exponential families, you're only dealing with like the y, uh, which in, in our case, it- it'll kind of map to the outputs, um. But, um, we can actually build a lot of many powerful models by, by choosing, uh, an appropriate, um, um, family in the exponential family and kind of plugging it onto a, a linear model. So, so the, uh, assumptions we are going to make for GLM is that one, um, so these are the assumptions or design choices that are gonna take us from exponential families to, uh, generalized linear models. So the most important assumption is that, uh, well, yeah. Assumption is that y given x parameterized by Theta is a member of an exponential family. Right. By exponential family of Theta, I mean that form. 
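Before going further into the GLM assumptions, here is a quick numerical sanity check (not a proof) of the mean and variance properties mentioned a moment ago, for the Bernoulli case where a of eta is log of 1 plus e to the eta; the finite-difference step size is an arbitrary choice.
```python
import numpy as np

def a(eta):
    # log-partition function of the Bernoulli family
    return np.log(1.0 + np.exp(eta))

eta = 0.7
h = 1e-5
phi = 1.0 / (1.0 + np.exp(-eta))                 # canonical parameter

mean_from_a = (a(eta + h) - a(eta - h)) / (2 * h)            # approximates a'(eta)
var_from_a  = (a(eta + h) - 2 * a(eta) + a(eta - h)) / h**2  # approximates a''(eta)

print(mean_from_a, phi)              # both ~ E[y] = phi
print(var_from_a,  phi * (1 - phi))  # both ~ Var[y] = phi * (1 - phi)
```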
It could, it could, uh, in, in the particular, uh, uh, uh, scenario that you have, it could take on any one of these, um, uh, distributions. Um, we only, we only, uh, talked about the Bernoullian Gaussian. There are also, um, other distributions that are- those are part of the, uh, exponential family. For example, um, I forgot to mention this. So if you have, uh, real value data, you use a Gaussian. If you have binary, a Bernoulli. If you have count, uh, like, counts here. And so this is a real value. It can take any value between zero and infinity by count. That means just non-negative integers, uh, but not anything between it. So if you have counts, you can use a Poisson. If you have uh, positive real value integers like say, the volume of some object or a time to an event which, you know, um, that you are only predicting into the future. So here, you can use, uh, like Gamma or exponential. So, um, so there is the exponential family, and there is also a distribution called the exponential distribution, which are, you know, two distinct things. The exponential distribution happens to be a member of the exponential family as well, but no, they're not the same thing. Um, the exponential and, um, yeah, and you can also have, um, you can also have probability distributions over probability distributions. Like, uh, the Beta, the Dirichlet. These mostly show up in Bayesian machine learning or Bayesian statistics. Right. So depending on the kind of data that you have, if your y-variable is, is, is if you're trying to do a regression, then your y is going to be say, say a Gaussian. If you're trying to do a classification, then your y is, and if it's a binary classification, then the exponential family would be Bernoulli. So depending on the problem that you have, you can choose any member of the exponential family, um, as, as parameterized by Eta. And so that's the first assumption. That y conditioned on y given x is a member of the exponential family. And the, uh, second, the design choice that we are making here is that Eta is equal to Theta transpose x. So this is where your x now comes into the picture. Right. So Theta is, um, is in Rn, and x is also in Rn. Now, this n has nothing to do with anything in the exponential family. It's purely, um, a dimensions of your of, of your data that you have, of the x's of your inputs, and, and this does not show up anywhere else. And that, that- that's, um. And, and, uh, Eta is, is, uh, we, we make a design choice that Eta will be Theta transpose- transpose x. Um, and another kind of assumption is that at test time, um, right. When we want an output for a new x, given a new x, we want to make an output, right. So the output will be, right. So given an x and, um, given an x, we get, uh, an exponential family distribution, right. And the mean of that distribution will be the prediction that we make for a given, for a given x. Um, this may sound a little abstract, but, you know, uh, we're going to make this, uh, uh, more clear. So this- what this essentially means is that the hypothesis function is actually just, right. This is our hypothesis function. And we will see that, you know, what we do over here, if you plug in the, uh, um, exponential family, uh, as, as Gaussian, then the hypothesis will be the same, you know, Gaussian hypothesis that we saw in linear regression. If we plug in a Bernoulli, then this will turn out to be the same hypothesis that we saw in logistic regression, and so on, right. So, uh, one way to kind of, um, visualize this is, right. 
So one way to think of is, of- if this is, there is a model and there is a distribution, right. So the model we are assuming it to be a linear model, right. Given x, there is a learnable parameter Theta, and Theta transpose x will give you a parameter, right. This is the model, and here is the distribution. Now, the distribution, um, is a member of the exponential family. And the parameter for this distribution is the output of the linear model, right. This, this is the picture you want to have in your mind. And the exponential family, we make, uh, depending on the data that we have. Whether it's a, you know, whether it's, uh, a classification problem or a regression problem or a time to vent problem, you would choose an appropriate b, a and t, uh, based on the distribution of your choice, right. So this entire thing, uh, a-and from this, you can say, uh, get the, uh, expectation of y given Eta. And this is the same as expectation of y given Theta transpose x, right. And this is essentially our hypothesis function. Right. Yep. [BACKGROUND] That's exactly right. Uh, so, uh, so the question is, um, are we training Theta to, uh, uh, um, to predict the parameter of the, um, exponential family distribution whose mean is, um, the, uh, uh, uh, prediction that we're gonna make for y. That's, that's correct, right. And, um, so this is what we do at test time, right. And during training time, how do we train this model? So in this model, the parameter that we are learning by doing gradient descent, are these parameters, right. So you're not learning any the parameters in the, uh, in the, uh, uh, exponential family. We're not learning Mu or Sigma square or, or Eta. We are not learning those. We're learning Theta that's part of the model, and not part of, uh, the distribution. And the output of this will become the, um, the distributions parameter. It's unfortunate that we use the word parameter for this and that, but, uh, there, there are- it's important to understand what, what is being learned during training phase and, and, and what's not. So this parameter is the output of a function. It's not, it's not a variable that we, that we, uh, do gradient descent on. So during learning, what we do is maximum likelihood. Maximize with respect to Theta of P of y i given, right. So you're doing gradient ascent on the log probability of, of y where, um, the, the, um, natural parameter was re-parameterized, uh, with the linear model, right. And we are doing gradient ascent by taking gradients on Theta, right. Thi-this is like the big picture of what's happening with GLMs, and how they kind of, yeah, are an extension of exponential families. You re-parameterize the parameters with the linear model, and you get a GLM. [NOISE]. So let's, let's look at, uh, some more detail on what happens at train time. [NOISE] So another, um, kind of incidental benefit of using, uh, uh, GLMs is that at train time, we saw that you wanna do, um, maximum likelihood on the log prob- using the log probability with respect to Thetas, right? Now, um, at first it may appear that, you know, we need to do some more algebra, uh, figure out what the expression for, you know, P is, um, represented in the- in- in- as a function of Theta transpose x and take the derivatives and, you know, come up with a gradient update rule and so on. But it turns out that, uh, no matter which- uh, what kind of GLM you're doing, no matter which choice of distribution that you make, the learning update rule is the same. 
[NOISE] The learning update rule is Theta_j := Theta_j plus Alpha times (y_i minus h_Theta of x_i) times x_i,j. You guys have seen this so many times by now. So you can straight away just apply this learning rule without ever having to do any more algebra to figure out what the gradients are or what the loss is. You can go straight to the update rule and do your learning. You plug in the appropriate h_Theta of x depending on the choice of distribution that you make, initialize your Theta to some random values, and you can start learning. So, any question on this? Yeah. [inaudible] If you want to do it as batch gradient descent, then you just sum over all your examples. [inaudible] Yeah. So Newton's method is probably the most common one you would use with GLMs, and that again comes with the assumption that the dimensionality of your data is not extremely high. As long as the number of features is less than a few thousand, then you can do Newton's method. Any other questions? Good. So this is the same update rule for any specific type of GLM, based on the choice of distribution that you have. Whether you're doing classification, whether you're doing regression, whether you're doing a Poisson regression, the update rule is the same. You just plug in a different h_Theta of x and you get your learning rule. Some more terminology. So Eta is what we call the natural parameter. [NOISE] And the function g that links the natural parameter to the mean of the distribution has a name: it's called the canonical response function. Right. Let's call the mean of the distribution Mu. Similarly, you can go from Mu back to Eta with the inverse of this, g inverse, and that's called the canonical link function. That's some terminology. We also already saw that g of Eta is equal to the gradient of the log partition function with respect to Eta; so as a side note, g(eta) = d a(eta) / d eta. [NOISE] Right. And it's also helpful to make explicit the distinction between the three different kinds of parameterizations we have. So we have three parameterizations: the model parameters, that's Theta; the natural parameters, that's Eta; and the canonical parameters - Phi for Bernoulli, Mu and Sigma squared for Gaussian, Lambda for Poisson. Right. So these are three different ways we can parameterize either the exponential family or the GLM. And whenever we are learning a GLM, it is only the first of these that we learn - the Theta in the linear model. That is the Theta that is learned. Right. And the connection between the model parameters and the natural parameter is linear: Theta transpose x gives you the natural parameter. This is the design choice that we're making - we choose to reparameterize Eta by a model that is linear in your data. And between the natural and the canonical parameters, you have g to go one way and g inverse to come back the other way, where g is also the derivative of the log partition function. So yeah, it's important to keep this straight.
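A minimal sketch of the single update rule from the top of this passage: the same stochastic step works for any GLM once you plug in the right hypothesis h_theta(x) = g(theta^T x). The function and variable names here are my own, not from the lecture.

```python
import numpy as np

def glm_sgd_step(theta, x, y, alpha, response_fn):
    """One stochastic step theta_j := theta_j + alpha * (y - h_theta(x)) * x_j."""
    h = response_fn(theta @ x)              # h_theta(x) = canonical response applied to theta^T x
    return theta + alpha * (y - h) * x

identity = lambda z: z                      # Gaussian case -> linear regression
sigmoid  = lambda z: 1 / (1 + np.exp(-z))   # Bernoulli case -> logistic regression

x, y = np.array([1.0, 2.0, -1.0]), 1.0
theta_lin = glm_sgd_step(np.zeros(3), x, y, alpha=0.1, response_fn=identity)  # linear-regression step
theta_log = glm_sgd_step(np.zeros(3), x, y, alpha=0.1, response_fn=sigmoid)   # logistic-regression step
```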
It can get pretty confusing when you're seeing this for the first time, because you have so many parameters being swapped around and reparameterized. There are three different ways in which we are parameterizing generalized linear models: the model parameters, which are the ones that we learn; the natural parameter for the exponential family, which is the output of the linear model; and then you can do some algebraic manipulation and get the canonical parameters for the distribution that we are choosing, depending on the task - whether it's classification or regression. [NOISE] Any questions on this? [NOISE] So now you can actually see what's going on when you're doing logistic regression, [NOISE] right? So h_Theta of x is the expected value of y conditioned on x and parameterized by Theta, [NOISE] and this is equal to Phi, right? Because here the choice of distribution is a Bernoulli, and the mean of a Bernoulli distribution is just Phi in the canonical parameter space. And if we write Phi in terms of Eta, that's 1 over (1 plus e to the minus Eta), which is equal to 1 over (1 plus e to the minus Theta transpose x), right? So the logistic function - which, when we introduced logistic regression, we just pulled out of thin air and said, hey, this is something that can squash minus infinity to infinity down to between 0 and 1, seems like a good choice - now we see that it is a natural outcome. It just pops out from this more elegant generalized linear model: if you choose Bernoulli to be the distribution of your output, then logistic regression just pops out naturally. [NOISE] So, any questions? Yeah. Maybe you can speak a little bit more about choosing a distribution for the output. Yeah. So the choice of what distribution you are going to use is really dependent on the task that you have. If your task is regression, where you want to output real-valued numbers like the price of a house or something, then you choose a distribution over the real numbers, like a Gaussian. If your task is classification, where your output is binary 0 or 1, you choose a distribution that models binary data. Right? So the task in a way influences you to pick the distribution, and most of the time that choice is pretty obvious. [NOISE] If you want to model the number of visitors to a website, which is a count, you want to use a Poisson distribution, because the Poisson is a distribution over non-negative integers. So the task pretty much tells you what distribution you want to choose, and then you go through this machinery of figuring out what h_Theta of x is, you plug that h_Theta of x into the update rule, and you have your learning rule. Any more questions? So we made some assumptions - these assumptions - and it's also helpful to get a visualization of what these assumptions actually mean, right? [NOISE] So to expand upon your point: if you think of the question, "Are GLMs used for classification, or are they used for regression, or are they used for, you know, something else?"
The answer really depends on what is the choice of distribution that you're gonna choose, you know. GLMs are just a general way to model data, and that data could be, you know, um, binary, it could be real value. And- and, uh, as long as you have a distribution that can model, ah, that kind of data, and falls in the exponential family, it can be just plugged into a GLM and everything just, uh, uh, uh works out nicely. Right. So, uh, [NOISE] so the assumptions that we made. Let, uh, let's start with regression, [NOISE] right? So for regression, we assume there is some X. Uh, to simplify I'm, um, I'm drawing X as one dimension but, you know, X could be multi-dimensional. And there exists a theta, right? And theta transpose X would- would be some linear, um, um, some linear, uh, uh, uh, hyperplane. And this, we assume is Eta, right? And in case of regression Eta was also Mu. So Eta was also Mu, right? Um, and then we are assuming that the Y, for any given X, is distributed as a Gaussian with Mu as the mean. So which means, for every X, every possible X, you have the appropriate, uh, um, um, Eta. And with this as the mean, let's- let's think of this as Y. So that is, uh, a Gaussian distribution at every possible- we assume a variance of 1. So this is like, uh, a Gaussian with standard deviation or variance equal to 1, right? So for every possible X, there is a Y given X, um, which is parameterized by- by- by theta transpose X as- as the mean, right? And you assume that your data is generated from this process, right? So what does it mean? It means, um, you're given X, and let's- let's say this is Y. So you would have examples in your training set that- that may look like this, right? The assumption here is that, for every X there is, um, um- let's say for this particular value of X, um, there was a Gaussian distribution that started from the mean over here. And from this Gaussian distribution this value was sampled, right? You're - you're- you're- you're just sampling it from- from the distribution. Now, the, um- this is how your data is generated. Again, this is our assumption, [NOISE] right? Now that- now based on these assumptions what we are doing with the GLM is we start with the data. We don't know anything else. We make an assumption that there is some linear model from which the data was-was- was- was generated in this format. And we want to work backwards, right, to find theta that will give us this line, right? So for different choice of theta we get a different line, right? We assume that, you know, if -if that line represents the- the Mu's, or the means of the Y's for that particular X, uh, from which it's sampled from, we are trying to find a line, [NOISE] ah, which is- which will be like your theta transpose X from which these Y's are most likely to have sampled. That's- that's essentially what's happening when you do maximum likelihood with- with -with the GLM, right? Ah, similarly, um, [NOISE] Similarly for, um, classification, again let's assume there's an x, right? And there are some Theta transpose x, right? And, uh, and this Theta transpose x is equal- is Eta. We assign this to be Eta, right? And this Eta is, uh, from this Eta, we- we run this through the sigmoid function, uh, 1 over 1 plus e to the minus Eta to get Phi, right? So if these are the Etas for each, um, for each Eta we run it through the sigmoid and we get something like this, right? So this tends to, uh, 1. This tends to 0. And, um, when- at this point when Eta is 0, the sigmoid is- is 0.5. This is 0.5, right? 
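Stepping back for a second to the regression picture just described (the classification picture continues right after this): a small sketch of that generative story, with a made-up theta and a 1-D x for easy plotting - for each x, eta = theta^T x is the mean mu, and y is drawn from a Gaussian with that mean and variance 1. This is my own illustration, not course code.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([0.5, 1.2])          # [intercept, slope], chosen arbitrarily

x = rng.uniform(-3, 3, size=50)
X = np.column_stack([np.ones_like(x), x])  # add an intercept feature
mu = X @ theta_true                        # eta = mu = theta^T x for each example
y = rng.normal(loc=mu, scale=1.0)          # y | x ~ N(mu, 1), sampled per example
```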
And now, um, at each point- at- at- at any given choice of x, we have a probability distribution. In this case, it's- it's a- it's a binary. So let's assume probability of y is the height to the sigmoid line and here it is low. Um, right. Every x we have a different, uh, Bernoulli distribution essentially, um, that's obtained where, you know, the- the probability of y is- is the height to the, uh, uh, sigmoid through the natural parameter. And from this, you have a data generating distribution that would look like this. So x and, uh, you have a few xs in your training set. And for those xs, you calc- you- you figure out what your, you know, y distribution is and sample from it. So let's say- right. And now, um, again our goal is to stop- given- given this data, so- so over here this is the x and this is y. So this is- these are points for which y is 0. These are points for which y is 1. And so given- given this data, we wanna work backwards to find out, uh, what Theta was. What's the Theta that would have resulted in a sigmoid like curve from which these- these y's were most likely to have been sampled? That's- and- and figuring out that y is- is- is essentially doing logistic regression. Any questions? All right. So in the last 10 minutes or so, we will, uh, go over softmax regression. So softmax regression is, um, so in the lecture notes, softmax regression is, uh, explained as, uh, as yet another member of the GLM family. Uh, however, in- in- in today's lecture we'll be taking a non-GLM approach and kind of, um, seeing- and- and see how softmax is- is essentially doing, uh, what's also called as cross entropy minimization. We'll end up with the same- same formulas and equations. You can- you can go through the GLM interpretation in the notes. It's a little messy to kind of do it on the whiteboard. So, um, whereas this has- has- has a nicer, um, um, interpretation. Um, and it's good to kind of get this cross entropy interpretation as well. So, uh, let's assume- so here we are talking about multiclass classification. So let's assume we have three cat- three, uh, classes of data. Let's call them circles, um, squares, and say triangles. Now, uh, if- here and this is x1 and x2. We're just- we're just visualizing your input space and the output space, y is kind of implicit in the shape of this, so, um, um. So, um, in- in, um, in multicl- multiclass classification, our goal is to start from this data and learn a model that can, given a new data point, you know, make a prediction of whether this point is a circle, square or a triangle, right? Uh, you're just looking at three because it's easy to visualize but this can work over thousands of classes. And, um, so what we have is so you have x_is in R_n. All right. So the label y is, uh, is 0, 1_k. So k is the number of classes, right? So the labels y is- is- is a one-hot vector. What would you call it as a one-hot vector? Where it's a vector which indicates which class the, uh, x corresponds to. So each- each element in the vector, uh, corresponds to one of the classes. So this may correspond to the triangle class, circle class, square class or maybe something else. Uh, so the labels are, uh, in this one-hot vector where we have a vector that's filled with 0s except with a 1 in one of the places, right? And- and- and- and the way we're gonna- the way we're gonna, uh, um, think of softmax regression is that each class has its- its own set of parameters. So we have, uh, Theta class, right, in R_n. 
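And the matching sketch for the classification story just described: eta = theta^T x is pushed through the sigmoid to get phi, and each label is a Bernoulli draw with that phi. Again, my own illustrative code with made-up parameters, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([-0.5, 2.0])         # arbitrary "true" parameters

x = rng.uniform(-3, 3, size=50)
X = np.column_stack([np.ones_like(x), x])
eta = X @ theta_true
phi = 1 / (1 + np.exp(-eta))               # canonical response for the Bernoulli
y = rng.binomial(1, phi)                   # y | x ~ Bernoulli(phi)
```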
And there are k such vectors Theta_class - one for the triangle class, one for the circle class, one for the square class, and so on, right? So in logistic regression, we had just one Theta, which would do a binary yes-versus-no. In softmax, we have one such vector of parameters per class, right? You could also optionally represent them as a matrix - an n by k matrix where each column is one of the Theta_class vectors, right? So softmax regression is a generalization of logistic regression where you have a set of parameters per class, right? And we're going to do something similar to before. So corresponding to each class's set of parameters, [NOISE] there exists a line which represents, say, Theta_triangle transpose x equals 0; anything to the left will have Theta_triangle transpose x greater than 0, and over here it'll be less than 0, right? So for the triangle class, there is this line which corresponds to Theta_triangle transpose x equals 0 - anything to the left gives you a value greater than zero, anything to the right less than zero. Similarly, there is another line - this one corresponds to Theta_square transpose x equals 0: anything below will be greater than 0, anything above will be less than 0. Similarly you have another one corresponding to Theta_circle transpose x equals 0, and on this half plane it's greater than 0, and to the left it is less than 0, right? So we have a different set of parameters per class which hopefully satisfies this property. And now our goal is to take these parameters and see what happens when we feed in a new example. So given an example x - given x, and over here we have the classes, right? The circle class, the triangle class, the square class. And over here, we plot Theta_class transpose x for each class. So we may get something that looks like this: let's say for a new point x over here, if that's our new x, we would have Theta_square transpose x be positive, and maybe for the others we get some negative values, something like this, right? So this space is also called the logit space, right? These are real numbers - this is not a value between 0 and 1, this is anywhere between minus infinity and plus infinity, right? And our goal is to get a probability distribution over the classes. In order to do that, we perform a few steps. First we exponentiate the logits, which gives us e to the Theta_class transpose x for each class, and this makes everything positive - so the negative logits just become small positive numbers - for squares, triangles and circles, right? Now we've got a set of positive numbers. And next, we normalize this. By normalize, I mean divide everything by the sum of all of them. So here we have e to the Theta_class transpose x over the sum over i in {triangle, square, circle} of e to the Theta_i transpose x. Once we do this operation, we now get a probability distribution where the sum of the heights adds up to 1, right? So given a new point x, if we run it through this pipeline, we get a probability output over the classes - for which class that example is most likely to belong to, right?
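A sketch of the logits-to-probabilities pipeline just described: one parameter vector per class, exponentiate the logits, then normalize so they sum to 1. The class names, the parameter values, and the max-subtraction trick for numerical stability are my additions, not from the lecture.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

Theta = {"circle":   np.array([ 1.0, -0.5]),
         "square":   np.array([-0.3,  0.8]),
         "triangle": np.array([ 0.2,  0.2])}

x = np.array([0.5, 1.5])
logits = np.array([Theta[c] @ x for c in ("circle", "square", "triangle")])
p_hat = softmax(logits)                # a probability distribution over the 3 classes
print(p_hat, p_hat.sum())              # the probabilities sum to 1
```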
And this whole process - let's call its output p hat of y for the given x, right? So this is like our hypothesis: the output of the hypothesis function is this probability distribution. In the other cases, the output of the hypothesis function was generally a scalar or a single probability; in this case, it's outputting a probability distribution over all the classes. And now, the true y would look something like this, right? Let's say the point over there was a triangle, for whatever reason, right? If it was a triangle, then p of y - which is also called the label - you can think of as a probability distribution which is 1 on the correct class and 0 elsewhere, right? So p of y is essentially representing the one-hot label as a probability distribution, right? Now the goal, or the learning approach that we're going to take, is in a way to minimize the distance between these two distributions, right? This is one distribution, this is another distribution; we want to change this distribution to look like that distribution, right? And technically, the term for that is to minimize the cross entropy between the two distributions. So the cross entropy between p and p hat is equal to minus the sum over y in {circle, triangle, square} of p of y times log p hat of y. I don't think we'll have time to go over the interpretation of cross entropy, but you can look that up. So here we see that p of y will be one for just one of the classes and zero for the others. So let's say in this example y was a triangle. Then this essentially boils down to minus log p hat of triangle, right? And we saw that the hypothesis is essentially that expression, so that's equal to minus log of e to the Theta_triangle transpose x over the sum over class in {triangle, square, circle} of e to the Theta_class transpose x. Right. And you treat this as the loss and do gradient descent - gradient descent with respect to the parameters, as sketched below. Right, so with that, any questions on softmax? Okay. So we'll break for today in that case. Thanks. |
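A minimal sketch of that cross-entropy loss for one example, plus one gradient-descent step on the per-class parameters. The names are illustrative; the gradient formula (p_hat - y_onehot) outer x is the standard softmax-regression gradient, stated here as an assumption rather than something derived in this lecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy_loss(Theta, x, y_onehot):
    p_hat = softmax(Theta @ x)                    # Theta is k x n: one row per class
    return -np.sum(y_onehot * np.log(p_hat))      # equals -log p_hat[true class]

def gradient_step(Theta, x, y_onehot, alpha):
    p_hat = softmax(Theta @ x)
    grad = np.outer(p_hat - y_onehot, x)          # d(loss)/d(Theta), shape k x n
    return Theta - alpha * grad

Theta = np.zeros((3, 2))                          # 3 classes, 2 features
x = np.array([0.5, 1.5])
y = np.array([0.0, 0.0, 1.0])                     # one-hot label: the triangle class
print(cross_entropy_loss(Theta, x, y))            # log(3) at the all-zeros initialization
Theta = gradient_step(Theta, x, y, alpha=0.1)
```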
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_19_Reward_Model_Linear_Dynamical_System_Stanford_CS229_Machine_Learning_Autumn_2018.txt | Okay, hey everyone. So welcome to the final week of the class. Uh, what I wanna do today, is share with you a few generalizations of, um, reinforcement learning and of MDPs. So you've learned about the basic MDP formulas of state action, state transition probability, discount factor and rewards. Um, the first thing you see today is two, you know, slight generalizations of this framework to state-action rewards and to the finite horizon MDPs. They're making it a little bit easier for you to model certain types of problems, certain types of robots, so certain types of factory automation problems will be easier to model with these two small generalizations. So we'll talk about those first, and then second, we'll talk about linear dynamical systems. Last Wednesday, you saw a fitted value iteration which was a way to solve for an MDP even when the state-space may be infinite, even when the state space is a set of real numbers known as RN so it's an infinite list of states. So a continuous set states, we use fitted value iteration in which we had to use a function approximator, right, like linear regression, to try to approximate the value function. There's one very important special case of an MDP where even if the state space is infinite or continuous real numbers, uh, there's one important special case where you can still compute the value function exactly without needing to use, you know, like a linear function approximator or to use something like linear regression in the inner loop of fitted value iteration. Um, and so you also see that today, and when you can take a robot or some factory automation task or whatever problem and model it in this framework, it turns out to be incredibly efficient because you can fit a continuous- fit a value function as a function of the states without needing to approximate anything, just compute the exact value function, uh, even though the state space is continuous. So, um, this is a framework that doesn't apply to all problems, but when it does apply, it's incredibly convenient and incredibly efficient. So you see that in the second half of today. Um, uh, yes. Uh, uh, one, one tactical uh, two, two tactical things, um, let's see, from the questions that we're getting from students, some students are asking us, uh, how is grading in CS229? Whatever I did well and this, you know, didn't do so well in that. Um, for people taking the class, pass-fail, a C minus or better is a passing grade. This is quite- I think this is standard at Stanford. Uh, and, um, I think CS229 has historically been one of the heavy workload classes. We know that people taking CS229- yeah, I see a few heads nodding. [LAUGHTER]. I said, sorry, people, uh, uh, taking CS229 end up, you know, putting a lot of work in this class more, maybe frankly more than average for even Stanford classes. And so we've usually been quite nice. With respect to, to grading partly, and we acknowledge that. So I think, uh, uh, yeah I just thought that as well so don't, don't, don't, don't sweat too much. Do work hard for the final project, but just don't, don't sweat too much. Um, uh and, uh, on Wednesday after class, I had a funny question. After I talked about the fitted value iteration question, someone came up to me and said, "Hey Andrew, um, you know, this algorithm you, you just taught us, does it actually work? 
Like, does it actually work on an autonomous helicopter?" And the answer is yes. The algorithms I'm teaching - the fitted value iteration you learned last week - will work on an actual autonomous helicopter at low speeds. If you fly at very high speeds, very dynamic maneuvers, crazy flips upside down, you need a bit more than that, but for flying a helicopter at low speeds, the exact algorithm that you learned last Wednesday, as well as any of the algorithms you'll learn today, including LQR - you know, if you actually ever need to fly an autonomous helicopter for real, these algorithms will actually work. They will work quite well for flying a helicopter at low speeds, maybe not at very, very high speeds and crazy dynamic maneuvers, but at those low speeds these algorithms, pretty much as I'm presenting them, will work. So, okay. So the first generalization to the MDP framework that I want to describe is state-action rewards. So far we've had the reward be a function mapping from the states to the set of real numbers; with state-action rewards - this is a slight modification to the MDP formalism - the reward function R is now a function mapping from states and actions to the real numbers. And so, in an MDP you start from the state S_0, you take an action a_0, then based on that you get to S_1, take an action a_1, get to a state S_2, take an action a_2, and so on. And with state-action rewards, the total payoff is written as R(S_0, a_0) plus Gamma times R(S_1, a_1) plus Gamma squared times R(S_2, a_2), and so on. And this allows you to model that different actions may have different costs. For example, in the little robot wandering around the maze example, maybe it's more costly for the robot to move than to stay still. And so if you have an action for the robot to stay still, the reward can be 0 for staying still and a slight negative reward for moving, because you're using electricity when you move. Right, and so in that case, Bellman's equation becomes: V*(s) = max over a of [ R(s, a) + Gamma times the sum over s' of P_sa(s') V*(s') ]. So you still break down the value of a state as the sum of the immediate reward plus the expected future rewards, but now the immediate reward you get depends on the action that you take in the current state, right? So this is Bellman's equation, as sketched below. And notice that previously we had the max only over the future-reward term, but now you need to choose the action a that maximizes your immediate reward plus your discounted future reward, which is why the max moved outside - because the immediate reward you get depends on the action you choose at this step in time as well. So this models that different actions may have different costs. Yeah. [inaudible] Uh, yes. Yes, this max applies to the entire expression, right, yeah. [inaudible] Let's see. So in this formulation, the reward is determined based on the state and action - yes, that is correct. So in this formulation, the reward depends on the current state and the current action, but not on the next state you get to. Right. Oh, and by the way, there are multiple variations of formulations of MDPs, but this is one convenient one, I guess.
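A toy sketch of this version of Bellman's equation, iterated as the obvious fixed-point update (value iteration). The states, actions, transition probabilities and state-action rewards below are made up purely for illustration.

```python
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)

# P[s, a, s'] = probability of landing in s' after taking action a in state s (toy random MDP)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
# R[s, a] = state-action reward; e.g. "stay still" could be cheaper than "move"
R = rng.uniform(-1, 1, size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * P @ V          # Q[s, a] = R(s, a) + gamma * E[V(s')]
    V = Q.max(axis=1)              # Bellman backup with the max outside
pi_star = Q.argmax(axis=1)         # optimal (stationary) policy: argmax of the same expression
```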
The model that different costs and I think the- and, and actually- and you find in a helicopter a common, um, formulation of this would be to say that yanking aggressive on the control stick, uh, should be assigned a higher costs because yanking the control stick aggressively causes your helicopter to jerk around more, and so maybe you want to penalize that by setting reward function that penalizes very aggressive maneuvers. The, the- this gives you the, uh, as a, as a problem designer, um, uh, sort of more flexibility. Um, and then, uh, uh, and then finally- so let me just write this on top. In this formulation, um, the optimal action- so, uh, right, so in order to compute the value function, you can still use value iteration, right, which is still, you know V of S gets updated as basically the right-hand side from Bellman's equations. So, um, value iteration works just fine for the state-action reward formulation as well. And, uh, if you apply value iteration until V converges to V star, then the optimal action is, um, is, is just the argmax of that thing, right? So, so, pi star is just the, uh, argmax of this thing. Where now, when you're given state, you wanna choose the action that maximizes your immediate reward plus your expected future rewards. Okay. Yeah, so I think just maybe another example, um, if you want to use an MDP to, um, um, plan a shortest route for robot to, say drive from here in Stanford, to drive up to San Francisco, right? Then, if it cost different amounts to drive on different road segments because of traffic or because of the, uh, speed limit on different road, then this allows you to say that, well driving this distance on this road costs this much in terms of fuel consumption or in terms of, uh, time and so on, right? So the state action rewards. Or, or in factory maintenance, uh, if you send in a team to maintain a machine that has a certain cost versus if you do nothing that has a different cost. But then the machine breaks down that has yet another cost depending on your actions. Okay, so that's the first generalization. Um, the second generalization is the finite horizon MDP. Um, and in a finite horizon MDP, um, we're going to replace the discount factor, Gamma, with a horizon time, T. Uh, and- and we'll- we'll just forget about the discount vector. And in a finite horizon, um, MDP, the MDP will run for, um, a fi- a finite number of T steps. You start with state S_0, take an action a_0, get to S_1, take action a_1, get to state S_T take an action a_T, at time step T and then the world ends, and then we're done, right? So the payoff is this finite sum and- and there's just a full stop at the end of that. Um, you can also apply discounting but usually when you have a finite horizon MDP, maybe there's no need to apply discounting, and so, um, this model is a problem where there are, you know, T time steps and then the world ends after that, right? Wo- world end sounds a bit dire. But, uh, um, yeah, if you're flying an airplane or you're flying a helicopter, and you know you only have fuel, you know, for 30 minutes, right? Uh, er, or an RC helicopter, let say you have 20, 30 minutes of fuel, then you know that you're going to run this thing for 30 minutes and then you're done and so the goal is to accumulate as many rewards as possible up until you, you know, run out of fuel and then you have to land, right? So that'll be an example of a finite horizon MDP. 
Now, um, and- and- and the goal is to maximize this payoff, um, or the expected payoff over these T time steps, okay? Now, one interesting, uh, property of a finite horizon- of, of, of a finite horizon MDP is that the action you take, um, may depend on what time it is on the clock, right? So there's a clock marching from, you know, timestep 0 to timestep T whereupon, right, the world ends whe- whereupon that's all the rewards the MDP is trying to collect. And one interesting effect of this is that, um, this pen isn't that great, is that, um, the optimal action may depend on what, uh, what the time is on the clock. So, uh, let's say your robot is running around this maze and there's a small plus 1 reward here and a much larger plus 10 reward there, and, um, let's say your robot is here, right, then the optimal action for whether you go left or go right will depend on how much time you have left on the clock. If you have only, you know, two or three times steps left on the clock, it's better to just rush and get the plus 1. But we still have, you know, 10, 20 ticks left on the clock, then you should just go and get the plus 10 reward, right? And so in this example, Pi star of S, um, it's not well defined because well, the- the optimal action to take when your robot is here in this stage, should you go left or should you go right? Um, it actually depends on what time it is on the clock, and so Pi star in this example, um, should be written as the Pi star subscript T of S, uh, because the optimal action, um, depends on what time T it is. The technical term for this is that this is a non-stationary- non-stationary policy. Um, a non-stationary means, uh, it depends on the time, a- as it changes over time, right. Whereas in contrast, up until now, we've been seeing, you know, Pi star of S is the optimal policy before we- before this formula, right, we just said Pi star of S, and that was a stationary policy and stationary means, uh, there's no change over time, okay? So one- one- one thing that, um, I didn't quite prove but that was implicit was that the optimal action you take in the original formulation, uh, is the same action, right, no matter what time it is in the MDP. So in the original formulation that you saw last week, the optimal policy was stationary, meaning that the optimal policy is the same policy, you know, no matter what time it is, it doesn't change over time. Whereas in the final horizon MDP setting, um, the optimal policy, you know, the optimal action changes over time and so this is a non-stationary policy. So stationary versus non-stationary just means, does it change over time or does it not change over time? Okay? So, um, right. If you're using a non-stationary policy anyway, uh, you can also build an MDP with non-stationary transition probabilities or non-stationary rewards- non-stationary. Um, actually so maybe here's an example. Um, let's say you're driving from campus from Palo Alto to San Francisco and we know that rush, hour, is that- what like 5:00 PM or 6:00 PM or something, right? And- and- and maybe- maybe the weather forecasts even says it's going to rain at 6:00 PM or something, right? 
But so you know that the dynamics of how you drive your car from here to San Francisco will change over time, uh, as in the time it takes, you know, to drive on a certain segment of the road, is a function of time and if you want to build an MDP to solve for, um, the best way to drive from here to San Francisco say, then the state transitions, um, so S_t plus 1 is drawn from state transition probabilities indexed by the state at time T and the action at time T. And if these state transition probabilities change over time, then, um, if you index it by the time t, this would be an example of a non-stationary, um, of a non-stationary state transition probabilities, okay? Um, al- al- alternatively, if you want non-stationary rewards, then you can have R_t T of S_a, uh, is the reward you get for taking a certain action, um, uh, you know, o- o- for being at a certain state at a certain time, okay? Um, so all of these are different variations of- of- of MDPs, um, and so maybe just a few examples of when you want a, ah, finite horizon MDP or use, um, non-stationary, uh, state transitions. Uh, so let's see. Um, if you're flying an airplane, right? For- for- for some airplanes, uh, something like for commercial- for very large commercial airplanes, uh, sometimes over a third of the weight of the airplane comes from the fuel, right? So actually, if you take on a large commercial airplane, you know, when you take off, uh, from, uh, SFO and you fly to- I don't know- I don't know where you guys fly to, I don't know. Fly to- fly to London or something. Right, direct flight from here to London. Uh, by the time the plane lands, you- you get a much lighter airplane than when you took off, um, because, uh, maybe sometimes- maybe like a third of the weight disappears, you know, because of burning fuel. And so the, the dynamics, the, um, how the airplane feels between takeoff and landing is actually different because the weight is dramatically different, and so, um, uh, this would be one example of where the state transition probability changes in a pretty predictable way, right? Um, or- uh, right. I already mentioned, um, uh, weather forecasts, right. Where, uh, weather forecasts or traffic forecasts if you're driving here or, uh, yeah, drivi- yeah, if you're driving over different types of terrain over time, then you know that it's gonna rain tomorrow. Uh, we are gonna know it's gonna rain tonight and the ground will turn muddy, you know, then all- all the traffic will turn bad. Um, uh, and then- and then, I don't know, industrial automation. Um, some of my friends work on industrial automation and I think that maybe one example, um, if you run a factory 24 hours a day, then the cost of labor, you know, getting people to come into the factory to do some work at noon is actually easier, right, and less costly than getting someone to show up in the factory to do some work at 3:00 AM, right? And so depending on, um, uh, really labor availability over time, the cost of taking different actions, uh, and the cost of, um, and the likelihood of transitioning into different state transition probabilities can vary over the 24-hour clock as well, right? And so these are other examples of when, um, uh, uh, you can have a non-stationary policy and non-stationary state transitions, okay? Now, um, let's talk about how you would actually solve for a finite horizon MDP, and I think for the sake of simplicity, uh, for the most part, I'm going to not bother with non-stationary transitions and rewards. 
For the most part, I'm just going to forget about the fact that these could be varying; I'll mention it briefly, but I want to focus on the finite-horizon aspect. So let me define the optimal value function. V*_t(s) is the optimal value function for time t, starting at state s: it's the expected total payoff starting in state s at time t, if you execute the best possible policy, okay? So now the optimal value function depends on what time it is, because if you look at that example of the plus 1 reward on the left and the plus 10 reward on the right, depending on how much time you have left on the clock, the amount of reward you can accumulate can be quite different, right? If you have more time, you have more time to get to the plus 10 reward in the plus 1 / plus 10 example that I drew just now. And so in this setting, value iteration becomes the following - it actually becomes a dynamic programming algorithm, which you'll see in a second, okay? Which is that - let's see, [NOISE] I'm going to need three lines, let me do this here. [NOISE] V*_t(s) is equal to the max over a of R(s, a) plus the sum over s' of P_sa(s') times V* of s' at - okay, and this is actually a question for you, there's one missing thing here, right? We're saying that the optimal value you can get when you start off in state s at time t is the max over all actions of the immediate reward plus the sum over s' of the state transition probability P_sa(s') times V* of s', and what should go in that box? T plus 1. Okay. Cool, awesome, great. Right? So V*_t(s) = max_a [ R(s, a) + sum_s' P_sa(s') V*_{t+1}(s') ]. And then pi*_t(s) is just the argmax over a of the same thing, of this whole expression up on top. And so this formula defines V_t as a function of V_{t+1}. So this is like the iterative step, right? Given V_10 you can compute V_9, given V_9 you can compute V_8, given V_8 you can compute V_7, and so on. And so to start this off, there's just one last thing we need to define, which is V at the final step, capital T, when the clock is about to run out. At the final step, all you get to do is choose the action a that maximizes the immediate reward, and then there's no sum after that: V*_T(s) = max_a R(s, a). So if you start off at state s at the final time step T, you get to take an action and you get an immediate reward, and then there is no next state, because the world just ends right after that step - which is why the optimal value at time T is just the max over a of the immediate reward, because what happens after that doesn't matter, okay? So this is a dynamic programming algorithm in which the step on top allows you to compute V*_t, and the inductive step, I guess, is that having computed V*_T for every state s - so you compute this for every state s - you can then compute V*_{T-1} using this inductive step, then V*_{T-2}, and so on down to V*_0. So you compute this at every state, and then based on this, you can also compute pi*_t, right? Compute the optimal policy, the non-stationary policy, for every state as a function of both the state and the time, okay? All right, cool.
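A sketch of that backward recursion in code: start from V*_T(s) = max_a R(s, a) and work backwards with V*_t(s) = max_a [ R(s, a) + sum_s' P_sa(s') V*_{t+1}(s') ]. The toy P and R are the same made-up random MDP as in the earlier sketch; the point is that the resulting policy is non-stationary (one action per state per time step).

```python
import numpy as np

n_states, n_actions, T = 4, 2, 10
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P[s, a, s']
R = rng.uniform(-1, 1, size=(n_states, n_actions))                 # R[s, a]

V = np.zeros((T + 1, n_states))
pi = np.zeros((T + 1, n_states), dtype=int)

V[T] = R.max(axis=1)                    # base case: at the last step, just the best immediate reward
pi[T] = R.argmax(axis=1)
for t in range(T - 1, -1, -1):          # backward induction from t = T-1 down to t = 0
    Q = R + P @ V[t + 1]                # Q[s, a] = R(s, a) + E[V*_{t+1}(s')]
    V[t] = Q.max(axis=1)
    pi[t] = Q.argmax(axis=1)            # the optimal action can differ across t
```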
And, and I think, uh, again, I don't want to dwell on this, but if you want to work with non-stationary- state transition probabilities or non-stationary rewards, then this algorithm hardly changes in that you can just add you know, if, if your rewards and state transceiver is indexed by time as well, then this is just a very small modification to this algorithm. And it turns out that once you're using a finite horizon MDP making the rewards and state transition rewards as non-stationary is, is just a small tweak, right? So you could yeah, yeah. [inaudible] Uh, can you say that again. In which form will it disappear? The attributes [inaudible]. This one? Oh a non-stationary. So in the end you get a policy pi star subscript T of S. [inaudible]. I'm sorry. This one? This one. Oh, I see, sure yes. Pi star, this is a non-stationary policy. Yes so that's why I like yeah, yeah. Sorry, yeah, yeah. So this- the optimal policy will be a non-stationary policy. Yes. uh, I, I, think, uh, yes, I think, uh, I was using pi star to, to not, not to denote that it has to be a fixed function type, but yes, [inaudible] . Thank you. Yeah. Right. If you take big T to infinity can it just become the usual value iteration? So the- let me think. So there are two things with that. So the two frameworks are closely related right, you can kind of see relationship between the valuation. One problem with taking this framework to big T to infinity is that the values become unbounded, right? Yeah and that's actually one of the reasons why we use a discount factor when you have an infinite horizon MDP, when the MDP just goes on forever. One of the things that discount factor does is it makes sure that the value function doesn't grow without bound, right? And in fact, you know, if, if the rewards are bounded by- right, by some R max then when you use discounting then V, you know, is bounded by I guess R max over 1 minus Gamma, right? By the sum of a geometric sequence. And so but, but in a finite h orizon repeat because you only add up t rewards it, it can't get bigger than T times R max. Yeah. [inaudible]. Let me think. So I think that, boy. So I think, you know, what you find is that- let's see. Um, actually let me just draw a 1D grid just to make life simpler, right? So let's say there's a plus 1 reward there and a plus 10 reward there. If you look at the optimal value function, um, depending on what time it is. If you have two times- and let's say that the dynamics are deterministic, right? Uh, so there's no noise then if you have two times steps left, then I guess V star would be, you know, 10, 10, 10, 1, 1, 1, 0, 0, 0, right? And so, uh, depending on where you are, I guess if you're, uh, uh, uh yeah. Actually, in fact I guess if you're here and there's nothing you can do right, you can't get either reward in time. Uh, but depending on whether you're here or here or here the optimal action will change when we compute with this pi star. This makes sense? Yeah, that's fine. Okay, well yeah. Maybe I do, do encourage you there. If this- if you actually build a little grid simulator and use these equations to compute pi star and V star, you will see that the optimal policy when you have lots of time will be this. Wherever you are go for the 10 rewards, but when the clock runs down then the optimal policy will end up being a mixer, go left and go right. All right, cool. Hope that was okay. Yeah [NOISE]. All right. So, um- [NOISE] So the last thing I want to share with you today is, uh, Linear Quadratic Regulation. 
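Before the LQR discussion continues, here is the little grid simulator he encourages building just above: a 1-D grid with a +1 reward at one end and a +10 reward at the other, deterministic left/right moves, and a finite horizon. The exact grid geometry is made up; the point it illustrates is that the optimal action at a given cell flips depending on how many steps are left on the clock.

```python
import numpy as np

n_cells = 10
R_vec = np.zeros(n_cells)
R_vec[0], R_vec[-1] = 1.0, 10.0           # +1 at the left end, +10 at the far right end

def step(s, a):                           # a = 0 means move left, a = 1 means move right
    return max(0, s - 1) if a == 0 else min(n_cells - 1, s + 1)

def finite_horizon_policy(T):
    V = R_vec.copy()                      # V*_T(s): reward here depends only on the state
    pi = [None] * (T + 1)
    pi[T] = np.zeros(n_cells, dtype=int)
    for t in range(T - 1, -1, -1):        # backward induction
        Q = np.array([[R_vec[s] + V[step(s, a)] for a in (0, 1)] for s in range(n_cells)])
        V = Q.max(axis=1)
        pi[t] = Q.argmax(axis=1)
    return pi

# From cell 3: with only 3 steps left, the policy goes left (0) for the +1;
# with 20 steps left, it heads right (1) for the +10 - a non-stationary policy.
print(finite_horizon_policy(3)[0][3], finite_horizon_policy(20)[0][3])
```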
And as I was saying at the start, LQR applies only in a relatively small set of problems. But whenever it applies, this is a great algorithm, and I just use it whenever it seems reasonable to apply, because it's very efficient and sometimes gives very good control policies. And, let's see. So LQR applies in the following setting. So let's see. In order to specify an MDP, we need to specify the states, the actions, the state transition probabilities - I'm going to use a finite horizon formulation, so a horizon capital T - and the rewards. This also works with the discounted MDP formalism, but it's a little bit easier, a little bit more convenient, to develop with the finite horizon setting, so let me just use that today. And LQR applies under a specific set of circumstances, which is that the set of states is R^n and the set of actions is R^d. And to specify the state transition probabilities, we need to tell you what's the distribution of the next state given the previous state and action. So I'm going to say that the way S_{t+1} evolves is as a linear function: some matrix A times S_t, plus some matrix B times a_t, plus some noise - so S_{t+1} = A S_t + B a_t + w_t. And sorry, there's a little bit of notation overloading again, sorry about that: A is both the set of actions as well as this matrix A, right, so those are two separate things with the same symbol. I think a lot of the ideas in LQR came from traditional controls - from EE and mechanical engineering - and a lot of ideas in reinforcement learning came from computer science. So these two literatures kind of evolved separately, and when the literatures merge, you end up with clashing notations. CS people use A to denote the set of actions, and the mechanical engineering and EE people use A to denote this matrix, and when we merge these two literatures the notation ends up being overloaded, right? Okay. Oh, and then it turns out - one thing we'll see later is that this noise term is actually not super important. But for now, let's just assume that the noise term w_t is distributed Gaussian with mean 0 and some covariance Sigma_w, okay? But we'll see later that the noise will be less important than you think. Right. And so this matrix A is going to be in R n-by-n, and this matrix B is going to be in R n-by-d, where n and d are respectively the dimension of the state space and the dimension of the action space. So for driving a car, for example, we saw last time that maybe the state space is six-dimensional - x, y, theta, x dot, y dot, theta dot - and the action space is the steering controls, so maybe the action is two-dimensional: acceleration and steering, right. Okay. So let's see. To specify an MDP, we need to specify this five-tuple, right? So we've specified three of the elements. The fourth one, T, is just some number, so that's easy. And then the final assumption we need to apply LQR is that the reward function has the following form: the reward is the negative of (s transpose U s plus a transpose V a), that is, R(s, a) = -(s^T U s + a^T V a), where U is n by n, V is d by d, and U and V are positive semi-definite, okay? So these are matrices that are, loosely speaking, "bigger than zero" - positive semi-definite. Okay. And the fact that U and V are positive semi-definite implies that s transpose U s is greater than or equal to 0, and a transpose V a is also greater than or equal to 0.
Okay? So here's one example. If you want to fly an autonomous helicopter and if you want, you know, the state, the state vector to be close to 0. So the state vector captures position, orientation, velocity, angular velocity. If you want a helicopter to just hover in place, then maybe you want the state to be regulated or to, to, to be controlled near some zero position and so if you choose U equals the identity matrix, and V also equal to the identity matrix, this, this would be different dimensions, right? This would be an n by n identity matrix, this would be a d by d ide- identity matrix. Then R of s a ends up equal to negative norm of s squared plus norm of a squared. Okay. And so this allows you to, this allows you to specify the reward function that penalizes, you know, we have a quadratic cost function, the state deviating from 0 or, if you want, the actions deviating from 0, thus penalizing very large jerky motions on the control sticks or we set V equal to 0, then this second term goes away. Okay? So these are some of the cost functions you can specify in terms of a quadratic cost function. Okay. Now again, you know, just so that you can see the generalization, um, if you want non-stationary dynamics, this model is quite simple to change where you can say the matrices A and B depend on the time t. You can also say these, you know, the matrices U and V depend on the time t. So if you have non-stationary state transition probabilities or non-stationary cost function that's how you would modify this. But I won't, I won't use this generalization for today, okay? Now, so the two key assumptions of the LQR framework are that first, the state transition dynamics, the way your states change, is as a linear function of the previous state and action plus some noise, and second, that the reward function is a, you know, quadratic cost function, right? So these are the two key assumptions. And so first, you know, where, where do you get the matrices A and B. One thing that we talked about on Wednesday already was- so again this will actually work if you are trying to apply LQR to fly an autonomous helicopter. This would work for helicopter flying at low speeds. Which is if you fly the helicopter around, [NOISE] you know, start with some state S_0, take an action A_0, um, get to state S_1, do this until you get to S t, right? And then this was the first trial, and then you do this m times. So we talked about this on Wednesday. So fly the helicopter through m trajectory so t time steps each and then we know that we want, S t plus 1 is approximately A S t plus B A t and so you can minimize, okay. So we want the left and the right hand side to be close to each other. So you can, you know, minimize the squared difference between the left hand side and the right hand side in a, in a procedure a lot like linear regression in order to fit matrices A and B. So if you actually fly a helicopter around and collect this type of data and fit this model to it, this will work, you know, this is actually a good pretty reasonable model for the dynamics of a helicopter at low speeds. Okay? So this is one way to do it. Um, so let's see. Method 1 is to learn it, right? A second method is to linearize a non-linear model. [NOISE] So um, let me just describe the ideas at a, at a high level, um, which is let's say that- and, and I think for this it might be, um, useful to think of the inverted pendulum, right? So that was a, you know, so imagine you have a, a, a inverted pendulum. That was that, right? 
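Before the inverted pendulum example continues below, here is a quick sketch of "method 1: learn it" - fitting A and B by least squares so that s_{t+1} is approximately A s_t + B a_t across collected trajectories. The data here is simulated from a made-up ground-truth system purely to exercise the fit; the variable names are mine.

```python
import numpy as np

n, d, T = 3, 2, 200
rng = np.random.default_rng(0)
A_true = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))   # made-up "true" dynamics
B_true = rng.standard_normal((n, d))

s = np.zeros(n)
S, Acts, S_next = [], [], []
for t in range(T):
    a = rng.standard_normal(d)                                  # exploratory actions
    s_next = A_true @ s + B_true @ a + 0.01 * rng.standard_normal(n)
    S.append(s); Acts.append(a); S_next.append(s_next)
    s = s_next

# Stack [s_t, a_t] and solve min || S_next - [A B] [s; a] ||^2 in one least-squares call
X = np.hstack([np.array(S), np.array(Acts)])          # shape (T, n + d)
Y = np.array(S_next)                                  # shape (T, n)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)             # W has shape (n + d, n), so [A B] = W^T
A_hat, B_hat = W[:n].T, W[n:].T
print(np.abs(A_hat - A_true).max(), np.abs(B_hat - B_true).max())   # both should be small
```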
You have a pole and you're trying to - you have a long vertical pole and you're trying to keep the pole balanced. Um, so for an inverted pendulum like this, if you download an open source physics simulator, or if you have a friend, you know, with a physics degree, to help you derive the Newtonian mechanics equations for this. Um, ah, let's see. I actually tried to work through the physics equations for the inverted pendulum once. These are pretty complicated. But I don't know [LAUGHTER]. Um, but you might have a function that tells you that if the state is a certain position, orientation, with the pole velocity, angular velocity, and you apply a certain acceleration - the actions are accelerate left or accelerate right - then, you know, one-tenth of a second later, the state will get to this, right? So your physics friend can help you derive this equation. Um, and then maybe plus noise, right? Let me just ignore the noise for now. Um, and so what you have is a function, right, that maps from the state - um, x, x dot, theta, theta dot, that's the position of the cart and the angle of the pole and the velocities and angular velocities - so it maps from the current state at time t, oh, excuse me, comma a_t, right? It maps from the current state vector to the next state vector, um, as a function of the current state and current action. Okay? So, um, here's what linearization means, and I'm going to use a 1D example, because I can only draw on a flat board, right? Because of the two-dimensional nature of the whiteboard, um, I'm just going to use a - let's suppose that you have S_t plus 1 equals f of S_t, and let me just ignore the action for now. So I have one input and one output, so I can draw this more easily on the whiteboard. Um, so we have some function like this. So the x-axis is S_t, and the y-axis is S_t plus 1, and this is the function f, right? We'll plug the, um, action back in later. What the linearization process does is, um, you pick a point, and I'm going to call this point S_t bar, and we're going to, you know, take the derivative of f and fit a straight line - I'm not drawing a straight line very well - so let's take the tangent straight line at this point S_t-bar, and we're going to use this straight line. Let me draw the line in green. And we're going to use the green straight line to approximate the function f. Okay. And so if you look at the equation for the green straight line, um, the green straight line is a function mapping from S_t to S_t plus 1. And S-bar is the point around which you're linearizing the function, so S-bar, um, is a constant. And this function is actually defined by: S_t plus 1 is approximately the derivative of the function at S-bar t, times (S_t minus S-bar t), plus f of S-bar t. Okay. Um, and so, ah, S-bar t is a constant, right? And this equation expresses S_t plus 1 as a linear function of S_t. So think of S-bar t as a fixed number, right? It doesn't vary. So given some fixed S-bar, um, this equation here is actually the equation of the green straight line, which says, you know, if you use the green straight line to approximate the function f, this tells you what S_t plus 1 is as a function of S_t, and this is a, you know, linear - well, affine - relationship between S_t plus 1 and S_t, okay? Um, so that's how you would linearize a function.
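Before the general case, here is the 1D version of this tangent-line idea in code. It is only a sketch: the derivative is taken numerically, and the example function f is an arbitrary stand-in for whatever the physics simulator or your physics friend gives you.

```python
import numpy as np

def linearize_1d(f, s_bar, eps=1e-5):
    """First-order Taylor (tangent-line) approximation of a 1D map f around s_bar."""
    fprime = (f(s_bar + eps) - f(s_bar - eps)) / (2 * eps)   # numerical estimate of f'(s_bar)
    def f_lin(s):
        # the "green straight line": f(s_bar) + f'(s_bar) * (s - s_bar)
        return f(s_bar) + fprime * (s - s_bar)
    return f_lin

# Example: approximate f near s_bar = 0; good close to s_bar, poor far away.
f = np.sin
f_lin = linearize_1d(f, s_bar=0.0)
print(f(0.1), f_lin(0.1))   # nearly identical
print(f(2.0), f_lin(2.0))   # large gap between the true function and the linear approximation
```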
And in the more general case where, um, S_t plus 1 is actually a function of - you know, putting this back in - both S_t and a_t, um, the formula becomes, um, let me see. Um, well, I'll write out the formula in a second. Ah, but in this example, S-bar t is usually chosen to be a typical value for S, right? And so in particular, if you expect your helicopter to be doing a pretty good job hovering near the state 0, then, uh, it'll be pretty reasonable to choose S-bar t to be the vector of all 0s. Because if you look at how good the green line is as an approximation of the blue line, right, in a small region like this, you know, the green line is actually pretty close to the blue line. And so if you choose S-bar to be the place where you expect your helicopter to spend most of its time, then the green line is not too bad an approximation to the true function, to the physics. Oh, excuse me - or for the inverted pendulum, if you expect that your inverted pendulum will spend most of its time with the pole upright and the velocity not too large, then you choose S-bar to be maybe the 0 vector. Um, and so long as your inverted pendulum is spending most of its time kind of, you know, close to the 0 state, then the green line is not too bad an approximation for the blue line, right? So this is an approximation, but you try to choose the linearization point well, because, I mean, in this little region it's actually not that bad an approximation; it's only when you go really far away, right, that there's a huge gap between the linear approximation, um, and the true function f, okay? Um, all right. And so, um, in the more general case where f is a function of both the state and the action, then what you have to do is, ah, the input now becomes (S_t, a_t), because f maps from (S_t, a_t) to S_t plus 1. And then instead of choosing just S-bar t, you're choosing (S-bar t, a-bar t), which is a typical state and action, ah, around which you linearize the function. Or let me just write down the formula for that. Um, in which you would say, if you linearize f around a point given by (S-bar t, a-bar t) - kind of the typical values - then the formula you have is: S_t plus 1 is approximately f of (S-bar t, a-bar t), plus the gradient of f with respect to s at (S-bar t, a-bar t), transpose, times (S_t minus S-bar t), plus the gradient of f with respect to a at (S-bar t, a-bar t), transpose, times (a_t minus a-bar t). Okay. So this is the generalization of the 1D formula we wrote down just now, which says that, you know, the next state is approximately the value at the point around which you linearize, plus the gradient with respect to s times how much the state differs from the linearization point, plus the gradient with respect to the actions times how much the actions vary from a-bar, okay? And this kind of generalizes that equation we wrote. So, um, this equation expresses S_t plus 1 as a, ah, linear function - or technically an affine function - of the previous state and the previous action, right? With some matrices in between. And from this, you know, after some algebraic munging, you can re-express this as S_t plus 1 equals A S_t plus B a_t. Um, and there's just one other little detail, which is, um, you might need to redefine S_t to add an intercept term, right, because this is an affine function with an intercept term rather than a purely linear function.
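Here is a hedged sketch of that formula in code: it takes an arbitrary simulator function f(s, a) (assumed, not from the lecture), estimates the two Jacobians by finite differences, and returns the matrices A and B together with the constant offset that forces the intercept term discussed next.

```python
import numpy as np

def linearize_dynamics(f, s_bar, a_bar, eps=1e-5):
    """
    First-order Taylor expansion of s_{t+1} = f(s_t, a_t) around (s_bar, a_bar):
        f(s, a) ~= f(s_bar, a_bar) + J_s (s - s_bar) + J_a (a - a_bar)
    Jacobians are estimated with central finite differences.
    Returns (A, B, c) such that f(s, a) ~= A s + B a + c.
    """
    n, d = len(s_bar), len(a_bar)
    f0 = f(s_bar, a_bar)
    J_s = np.zeros((n, n))
    J_a = np.zeros((n, d))
    for i in range(n):
        ds = np.zeros(n); ds[i] = eps
        J_s[:, i] = (f(s_bar + ds, a_bar) - f(s_bar - ds, a_bar)) / (2 * eps)
    for j in range(d):
        da = np.zeros(d); da[j] = eps
        J_a[:, j] = (f(s_bar, a_bar + da) - f(s_bar, a_bar - da)) / (2 * eps)
    c = f0 - J_s @ s_bar - J_a @ a_bar   # constant offset; fold it into an intercept term if nonzero
    return J_s, J_a, c
```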
But so from this formula, you know, with a little bit of algebraic munging, you should be able to figure out whether the matrix is a and b, ah, ah, but you might need to add an intercept term to the S, but this is just an affine function to kind of rewrite in terms of matrices a and b, okay? Um, all right. So right, I hope that makes sense, right? That this thing, this linearization thing expresses St plus 1 as a linear function of St and at, right? This is just a linear- is just- the wa- way St plus 1 varies, you know, is just some matrix times St, some matrix times at, um, and that's why with some munging, you can get into this formula for some matrix a and b, okay? Um, but because there are some constants floating around as well, like this, you might need an extra intercept term to multiply to a to give you that extra constant. [NOISE] That's where we are. Um, we now have that for these MDPs either by learning a linear model with the matrices A and B, um, or by taking a nonlinear model and linearizing it. Like you just saw, you can model- hopefully model an MDP as a, um, [NOISE] linear dynamical system, meaning this, you know, S_T plus 1 is this linear function or the previous state and action, as well as hopefully with a quadratic reward function or the- really, the- er, right, in the form that we saw just now. Um, so let me just summarize the problem we want to solve. a_ST, oops sorry, sorry. S_t plus 1 equals A S_T plus B_at plus w_t, so this is a noise term, um, and then R of S, a equals negative S transpose U_S plus a transpose V_a. All right. And this is a finite horizon MDP. And so the total payoff is R of S_0, a_0 plus dot dot dot plus R S_T. Okay. [NOISE] So let's figure out a dynamic programming algorithm for this. [NOISE] The remarkable problem, the- the remarkable property of LQR, um, and what makes this so useful is that if you are willing to model your MDP using those sets of equations, then the value function is a quadratic function, right? Um, and so let me show you what I mean. And so if your- if your model, if your MDP can be modeled as this type of linear dynamical system, with a quadratic cost function, uh, then it turns out that V star is a quadratic function and so you can compute V star exactly, right? Um, so let me show you what I mean. We're going to develop a dynamic programming algorithm to compute the optimal value function V star. Similar to, uh, what we did a bit earlier today with the finite horizon MDP with a finite set of states, let's start with the final time step and we will work backwards. So, um, V star t of S_T is equal to max over a_T of R of S_T , a_T. Um, this is max over a_T over negative, right? Um, but this is always greater than or equal to 0 because V is positive semi-definite. And so the optimal action is actually to just choose the action 0, um, and so the max over this is equal to the negative S_T transpose U S_T because, because V is a positive semi-definite matrix. This thing is always greater than 0. And then- and so this tells us also that Pi star of the final action is the argmax. So the optimal action is to choose, you know, the vector of 0 actions at the last time step, okay? So this is the base case for the dynamic programming step of, um, value iteration where, uh, the optimal value at the last time step is just choose the action that maximizes the immediate reward, uh, which means maximize this, right? And this is maximized by choosing the action 0 at the last time step, okay? 
Now, these blue pens keep, let's see if this is any better, ooh, okay. Now, the key step to the dynamic programming implementation is the following, which is suppose that V star t plus 1 S_t plus 1 is equal to a quadratic function. Right. Um, okay. So in the- uh-huh. [inaudible]. Yes. It's true that this term is also greater than 0 without the minus sign. Without the minus sign, that term is positive and so, but you only get to maximize with respect to 80 right? So, so the best you could do for this term is set it to 0. Thank you. All right, cool, tank you. All right. Now, for the inductive case, um, we want to go from V_t plus 1- V_star t plus 1 to computing V star t, right? And the key observation that makes LQR work is, um, let's suppose V star t plus 1, the optimal value function at the next time step, let's suppose is a quadratic function. So in particular, let's suppose V star t plus 1 is this, you know, quadratic function, uh, parameterized by some matrix capital Phi t plus 1 which is an n by n matrix and some constant offset Psi which is a real number. Um, what we will be able to show is that if you do one step of dynamic programming, uh, if this is true for V star plus 1 that V_t after one step as you go from V star plus 1 to V_t that the optimal value function V_t is also going to be a quadratic function with a very similar form, right, with I guess t plus 1 replaced by t, right? Um, and so in the dynamic programming step, um, we are going to update V_t S_t equals max of A_t R of S_t, a_T plus. And then, you know, I- I think you remember, right, previously, um, I'm going to write this in green, previously we had sum of S prime or actually St plus 1 I guess to be S_t a_t S_t plus 1, V star t plus 1 St plus 1. So that's what we had previously where we had a discrete state space and we were summing over it. But now that we have a continuous state space, this formula becomes expected value with respect to S_t plus 1 drawn from the state transition probabilities [NOISE] , uh, V star t plus 1 S_t plus 1 [NOISE]. Uh, yeah. Okay. [NOISE] So the optimal value when the clock is at time t is choose the action a that maximizes immediate reward plus the expected value of, you know, your future rewards when the clock has now ticked from time t to time t plus 1, you're going to state S_t plus 1 at time t plus 1, right? So, um, let's see. So, ah, this is a pretty beefy piece of algebra to do. Um, I think I feel like showing this full result is, I don't know, is like at the level of complexity of a, you know, typical CS 229 homework problem which is quite hard [LAUGHTER]. But let me just show the outline of how you do this derivation and why, you know, why this inductive step works. Well, but I think you- but, but if you want you could work through the algebra details yourself at home. Um, which is that- let me do this on the next board. So V star_t of S_t is equal to max over a_t of the immediate reward, right? So that's the immediate reward. And then plus the expected value with respect to S_t plus 1, is drawn from a Gaussian with mean AS_t plus Ba_t and covariance Sigma w. Ah, so remember S_t plus 1 is equal to AS_t plus Ba_t plus W_t, where W_t is Gaussian with mean 0 and covariance Sigma w. Right? So ah, if you choose an action a_t, then this is the distribution of the next state at time t plus 1. Um, and then expected value of [NOISE] this quadratic term. Um, because this quadratic term here, kind of the inductive case was what we showed was V star for the- for the next time step, right? 
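The expectation that shows up here is the expected value of a quadratic form evaluated at a Gaussian random vector, and the identity that makes the algebra go through is the standard one below. This is a general fact about Gaussians, stated in the notation above, not something transcribed from the board:

```latex
\text{If } s_{t+1} \sim \mathcal{N}(\mu, \Sigma_w) \text{ with } \mu = A s_t + B a_t, \text{ then }
\mathbb{E}\!\left[s_{t+1}^{\top} \Phi_{t+1}\, s_{t+1} + \Psi_{t+1}\right]
= \mu^{\top} \Phi_{t+1}\, \mu + \operatorname{tr}\!\left(\Sigma_w \Phi_{t+1}\right) + \Psi_{t+1}.
```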
So it turns out that, um, let's see. So this is a quadratic function, and this expectation is the expected value of a quadratic function with respect to s drawn from a Gaussian, right? With a certain mean and certain variance. So it turns out that, um, the expected value of this thing, right? Well, this whole thing that I just circled. This thing simplifies into, er, a big quadratic function [NOISE] of the action a_t, right? Um, and then, ah, and so in order to, you know, derive the argmax or to derive V star of S, you would derive this big quadratic function. Um, take derivatives with respect to a_t, ah, set to 0, right? And solve for a_t. Okay? And if you go through all that algebra, then you actually- then you end up with the formula for a_t as follows. Um, okay? And um, I'm gonna use, I'm gonna do- I'm gonna take that big matrix and denote that L_t. Okay? Um, and so this shows also that pi star at time t of S_t is equal to L_t times S_t. Okay? So, um, [NOISE] one to- to take away from this is that, under the assumptions we have, right? Linear dynamical system with quadratic cost function. Ah, the optimal action is a linear function of the state S_t. Right? And, ah, this is not a claim that is made through functional approximation. Ah, what I'm- I'm not saying that you could fit a straight line t optimal action and if you fit a straight line, that you get this linear function. Right? That's not what we're saying. We're saying that, um, of all the functions, anyone could possibly come up within the world, linear or non-linear, the best function, the best action is linear. So there is no approximation here. Right? So it's just that, you know, it's just a fact that if you have linear dynamical system, the best possible action at any state is going to be a linear function um, ah, of of that state. Right? So there's no there's- we haven't approximated anything. Right? Um, [NOISE] let me see. Yeah, all right. Let me, let me, let me write this here. Um, and then the other step is that ah, if you take the optimal action and plug it into the definition of V star, then by simplifying which again is quite a lot of algebra, but without the simplifying, you end up with this equation. Um, where again I'll- I'll just write out the formula as is, you know. [NOISE] Okay. Okay. All right. [BACKGROUND] Um, so to summarize the whole algorithm, right, let's, let's put everything together. And, and so- sorry. And so what these two equations do is they allow you to go from V star T plus 1 which is defined in terms of Phi T plus 1 and Psi T plus 1. And it allows you to recursively go back to figure out what is V star T using these two equations. Right. So Phi T depends on Phi T plus 1, Psi T depends on Phi T plus 1 and Psi T plus 1. Uh, and this Sigma w, this is the covariance of w_t. Right. This, this Sigma subscript w. This is not a summation over w, this is a Sigma matrix subscripted by w. That was a covariance matrix for the noise terms you are adding on every step in our linear dynamical system. Okay. And, and this are trace operators, some of the diagonals. Okay? So just to summarize. [NOISE] Um, here's the algorithm. You initialize Phi T to be equal to negative u and Psi T equals 0. Um, and so, you know, that's just taking this equation and mapping it there. Right? So the final time step, ah, that those two, oh, sorry, it should be capital T. Right. So that, um, those two equations for Phi and Psi, it defines V star of capital T. 
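The board formulas referred to here are not captured in the transcript. Under the conventions used above (reward equal to the negative of s transpose U s plus a transpose V a, and value function V*_t(s) = s transpose Phi_t s + Psi_t), the standard finite-horizon recursion takes the form below; treat it as a reconstruction consistent with the transcript's description (Phi_t depends on Phi_{t+1}; Psi_t depends on Phi_{t+1} and Psi_{t+1} through a trace with Sigma_w) rather than a verbatim copy of the lecture, since sign conventions differ across write-ups:

```latex
L_t = \left(V - B^{\top} \Phi_{t+1} B\right)^{-1} B^{\top} \Phi_{t+1} A, \qquad a_t^{*} = L_t\, s_t,\\[4pt]
\Phi_t = A^{\top}\!\left(\Phi_{t+1} - \Phi_{t+1} B \left(B^{\top} \Phi_{t+1} B - V\right)^{-1} B^{\top} \Phi_{t+1}\right) A \;-\; U,\\[4pt]
\Psi_t = \operatorname{tr}\!\left(\Sigma_w \Phi_{t+1}\right) + \Psi_{t+1}.
```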
And then you would, um, you know, recursively calculate, um, Phi T and Psi T using Phi T plus 1 and Psi T plus 1. So you go from, you know, for T equals T minus 1, T minus 2 and so on and go back when count down from, right, T minus 1 to T minus 2 and so on down to 0. Um, calculate L_t as above. Right. and L_t was a formula I guess we had over there, um, saying how the optimal action is a function of the current state depending on A, and B, and Phi. Ah, and then finally, Pi star of S_t equals L_t of S_t. Okay? Um, and this algorithm, the remarkable thing, what one really cool thing about LQR is that there is no approximation anywhere. Right? You, you might need to, um, make some approximation steps in order to approximate a helicopter as a linear dynamical system by, you know, fitting matrices A and B to data or by taking a nonlinear thing and linearizing it, and you might need to just restrict- constrict, you know, restrict your choice of possible reward functions. Reward function is quadratic. But once you've made those assumptions, none of this is approximate, everything is exact. Right. Question? [inaudible] Yes, that's right. Yep, yeah. So the approximation step needed are, ah, ah, getting your MDP into the form of a linear dynamical system with quadratic reward. So that is approximate. But once you specify the MTP like that, all of these calculations were exact, right? So, so we're not approximating the value function or quadratic function, is that the value function is a quadratic function and you're computing it exactly. And the optimal policy is a linear function and you just computing, computing that exactly. Okay. Um, I want to mention- before we wrap up, I want to mention one, one unusual fun fact about LQR and this is very specific to LQR. Uh, and, and, and it's convenient, uh, but, but, er, let me say what the fact is and just be careful that this doesn't give you the wrong intuition because it doesn't apply to anything other than LQR, which is that if you look at where, um, so first, if you look at the formula for L, ah, let me see. Move this around. [NOISE] All right. If you look at the formula for L_t, you need to compute, I mean the, you know, the goal of doing all this work is to find the optimal policy. Right? So you want to find L_t so that you can compute the optimal policy. You notice that L_t, um, just depends on Phi but not Psi. Right? Um, so, you know, and, and maybe it's gonna make sense. You're going to- when you take an action, you get to some new state and your future payoffs is a quadratic function plus a constant. It doesn't matter what that constant is. Right? And so in order to compute the optimal action, in order to compute L_t, you need to, you need to know Phi or actually Phi T plus 1 but you don't need to know what is Psi T plus 1. Right. Now, if you look at the way we do the dynamic programming, the backwards recursion, um, what if you implement a piece of code that doesn't involve it to compute Psi, right? So these are the two equations you use, update Phi and Psi. But whether, you know, let's say you delete this line of code. Just don't bother to compute it and just don't bother to compute that and don't bother to compute that. Right? So you notice that Phi depends on Phi T plus 1, but it doesn't depend on Psi. Uh, and so you can implement the whole thing and compute the optimal policy and compute the optimal actions without ever computing Psi. Right. 
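Putting the whole backward pass in one place, here is a minimal Python sketch of the algorithm as summarized above, using the reconstructed equations from the previous block (so again: the Phi and Psi updates are my reconstruction of the standard discrete-time Riccati form under these conventions, not a verbatim copy of the board; the base case Phi_T = -U, Psi_T = 0 and the structure "L_t uses only Phi, never Psi" are exactly what the lecture states). It assumes V is positive definite so the linear solves are well posed.

```python
import numpy as np

def lqr_backward(A, B, U, V, Sigma_w, T):
    """
    Finite-horizon LQR by backward dynamic programming.
    Value function V*_t(s) = s^T Phi_t s + Psi_t; optimal policy a_t = L_t s_t.
    Conventions: reward R(s, a) = -(s^T U s + a^T V a), dynamics s' = A s + B a + w, w ~ N(0, Sigma_w).
    """
    Phi = [None] * (T + 1)
    Psi = [0.0] * (T + 1)
    L = [None] * T
    Phi[T] = -U                               # base case: V*_T(s) = -s^T U s, last action is 0
    for t in range(T - 1, -1, -1):
        P = Phi[t + 1]
        # gain for a_t = L_t s_t; note it uses only Phi_{t+1}, never Psi or Sigma_w
        L[t] = np.linalg.solve(V - B.T @ P @ B, B.T @ P @ A)
        # Riccati-style update for the quadratic term
        Phi[t] = A.T @ (P - P @ B @ np.linalg.solve(B.T @ P @ B - V, B.T @ P)) @ A - U
        # constant term: the only place the noise covariance enters (could be skipped entirely)
        Psi[t] = np.trace(Sigma_w @ P) + Psi[t + 1]
    return Phi, Psi, L
```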
Now the funny thing about this is that the only place that Sigma w appears is that it affects only Psi T. Right? So, you know, if, if we do what I've just cross out in orange and just don't bother to compute Psi T. Then the whole algorithm doesn't even use Sigma w. Right. So one very interesting property of the LQR, um, ah, of this formalism is that the optimal policy does not depend on Sigma w. Right. Um, and I think, ah, maybe this is a, ah, so V star depends on Sigma w, because if the noise is very large, if there's a huge gust of wind blowing a helicopter all over the place, then the value would be worse. But Pi star and L_t, uh, do not depend on the Sigma w. Okay. Um, so this is a property that is very specific to LQR, don't, don't, don't overgeneralize it to other reinforcement learning algorithms. But this, um, I think the intuition to, ah, um, take from this is first, if you are actually applying this system, you know, don't bother to, don't, don't- I say don't, don't try to hard to estimate Sigma w, because you, you don't actually need to use it, uh, which is why when we're fitting a linear model, I didn't talk too much about how you actually estimate Sigma w. Because in LQR system, it literally doesn't matter in a mathematical sense in terms of what does the optimal policy you compute. And the second, the maybe slightly more useful intuition to take away from this, is that, ah, for a lot of MDPs, if you're building a robot, you know, ah, um, remember to add some noise to your system but the exact noise you add doesn't matter as much as one might think. So what I've seen in, in working on a lot of robots, a lot of MDPs is, you know, do add some noise to the system and make sure your learning algorithm is robust to noise. And the form of the noise you add, it does matter. I don't say it doesn't matter at all. I mean, in, in LQR, it doesn't matter at all. For other MDPs, it does matter. But I think the fact that you've remembered to add some noise is often in practice more important than the exact details of, you know, is the noise 10% higher or is the noise 10% lower. If, if the noise is 100% higher or lower, that will often make a big difference, but, ah, but, but when I'm, you know, training a model of our helicopter or something, the noise is something that, you know, I pay a little bit of attention to but I pay much more attention to making sure that the matrices A and B are accurate than, and then, you know, a little bit sloppiness in the act of using your noise model is something that an MDP can probably survive, that your policy can survive. Okay. Let's take one last question. Yes. [inaudible]. Oh V? Uh, ah, oh I see. Sorry, yes. Let me see my notes. Oh V. That was, ah, this is a V. Yes, thanks, yeah. Okay, cool. Thanks everyone. Let's break and I'll see you for the final lecture on [NOISE] Wednesday. Thanks everyone |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_9_ApproxEstimation_Error_ERM_Stanford_CS229_Machine_Learning_Autumn_2018.txt | Okay. Welcome everyone. So, um, today we'll be going over learning theory. Um, this is, um, this used to be taught in the main lectures in- and in previous offerings. Ah, this year we're gonna cover it as, ah, as a Friday section. Um, however, some of the concepts here are, ah, we gonna be covering today are- are, um, important in the sense that they kind of deepen your understanding of how machine learning kind of works under the covers. What are the assumptions that we're making and you know, um, why do things generalize, um, and- and so forth. So here's the rough agenda for today. So, ah, we're going to quickly start off with, ah, framing the learning problem and, ah, we'll go deep into bias-variance, um, trade off. We'll go- we'll spend some time over there and we look at, uh, some other ways where you can kind of, ah, decompose the error, ah, as approximation error and estimation error. Um, we'll see what empirical, ah, risk minimization is and then we'll spend some time on uniform convergence and, um, VC dimensions. So, ah, let's jump right in. Right. So the, um, so the assumptions under which we are going to be operating, um, for- for this lecture and in fact for most of- most of the- the algorithms that we'll be covering in this course, um, is that there are two main assumptions. One is that there exists a data distribution, distribution D from which x y pairs are sampled. So this is, ah, this makes sense in the supervised learning setting where, um, you're expected to learn a mapping from x to y. But, ah, the assumption also actually holds more generally even in the unsupervised, ah, setting case. The- the main assumption is that there is a ge- data-generating distribution and the examples that we have in our training set, and the ones we will be encountering when we test it, ah, are all coming from the same distribution. Right. That's- that's like the core assumption. Um, without this, um, coming up with any theory is- is- is gonna be much harder. So the assumption here is that you know, um, there is some kind of a data ge-, ah, generating process. And we have a few samples from the data generating process that becomes our training set and that is a finite number. Um, you can get an infinite number of samples from this data generating process, and the examples that we're gonna encounter, ah, at test-time are also samples from the same process. Right. That's- that's the assumption. And there is a second assumption. Um, which is that all the samples are sampled independently. Um, so, um, with these two assumptions, ah, we can imagine a learning, ah, the process of learning to look something like this. So, we have a set of x y pairs which we call as s. Um, these are just x 1, y 1, x m y m. So we have m samples from- from- sample from the data generating process and we feed this into a learning algorithm and the output of the learning algorithm is what we call as a hypothesis. Hypothesis, ah, is- is a function, um, which accepts an input- a new input x and makes a prediction about- about y for that x. So, ah, this hypothesis is sometimes also in the form of Theta hat. So if we- if we restrict ourselves to a class of hypothesis. 
For example, ah, all possible logistic regression models of, ah, of dimension n, for example, then, um, it's, you know, um, obtaining those parameters is equivalent to obtaining the hypothesis function itself. So a key thing to note here is that this s is a random variable. All right. This is a random variable. This is a deterministic function. And what happens when you feed a random variable through a deterministic function you get a? Random variable. Exactly. So, um, the hypothesis that we get is also a random variable. Right. So all random variables have a distribution associated with them. The distribution associated with the data is the distribution of- of capital D. Um, this just a fixed, ah, deterministic function. And there is a distribution associated with the, um, um, with the- with the parameters that we obtain. That has a certain distribution as well. In, um, in the sta- in- in a more statistical setting, um, we call this an estimator. So if you take some advanced statistics courses you will call, ah, what you will come across as an estimator. Here we call it a learning algorithm. Right, and the distribution of Theta, um, is also called the sampling distribution. And the, um, what's implied in this process is that there exists some Theta star, ah, or in A star. However you want to view it which is in a sense a true parameter. A true parameter that we wish, ah, to be the output of the learning algorithm, ah, but of course, we never know- we never know what, ah, Theta star is, um, and when, um, what we get out of the learning algorithm, um, is- is going to be just a- a sample from a random, um, random variable. Now, a thing to note is that this the Theta star or A star is not random. It's just an unknown constant. Not a- when we say it's not random it means there is no probability distribution associated with it. It's just a constant which we don't know, that- that's- that's the assumption under which you operate. Right. Now, um, let- let's see what's, ah, let's see what's- what's- what are some properties about this Theta- Theta-hat. So all the, um, all- all the- all the entities that we estimate are generally, um, decorated with a hat on top, which- which indicates that it's- it's something that we estimated. Um, and anything with a star is like, you know, the true or the right answer which we don't have access to it generally. So any questions with this so far? Yeah. [BACKGROUND] Yeah. So, yeah, this could be, uh, um, in case of like, uh, uh, linear or, or logi- logistic regression or linear regression generally happens to be a vector. It could be a scalar, it could be, you know, a matrix, it could be anything. Right. Uh, it's just an entity that we estimate. Um, and sometimes, uh, H star can also be so generic that it, it need not even be parameterized. It's just some function that you estimate. So, uh, yeah, so it could, it could be a vector or a scalar or, or a matrix, it could be anything. Right? So, uh, let's see what happens when we- so in the lecture, we saw, uh, this diagram for in, in the - when we were talking about bias-variance. So in case of, uh, regression, [NOISE] and, um, we saw that this was one fit, this was just, uh, let me use a different color, straight line, and, right? And we saw this as, uh, the concepts of [NOISE] sorry, underfitting and this is overfit and this is like just right. Right, so the concept of underfitting and overfitting are, kind of, closely related to bias and variance. Uh, so this is how you would view it from the data. 
So this is from the data view, right? Cause this is x, this is y. You know this is your data. Um, and if, if you look at, you know, um, look at it from a data point of view, these are the kind of, uh, different algorithms that you might get, right? However, uh, to get a more formal sense, uh, formal view into what's, what's bias and variance, it's more useful to see it from the parameter view. [NOISE]. So let's imagine we have four different learning algorithms, right? I'm just going to plot four different. And here this is the parameter space, let's say theta 1, theta 2. Let's imagine, you know, uh, we have just two parameters. It's easier to visualize theta 1 and theta 2, right. And this corresponds to algorithm A, algorithm B, C, and D. Right. There is, there is a true theta star. Let's, let's- which is unknown, right? Now, let's imagine we run through this, this process of sampling m examples running it through the algorithm, obtain a theta hat, right? And then we start with a new sample- sample from D run it through the algorithm we get a different theta hat, right? And theta hat is going to be different for different learning algorithms. So, so let's imagine first we, we sample some data that's our training set, run it through algorithm A and let's say this is the parameter we got and then we run it through Algorithm B and let's say this is the parameter we got and through C here and through D over here. And we're gonna repeat this, you know, second one maybe here, maybe here, here, here and so on and you repeat this process over and over and over. The, the key is that the number of samples per input is m, that is fixed, right? But we're gonna repeat this process and over and over and for every time we repeat it, we get a different point over here. [NOISE] Right? So, uh, each point each dot corresponds to a sample of size M, right? The number of points is basically the number of times we repeated the experiment, right? And what we see is that these dots are basically samples from the sampling distribution, right? Now, the concept of, of bias and variance is kind of visible over here. So if we were to classify this, now we would call this as bias and variance, right? So these two are algorithms that have low bias, these two are- have high variance, these two have low varia- I'm so- these two have low bias, high bias low variance, high variance. So what does this mean? Uh, so bias is basically, um, checking are the- is, is the sampling distribution kind of centered around the true parameter, the true unknown parameter? Is it centered around the true parameter? Right? And variance is, um, is, is measuring basically how dispersed the, the sampling distribution is, right? So, so formally speaking, this is bias and variance and it becomes, uh, you know pretty clear when we see it in the parameter view instead of in, uh, uh, uh, the data view. And essentially bias and variance are basically just properties of the first and second moments of your sampling distribution. So you're asking the first moment that's the mean, is it centered around the true parameter and the second moment that variance - that's literally variance of the bias-variance trade-off. Yeah. [inaudible]. Yeah. [inaudible]. Um, so this is, a, a diagram where I am using only two thetas just to fit, you know write on a whiteboard. So you, you would imagine something that has high variance, for example, this one to probably be of a much, much higher dimension, not just two, but it would still be spread out. 
It would still have like high variance. There would be points in a higher-dimensional space, you know, but more spread out. Right, so the question was, um, over here we actually had more thetas, but here, with the higher-variance plots, we are having the same number of thetas. So, uh, yeah, you could imagine this to be higher-dimensional. And also, different algorithms could have different, uh, bias and variance even though they have the same number of parameters. For example, if you had regularization, the variance would come down, for example. Let me go over, um, a few observations that we want to make. Uh, one is that as we increase the size of the data - so if every sample we feed in were to be made bigger, if you take a bigger sample every time we learn - the variance of theta hat would become smaller, right? So if we repeat the same thing but with a larger number of examples, all of these clouds would be more, um, tightly concentrated, right? So the spread is a function of how many examples we have in each, um, uh, iteration, right? So, uh, as m tends to infinity, the variance tends to zero, right? If you were to collect an infinite number of samples and run it through the algorithm, you would get some particular, um, theta hat, and if you were to repeat that with an infinite number of examples, you'd always keep getting the same, um, theta hat. Now, the rate at which the variance goes to 0 as you increase m - you can think of it as what's also, uh, called the statistical efficiency. It's basically a measure of how efficient your algorithm is in squeezing out information from a given amount of data. And if theta hat tends to theta star as m tends to infinity, you call such algorithms consistent. So, um, consistent. And if the expected value of your theta hat is equal to theta star for all m, right - so no matter how big your, um, sample size is, if you always end up with a sampling distribution that's centered around the true parameter - then your estimator is called an unbiased estimator. Yes. [inaudible]. So efficiency is, uh, basically the rate at which the variance drops to 0 as m increases. So for example, you may have one algorithm where the variance is a function of 1 over m squared, and another algorithm where the variance is a function of e to the minus m. The variance can drop at different rates relative to m, so that kind of captures, um, what efficiency is here. Right? Yeah. [inaudible] Yeah. So theta hat approaches - um, so here's one thing to be clear about here: theta star is a number, a constant, but theta hat here is a random variable, right? So what we're saying is that as m tends to infinity, theta hat - that is, its distribution - converges towards being a constant, and that constant is going to be theta star. Which means at smaller values of m, your algorithm might be centered elsewhere, but as you get more and more data, your sampling distribution's variance reduces and it also gets centered around the true theta star eventually. Okay. So, um, informally speaking, if your algorithm has high bias, it essentially means that no matter how much data or evidence you provide it, it kind of always keeps away from theta star, right?
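Here is a small simulation of the repeated-experiment picture above: repeatedly draw a fresh training set of fixed size m, fit an estimator, and look at the cloud of theta-hats. The specific estimators (ordinary least squares versus ridge) and all the numbers are illustrative choices of mine, not the lecture's; ridge is included because it shows the lower-variance, somewhat-biased behavior that the upcoming regularization discussion describes.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_star = np.array([2.0, -1.0])       # the true parameter (unknown in practice)
m, n_repeats, lam = 20, 500, 5.0         # sample size per experiment, repetitions, ridge strength

ols_hats, ridge_hats = [], []
for _ in range(n_repeats):
    X = rng.normal(size=(m, 2))                          # fresh training set of size m each repeat
    y = X @ theta_star + rng.normal(scale=1.0, size=m)   # noisy labels
    ols_hats.append(np.linalg.lstsq(X, y, rcond=None)[0])
    ridge_hats.append(np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y))

for name, hats in [("OLS", np.array(ols_hats)), ("ridge", np.array(ridge_hats))]:
    bias = hats.mean(axis=0) - theta_star     # first moment: is the cloud centered on theta_star?
    var = hats.var(axis=0).sum()              # second moment: how spread out is the cloud?
    print(name, "bias:", np.round(bias, 3), "total variance:", round(var, 4))
```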
You cannot change its mind no matter how much data you feed it, it's never going to center itself around theta star. That's like a high biased algorithm, it's biased away from the true parameter. And variance is, you can think of it as your algorithm that's kind of highly distracted by the noise in the data and kind of easily get swayed away, you know, far away depending on the noise in your data. So uh, these algorithms you would call them as those having high variance, because they can easily get swayed by noise in the data. And as we are seeing here, bias and variance are kind of independent of each other. You can have algorithms that have, you know, an independent amount of bias and variance in them, you know, there is there is no um, um correlation between ah, ah bias and variance. And one way- so the- how do we how- do we kind of fight variance? So first let's look at how we can address variance. Yes. [BACKGROUND]. So bias and variance are properties of the algorithm at a given size m. Right? So these plots were from um, were from a fixed size m and for that fixed size data, this algorithm has high bias, low variance, this algorithm has high variance and high bias and so on. Yeah. Yeah. You can you can um, you can think of it as yeah, it, you- you assume like a fixed data size. Right? So uh, fighting variance. Okay. So uh, one way to kind of ah, address if you're in a high variance situation, this will just increase the amount of data that you have, and that would naturally just reduce the variance in your algorithm. Yes. [BACKGROUND]. That is true. So you don't know upfront what uh, whether you're you're uh, in a in a high bias or high variance um, um, scenario. One way to kind of um- one way to kind of uh, uh, test that is by looking at your training performance versus test performance uh, we'll go- we'll go over that um. In fact we're gonna go into um, you know, much more detail in the main lectures of how do you identify bias and variance, here we're just going over the concepts of what are bias and what are variance. So one way to um, address variance is you just get more data, right? As you as you get more data, the- your sampling distributions kind of tend to get more concentrated. Um, the other way is what's called as regularization. So when you- when you um, add regularization like L2 regularization or L1 regularization um, what we're effectively doing is let's say we have an algorithm with high variance maybe low bias, low bias, high variance and you add regularization, right? What you end up with is an algorithm that has maybe a small bias, you increase the bias by adding regularization but low variance. So if what you care about is your predictive accuracy, you're probably better off trading off high variance to some bias and getting down- reducing your your um, variance ah, to a large extent. Yeah. [BACKGROUND]. Yeah. We'll- we- we- we're gonna uh, uh, look into that next. Right. So in order to kind of um, get a better understanding of this uh, let's imagine um. So think of this as the space of hypothesis, space of, right? So um, let's assume there is a true- there exists, this hypothesis. Let's call it g, right? Which is like the best possible hypothesis you can think of. By best possible hypothesis, I mean if you were to kind of take this uh, um, um, take this hypothesis and take the expected value of the loss with respect to the data generating distribution across an infinite amount of data, you kind of have the lowest error with this. 
So this is, you know, um, the best possible hypothesis. And then there is this class of hypotheses; let's call this class H, right? So this, for example, can be the set of all logistic regression, ah, hypotheses, or the set of all, ah, SVMs, you know. So this is a class of hypotheses, and what we end up with when we, ah, take a finite amount of data is some member over here, right? So let me call it h hat. Okay. There is also some hypothesis in this class - let me call it h star - which is the best in-class hypothesis. So within the set of all logistic regression functions, there exists some, you know, some model which would give you the lowest, um, error if you were to, ah, test it on the full data distribution, right? Um, the best possible hypothesis may not be inside, ah, your, um, hypothesis class; it's just some, you know, hypothesis that's, um, conceptually something possibly outside the class, right? So, to recap: g is the best possible hypothesis, h star is the best in the class H, and h hat is the one you learned from finite data, right? So, uh, we also introduce some new notation. Um, so epsilon of h - we will call this the risk, or generalization error, right? And it is defined to be the expectation, over (x, y) sampled from D, of the indicator of h of x not equal to y. Right? So you sample examples from the data-generating process, run them through the hypothesis, and check whether the prediction matches the true output: if it doesn't match you get a 1, and if it matches you get a 0. So on average, this is, you know, roughly speaking, the fraction of all examples on which you make a mistake. And here we are kind of thinking about this, um, from a classification point of view - checking whether the class of your output matches the true class or not. But you can also extend this to, uh, the regression setting. Uh, that's a little harder to analyze, but, you know, the generalization holds in, um, the regression setting as well; we'll stick to classification for now. And we have epsilon hat sub S of h, and this is called the empirical risk, or empirical error. And this over here is 1 over m times the sum over i equal to 1 to m of the indicator of h of x_i not equal to y_i, right? The difference here is that the first one is like an infinite process - you're, um, sampling from D forever and calculating, like, the long-term average - whereas here you have a finite number of examples that are given to you, and it's the fraction of those examples on which you make an error. Right. All right, uh, before we go further, uh, there was a question of how, um, adding regularization reduces your variance. So, um, actually, let me get back to that in a bit. Uh, so epsilon of g - this is called the Bayes error. So this essentially means: if you take the best possible hypothesis, what's the rate at which you make errors? You know, uh, and that can be non-zero, right? Even if you take the best possible hypothesis ever, it can still make some mistakes, and this is also called irreducible error. For example, if your data-generating process, you know, spits out examples where for the same x you have different y's, uh, in two different examples, then no learning algorithm can, you know, uh, do well on such cases.
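To pin down the two quantities just defined, the generalization error and the empirical error, here is a toy sketch. The distribution, the hypothesis, and all the numbers are invented for illustration; the generalization error is approximated by a very large fresh sample, since the true expectation over D is not computable in general. Notice that the 10% label noise plays the role of irreducible error, which the next part of the lecture picks up.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_from_D(size):
    """Toy data-generating distribution: x ~ N(0,1), y = 1{x > 0} with 10% label noise."""
    x = rng.normal(size=size)
    y = (x > 0).astype(int)
    flip = rng.random(size) < 0.1          # irreducible noise
    return x, np.where(flip, 1 - y, y)

def h(x, threshold=0.3):
    """Some fixed hypothesis: predict 1 when x exceeds a threshold."""
    return (x > threshold).astype(int)

# Empirical risk: fraction of mistakes on a finite training sample of size m
x_train, y_train = sample_from_D(50)
emp_risk = np.mean(h(x_train) != y_train)

# Generalization risk: expectation over D, approximated with a very large fresh sample
x_big, y_big = sample_from_D(1_000_000)
gen_risk = np.mean(h(x_big) != y_big)

print("empirical risk (m=50):", emp_risk, " generalization risk (approx):", round(gen_risk, 4))
```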
That's just one kind of irreducible error; there can be other kinds of irreducible, uh, errors as well. And epsilon of h star minus epsilon of g is called the approximation error. So this essentially means: what is the price that we are paying for limiting ourselves to some class, right? It's the difference between the best possible error that you can get overall and the best possible error you can get from within the class, from h star. Right, so this is, um, an attribute of the class - what's the cost we are paying for restricting ourselves to a class? Then you have, uh, epsilon of h hat minus epsilon of h star, and this we call the estimation error. The estimation error means: given the data that we got, the m examples that we got, and we estimated, you know, using our estimator, some h hat - what's the error due to estimation? That's estimation, as opposed to approximation. All right. So the error of g is the Bayes error, the gap between this error and the best in class is the approximation error, and the gap between the best in class and the hypothesis that you end up with is called the estimation error, right? And, uh, it's easy to see that, um, epsilon of h hat is actually equal to estimation error plus approximation error plus irreducible error. Right? It's pretty easy - you know, if you just add them up, all these terms cancel out and you're just left with, uh, um, epsilon of h hat. Um, so it's kind of useful to think about your generalization error as different components. Um, some error which you just cannot, you know, reduce no matter what hypothesis you pick, no matter how much training data you have - there's no way you can get rid of the irreducible error. And then you make some decisions - that you're going to limit yourself to neural networks or logistic regression or whatever - and thereby you're defining a class of all possible models, and that has a cost itself, and that's your approximation error. And then you are working with limited data - and this is generally due to data, right? - and with the limited data that you have, and possibly due to some nuances of your algorithm, you also have an estimation error, right? We can further see that the estimation error can be broken down into estimation variance and estimation bias, right? Um, and, uh, you can now, therefore, write the generalization error as estimation variance plus estimation bias plus approximation error plus irreducible error. And what we commonly call bias and variance are roughly these pieces: the estimation variance is what we call variance, the estimation bias together with the approximation error is what we call bias, and the last piece is just irreducible. So sometimes you see the bias-variance decomposition and sometimes you see the estimation-approximation error decomposition. They are somewhat related; they're not exactly the same. So, uh, the bias is basically trying to capture why h hat is far from g, right? Why is it staying away from g? You know, why did our hypothesis stay away from the true hypothesis? And that could be because your class, uh, is kind of too small, or it could be due to other reasons, uh, such as, you know, um, as we'll see, maybe regularization that kind of keeps you away from certain, uh, hypotheses, right? And the variance is generally due to - it's almost always due to - having small data. It could be due to other, uh, reasons as well. But these are two different ways of, uh, decomposing your, um, your error. So now, um, if you have high bias, how do you fight high bias?
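Spelled out, the "just add them up and things cancel" remark from above is this telescoping identity (stated in the notation already defined):

```latex
\varepsilon(\hat{h})
= \underbrace{\left(\varepsilon(\hat{h}) - \varepsilon(h^{*})\right)}_{\text{estimation error}}
+ \underbrace{\left(\varepsilon(h^{*}) - \varepsilon(g)\right)}_{\text{approximation error}}
+ \underbrace{\varepsilon(g)}_{\text{irreducible (Bayes) error}} .
```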
Fight high bias. So how would you fight high bias? Any guesses. [inaudible] Yeah exactly. So one way is to just, you know make your h bigger, right. Make your h bigger. And also you can - you can try, you know different algorithms, um - um uh, after making your h bigger. And what this generally means is what we saw there was regularization kind of, you know reduces your - your, um, variance by paying a small cost in bias and over here, you know, um. [NOISE] So let's say your algorithm has some bias, right. So it has a high bias and some variance, right, and you make H bigger, your, your class bigger right and this generally results in something which reduces your bias but also increases your variance, right? So, with, with this picture you can, you can also see, you know, what's the effect of, um, how, how does variance come into the picture? Now just by having a bigger class, there is a higher probability that the hypothesis that you estimate can vary a lot, right, if you reduce your- the space of hypothesis, you may be increasing your bias because you may be moving away from g, but you're also effectively reducing your variance, right. So that's, that's the, the one of the, you know, trade off that you observe that any step, you- a step that you take for example in, um, reducing bias by making it bigger also makes it possible for your h hat to land at much, you know, a- at a wider space and increases your variance. And if you take a step to reducing your variance by maybe making your, um, your, your class smaller, you may end up making it smaller by being away from the end thereby increase your, your, um, um, increase your bias. So, when you, when you add regularization, you know, th- the question, uh, uh, somebody asked before of how does, um, in, how does adding regularization decrease the variance? By adding regularization, you are effectively, kind of shrinking the class of hypothesis that you have. You start penalizing those hypotheses whose Theta is very, is very large, and in a way you're kind of, you know, shrinking the class of hypothesis that you have. So, if you shrink the class of hypothesis your, your variance is kind of reduced because, you know, there's much smaller wiggles room for your estimator to place your h hat. And, you know, if you shrink it by going away from, from, uh, from g, you, you also introduce bias. That's like, you know, the bias variance, uh, um, trade off. Any questions on this so far? Yeah. [BACKGROUND].Yeah, you, you probably wanna think of each of these, you probably wanna think of this as a generalized version of this, right, so here we have, like, fixed Theta 1, Theta 2, but you know, uh, because you could parameterize them into, uh, uh, a few parameters you can kind of plot it in a metric space but that's like a more general, um, um, like a bag of hypotheses, and, you know, but in any case in both of- both those diagrams, a point here is one hypothesis, a point there is one hypothesis. Here it's parameterized, here it's not parameterized. Yes. [BACKGROUND]. The thing is we differ, d, um, so the question is, how- what if we, we shrink it towards h star, right. The thing is, uh, we don't know where h star is, right. If we knew it, we didn't even need to learn anything. We could just go straight there, right. So, um, yeah. [BACKGROUND]. With regularization? So the question is, when we add regularization, are we sure that the bias is going up? No, we, we don't know and, and this is a common scenario what happens, right. 
You, when you add regularization, you, you, you reduce the variance for sure but you're very likely gonna introduce some bias in that process. [BACKGROUND]. So if you add regularization, you're shrinking your hypothesis space in some ways. So you're kind of moving away from 2g. So you're kind of adding a little bit of bias. You're very likely to add some bias in that process. Yes, so, it's, uh, so I, I, I would encourage you to, you know, kind of, after this lecture to think about this a little more slowly, it's, it's, it takes a while to kind of internalize this, the concept of bias and variance and, and, um, uh, It's not very intuitive but, but, uh thinking about it more definitely helps. All right, an- any other questions before we move on? [BACKGROUND]. So an example for a hypothesis class, right? So the- an example would be, um, the set of all logistic regression models, right? And, uh, when you do gradient descent on your, you know, logistic regression class, you're kind of implicitly restricting yourself to set up possible logistic regression models, that's kind of implicit. [BACKGROUND]. So, the h is the output of the learning algorithm, right? So you feed and input your algorithm. Like this is not the model. This the learning algorithm like, this is, like gradient descent for example. And the output of that is the parameters that you learned that converge to. Right. So d- so, yeah, you, you probably don't wanna think about this as the model that you learned but this as the, like the training process and the output of the training process is a model that you learn. And that is a point in your, in the class of hypotheses. [BACKGROUND]. Yes, so, so, you fix, um, that, uh, th- the class of learning models, you, you, say I'm gonna only gonna learn logistic regression models, right? For different, different samples of data that you feed it as your training set, you're gonna get, learn a different Theta hat. [BACKGROUND]. Yes, the- they have to be within the class of hypotheses. All right, so let's move on. Next, we come across this concept called empirical risk minimization. [NOISE]. ERM. So this is the Empirical Risk Minimizer. Right. So, so the empirical risk minimizer is a learning algorithm. Right. It is one of those kind of boxes that we drew. It is, you know, ah- so in the box that we drew earlier as learning algorithm, right. So the- the- the diagram that we drew earlier based on which we- we ah, reasoned everything so far, didn't actually tell you what actually happens inside. It could be doing gradient descent, it could just do something else. It could be, you know, some- some, you know, smart programmer who's written a whole bunch of if, else and just returns a theta, it could be anything. Right. Uh, and no matter what kind of algorithm was used, the- the bias-variance theory still holds. Right. Now we're going to look at, ah, a very specific type of learning algorithms called the empirical risk minimizer. Right. So, um, and this was feed into your algorithm and you get h star, h hat ERM. Right? Now, h, um, h hat ERM is equal to- what is ERM, empirical risk minimization? It's what we've been doing so far in the course. Right? We, we tried to find a minimizer in a class of hypotheses that minimizes the average training error. Right. Um, so for example, um, this is trying to minimize the training error from a classification, ah, ah, perspective. 
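Written out, the hypothesis being described is the empirical risk minimizer, the member of the class with the lowest training error: h hat ERM equals the argmin over h in H of epsilon hat sub S of h. Here is a toy sketch of ERM over a made-up finite class of threshold classifiers; the data and the class are purely illustrative, not anything from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set S of m labeled examples (stand-ins; any labeled data would do)
m = 100
x = rng.normal(size=m)
y = (x > 0.2).astype(int)
y = np.where(rng.random(m) < 0.05, 1 - y, y)        # a little label noise

# Hypothesis class H: threshold classifiers h_c(x) = 1{x > c} over a grid of thresholds c
H = np.linspace(-2, 2, 81)

def empirical_risk(c):
    return np.mean(((x > c).astype(int)) != y)       # epsilon-hat_S(h_c), the training error of h_c

# Empirical risk minimization: pick the hypothesis in H with the smallest training error
risks = np.array([empirical_risk(c) for c in H])
c_hat = H[np.argmin(risks)]
print("ERM threshold:", c_hat, "training error:", risks.min())
```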
This is, kind of, minimizing the training error - or, equivalently, increasing the training accuracy - which is different from what logistic regression actually did, where we were doing maximum likelihood, or minimizing the negative log-likelihood. It can be shown that, ah, losses like the logistic loss can be viewed as good approximations to, um, this ERM setup, and this theory should, ah, hold nonetheless. Um, all right. So if we are limiting ourselves to that class of algorithms which work by minimizing the training loss, right - as opposed to something that, say, returns a constant all the time, or does something else - if we limit ourselves to, um, empirical risk minimizers, then we can come up with more theoretical results, for example uniform convergence, which we are gonna look at right now. Right. So we're limiting ourselves to empirical risk minimizers, and starting off with, er, uniform convergence. Right. So there are two central questions that we are kind of interested in. So, ah, one question is: if we do empirical risk minimization - that is, if we just reduce the training loss, right - what does that say about the generalization error? That is basically comparing epsilon hat of h versus epsilon of h. So, you know, consider some hypothesis, right, and it gives you some amount of training error - what does that say about its generalization error? That's one central question we wanna, um, consider. And the second one is: how does the generalization error of our learned hypothesis compare to the best possible generalization error in that class? Right. Note we're only talking about h star and not g there - h star is the best in class, um. So these are the two central questions that we wanna, um, explore. And for this, we're gonna use two tools. Right. So one is called the union bound. Right. What's the union bound? Um, if we have k different events A_1, A_2, up to A_k - and these need not be independent - then the probability of A_1 union A_2 union, dot dot dot, union A_k is less than or equal to the sum of the probabilities of each of them. If this looks trivial, it is trivial. It's, um, probably one of the axioms in your undergrad probability class. But the probability of any one of these events happening is less than or equal to the sum of the probabilities of, ah, each of them, ah, happening. Right. And then we have a second tool, right, called Hoeffding's inequality. We're only going to state the inequality here; ah, there are, um, supplemental notes on the website that actually prove the Hoeffding inequality. You can, ah, go through that, um, but here we're only going to state the result. In fact, throughout this session we are going to state results; we're not gonna prove anything. Um, so, ah, let Z_1, Z_2, up to Z_m be sampled independently from some Bernoulli distribution with parameter phi. And let phi hat be their average, that is, 1 over m times the sum over i of the Z_i, and let there be a gamma greater than zero, which we call the margin. So Hoeffding's inequality basically says: the probability that the absolute difference between the estimated phi parameter and the true phi parameter is greater than the margin gamma can be bounded by 2 times the exponential of minus 2 gamma squared m. Right? Not very obvious, but, you know, you can show this. What it's basically saying is: there is some parameter, between 0 and 1, of a Bernoulli distribution.
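Before unpacking the intuition, here is a quick Monte Carlo sanity check of the statement. All the numbers (phi, m, gamma, number of trials) are arbitrary illustrative choices of mine; the point is only that the empirical deviation probability sits below the Hoeffding bound 2 exp(-2 gamma^2 m).

```python
import numpy as np

rng = np.random.default_rng(0)
phi, m, gamma, trials = 0.3, 100, 0.1, 100_000

# Repeat the experiment many times: draw m Bernoulli(phi) samples, form phi_hat,
# and count how often |phi_hat - phi| exceeds the margin gamma.
Z = rng.random((trials, m)) < phi
phi_hat = Z.mean(axis=1)
empirical_prob = np.mean(np.abs(phi_hat - phi) > gamma)

hoeffding_bound = 2 * np.exp(-2 * gamma**2 * m)
print("P(|phi_hat - phi| > gamma) ~", empirical_prob, "<= bound", round(hoeffding_bound, 4))
```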
The fact that it is between 0 and 1 means it's- it's bounded. And- and that's a key requirement for, ah, the Hoeffding's inequality. And now, we take samples from this Bernoulli distribution, and the estimator for this is basically- and these are just 0s or 1s. Z- Z- each of the Z is either a 0 or 1. The sample of 0 or a 1 with probability, um, um, Phi, and the estimator is basically just the averages of your samples. Right. And, um, the absolute difference between the estimated value and the true value, the probability that this difference becomes greater than some margin Gamma is bounded by this expression. Right. So there are a lot of things happening here. So you probably want to, um, um, you know, slowly think through this. So this is a margin. All right. And this is like- basically like the deviation of the error. [NOISE] Right. Um, the absolute value of how- how- how far away your estimated values from- from the true. And you'd like it to be small- closer. So you- you- you probably want, ah, your -your Phi hat and phi, to be not more than, I don't know, 0.001. Right. So in which case, if the absolute value between, ah, ah, the estimated and, um, the true parameter is greater than 0.01, if that's the margin your- that you're interested in. Then this, ah, the Hoeffding's inequality proves that if you were to repeat this process over and over and over, the number of times phi hat is going to be great- is going to be farther than 0.001 from the true parameter, it's going to be less than this expression, which is a function of m. Right. And that is- you- you- you can kind of, ah, believe it because as m increases, this becomes smaller, which means the probability of, um, your estimate deviating more than a certain margin only reduces as you increase m. Right. So this is Hoeffding's inequality and we're gonna use this. [inaudible]. Oh, yeah. Questions? [inaudible]. Not, so, so the question is, is h star, uh, the limit of h_r as M goes to infinity? Uh, it is h star in, in the limit as M goes to infinity, if it is a consistent estimator, right? So we, we, we went over the concept of consistency. Given infinite data, will you eventually get to the right answer? And if your estimator is not consistent, then it will- it need not be. So, uh, in general h hat need not converge to h star as you get an infinite amount of data. [NOISE] All right? So, uh, now we wanna use, um, uh, these tools, tool 1 and tool 2 to answer our- like the central questions. [NOISE] Any other questions? Yeah. [BACKGROUND] This is a more limited version of Hoeffding's inequality and yes, uh, if we limit ourselves to a Bernoulli variable, uh, ba- um, which has some parameter phi and you take samples from it. And you construct an estimator which is the average of th- the samples of the 0s and 1s, then, um, this inequality holds. That's- thi- this inequality is called the Hoeffding's inequality. Yes. [BACKGROUND] So if you're, um, in general, there, there are- there are- there is this class of algorithms called maximum likelihood algorithms, maximum likelihood estimators and a pure maximum likelihood estimator is generally consistent. If you include regularization, then it need not be- it need not be, uh, uh, uh, consistent though, uh, I'm not very sure about that. I'm not very sure about that. [NOISE] Yeah, sure. Yeah. So basically like- if you think about a neural net where you have something that's completely [inaudible] neural net is not always consistent. Yeah. 
So the- basically, um, um, um, I know for the mic, uh, wha- what, what, what he responded was, um, if you have an algorithm like a neural net which is, um, which is non-convex, you may actually not end up with the same, uh, uh, result even if you, uh, um, increase, um, increase like, uh, the number of, um, uh- though I would probably call the, uh, uh, the fact- I, I would probably think of the non-convexity to be part of an estimation bias, um, because you could in theory always find like the global minima of a neural network. It's just that there's some bias in our estimator that we are using gradient descent and we cannot solve it. Okay. So now, uh, let's- let's use these two tools, uh, and for that, uh, we're gonna start [NOISE] how do we look at this diagram, right? So, [NOISE] so over here, um, we have hypotheses. [NOISE] Here we have error, [NOISE] and let's think of this. [NOISE] There's actually one, one curve which I'm trying to make it thick and probably make it look like multiple curves, this is just one curve and this we will call it as. [NOISE] So this is the generalization risk or the, uh, uh, the generalization error of every possible hypothesis, uh, in our class, right? So pick one hypothesis that's gonna be somewhere on this axis, calculate the generalization error, not the empirical, the generalization error and- no that's the height of that curve, right? And we also have something like this. [NOISE] Right? So this dotted line now corresponds to sum each of s_h. Now let's, let's sample a set of m examples and calculate the empirical error of all our hypotheses in our class and plot it as a curve, right? Any questions on what, what, what these two are? Yeah. [BACKGROUND] It need not meet. I'm, I'm just, uh, uh, in fact, thi- this is very likely not even a straight line, you're just thinking of all, all possible hypotheses. It may not be convex. Um, this just to, to, um, get some ideas, um, um, get, get better intuitions on some of these ideas. Yes. [BACKGROUND] So, uh, the black line, the thick black line is the generalization error of all your hypotheses, right? And let's say you sample some, some, some data, right? Let's call it S. On that sample, you have training error for all possible hypotheses, right? [NOISE] We haven't not learned anything, right? It's, it's, uh, uh, this is the generalization error and this is the empirical error for the given S, right? Now, uh, in order to app- apply Hoeffding's Inequality here, right? So let's consider some h_i, right? This is some hypothesis. We- we don't know. So we start with some random hypotheses, right? And- so by starting with some hypotheses like think of this as you start with some parameter, [NOISE] right? And, uh, let's- right. So the height of this line up to the, the thick black curve is basically, um, the generalization error of h_i is the height to the thick black curve. So let me call this Epsilon of h_i, right? And the height to the dotted curve until here. And this is Epsilon hat of h_i. I'm gonna ignore the S for now, right? And this corresponds to like the, the sample that we obtain. Now one thing, ah, you can, you can check is that the expected value of- [NOISE] where the expectation is with respect to the data's sample. So what this means is that, ah, for one particular sample you, ah, -this is the generalization error you got. Take another set of samples, that curve might look som- some, you know, some other way, and, you know, the height of the dotted line would be there. 
So in general on average, if you average across all possible training samples that you can get, the expected value of the height to the dotted line is gonna be the height to the thick line. Right? That's justified. Now here if you apply Hoeffding's inequality, you basically get that the probability of the absolute difference between the empirical error and the generalization error being greater than Gamma is less than or equal to 2 times the exponential of minus 2 gamma squared m. And this is basically Hoeffding's inequality as we have it right here, except in place of phi and phi hat, we have the true generalization error and the empirical error. Any questions on this so far? So what we are saying is essentially the gap between the generalization error and the empirical error - the gap being greater than some margin Gamma - is gonna be bounded by this expression. Right? So loosely speaking what this means is, as we increase the size m, if we plot the set of all dotted lines for a larger m, they are gonna be more concentrated around the black line. Does that make sense? So take a moment and think about it. This dotted line corresponds to an S of some particular size m. We could take another sample of a fixed set of examples, and that might look kinda something like this. And take another sample of size m, and that might look something like [NOISE] this. Now, consider the set of all deviations from the black line to every possible dotted line along the vertical line at h_i. Right? Now this gap is greater than some margin Gamma with probability less than this term over here. Right? So it essentially means that if you start plotting dotted lines with a bigger m, where the set of all those dotted lines corresponds to a bigger m, they are gonna be much more tightly concentrated around the true generalization error of that h. That make sense? And you're basically applying Hoeffding's inequality to this gap over here instead of some phi; that's written out below. Now, that's good. But there's a problem here. The problem here is that we started with some hypothesis, and then averaged across all possible data that you could sample. But in practice, this is useless. Because in practice we start with some data, and run the empirical risk minimizer to find the best h for that particular data. Right? Which means that h and the data that you have are not really independent. You chose the h to minimize the empirical risk for the particular data that you were given in the first place. Right? So to fix this, what we wanna do is basically extend this result that we got to account for all h. Now we want a probabilistic bound on the gap between the generalization error and the empirical error that holds for all h. You know what that bound is gonna look like. And this is basically called uniform convergence. I'll just call it uniform convergence because we are trying to see how the empirical risk curve converges uniformly to the generalization risk curve. That's called uniform convergence, which you can apply to functions in general, but here we are applying it to the risk curves across our hypotheses. And we can show- I'm gonna just skip the math.
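To make the board expressions concrete, here is that pointwise step in symbols, assuming epsilon(h) denotes the generalization error and epsilon-hat_S(h) the training error of a fixed hypothesis h on a sample S of size m:

\mathbb{E}_S\!\left[\hat{\varepsilon}_S(h)\right] = \varepsilon(h),
\qquad
P\!\left(\,\left|\hat{\varepsilon}_S(h) - \varepsilon(h)\right| > \gamma\,\right) \;\le\; 2\exp\!\left(-2\gamma^2 m\right).

This is exactly Hoeffding's inequality, because each training example contributes a 0/1 indicator of a mistake - a Bernoulli variable with mean epsilon(h) - and epsilon-hat_S(h) is the average of those indicators.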
So, um, this we showed using Hoeffding's inequality, and you can apply the union bound for unioning across all h. Except we can- first we're going to limit ourselves to- all right, so let me start over. So we got this bound for a fixed h. Right? But we are interested in getting the bound for any possible h. So that's our next step. And the way we're gonna extend this pointwise result to all of them is gonna look different for two possible cases. One is the case of a finite hypothesis class, and the other case is gonna be the case of infinite hypothesis classes. So what does it look like? So, [NOISE] let's first consider finite hypothesis classes. So first we're gonna assume that the class H has a finite number of hypotheses. The result by itself is not very useful, but it's gonna be like a building block for the other case. So let's assume that the number of hypotheses in this class is some number K. Right? We can show that- I'm not gonna go over the derivation, but I'm just gonna write out the result. It's pretty intuitive. Basically what we do is we apply the union bound over all K hypotheses, and we end up just multiplying by a factor of K. So what we get is: the probability that there exists some hypothesis in H such that the absolute difference between the empirical error and the generalization error is greater than Gamma is less than or equal to K times the probability for any one of them, which is equal to K times 2 exponential of minus 2 gamma squared m. And then we flip it over, we negate it, and we get: the probability that for all hypotheses in our class, the absolute difference between the empirical risk and the generalization risk is less than Gamma is greater than or equal to 1 minus 2K exponential of minus 2 gamma squared m. Okay. So with probability at least 1 minus this expression - which we can call Delta - with probability at least so much, for all hypotheses, our margin is gonna be less than some Gamma. This is just Hoeffding's inequality plus the union bound; negate the two sides and you get this. And you can go over this slowly later from the notes; the notes go over this in more detail. Right? Now, basically what we have is: let Delta equal 2K exp of minus 2 gamma squared m. So we now have a relation between Delta, which is like the probability of error - by error I mean that the empirical risk and the generalization risk are farther apart than some margin - and Gamma, which is called the margin of error, and m, which is your sample size. So what this basically tells us is, if your algorithm is the empirical risk minimizer - it could have been any kind of algorithm, but if it is the kind that minimizes the training error - then just by changing the sample size, you can get a relation between the margin of error and the probability of error and relate it to the sample size, right? So what we can do with this relation is basically fix any two and solve for the third, and that gives us some actionable results. I'm only gonna go over one of those. So let's fix Gamma and Delta to be greater than zero, and we solve for m, and we get m greater than or equal to 1 over 2 gamma squared, times log of 2K over Delta.
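In symbols, assuming the class has |H| = K hypotheses and the sample has size m, the two steps just described and the sample-complexity consequence are roughly:

P\big(\exists\, h \in \mathcal{H}:\ |\hat{\varepsilon}(h) - \varepsilon(h)| > \gamma\big) \;\le\; 2K\exp(-2\gamma^2 m),

P\big(\forall\, h \in \mathcal{H}:\ |\hat{\varepsilon}(h) - \varepsilon(h)| \le \gamma\big) \;\ge\; 1 - 2K\exp(-2\gamma^2 m) \;=\; 1 - \delta,

and setting \delta = 2K\exp(-2\gamma^2 m) and solving for m gives

m \;\ge\; \frac{1}{2\gamma^2}\,\log\frac{2K}{\delta}.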
So what this means is: with probability at least 1 minus Delta - which means probability at least 99% or 99.9%, for example - with probability at least 1 minus Delta, the margin of error between the empirical risk and the true generalization risk is gonna be less than Gamma, as long as your training size is bigger than this expression, right. That's something actionable for us, right. Now, theory can be useful. So this is also called the sample complexity result. [NOISE] right? And basically, what this means is as you increase m and you sample different [NOISE] data-sets, your dotted lines are gonna get closer and closer to the thick line, which means minimizing on the dotted line will also get you closer to the generalization error. So this is basically telling you how minimizing the empirical risk gets you closer to generalization, right? Okay, so we started off with two questions relating the empirical risk to the generalization risk. Now, let's explore the second question. What about the generalization error [NOISE] of [NOISE] our minimizer compared to the best possible in class? So let's look at this diagram again. Let's say we started with this dotted curve, right. And the minimizer of that would be- sorry, the diagram is a little, uh, [NOISE] let me erase the previous one [NOISE] right? So this is h-hat. And this has a particular generalization error, right? That is the point- uh, let's assume we got this data-set, we ran the empirical [NOISE] risk minimizer and we obtained this hypothesis. And when we deploy this in the real world, its error is gonna be so much, right? Now, how does this compare [NOISE] to the performance of the minimizer of the best possible [NOISE] - so this is h-star, best in class, right? Now, we want to get a relation between this error level and this error level. We got one bound that relates this to this, and now we want something that relates this to this. Now, how do we do that? It's pretty straightforward. So the generalization error of h-hat - that's this dot over here - is less than or equal to the empirical risk of h-hat plus Gamma. So we got the result, using Hoeffding and the union bound, that the gap between the dotted line and the thick black line is always less than Gamma, right? And it's the absolute value, so we can write it this way as well. So basically, we start from the thick black line and drop down to the dotted line. And this is gonna be less than the empirical error of h-star plus Gamma. Why is that? Because the empirical error of h-hat, by definition, is less than or equal to the empirical error of any other hypothesis, including the best in class. Because this is the training error, not the generalization error, right? So which means this is less than or equal to. So we dropped from the generalization error to the training error, and we said this training error is always gonna be less than the empirical error of the best-in-class; you can see that the best-in-class sits higher on the empirical error curve. And this gap is also bounded, because we proved uniform convergence - that the gap between the dotted line and the thick line is bounded by Gamma for any h, right?
And this is therefore the generalization error of h-star plus 2 Gamma, because we added the extra margin. So we wanted the relation between our hypothesis' generalization error and the generalization error of the best-in-class hypothesis. So we dropped from the generalization error to the empirical error of our hypothesis, related that to the empirical error of the best in class, and again bounded that by the gap between those two. So we've got a bound relating the generalization error of our hypothesis to the best-in-class generalization error. Any questions on this? So the result basically says: with probability 1 minus Delta, and for training size m, [NOISE] the generalization error of [NOISE] the hypothesis from the empirical risk minimizer is going to be within the best-in-class generalization error plus 2 times the square root of 1 over 2m, times log of 2K over Delta. So you can get this when you- so in this expression, if you set this equal to Delta and solve for Gamma, you will get this. Any questions? [NOISE] I think we're already over time. So the case for infinite classes is an extension to this. Maybe I'll just write the results. So there is a concept called VC dimension, which is a pretty simple concept but [NOISE] we won't be going over it today. VC dimension basically says- so VC dimension is, you can think of it as trying to assign a size to an infinite-size hypothesis class. For a fixed-size hypothesis class, we had K to be the size of the hypothesis class. So the VC dimension [NOISE] of some hypothesis class is gonna be some number, right? Some number which is like the size of the hypothesis class; it's basically telling you how expressive it is. And using the VC dimension - there are very nice geometrical meanings of VC dimension - you can get a similar bound. But now it's not for finite classes anymore. Some big O of [NOISE], right? So in place of this margin, we ended up with a different margin that is a function of the VC dimension. And the key takeaway from this is that the number of data examples, the sample complexity that you want, is generally on the order of the VC dimension to get good results. That's basically the main result from that, right? With that, I guess we'll break for the day and we'll take more questions. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_11_Introduction_to_Neural_Networks_Stanford_CS229_Machine_Learning_Autumn_2018.txt | Hello everyone. Uh, welcome to CS229. Um, today we're going to talk about, um, deep learning and neural networks. Um, we're going to have two lectures on that, one today and a little bit more of it on, ah, Monday. Um, don't hesitate to ask questions during the lecture. Ah, so stop me if you don't understand something and we'll try to build intuition around neural networks together. We will actually start with an algorithm that you guys have seen, uh, previously called logistic regression. Everybody remembers logistic regression? Yes. Okay. Remember it's a classification algorithm. Um, we're going to do that. Explain how logistic regression can be interpreted as a neural network- specific case of a neural network and then, we will go to neural networks. Sounds good? So the quick intro on deep learning. So deep learning is a- is a set of techniques that is let's say a subset of machine learning and it's one of the growing techniques that have been used in the industry specifically for problems in computer vision, natural language processing and speech recognition. So you guys have a lot of different tools and, and uh, plug-ins on your smartphones that uses this type of algorithm. Ah, the reason it came, uh, to work very well is primarily the, the new computational methods. So one thing we're going to see to- today, um, is that deep learning is really really computationally expensive and we- people had to find techniques in order to parallelize the code and use GPUs specifically in order to graphical processing units, in order to be able to compute, uh, the, the, the, the computations in deep learning. Ah, the second part is the data available has been growing after, after the Internet bubble, the digitalization of the world. So now people have access to large amounts of data and this type of algorithm has the specificity of being able to learn a lot when there is a lot of data. So these models are very flexible and the more you give them data, the more they will be able to understand the salient feature of the data. And finally algorithms. So people have come up with, with new techniques, uh, in order to use the data, use the computation power and build models. So we are going to touch a little bit on all of that, but let's go with logistic regression first. Can you guys see in the back? Yeah? Okay, perfect. So you remember, uh, what logistic regression is? What- we are going to fix a goal for us, uh, that, uh, is a classification goal. So let's try to, to find cats in images. So find cats in images. Meaning binary classification. If there is a cat in the image, we want to output a number that is close to 1, presence of the cat, and if there is no cat in the image, we wanna output 0. Let- let's say for now, ah, we're constrained to the fact that there is maximum one cat per image, there's no more. If you are to draw the logistic regression model, that's what you would do. You would take a cat. So this is an image of a cat. I'm very bad at that. Um, sorry. In computer science, you know that images can be represented as 3D matrices. So if I tell you that this is a color image of size 64 by 64, how many numbers do I have to represent those pixels? [BACKGROUND] Yeah, I heard it, 64 by 64 by 3. Three for the RGB channel, red, green, blue. Every pixel in an image can be represented by three numbers. 
One representing the red filter, the green filter, and the, and the blue filter. So actually this image is of size 64 times 64 times 3. That makes sense? So the first thing we will do in order to use logistic regression to find if there is a cat in this image, we're going to flatten th- this into a vector. So I'm going to take all the numbers in this matrix and flatten them in a vector. Just an image to vector operation, nothing more. And now I can use my logistic regression because I have a vector input. So I'm going to, to take all of these and push them in an operation that we call th- the logistic operation which has one part that is wx plus b, where x is going to be the image. So wx plus b, and the second part is going to be the sigmoid. Everybody is familiar with the sigmoid function? Function that takes a number between minus infinity and plus infinity and maps it between 0 and 1. It is very convenient for classification problems. And this we are going to call it y hat, which is sigmoid of what you've seen in class previously, I think it's Theta transpose x. But here we will just separate the notation into w and b. So can someone tell me what's the shape of w? The matrix W, vector matrix. Um, what? Yes, 64 by 64 by 3 as a- yeah. So you know that this guy here is a vector of 64 by 64 by 3, a column vector. So the shape of x is going to be 64 by 64 by 3 times 1. This is the shape and this, I think it's- that if I don't know, 12,288 and this indeed because we want y-hat to be one-by-one, this w has to be 1 by 12,288. That makes sense? So we have a row vector as our parameter. We're just changing the notations of the logistic regression that you guys have seen. And so once we have this model, we need to train it as you know. And the process of training is that first, we will initialize our parameters. These are what we call parameters. We will use the specific vocabulary of weights and bias. I believe you guys have heard this vocabulary before, weights and biases. So we're going to find the right w and the right b in order to be able, ah, to use this model properly. Once we initialized them, what we will do is that we will optimize them, find the optimal w and b, and after we found the optimal w and b, we will use them to predict. Does this process make sense? This training process? And I think the important part is to understand what this is. Find the optimal w and b means defining your loss function which is the objective. And in machine learning, we often have this, this, this specific problem where you have a function that you know you want to find, the network function, but you don't know the values of its parameters. In order to find them, you're going to use a proxy that is going to be your loss function. If you manage to minimize the loss function, you will find the right parameters. So you define a loss function, that is the logistic loss. Y log of y hat plus 1 minus y log of 1 minus y hat up. You guys have seen this one. You remember where it comes from? Comes from a maximum likelihood estimation, starting from a probabilistic model. And so the idea is how can I minimize this function. Minimize, because I've put the minus sign here. I want to find w and b that minimize this function and I'm going to use a gradient descent algorithm. Which means I'm going to iteratively compute the derivative of the loss with respect to my parameters. And at every step, I will update them to make this loss function go a little down at every iterative step. 
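As a rough sketch of what that training process could look like in NumPy - the variable names, the random placeholder image, the number of iterations, and the learning rate are all made up for illustration, and the gradient with respect to w uses the standard result (y_hat minus y) times x that follows from the sigmoid derivative mentioned above:

import numpy as np

def sigmoid(z):
    # maps any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

n = 64 * 64 * 3             # 12,288 input features (flattened 64x64x3 image)
x = np.random.rand(n, 1)    # stand-in for one flattened cat image
y = 1.0                     # label: 1 = cat, 0 = no cat

w = np.zeros((1, n))        # weights, shape 1 x 12,288
b = 0.0                     # bias
lr = 0.01                   # learning rate (illustrative value)

for it in range(1000):
    y_hat = sigmoid(w @ x + b)                                   # forward pass, 1 x 1
    loss = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))    # logistic loss
    dz = y_hat - y           # derivative of the loss w.r.t. the linear part
    dw = dz * x.T            # gradient w.r.t. w, shape 1 x n
    db = dz                  # gradient w.r.t. b
    w -= lr * dw             # gradient descent update
    b -= lr * db.item()
    if it % 100 == 0:
        print(loss.item())   # the loss should go down as we iterate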
So in terms of implementation, this is a for loop. You will loop over a certain number of iteration and at every point, you will compute the derivative of the loss with respect to your parameters. Everybody remembers how to compute this number? Take the derivative here, you use the fact that the sigmoid function has a derivative that is sigmoid times 1 minus sigmoid, and you will compute the results. We- we're going to do some derivative later today. But just to set up the problem here. So, the few things that I wanna- that I wanna touch on here is, first, how many parameters does this model have? This logistic regression? If you have to, count them. So this is the numb- 089 yeah, correct. So 12,288 weights and 1 bias. That makes sense? So, actually, it's funny because you can quickly count it by just counting the number of edges on the- on the- on the drawing plus 1. Every circle has a bias. Every edge has a weight because ultimately this operation you can rewrite it like that, right? It means every weight has- every weight corresponds to an edge. So that's another way to count it, we are going to use it a little further. So we're starting with not too many parameters actually. And one thing that we notice is that the number of parameters of our model depends on the size of the input. We probably don't want that at some point, so we are going to change it later. So two equations that I want you to remember is, the first one is neuron equals linear plus activation. So this is the vocabulary we will use in neural networks. We define a neuron as an operation that has two parts, one linear part, and one activation part and it's exactly that. This is actually a neuron. We have a linear part, wx plus b and then we take the output of this linear part and we put it in an activation, that in this case, is the sigmoid function. It can be other functions, okay? So this is the first equation, not too hard. The second equation that I wanna set now is the model equals architecture plus parameters. What does that mean? It means here we're, we're trying to train a logistic regression in order to, to be able to use it. We need an architecture which is the following, a one neuron neural network and the parameters w and b. So basically, when people say we've shipped a model, like in the industry, what they're saying is that they found the right parameters, with the right architecture. They have two files and these two files are predicting a bunch of things, okay? One parameter file and one architecture file. The architecture will be modified a lot today. We will add neurons all over and the parameters will always be called w and b, but they will become bigger and bigger. Because we have more data, we want to be able to understand it. You can get that it's going to be hard to understand what a cat is with only that, that, that many parameters. We want to have more parameters. Any questions so far? So this was just to set up the problem with logistic regression. Let's try to set a new goal, after the first goal we have set prior to that. So the second goal would be, find cat, a lion, iguana in images. So a little different than before, only thing we changed is that we want to now to detect three types of animals. Either if there's a cat in the image, I wanna know there is a cat. If there's an iguana in the image, I wanna know there is an iguana. If there is a lion in the image, I wanna know it as well. So how would you modify the network that we previously had in order to take this into account? Yeah? 
Yeah, good idea. So put two more circles, so neurons, and do the same thing. So we have our picture here with the cat. So the cat is going to the right. 64 by 64 by 3, we flatten it, from x_1 to x_n. Let's say n represents 64 by 64 by 3, and what I will do is that I will use three neurons that are all computing the same thing. They're all connected to all these inputs, okay? I connect all my inputs x_1 to x_n to each of these neurons, and I will use a specific set of notation here. Okay. Y_2 hat equals a_2^[1], which is sigmoid of w_2^[1] x plus b_2^[1]. And similarly, y_3 hat equals a_3^[1], which is sigmoid of w_3^[1] x plus b_3^[1]. So I'm introducing a few notations here and we'll get used to it, don't worry. So just write this down and we're going to go over it. So [NOISE] the square brackets here represent what we will call later on a layer. If you look at this network, it looks like there is one layer here. There's one layer in which neurons don't communicate with each other. We could add to it, and we will do it later on, more neurons in other layers. We will denote with square brackets the index of the layer. The index that is the subscript to this a is the number identifying the neuron inside the layer. So here we have one layer. We have a_1, a_2, and a_3 with square brackets one to identify the layer. Does that make sense? And then we have our y-hat that, instead of being a single number as it was before, is now a vector of size three. So how many parameters does this network have? [NOISE] How much? [BACKGROUND] Okay. How did you come up with that? [BACKGROUND]. Okay. Yeah, correct. So we just have three times the thing we had before, because we added two more neurons and they all have their own set of parameters. This edge is a separate edge from this one. So we have to replicate parameters for each of these. So w_1^[1] would be the equivalent of what we had for the cat, but we have to add two more parameter vectors and biases. [NOISE] So another question: when you had to train this logistic regression, what dataset did you need? [NOISE] Can someone try to describe the dataset? Yeah. [BACKGROUND] Yeah, correct. So we need images and labels, with each image labeled as cat, 1, or no cat, 0. So it is a binary classification with images and labels. Now, what do you think should be the dataset to train this network? Yes. [BACKGROUND] That's a good idea. So just to repeat: a label for an image that has a cat would probably be a vector with a 1 and two 0s, where the 1 should represent the presence of a cat, this one should represent the presence of a lion, and this one should represent the presence of an iguana. So let's assume I use this scheme to label my dataset. I train this network using the same techniques here: initialize all my weights and biases with a starting value, optimize a loss function by using gradient descent, and then use y-hat to predict. What do you think this neuron is going to be responsible for? If you had to describe the responsibility of this neuron. [BACKGROUND] Yes. Well, this one. Lion. Yeah, lion, and this one iguana. So basically the way you- yeah, go for it. [BACKGROUND] That's a good question. We're going to talk about that now: can multiple animals appear on the same image or not? So going back to what you said, because we decided to label our dataset like that, after training, this neuron is naturally going to be there to detect cats.
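A minimal sketch of this three-neuron layer in NumPy, assuming the three weight vectors w_1^[1], w_2^[1], w_3^[1] are stacked as the rows of a single 3 x n matrix and the three biases as a 3 x 1 vector - the variable names and random values here are only placeholders:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n = 64 * 64 * 3
x = np.random.rand(n, 1)            # one flattened image

W1 = np.random.randn(3, n) * 0.01   # row i holds w_i^[1], the weights of neuron i
b1 = np.zeros((3, 1))               # b_i^[1], one bias per neuron

z1 = W1 @ x + b1                    # linear part, shape 3 x 1
y_hat = sigmoid(z1)                 # three independent sigmoids: [P(cat), P(lion), P(iguana)]

# a multi-label target, e.g. an image with both a cat and a lion
y = np.array([[1.0], [1.0], [0.0]])

Because each output is an independent probability, the same image can legitimately have two 1s in its label here, which is exactly the robustness discussed above.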
If we had changed the labeling scheme and I said that the second entry would correspond to the cat, the presence of a cat, then after training, you will detect that this neuron is responsible for detecting a cat. So the network is going to evolve depending on the way you label your dataset. Now, do you think that this network can still be robust to different animals in the same picture? So this cat now has, uh, a friend [NOISE] that is a lion. Okay, I have no idea how to draw a lion, but let's say there is a lion here and because there is a lion, I will add a 1 here. Do you think this network is robust to this type of labeling? [BACKGROUND] It should be. The neurons aren't talking to each other. That's a good answer actually. Another answer. [BACKGROUND] That's a good, uh, intuition because the network, what it sees is just 1, 1, 0, and an image. It doesn't see that this one corresponds to- the cat corresponds to the first one and the second- and the lion corresponds to the second one. So [NOISE] this is a property of neural networks, it's the fact that you don't need to tell them everything. If you have enough data, they're going to figure it out. So because you will have also cats with iguanas, cats alone, lions with iguanas, lions alone, ultimately, this neuron will understand what it's looking for, and it will understand that this one corresponds to this lion. Just needs a lot of data. So yes, it's going to be robust. And that's the reason you mentioned. It's going to be robust to that because the three neurons aren't communicating together. So we can totally train them independent- independently from each other. And in fact, the sigmoid here, doesn't depend on the sigmoid here and doesn't depend on the sigmoid here. It means we can have one, one, and one as an output. [NOISE] Yes, question. [BACKGROUND] You could, you could, you could think about it as three logistic regressions. So we wouldn't call that a neural network yet. It's not ready yet, but it's a three neural network or three logistic regression with each other. [NOISE]. Now, following up on that, uh, yeah, go for it. A question. [BACKGROUND] W and b are related to what? [BACKGROUND] Oh, yeah. Yeah. So, so usually you would have Theta transpose x, which is sum of Theta_i_x_i, correct? And what I will split it is, I will spit it in sum of Theta_i_x_i plus Theta_0 times 1. I'll split it like that. Theta_0 would correspond to b and these Theta_is would correspond to w_is, make sense? Okay. One more question and then we move on. [BACKGROUND] Good question. That's the next thing we're going to see. So the question is a follow up on this, is there cases where we have a constraint where there is only one possible outcome? It means there is no cat and lion, there is either a cat or a lion, there is no iguana and lion, there's either an iguana or a lion. Think about health care. There are many, there are many models that are made to detect, uh, if a disease, skin disease is present on- based on cell microscopic images. Usually, there is no overlap between these, it means, you want to classify a specific disease among a large number of diseases. So this model would still work but would not be optimal because it's longer to train. Maybe one disease is super, super rare and one of the neurons is never going to be trained. Let's say you're working in a zoo where there is only one iguana and there are thousands of lions and thousands of cats. This guy will never train almost, you know, it would be super hard to train this one. 
So you want to start with another model where you put in the constraint that, okay, there is only one disease that we want to predict, and let all the neurons learn together by creating interaction between them. Have you guys heard of softmax? Yes? Somebody, ah, I see that in the back. [LAUGHTER] Okay. So let's look at softmax a little bit together. So we set a new goal now, which is we add a constraint, which is a unique animal on an image. So at most one animal on an image. So I'm going to modify the network a little bit. We have our cat, and there is no lion on the image, we flatten it, and now I'm going to use the same scheme with the three neurons, a_1, a_2, a_3. But as an output, what I am going to use is an exponential-based function, a softmax function. So let me be more precise; let me actually introduce another notation to make it easier. As you know, the neuron is a linear part plus an activation. So we are going to introduce a notation for the linear part: I'm going to introduce z_1^[1] to represent the linear part of the first neuron, z_2^[1] to represent the linear part of the second neuron. So now our neuron has two parts, one which computes z and one which computes a, equals sigmoid of z. Now, I'm going to remove all the activations, keep these z's, and use a specific formula. So this, if you recall, is exactly the softmax formula. [NOISE] Yeah. Okay. So now the network we have - can you guys see or is it too small? Too small? Okay, I'm going to just write this formula bigger and then you can figure out the others, I guess: exponential of z_3^[1], divided by the sum from k equals 1 to 3 of exponential of z_k^[1]. Okay, can you see this one? So here is for the third one. If you are doing it for the first one, you'll just change this 3 into a 1, and for the second one into a 2. So why is this formula interesting, and why is it not robust to this labeling scheme anymore? It's because the sum of the outputs of this network has to sum up to 1. You can try it. If you sum the three outputs, you get the same thing in the numerator and in the denominator, and you get 1. That makes sense? So instead of getting a probabilistic output for each of y hat 1, y hat 2, y hat 3, we will get a probability distribution over all the classes. So that means we cannot get 0.7, 0.6, 0.1, telling us roughly that there is probably a cat and a lion but no iguana. We have to sum these to 1. So it means, if there is no cat and no lion, it means there's very likely an iguana. The three probabilities are dependent on each other, and for this one we have to label the following way: 1, 0, 0 for a cat, 0, 1, 0 for a lion, or 0, 0, 1 for an iguana. So this is called a softmax multi-class network. [inaudible]. You assume there is at least one of the three classes; otherwise you have to add a fourth output that will represent the absence of an animal. But this way, you assume there is always one of these three animals on every picture. And how many parameters does the network have? The same as the second one. We still have three neurons, and although I didn't write it, this z_1^[1] is equal to w_1^[1] x plus b_1^[1], z_2 same, z_3 same. So there's 3n plus 3 parameters. So one question that we didn't talk about is, how do we train these parameters? These parameters, the 3n plus 3 parameters, how do we train them? You think this scheme will work or no? What's wrong with this scheme? What's wrong with the loss function specifically?
There's only two possible outcomes for y. So in this loss function, y is either a 0 or a 1 and y hat is a single probability between 0 and 1, so it cannot match this labeling. So we need to modify the loss function. So let's call it the loss for three neurons. What I'm going to do is I'm going to just sum it up over the three neurons. Does this make sense? So I am just doing this loss three times, once for each of the neurons, and we sum them together. And if you train this loss function, you should be able to train the three neurons that you have. And again, talking about scarcity of one of the classes: if there are not many iguanas, then the third term of this sum is not going to help this neuron train towards detecting an iguana; it's going to push it to detect no iguana. Any question on the loss function? Does this one make sense? Yeah? [inaudible] Yeah. Usually, what will happen is that the output of this network, once it's trained, is going to be a probability distribution. You will pick the maximum of those, and you will set it to 1 and the others to 0 as your prediction. One more question, yeah. [inaudible] If you use this labeling scheme, like 1-1-0, for this network, what do you think will happen? It will probably not work. And the reason is this sum is equal to 2, the sum of these entries, while the sum of these entries is equal to 1. So you will never be able to match the output to the label. That makes sense? So what the network is probably going to do is it's probably going to send this one to one-half, this one to one-half, and this one to 0, which is not what you want. Okay. Let's talk about the loss function for this softmax regression. [NOISE] Because, you know, what's interesting about this loss is, if I take this derivative - the derivative of the three-neuron loss with respect to w_2^[1] - do you think it is going to be harder than this derivative, than this one, or no? It's going to be exactly the same. Because only one of these three terms depends on w_2^[1]. It means the derivatives of the two others are 0. So we are exactly at the same complexity during the derivation. But this one - do you think if you try to compute, let's say we define a loss function that corresponds roughly to that - if you try to compute the derivative of the loss with respect to w_2, it will become much more complex. Because this output here, the one that impacts the loss function directly, not only depends on the parameters w_2, it also depends on the parameters w_1 and w_3. And same for this output: this output also depends on the parameters w_2. Does this make sense? Because of this denominator. So the softmax regression needs a different loss function and a different derivative. So the loss function we'll define is a very common one in deep learning; it's called the softmax cross-entropy loss. I'm not going to go into the details of where it comes from, but you can get the intuition: it's minus the sum over k of y_k log of y hat k. So it surprisingly looks like the binary cross-entropy, the logistic loss function. The only difference is that we sum it up over all the classes. Now, we will take a derivative of something that looks like that later. But I'd say if you can try it at home on this one, it would be a good exercise as well. So this cross-entropy loss is very likely to be used in classification problems that are multi-class. Okay. So this was the first part on logistic regression types of networks.
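As a sketch of the softmax output and the cross-entropy loss just described, assuming the same stacked 3 x n weight matrix from before and a one-hot label; all the variable names and random values are placeholders:

import numpy as np

def softmax(z):
    # subtracting the max is a standard numerical-stability trick; it does not change the result
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

n = 64 * 64 * 3
x = np.random.rand(n, 1)
W1 = np.random.randn(3, n) * 0.01
b1 = np.zeros((3, 1))

z1 = W1 @ x + b1            # 3 x 1 linear part
y_hat = softmax(z1)         # 3 x 1, entries are positive and sum to 1

y = np.array([[1.0], [0.0], [0.0]])   # one-hot label: cat

loss = -np.sum(y * np.log(y_hat))     # softmax cross-entropy loss

Because all three outputs share the same denominator, the derivative of this loss with respect to one neuron's weights involves all three z's, which is exactly why it is more involved than in the three-independent-sigmoids case.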
And I think we're ready now, with the notation that we introduced, to jump on to neural networks. Any question on this first part before we move on? So one question I would have for you: let's say instead of trying to predict if there is a cat or no cat, we were trying to predict the age of the cat based on the image. What would you change in this network? Instead of predicting 1-0, you wanna predict the age of the cat. What are the things you would change? Yes. [inaudible]. Okay. So I repeat: I basically make several output nodes where each of them corresponds to one age of cats. So would you use this network or the third one? Would you use the three-neuron neural network or the softmax regression? Third one. The third one. Why? You have a unique age. You have a unique age, you cannot have two ages, right. So we would use a softmax one, because we want the probability distribution over the ages. Okay. That makes sense. That's a good approach. There is also another approach, which is using directly a regression to predict an age. An age can be between zero and plus infi- not plus infinity- [LAUGHTER] -zero and a certain number. [LAUGHTER] And, uh, so let's say you wanna do a regression, how would you modify your network? Change the sigmoid. The sigmoid puts the z between 0 and 1. We don't want this to happen. So I'd say we will change the sigmoid. Into what function would you change the sigmoid? [inaudible] Yeah. So the second one you said was? [inaudible] Oh, to get a Poisson type of distribution. Okay. So let's go with linear. You mentioned linear. We could just use a linear function in place of the sigmoid, but this becomes a linear regression; the whole network becomes a linear regression. Another one that is very common in deep learning is the ReLU function. It's a function that is almost linear, but for every input that is negative, it's equal to 0. Because we cannot have a negative age, it makes sense to use this one. Okay. So this is called rectified linear unit, ReLU. It's a very common one in deep learning. Now, what else would you change? We talked about linear regression. Do you remember the loss function you were using in linear regression? What was it? [inaudible] It was probably one of these two: the absolute value of y hat minus y, this comparison between the output label and y-hat, the prediction, or the L2 loss, y-hat minus y in L2 norm. So that's what we would use. We would modify our loss function to fit the regression type of problem. And the reason we would use this loss instead of the one we had for the classification task is because, in optimization, the shape of this loss is much easier to optimize for a regression task than the other one is, and vice versa. I'm not going to go into the details of that, but that's the intuition. [NOISE] Okay. Let's go have fun with neural networks. [NOISE] So we stick to our first goal: given an image, tell us if there is a cat or no cat. This is 1, this is 0. But now we're going to make the network a little more complex. We're going to add some parameters. So I get my picture of the cat. [NOISE] The cat is moving. Okay. And what I'm going to do is that I'm going to put more neurons than before. Maybe something like that. [NOISE] So using the same notation, you see that my square bracket here is two, indicating that there is a layer here which is the second layer, [NOISE] while this one is the first layer and this one is the third layer.
Everybody's, er, up to speed with the notations? Cool. So now notice that when you make a choice of architecture, you have to be careful of one thing, is that the output layer has to have the same number of neurons as you want, the number of classes to be for reclassification and one for a regression. So, er, how many parameters does this - this network have? Can someone quickly give me the thought process? So how much here? Yeah, like 3n plus 3 let's say. [inaudible]. Yeah, correct. So here you would have 3n weights plus 3 biases. Here you would have 2 times 3 weights plus 2 biases because you have three neurons connected to two neurons and here you will have 2 times 1 plus 1 bias. Makes sense. So this is the total number of parameters. So you see that we didn't add too much parameters. Most of the parameters are still in the input layer. Um, let's define some vocabulary. The first word is Layer. Layer denotes neurons that are not connected to each other. These two neurons are not connected to each other. These two neurons are not connected to each other. We call this cluster of neurons a layer. And then this has three layers. So we would use input layer to define the first layer, output layer to define the third layer because it directly sees the outputs and we would call the second layer a hidden layer. And the reason we call it hidden is because the inputs and the outputs are hidden from this layer. It means the only thing that this layer sees as input is what's the previous layer gave it. So it's an abstraction of the inputs but it's not the inputs. Does that make sense? And same, it doesn't see the output, it just gives what it understood to the last neuron that will compare the output to the ground truth. So now, why are neural networks interesting? And why do we call this hidden layer? Um, it's because if you train this network on cat classification with a lot of images of cats, you would notice that the first layers are going to understand the fundamental concepts of the image, which is the edges. This neuron is going to be able to detect this type of edges. This neuron is probably going to detect some other type of edge. This neuron, maybe this type of edge. Then what's gonna happen, is that these neurons are going to communicate what they found on the image to the next layer's neuron. And this neuron is going to use the edges that these guys found to figure out that, oh, there is a - their ears. While this one is going to figure out, oh, there is a mouth and so on if you have several neurons and they're going to communicate what they understood to the output neuron that is going to construct the face of the cat based on what it received and be able to tell if there is a cat or not. So the reason it's called hidden layer is because we - we don't really know what it's going to figure out but with enough data, it should understand very complex information about the data. The deeper you go, the more complex information the neurons are able to understand. Let me give you another example which is a house prediction example. House price prediction. [NOISE] So let's assume that our inputs are number of bedrooms, size of the house, zip code, and wealth of the neighborhood, let's say. What we will build is a network that has three neurons in the first layer and one neuron in the output layer. So what's interesting is that as a human if you were to build, uh, this network and like hand engineer it, you would say that, uh, okay zip codes and wealth or - or sorry. Let's do that. 
Zip code and wealth are able to tell us about the school quality in the neighborhood. The quality of the school that is next to the house probably. As a human you would say these are probably good features to predict that. The zip code is going to tell us if the neighborhood is walkable or not, probably. The size and the number of bedrooms is going to tell us what's the size of the family that can fit in this house. And these three are probably better information than these in order to finally predict the price. So that's a way to hand engineer that by hand, as a human in order to give human knowledge to the network to figure out the price. In practice what we do here is that we use a fully-connected layer - fully-connected. What does that mean? It means that we connect every input of a layer, every - every input to the first layer, every output of the first layer to the input of the third layer and so on. So all the neurons among lay - from one layer to another are connected with each other. What we're saying is that we will let the network figure these out. We will let the neurons of the first layer figure out what's interesting for the second layer to make the price prediction. So we will not tell these to the network, instead we will fully connect the network [NOISE] and so on. Okay. We'll fully connect the network and let it figure out what are the interesting features. And oftentimes, the network is going to be able better than the humans to find these - what are the features that are representative. Sometimes you may hear neural networks referred as, uh, black box models. The reason is we will not understand what this edge will correspond to. It's - it's hard to figure out that this neuron is detecting a weighted average of the input features. Does that make sense? Another word you might hear is end-to-end learning. The reason we talked about end-to-end learning is because we have an input, a ground truth, and we don't constrain the network in the middle. We let it learn whatever it has to learn and we call it end-to-end learning because we are just training based on the input and the output. [NOISE] Let's delve more into the math of this network. The neural network that we have here which has an input layer, a hidden layer and an output layer. Let's try to write down the equations that run the inputs and pro - propagate it to the output. We first have Z_1, that is the linear part of the first layer, that is computed using W_1 times x plus b_1. Then this Z_1 is given to an activation, let's say sigmoid, which is sigmoid of Z_1. Z_2 is then the linear part of the second neuron which is going to take the output of the previous layer, multiply it by its weights and add the bias. The second activation is going to take the sigmoid of Z_2. And finally, we have the third layer which is going to multiply its weights, with the output of the layer presenting it and add its bias. And finally, we have the third activation which is simply the sigmoid of the three. So what is interesting to notice between these equations and the equations that we wrote here, is that we put everything in matrices. So it means this a_3 that I have here, sorry, this here for three neurons I wrote three equations, here for three neurons in the second layer I just wrote a single equation to summarize it. But the shape of these things are going to be vectors. So let's go over the shapes, let's try to define them. 
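Here is roughly what those six equations could look like in NumPy for a single input x, with the layer sizes of the drawn network (3 neurons, then 2, then 1). The random initial values are just placeholders, and the shape comments anticipate the discussion that follows:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n = 64 * 64 * 3
x = np.random.rand(n, 1)                                    # n x 1 input

W1, b1 = np.random.randn(3, n) * 0.01, np.zeros((3, 1))     # layer 1: 3 neurons
W2, b2 = np.random.randn(2, 3) * 0.01, np.zeros((2, 1))     # layer 2: 2 neurons
W3, b3 = np.random.randn(1, 2) * 0.01, np.zeros((1, 1))     # layer 3: 1 neuron

z1 = W1 @ x + b1      # 3 x 1
a1 = sigmoid(z1)      # 3 x 1
z2 = W2 @ a1 + b2     # 2 x 1
a2 = sigmoid(z2)      # 2 x 1
z3 = W3 @ a2 + b3     # 1 x 1
a3 = sigmoid(z3)      # 1 x 1, this is y_hat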
Z_1 is going to be x, which is n by 1, times w, which has to be 3 by n because it connects three neurons to the input. So this z has to be 3 by 1. It makes sense because we have three neurons. Now let's go deeper. A_1 is just the sigmoid of z_1, so it doesn't change the shape; it keeps the 3 by 1. Z_2, we know it has to be 2 by 1 because there are two neurons in the second layer, and it helps us figure out what w_2 should be. We know a_1 is 3 by 1. It means that w_2 has to be 2 by 3. And if you count the edges between the first and the second layer here, you will find six edges, 2 times 3. A_2, same shape as z_2. Z_3, 1 by 1, a_3, 1 by 1, and w_3 has to be 1 by 2, because a_2 is 2 by 1, and same for b: b matches the number of neurons, so 3 by 1, 2 by 1, and finally 1 by 1. So I think it's usually very helpful, even when coding these types of equations, to know all the shapes that are involved. Are you guys, like, totally okay with the shapes, super easy to figure out? Okay, cool. So now what is interesting is that we will try to vectorize the code even more. Does someone remember the difference between stochastic gradient descent and gradient descent? What's the difference? [inaudible] Exactly. Stochastic gradient descent updates the weights and the bias after you see every example, so the direction of the gradient is quite noisy; it doesn't represent very well the entire batch. While gradient descent, or batch gradient descent, updates after you've seen the whole batch of examples, and the gradient is much more precise; it points to the direction you want to go to. So what we're trying to do now is to write down these equations if, instead of giving one single cat image, we had given a bunch of images that either have a cat or not a cat. So now our input x - what happens for an input batch of m examples? So now our x is not a single column vector anymore; it's a matrix with the first image corresponding to x^(1), the second image corresponding to x^(2), and so on until the m-th image corresponding to x^(m). And I'm introducing a new notation, which is the parentheses superscript corresponding to the ID of the example. So square brackets for the layer, round brackets for the ID of the example we are talking about. So just to give more context on what we're trying to do: we know that this is a bunch of operations. We just have a network with an input, hidden, and output layer. We could have a network with 1,000 layers. The more layers we have, the more computation, and it quickly goes up. So what we wanna do is to be able to parallelize our code, or our computation, as much as possible by giving batches of inputs and parallelizing these equations. So let's see how these equations are modified when we give it a batch of m inputs. I will use capital letters to denote the equivalent of the lowercase letters but for a batch of inputs. So Z_1, as an example, would be W_1 times X plus B_1. So let's analyze what Z_1 would look like. Z_1 - we know that for every input example of the batch, we will get one z_1, which should look like this. Then we have to figure out what the shapes in this equation have to be in order to end up with this. We know that z_1 was 3 by 1. It means capital Z_1 has to be 3 by m, because each of these column vectors is 3 by 1 and we have m of them. Because for each input we forward propagate through the network, we get these equations.
So for the first cat image we get these equations, for the second cat image we get again equations like that, and so on. So what is the shape of X? We have it above. We know that it's n by m. What is the shape of w_1? It didn't change. W_1 doesn't change. Giving 1,000 inputs to my network doesn't mean the parameters are going to be more numerous. So the parameter number stays the same even if I give more inputs. And so this has to be 3 by n in order to match Z_1. Now the interesting thing is that there is an algebraic problem here. What is the algebraic problem? We said that the number of parameters doesn't change. It means that w has the same shape as it had before. B should have the same shape as it had before, right? It should be 3 by 1. What's the problem of this equation? Exactly. We're summing a 3 by m matrix with a 3 by 1 vector. This is not possible in math. It doesn't work. It doesn't match. When you do a summation or a subtraction, you need the two terms to be the same shape because you will do an element-wise addition or an element-wise subtraction. So what's the trick that is used here? It's a technique called broadcasting. Broadcasting is the fact that we don't want to change the number of parameters, it should stay the same, but we still want this operation to be written in a parallelized form. So we still want to write this equation because we want to parallelize our code, but we don't want to add more parameters, it doesn't make sense. So what we're going to do is that we are going to create a vector b tilde 1 which is going to be b_1 repeated m times. So we just keep the same number of parameters but repeat them in order to be able to write our code in parallel. This is called broadcasting. And what is convenient is that, for those of you who - the homeworks are in Matlab or Python? Matlab. Okay. So in Matlab, no Python? [LAUGHTER]. [NOISE] Thank you. Um, Python. So in Python, there is a package that is often used to code these equations. It's numPy. Some people pronounce it differently, I'm not sure why. So numPy, basically numerical Python, directly does the broadcasting. It means if you sum this 3 by m matrix with a 3 by 1 parameter vector, it's going to automatically reproduce the parameter vector m times so that the equation works. It's called broadcasting. Does that make sense? So because we're using this technique, we're able to rewrite all these equations with capital letters. Do you wanna do it together or do you want to do it on your own? Who wants to do it on their own? Okay. So let's do it on their own [LAUGHTER] on your own. So rewrite these with capital letters and figure out the shapes. I think you can do it at home, wherever, we're not going to do it here, but make sure you understand all the shapes. Yeah. [inaudible] How how is this [inaudible]? So the question is how is this different from principal component analysis? This is a supervised learning algorithm that will be used to predict the price of a house. Principal component analysis doesn't predict anything. It takes an input matrix X, normalizes it, computes the covariance matrix and then figures out what the principal components are by doing the eigenvalue decomposition. But the outcome of PCA is just knowing which features of your dataset X are the most important ones. Here we're not looking at the features. We're only looking at the output. That is what is important to us. Yes. 
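Here is a small sketch of that broadcasting behavior in numpy, under assumed toy sizes (m = 4 examples, n = 5 features, 3 neurons in the first layer): the 3 by 1 bias is automatically replicated across the m columns, exactly like the b-tilde trick described above.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 5                        # 4 examples in the batch, 5 input features
X  = rng.standard_normal((n, m))   # one example per column
W1 = rng.standard_normal((3, n))
b1 = rng.standard_normal((3, 1))   # the parameter count does not grow with m

Z1 = W1 @ X + b1                   # numpy broadcasts b1 across the m columns
assert Z1.shape == (3, m)
# Equivalent to explicitly repeating b1 m times (the "b tilde" vector):
assert np.allclose(Z1, W1 @ X + np.tile(b1, (1, m)))
```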
In the first lecture when did you say that the first layers is the edges in an [inaudible]. So the question is, can you explain why the first layer would see the edges? Is there an intuition behind it? It's not always going to see the edges, but it's oftentimes going to see edges because um, in order to detect a human face, let's say, you will train an algorithm to find out whose face it is. So it has to understand the faces very well. Um, you need the network to be complex enough to understand very detailed features of the face. And usually, this neuron, what it sees as input are pixels. So it means every edge here is the multiplication of the weight by a pixel. So it sees pixels. It cannot understand the face as a whole because it sees only pixels. It's very granular information for it. So it's going to check if pixels nearby have the same color and understand that there is an edge there, okay? But it's too complicated to understand the whole face in the first layer. However, if it understands a little more than a pixel information, it can give it to the next neuron. This neuron will receive more than pixel information. It would receive a little more complex-like edges, and then it will use this information to build on top of it and build the features of the face. So what I'm trying to sum up is that these neurons only see the pixels, so they're not able to build more than the edges. That's the minimum thing that they can- the maximum thing they can do. And it's it's a complex topic, like interpretation of neural network is a highly researched topic, it's a big research topic. So nobody figured out exactly how all the neurons evolve. Yeah. One more question and then we move on. Ah, how [inaudible]. So the question is how [OVERLAPPING]. [inaudible]. -how do you decide how many neurons per layer? How many layers? What's the architecture of your neural network? There are two things to take into consideration I would say. First and nobody knows the right answer, so you have to test it. So you you guys talked about training set, validation set, and test set. So what we would do is, we would try ten different architectures, train it, train the network on these, looking at the validation set accuracy of all these, and decide which one seems to be the best. That's how we figure out what's the right network size. On top of that, using experience is often valuable. So if you give me a problem, I try always to gauge how complex is the problem. Like cat classification, do you think it's easier or harder than day and night classification? So day and night classification is I give you an image, I asked you to predict if it was taken during the day or during the night, and on the other hand you want there's a cat on the image or not. Which one is easier, which one is harder? Who thinks cat classification is harder? Okay. I think people agree. Cat classification seems harder, why? Because there are many breeds of cats. Can look like different things. There's not many breeds of nights. um, I guess. [LAUGHTER] Um, one thing that might be challenging in the day and night classification, is if you want also to figure it out in house like i- inside, you know maybe there is a tiny window there and I'm able to tell that is the day but for a network to understand it you will need a lot more data than if only you wanted to work outside, different. So these problems all have their own complexity. Based on their complexity, I think the network should be deeper. 
The more complex the problem usually is, the more data you need in order to figure out the output, and the deeper the network should be. That's an intuition, let's say. Okay. Let's move on guys because I think we have about what, 12 more minutes? Okay. Let's try to write the loss function for this problem. [NOISE]. So now that we have our network, we have written this propagation equation, and we call it forward propagation, because it's going forward, it's going from the input to the output. Later on, when we derive these equations, we will call them backward propagation, because we are starting from the loss and going backwards. So let's talk about the optimization problem: optimizing w_1, w_2, w_3, b_1, b_2, b_3. We have a lot of stuff to optimize, right? We have to find the right values for these, and remember, model equals architecture plus parameters. We have our architecture; if we have our parameters we're done. So in order to do that, we have to define an objective function, sometimes called loss, sometimes called cost function. So usually we would call it loss if there is only one example in the batch, and cost if there are multiple examples in a batch. So let's define the cost function. The cost function J depends on y hat and y. Okay. So y hat, y hat is a_3. Okay. It depends on y hat and y, and we will set it to be the sum of the loss functions L_i, and I will normalize it. It's not mandatory, but normalize it with 1/m, the number of examples in the batch. So what does this mean? It's that we are going for batch gradient descent. We wanna compute the loss function for the whole batch, parallelize our code, and then calculate the cost function that will then be differentiated to give us the direction of the gradients. That is, the average direction of all the derivatives with respect to the whole input batch. And L_i will be the loss function corresponding to one input - what's the error on this one specific input - and it will be the logistic loss. You've already seen these equations, I believe. So now, is it more complex to take a derivative of J with respect to the parameters, or of L? What's the most complex between this one, let's say we're taking the derivative with respect to w_2, compared to this one? Which one is the hardest? Who thinks J is the hardest? Who thinks it doesn't matter? Yeah, it doesn't matter because differentiation is a linear operation, right? So you can just take the derivative inside, and you will see that if you know this, you just have to take the sum over this. So instead of computing all derivatives on J, we will compute them on L, but it's totally equivalent. There's just one more step at the end. Okay. So now we defined our loss function, super. We defined our loss function and the next step is to optimize. So we have to compute a lot of derivatives. [NOISE] And that's called backward propagation. [NOISE] So the question is why is it called backward propagation? It's because what we want to do ultimately is this: for any l from 1 to 3, we want to do w_l equals w_l minus alpha times the derivative of J with respect to w_l, and b_l equals b_l minus alpha times the derivative of J with respect to b_l. So we want to do that for every parameter in layers 1, 2, and 3. So it means we have to compute all these derivatives; we have to compute the derivative of the cost with respect to w1, w2, w3, b1, b2, b3. 
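A short sketch of the cost just described, assuming the logistic loss per example and an average over the m examples of the batch; the labels and predictions below are made up purely to show the computation.

```python
import numpy as np

def logistic_loss(y_hat, y):
    # Per-example logistic loss: -[y log(y_hat) + (1 - y) log(1 - y_hat)].
    eps = 1e-12
    return -(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))

def cost(Y_hat, Y):
    # Cost J over a batch: the average of the per-example losses.
    return np.mean(logistic_loss(Y_hat, Y))

Y     = np.array([1.0, 0.0, 1.0, 0.0])   # made-up labels
Y_hat = np.array([0.9, 0.1, 0.8, 0.2])   # made-up predictions (a_3 for each example)
print(cost(Y_hat, Y))                    # ~0.164

# The gradient-descent updates described above are then, for each layer l:
#   w_l := w_l - alpha * dJ/dw_l
#   b_l := b_l - alpha * dJ/db_l
```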
You've done it with logistic regression, we're going to do it with a neural network, and you're going to understand why it's called backward propagation. Which one do you want to start with? Which derivative? You wanna start with the derivative with respect to w1, w2, or w3, let's say. Assuming we'll do the bias later. W what? W1? You think w1 is a good idea. I don't wanna do w1. I think we should do w3, and the reason is because if you look at this loss function, do you think the relation between w3 and this loss function is easier to understand, or the relation between w1 and this loss function? It's the relation between w3 and this loss function, because w3 happens much later in the network. So if you want to understand how much we should move w1 in order to make the loss move, it's much more complicated than answering the question how much should w3 move to move the loss. Because there are many more connections if you want to compute it for w1. So that's why we call it backward propagation: because we will start with the top layer, the one that's the closest to the loss function, and derive the derivative of J with respect to w3. Once we've computed this derivative, which we are going to do next week, once we've computed this number, we can then tackle the next one. Once we've computed this number, we will be able to compute the next one very easily. Why very easily? Because we can use the chain rule of calculus. So let's see how it works. What we're- I'm just going to give you, uh, the one-minute pitch on backprop, but, uh, we'll do it next week together. So if we had to compute this derivative, what I will do is that I will separate it into several derivatives that are easier. I will separate it into the derivative of J with respect to something, times the derivative of that something with respect to w3. And the question is, what should this something be? I will look at my equations. I know that J depends on Y-hat, and I know that Y-hat depends on z3. Y-hat is the same thing as a3; I know it depends on z3. So why don't I include z3 in my equation? I also know that z3 depends on w3, and the derivative of z3 with respect to w3 is super easy, it's just a2 transpose. So I will just make a quick hack and say that this derivative is the same as taking the derivative of J with respect to a3, times the derivative of a3 with respect to z3, times the derivative of z3 with respect to w3. So you see? Same derivative, calculated in different ways. And I know these are pretty easy to compute. So that's why we call it backpropagation: it's because I will use the chain rule to compute the derivative of w3, and then when I want to do it for w2, I'm going to reuse the derivative of J with respect to z3, times the derivative of z3 with respect to a2, times the derivative of a2 with respect to z2, and the derivative of z2 with respect to w2. Does this make sense that this thing here is the same thing as this? It means, if I wanna compute the derivative of w2, I don't need to compute this anymore, I already did it for w3. I just need to compute those, which are the easy ones, and so on. If I wanna compute the derivative of J with respect to w1, I'm not going to decompose the whole thing again, I'm just going to take the derivative of J with respect to z2, which is equal to this whole thing. And then I'm gonna multiply it by the derivative of z2 with respect to a1, times the derivative of a1 with respect to z1, times the derivative of z1 with respect to w1. 
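Here is a compact sketch of that chain-rule bookkeeping for a single example, assuming sigmoid activations and the logistic loss (in which case dJ/da3 times da3/dz3 conveniently simplifies to a3 minus y). The layer sizes and random weights are illustrative assumptions; the point is that each dz term is computed once and then reused by the layer below it.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n = 5
x, y = rng.standard_normal((n, 1)), 1.0        # one made-up example and label

W1, b1 = rng.standard_normal((3, n)), np.zeros((3, 1))
W2, b2 = rng.standard_normal((2, 3)), np.zeros((2, 1))
W3, b3 = rng.standard_normal((1, 2)), np.zeros((1, 1))

# Forward pass; a1, a2, a3 are cached because backprop reuses them.
a1 = sigmoid(W1 @ x + b1)
a2 = sigmoid(W2 @ a1 + b2)
a3 = sigmoid(W3 @ a2 + b3)

# Backward pass, starting from the layer closest to the loss.
# With sigmoid + logistic loss, dL/da3 * da3/dz3 simplifies to (a3 - y).
dz3 = a3 - y
dW3 = dz3 @ a2.T                       # dz3/dW3 is a2 transpose
dz2 = (W3.T @ dz3) * a2 * (1 - a2)     # reuse dz3: dz3/da2, then da2/dz2
dW2 = dz2 @ a1.T
dz1 = (W2.T @ dz2) * a1 * (1 - a1)     # reuse dz2 in exactly the same way
dW1 = dz1 @ x.T
db3, db2, db1 = dz3, dz2, dz1          # the bias gradients are the dz terms
```

The cached dz values are exactly the quantities that get stored and reused while walking backwards through the layers, which is the bookkeeping described next.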
And again, this thing I know it already, I computed it previously just for this one. So what's, what's interesting about it is that I'm not gonna redo the work I did, I'm just gonna store the right values while back-propagating and continue to derivate. One thing that you need to notice though is that, look, you need this forward propagation equation in order to remember what should be the path to take in your chain rule because you know that this derivative of j with respect to w3, I cannot use it as it is because w3 is not connected to the previous layer. If you look at this equation, a2 doesn't depend on w3, it depends on z3. Sorry, like, uh, my bad, it depends- no, sorry, what I wanted to say is that z2 is connected to w2, but a1 is not connected to w2. So you wanna choose the path that you're going through in the proper way so that there's no cancellation in these derivatives. You- you cannot compute derivative of w2 with respect to- to a1, right? You cannot compute that, you don't know it. Okay. So I think we're done for today. So one thing that I'd like you to do if you have time is just think about the things that can be tweaked in a neural network. When you build a neural network, you are not done, you have to tweak it, you have to tweak the activations, you have to tweak the loss function. There's many things you can tweak, and that's what we're going to see next week. Okay. Thanks. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_10_Decision_Trees_and_Ensemble_Methods_Stanford_CS229_Machine_Learning_Autumn_2018.txt | Hello everyone. Uh, so my name is Raphael Townshend. I'm one of the head TAs for this class. This week Andrew is traveling and my advisor is still dealing with medical issues. So I'm going to be giving today's lecture. Um, you heard from my wonderful co-head TA Anand a couple of weeks ago. And so today, we're gonna be going over decision trees and various ensemble methods. Uh, so these might seem a bit like disparate topics at first, but really decision trees are, sort of, a classical example model class to use with various ensembling methods. We're gonna get into a little bit why in a bit, but just to give you guys an overview of what the outlines can be. We're first gonna go over decision trees, then we're gonna go over general ensembling methods and then go specifically into bagging random forests and boosting. Okay. So let's get started. So first let's cover some decision trees. Okay. So last week, Andrew was covering SVNs that are, sort of, one of the classical linear models and, sort of, brought to a close a lot of discussion of those linear models. And so today we gonna be getting to decision trees which is really one of our first examples of a non-linear model. And so to motivate these guys let me give you guys an example. Okay. So I'm Canadian, I really like to ski. So I'm gonna motivate it using that. So pretend you have a classifier that given a time and a location tells you whether or not you can ski, so it's a binary classifier saying yes or no. And so you have, you can imagine, a graph like this, and on the x-axis we're gonna have time in months, so counting from the start. So starting at 1 for January to 12 for December, and then on the y-axis we're gonna use latitude in degrees, okay? And so for those of you who might have forgotten what latitude is, it's basically at positive 90 degrees you're at the North Pole, at negative 90 degrees you're at the South Pole. So positive 90, negative 90, 0 being the Equator and it's, sort of, your location along the north-south axis. Okay. So given this, if you might recall, the winter in the Northern Hemisphere generally happens in the early months of the year. So you might see that you can ski in these early months over here and it has some positive data points, and then in the later months, right, and then in the middle, you can't really ski. Versus in the Southern Hemisphere, it's basically flipped, where you can not ski in the early months. You can ski during the May, June, July, August time period, and then you can't ski in the earlier months, and then the equator in general is just not great for skiing that's the reason I don't live there, and so you just have a bunch of negatives here. Okay. And so when you look at a data set like this, you've sort of got these separate regions that you're looking at, right? And you sort of want to isolate out those regions of positive examples. If you had a linear classifier, you'd sort of be hard-pressed to come up with any sort of decision boundary that would separate this reasonably. Now you could think okay, maybe you have an SVM or something, you've come up with a kernel that could perh- perhaps project this into a higher feature space that would make it linearly separable, but it turns out that with decision trees, you have a very natural way to do this. 
So to sort of make clear exactly what we want to do with decision trees, is we wanna sort of partition the space into individual regions. So we sort of wanna isolate out these, like, positive examples, for example. In general this problem is fairly intractable just coming up with the optimal regions. But how we do with decision trees is we do it in this basically greedy, top-down, recursive manner, and this is going to be recursive partitioning. Okay? And so it's- basically it's top-down because we're starting with the overall region and we wanna slowly partition it up, okay? And then it's greedy because at each step we wanna pick the best partition possible. Okay. So let's actually try and work out intuitively what a decision tree would do, okay? So what we do is we start with the overall space and the tree is basically gonna play 20 Questions with this space. Okay. So like for example, one question it might ask is, if we have the data coming in like this, is, is the latitude greater than 30 degrees, okay? And that would involve, sort of, cutting the space like this, for example, okay? And then we'd have a yes or a no. And so starting from, like, the most general space now we have partitioned the overall space into two separate spaces using this, this question. Okay. And this is where the recursive part comes in now, because now that you've sort of split the space into two, you can then sort of treat each individual space as a new problem to ask a new question about. So for example now that you've asked this latitude greater than 30 question, you could then ask something like, month less than like March or something like that. All right, and that would give you a yes or no. And what that works out to effectively, is that now you've taken this upper space here and divided it up into these two separate regions like this. And so you could imagine how through asking these recursive questions over and over again, you could start splitting up the entire space into your individual regions like this. Okay, and so to make this a little bit more formal, what we're looking for is, we're looking for, sort of, the split function, okay? So you can, sort of, define a region. So you have a region and let's call that region R_p in this case for R parent, okay? And we're looking for, looking for a split S_p, such that you have an S_p, you can, sort of, write out this S_p function as a function of j,t. Okay, where we saw you- j is which feature number and T is the threshold you're using. And so you can, sort of, write this out formally as, sort of, you're outputting a tuple, where on the one hand you have a set X where you have the x j- the jth feature of x is less than the threshold, and you have Xs element of R-p, since we're only partitioning that parent region. And then the second set is literally the same thing, except it's just those that are greater than t. And so we can refer to each one of these as R_1 and R_2. Any questions so far? No? Okay. So we, sort of, define now how we would, sort of, do this. We're trying to, like, greedily pick these peaks that are partitioning our input space and the splits are, sort of, defined by which feature you're looking at and the threshold that you're applying to that feature. Uh, sort of a natural question to ask now is, is how do you choose these splits, right? And so I sort of gave this intuitive explanation, that really what you're trying to do is you're trying to isolate out the space of positives and negatives in this case. 
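A minimal sketch of the split just defined: given a parent region R_p (one example per row), a feature index j and a threshold t, it returns the two children regions. The [month, latitude] rows are made-up values for illustration.

```python
import numpy as np

def split(R_p, j, t):
    """Partition a parent region R_p (one example per row) on feature j at threshold t."""
    mask = R_p[:, j] < t
    return R_p[mask], R_p[~mask]       # (R_1, R_2)

# Made-up rows of [month, latitude]:
R_p = np.array([[1.0, 45.0], [6.0, -40.0], [7.0, 10.0], [12.0, 60.0]])
R_1, R_2 = split(R_p, j=1, t=30.0)     # the "is latitude < 30?" question
```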
And so what is useful to define is a loss on a region, okay? So define your loss L on R, loss on R. And so for now let's define our loss as something fairly obvious, is your misclassification loss. It's how many examples in your region you get wrong. And so assuming that you have, uh, given C classes total, you can define P hat c to be the proportion of examples in R that are of class c. [NOISE] And so now that we've got this definition, where we had this p hat c of telling us the proportion of examples that we've gotten that case, you can try to define the loss of any region as loss, let's call it misclassification, it's just 1 minus max over c of p hat c, okay? And so the reasoning behind this is basically you can say that for any region that you've subdivided generally, what you'll want to do is predict the most common class there, which is just the maximum p hat c, right? And so then all the remaining probability just gets thrown onto misclassification errors, okay? And so then once we do this, we want to basically pick- now that we have a loss defined, we want to, um, pick a split that decreases the loss as much as possible. So you recall I've defined this region, R_parent, and then these two children regions R_1 and R_2. And you basically want to reduce that loss as much as possible. So you want to, um, basically minimize loss, R_parent minus loss of R_1 plus loss of R_2. And so this is sort of your parent loss, this is your children loss. Okay. And since you're picking- and basically what you're minimizing over in some case is this j, t that we defined over here since this split is really what is gonna define our two children regions, right. And what you'll notice is that the loss of the parent doesn't really matter in this case because that's already defined. So really all you're trying to do is minimize this negative sum of losses of your children, okay? So let's move to the next board here. [NOISE] So I started to find this misclassification loss. Let's get a little bit into actually why misclassification loss isn't actually the right loss to use for this problem, so, okay? And so for a simple example, let's pretend- So I've sort of drawn out a tree like this, Let's pretend that instead we have another setup here where we're coming into a decision node. And at this point we have 900 positives and 100 negatives, okay? So this is sort of a misclassification loss, of 100 in this case because you'd predict the most common class and end up with 100 misclassified examples. All right, and so this would be your region R_p right now, right? And so then you can split it into these two other regions, right? Say R_1 and R_2. And say that what you've achieved now is you have the 700 positive, 100 negatives on this side versus, uh, 200 positives and 0 negatives on this side, okay? Now, this seems like a pretty good split since you're getting out some more examples. But what you can see is that, if you just drew the same thing again, right, R_p with 900 and 100, split split, and say in this case, instead, you've got 400 positives over here, 100 negatives, and 500 positives and 0 negatives. So most people would argue that this bright decision boundary is better than the left one because you're basically isolating out even more positives in this case. However, if you're just looking at your misclassification loss, it turns out that on this left one here, let's call this R_1 and R_2 versus this right one, Let's call this R_1 prime, R_2 prime, okay? 
So your loss of R_1 plus R_2, on this left case it's just 100 plus 0. All right, so it's just 100. And then on the right side here, it's actually still just the same, right? And in fact, if you'd look at the original loss of your parent it's also just 100, right? So you haven't really according to this loss metric changed anything at all. And so that sort of brings up one problem with the misclassification loss is that, it's not really sensitive enough, okay? So like instead what we can do is we can define this cross-entropy loss, okay? So which we'll define as L_cross. Let me just write this out here. And so really what you're doing is you're just summing over the classes and it's the probability- that the proportion of elements in that class times the log of the proportion in that class. And how you can think of this is, It's sort of this concept that we borrow from information theory, which is sort of like the number of bits you need to communicate to tell someone who already knows what the probabilities are what class you are looking at. And so that sounds like a mouthful but really you can sort of think of it intuitively as, if someone already knows the probabilities, like say it's a 100% chance that it is of one class, then you don't need to communicate anything to tell them exactly which class it is because it's obvious that it is that one class versus if you have a fairly even split then you'd need to communicate a lot more information to tell someone exactly what class you were in. Any questions so far? Yeah? [inaudible]. The R_1, R_2 for the parent class? [inaudible]. For this case here? Yeah, yeah, so, um, for that case there so you see that, er, I'll try and reach up there, but so it's like say like R_p was your start region, right? You could say it's the overall region, right? And then R_1 would be all the points above this latitude 30 line. And R_2 would be all the points below the latitude 30 line. Yeah, yeah? [inaudible]. Yeah. So the question is, when you're trying to minimize this loss here, is it the same as maximizing the, the children loss, and since, er - no, uh, ah, let's see, of maximizing the children loss. And yeah, it turns out it doesn't really matter, which, um, which way you put it. It just- basically, you're trying to either minimize the loss of the children or maximize the gain in information, basically. [inaudible]. Yeah. Let's see. Yeah, you're right. That should actually be a max. Let me fix that really quick. Because you start with your parent loss, and then you're subtracting out your children's loss, and so the amount left, let's see, the higher this loss is- yeah. So you really want to maximize this guy. Makes sense, everyone? Thanks for that. Okay, so I've sort of given this like, hand-wavy- Oh, sure, what's up? [inaudible]. So that would be log-based. The question is, for the cross-entropy loss, is it log base 2 or log base c? It's log base 2. Okay, here, I can write that out. Yep. [inaudible]. Oh, sorry, I didn't quite hear that. [inaudible]. Okay. Um, so the question is can- uh, what is the proportion that are correct versus incorrect for these two examples we've worked through here? Um, and so, yeah- basically, what we're starting with is, we're starting with we have 900/100, 900 positives and 100 negatives. All right, so you can imagine that if you just stopped at this point, right, you would just cla- classify everything as positive, right, and so you get 100 negatives incorrect. Does that make sense? 
Because this is 900 positives and 100 negatives. So if you just stopped here and just tried to classify, given this whole region R_p, you would end up getting 10% of your examples wrong, right? In this case, we're sort of talking- we're not talking about percentages, we're talking about absolute number of examples that we've gotten wrong, but you can also definitely talk in terms of percentages instead. And then down here, once you've split it, right, now you've got these two subregions, right? And for- on this, on this left one here, you still have more positives than negatives, right? So you're still gonna classify positive in this leaf, right? And you're still gonna classify positive in this leaf, too, because they- they're both majority class, or the positives are still the majority class there. And in this case, since you have 0 negatives, you're not gonna make any errors in your classification, whereas in this case here, it's still going to make 100 errors. And so what I'm saying is that, at this level, so if we just look above this line at R_p, right, you're making 100 mistakes, and then below this line you're still making 100 mistakes. So what I'm saying is that, that the loss in this case is not very informative. [inaudible]. Um, so this, this p-hat- okay, I'm being a little bit loose with terminology with the notation here, but the p-hat in this case is a proportion, okay? But you can also easily- basically, it's like whether you're normalizing the whole thing or not. Yeah. Okay. So I've sort of given this a bit handwavy explanation as to why misclassification loss versus cross-entropy loss might be better or worse. Um, we can actually get a fairly good intuition for why this is the case by looking at it from a sort of geometric perspective. So pretend now that you have this, this plot, okay? And what you're plotting here is- pretend you have a binary classification problem, okay? So you have just- is it positive class or negative class, okay? And so you can sort of represent, say p-hat, as like the proportion of positives in your set, okay? And what you've got plotted up here is your loss. Okay. For cross-entropy loss, where your curve is gonna end up looking like is, is gonna end up looking like this strictly concave curve like this, okay? And what you can do is you can sort of look at where your children versus your parent would fall on this curve. So say that you have two children, okay? You have one up here, so like let's call this LR1. And you have one down here, LR2, okay? And say that you have an equal number of examples in both R1 and R2, so they're equally weighted. If you take- when you're looking at the overall loss between the two, right, that's really just the average of the two. So you can draw a line between these two, and the midpoint turns out to be the average of your two losses. So this is LR1 plus LR2 divided by 2. That's for this guy, okay? And what you can notice is that, in fact, the loss of the parent node is actually just this point projected upwards here, so this would be your LR parent. And this difference right here, this difference, is sort of your change in loss. Does this makes sense? Any questions? Okay. So we have this- just to recap, okay. So we have- say, we have two children regions, right? And they have different probabilities of positive examples occurring, right? 
They sort of would fall- one would fall on this point on the curve, and say, the other one falls on this point on the curve, then the average of the two losses sort of falls on the midpoint between these two original losses. And if you look at the parent, it's really just halfway between on the x-axis, and you can project up towards for that as well, and you end up with the loss of R_parent. What's up? [inaudible]. Okay. So what we're looking at here is we're looking at the cross entropy loss. So you've got this function here, this L cross entropy right, and that's in terms of p-hat c's, right? In this case here, we're just assuming that we have two classes, okay? So what we're doing is we're just modifying the p-hat c, we're, we're changing that on the x-axis and then we're looking at what the response of the overall loss function is on the y-axis. And so what I just did here is for any- this curve just represents for any p-hat c, what the cross entropy loss would look like. Okay. And so we can come back to this, for example, right? And if we look at this parent here right, this guy has a 10%, right? It's sort of like p-hat, p-hat for this guy is 0.1, it's 10% basically or, or I guess no, in this case, would be 0.9 sorry. And then versus here, in these two cases, right, your p-hat, in this case, is 1 since you've got them all right, all right, and then, in this case, it's 0.8, okay? So you can sort of see since these are equal, there's the same number of examples in both of these, the p-hat of the parent is just the average of the p-hat's of the children. Okay. And so that's how we can sort of take this LR_parent, this LR_parent is just halfway, if we projected this down, all right. Let me just erase this little bit here. If we projected this down like this, we'd see that this- that this point here is the midpoint. Okay. Um, but then when you're actually averaging the two losses after you've done the split, then you can basically just, you're just taking the average loss right? You're just summing LR1 plus LR2 and if you're taking the average then you're dividing by two, and what you can do is you can just draw the line and take the midpoint of this line instead. Yeah. [inaudible]. Yeah. [inaudible] Yeah. Exactly. So yeah really any- if there- it's a good point. The question was if you have an uneven split, uh what would that look like on this curve, right? And so at this point, I've been making the math easy by just saying there's an even split but really if there was a slightly uneven split you- the average would just be any point along this line that you've drawn. As you can see the whole thing is strictly concave so any point along that line is going to lie below the original loss curve for the parent. So you're basically, as long as you're not picking the exact same points on the probability curve and not making any gain at all in your split, you're gonna gain some amount of information through this split. Okay. Now, this was the cross-entropy loss, right? If instead, we look at the misclassification loss over here, let's draw this one instead. What we can see, in this case, if you draw it is that it's in fact really this pyramid kind of shape where it's just linear and then flips over once you start classifying the other side. And if you did the same argument here where we had LR1 and LR2, and then you drew a line between them, all right, that's basically just still the loss curve, and so, in this case, like your midpoint would be the same point as your parent. 
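To put numbers on that flatness, here is a small sketch that scores the two candidate splits of the 900-positive / 100-negative parent from earlier, using per-example losses weighted by region size. Misclassification reports essentially zero gain for both splits, while cross-entropy prefers the second one, which is exactly the sensitivity issue being described.

```python
import numpy as np

def misclass(p):
    # Misclassification loss of a binary region with positive proportion p.
    return 1 - max(p, 1 - p)

def cross_entropy(p):
    # Binary cross-entropy in bits: -sum_c p_c log2 p_c, with 0 log 0 taken as 0.
    return -sum(q * np.log2(q) for q in (p, 1 - p) if q > 0)

def split_gain(loss, parent, children):
    # parent and children are (n_pos, n_neg) counts; children are size-weighted.
    n = sum(parent)
    child_loss = sum(sum(c) / n * loss(c[0] / sum(c)) for c in children)
    return loss(parent[0] / n) - child_loss

parent = (900, 100)
left   = [(700, 100), (200, 0)]        # first candidate split
right  = [(400, 100), (500, 0)]        # second candidate split

print(split_gain(misclass, parent, left), split_gain(misclass, parent, right))
# ~0.0 and ~0.0 (up to floating point): misclassification sees no gain from either split
print(split_gain(cross_entropy, parent, left), split_gain(cross_entropy, parent, right))
# ~0.034 and ~0.108: cross-entropy prefers the second split
```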
So your loss of R_parent, in this case, would equal your loss of R1 plus loss of R2 divided by 2. All right. And so in this case, you can- there's even now according to the cross entropy formulation, you do have a gain in information and intuitively we do see a gain in information over here. For the misclassification loss, since it's not very sensitive, if you end up with points on the same side of the curve, then you actually don't see any sort of information gain based on this kind of representation. And so there's actually a couple, I, I presented the cross entropy loss here. There's also the Gini loss which is another one, which people just write out as, as the sum over your classes p-hat c times one minus p-hat c, okay, and it turns out that this curve also looks very similar to this original cross entropy curve. And what you'll see is that actually most curves that are successfully used for de- decision splits, look basically like the strictly concave function. Okay. So that's where it covers a lot of the criteria we use for splits. Um, let's look at some extensions for decision trees. Actually, I'm going to keep this guy. Okay. So, so far I've been talking about decision trees for classification. You could also imagine having decision trees for regression, and people generally call these regression trees, okay. So taking the ski example again let's pretend that instead of now predicting whether or not you can ski, you're predicting the amount of snowfall you would expect in that area around that time. Um, so like let's- I'm just gonna say it's like inches of snowfall I guess or something, per like day or something and just like maybe have some values up here. Some high value because you're- it's winter over there, it's mostly 0s over here because there's summer, and then you have some more high values over here, and then you have 0s along the equator again. 0s, southern hemisphere over our winter, and then more 0s like this. And you can sort of see how you would do just the exact same thing. You still want to isolate out regions and sort of increase like the purity of those regions. So you could still create like your trees like this, all right, and split out like this for example. And what you do when you get to one of your leaves is instead of just predicting a majority class, what you can do is predict the mean of the values left. So you're predicting, predict y hat where, well for Rm. So pretend you have a region Rm, you're predicting y hat of m which is the sum of all the indices in Rm, Y i minus y hat m, and you want the squared loss and then you can sort of I guess, in this case, you want to normalize by the overall cardinality of Rm or how many points you have in Rm. And so in this case, basically all you've done is you've switched your loss function or, no sorry that's wrong. [LAUGHTER] This is actually- I got a little bit ahead of myself. This is actually just- the, the mean value would just be this, in this case, right? It's just your summing all the values within your region. So in this case, 7, 9, 8, 10, and then just taking the average of that. Um, but so then what you do, what I was starting to write out there was actually really the, the loss that you would use, in this case, right, which is your squared loss, okay? So like we'll just call that L squared which, in this case, would be equal to Y i minus y hat m squared over R m. That's what I started to write over there. 
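Before continuing, here is a tiny sketch of the regression-tree leaf just described, using the made-up snowfall values from the example: the leaf predicts the mean of the region, and the squared loss measures how far the region's values are from that mean.

```python
import numpy as np

def leaf_prediction(y_region):
    # A regression leaf predicts the mean of the targets that land in the region.
    return np.mean(y_region)

def squared_loss(y_region):
    # Average squared error of the region's targets against that mean prediction.
    return np.mean((y_region - leaf_prediction(y_region)) ** 2)

snowfall = np.array([7.0, 9.0, 8.0, 10.0])   # the made-up inches from the example
print(leaf_prediction(snowfall))             # 8.5
print(squared_loss(snowfall))                # 1.25
```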
But in this case, right, you have your mean prediction and then your loss in this case, is how far off your mean prediction is from the overall predictions, in this case. Yep. So in terms of [inaudible]. So that's a really good question. The question was uh, how do you actually search for your splits, how do you actually solve the optimization problem of finding these splits? And it turns out that you can actually basically brute force it very efficiently. I'm going to get into sot of the details of how you do that shortly, but it turns out that you can just go through everything fairly quickly. Um, I'll get into that. I think that's in a couple of sections from now, yeah. Any other questions? Okay. So this is, uh, for regression trees, right? It turns out that, um, another useful extension that, that you don't really get for other learning algorithms is that you can also deal with, uh, categorical variables fairly easily. And basically, for this case, you could imagine that instead of having your latitude in degrees, you could just have three categories right? You could have something like, uh, this is the northern hemisphere, this is the equator, and this is the southern hemisphere, okay? And then you could ask questions instead of the sort, like that initial question we had before, where it was latitude greater than 30. Your question could instead be is, is- I guess this would be, is location in northern hemisphere? Right. And you can have basically any sort of subset- you could ask me question about any sort of subset of the categories you're looking at. Right? So in this case Northern, you would still- this question would still split out this top part from these bottom pieces here. One thing to be careful about though is that if you have q categories, then you have- I mean, you basically are considering every single possible subset of these categories. So that's 2 to the q possible splits. And so in general, you don't want to deal with too many categories because this will become quickly intractable to look through that many possible examples. It turns out that in certain very specific cases, you can still deal with a lot of categories. One such case is for binary classification where then you can just- the math is a little bit complicated for this one but you can basically sort your categories by how many positive examples are in each category, and then just take that as like a sorted order then search through that linearly, and it turns out that that yields to an optimal solution. So decision trees, we can use them for regression, we can also use them for categorical variables. Um, one thing that I've not gotten into is that, you can imagine that in the limit if you grew your tree without ever stopping, you could end up just having a separate region for every single data point that you have. Um, so that's really- you could consider that probably over fitting if you ran it all the way to that completion, right? So you can sort of see that decision trees are fairly high variance models. So one thing that we're interested in doing is regularizing these high variance models. And generally, how people have solved this problem is through a number of heuristics, okay? So one such heuristic is that if you hit a certain minimum leaf size, you stop splitting that leaf, okay? So for example in this case if you've hit like you only have four examples left in this leaf, then you just stop. 
Another one is you can enforce a maximum depth, and sort of a related one in this case is a max number of nodes. And then a fourth very tempting one I've got to say to use is you say, a minimum decrease in loss, right? I say this one's tempting because it's generally not actually a good idea to use this minimum decrease in loss 1. You can think about that, by thinking that if you have any sort of higher-order interactions between your variables, um, you might have to ask one question that is not very optimal, or doesn't give you that much of an increase in loss, and then your follow-up question combined with that first question might give you a much better increase. And you can sort of see that in this case, where the initial latitude questions doesn't really give us that much of a gain. We sort of split some positive and negatives, but the combination of the latitude question plus the time question really nails down what we want. And if we were looking at it purely from the minimum decrease in loss perspective, we might stop too early and miss that entirely. And so a better way to do this kind of loss decrease is instead you grow out your full tree, and then you prune it backwards instead. So you grow out the whole thing and then you check which nodes to prune out. Pruning. And how you generally do this, is you, you take it- you have a validation set that you use this with, and you evaluate what your misclassification error is on your validation set. If for each example that you might remove for each leaf that you might remove. So you would use misclassification in this case with a validation set. Any questions? Yeah? The minimum decrease in loss. The minimum decrease in loss? So, um, yeah of course. Uh, so you'll recall that before I was talking about sort of this RP, this loss of R_parent versus loss of R_1 plus loss of R_2. All right, so when we're- I had written out a maximization basically, um, oh to be clear, the question is, can you explain a little bit more clearly what this minimum decrease in loss means? And so you have your loss of R_1 and R_2 versus your loss of R_parent, right? So the split before the split, right, you have your loss before split. You have the loss of R_parent, and then after split, you have loss of R_1 plus loss of R_2. Yeah. And if, if this decrease between your loss of R_parent to your loss of your children is not great enough, you might be tempted to say, "Okay, that question didn't really gain us anything, and so therefore we will not actually use that question." But what I'm saying is that sometimes you have to ask multiple questions, right? You have to ask sort of sub-optimal questions first to get to the really good questions, especially if you have sort of interaction between your variables, if there is some amount of correlation between your variables. Okay. So we talked about regularization. I said that we would get to run time, let's actually just go up here again. [NOISE] So let's cover that really quickly. [NOISE] Okay. So it'll be useful to define a couple of numbers at this point. So say you have n examples. [NOISE] You have f features, and finally, you have, uh, d- let's say the depth of your tree is d, okay. All right. So you've gra- you, you have n examples that you trained on you- with the each of f features and your resulting tree has depth d. So at test time, your run-time is basically just your depth d, right? [NOISE] It's just o of d, right? Which is your depth. 
And typically, though not in all cases, um, d is sort of about- is less than the log of your number of examples. And you can sort of think about this as if you have a fairly balanced tree right, you'll end up sort of evenly splitting out all the examples and sort of recursively like doing these binary splits and so you'll be splitting it at the log of that n. Okay. So at test-time you've generally got it pretty quick. Uh, at train time, um, you have each point. So if you return back to this example, you'll see that each point, right, once you've done a split only belongs to the left or right of that split afterwards. All right. So it's sort of like, like this point right here, once you've split here will only ever be part of this region, will never be considered on the other side, on the right-hand side of that split. All right. So if your, if your tree is of depth d, each point, each point is part of Od nodes. Okay. And then at each node, you can actually work out that the cost of evaluating that point for- at training time is actually just proportional to the number of features f. I won't get too much into the details of why this is, but you can consider that if you're doing binary features, for example, where each feature is just yes or no of some sort, then you only have to consider, if you have f features total, you only have to consider, um, f possible splits. So that's why the cost in that case would be f, and then if it was instead a, uh, quantitative feature, I mentioned briefly that you could sort the overall features and then scan through them linearly, um, and that also ends up being asymptotically O of f to do that. Okay. So each point is at most O of d nodes, and then the cost of point at each node is O of f and you have n points total. So the total cost is really just, is just O of nfd, like this. It turns out that this is actually surprisingly fast, uh, especially if you consider that n times f is just the size of your original design matrix, right or your data matrix, all right. Your data matrix is of size n times f, right, and then you're only- your, your runtime is going through the data matrix that most depth times, and since depth is log of n, that turns out to be or generally bounded by log of n, you have generally, a fairly, fast training time as well. Any questions about runtime? [NOISE] Okay. So I've been talking a lot about the good sides of decision trees right, they seem pretty nice so far. However, there are a number of downsides too. Um, and one big one is that it doesn't have additive structure to it. And so let me explain a little bit what that means. Okay. So let's say now we have an example and you have just two features again, so x1 and x2, and you ca- say you define a line, okay, just running through the middle defined by x1 equals x2. And all the points above this line are positive, and all the points below it are negative. Now, if you have a simple linear model like logistic regression, you'll have no issue with this kind of setup. But for a decision tree, [LAUGHTER] basically, you'd have to ask a lot of questions that even somewhat approximate this line. Like, what you can try is you're going to say okay let's split it this way, and maybe we can do a split this way and then now I split here, maybe something like this, and basically something like that, right? Even here you- so you've asked a lot of questions and you've only gotten a very rough approximation of the actual line that you've drawn in this case. 
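Circling back to the earlier question about how the splits are actually searched, here is a brute-force sketch consistent with the runtime discussion above: for each of the f features, try each observed value as a threshold and keep the (j, t) pair whose children have the lowest total loss. The loss here is a simple misclassification count on 0/1 labels; a cross-entropy or Gini criterion could be dropped in instead, and the toy data is made up.

```python
import numpy as np

def region_loss(y):
    # Misclassification count of a region; cross-entropy or Gini could be swapped in.
    return len(y) - np.max(np.bincount(y)) if len(y) else 0

def best_split(X, y):
    """Brute-force search over every feature j and every observed threshold t."""
    best = (np.inf, None, None)
    for j in range(X.shape[1]):                    # f features
        for t in np.unique(X[:, j]):               # at most n thresholds per feature
            left, right = y[X[:, j] < t], y[X[:, j] >= t]
            score = region_loss(left) + region_loss(right)
            if score < best[0]:
                best = (score, j, t)
    return best                                    # (children loss, feature, threshold)

X = np.array([[1, 45], [6, -40], [7, 10], [12, 60]], dtype=float)  # [month, latitude]
y = np.array([1, 1, 0, 0])                                         # made-up labels
print(best_split(X, y))
```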
And so decision trees do have a lot of issues with these kind of structures where the v- the features are interacting additively with one another. Okay. So to recap so far, since we've covered a number of different things about decision trees, there's a number of pu- pluses and minuses to decision trees. Okay. So on the plus side, they're actually, I think this is an important point is that they're actually pretty easy to explain, right? If you're explaining what a decision tree is to like a non-technical person, it's fairly obvious you're like okay you have this tree, you're just playing 20 Questions with your data and letting it co- come up with one question at a time. There are also interpretable, you can just draw out the tree especially for shorter trees to see exactly what it's doing. It can deal with categorical variables, and it's generally pretty fast. However, on the negative side, one that I alluded to is that they're fairly high variance models and so are oftentimes prone to overfitting your data. They're bad at additive structure. And then finally they have, because in large part because of these first two, they generally have fairly low predictive accuracy. [NOISE] I know what you guys are thinking, I just spent all this time talking about decision trees and then I tell you guys they actually sort of suck. So why did I actually cover decision trees? And the answer is that in fact you can make decision trees a lot better through ensembling. And a lot of the methods, for example at the leading methods in Kaggle these days are actually built on ensembles of decision trees, and they really provide an ideal sort of model framework to look at, through which we can examine a lot of these different ensembling methods. Any questions about decision trees before I move on? [NOISE] Yeah? [inaudible]. I don't think that's strictly- Okay. So the question is for the cross-entropy loss, does the log need to be base 2? And the answer is I'm pretty sure that it's not very relevant in this case, I'm not 100% sure about that but I'm pretty sure that the base of the log of that makes, it's cross entropy loss actually initially came out of like information theory, we have like computer bits and you're transmitting bits. So it's useful to think in terms of bits of information that you can transmit, which is why it came up as log base 2 in the initial formulation. [NOISE] Okay. So now let's talk about ensembling. Okay. So why does ensembling help? At some level, you can sort of think back to your basic, uh, statistics. So say you have, um, you have XIs, XIs, which are random variables. I'll sometimes write this as just RV, um, that are independent and identically distributed. And so probably a lot of you are familiar with this already or you can call this IID, okay. Now say that your variance of one of these variables is Sigma squared. Then what you can show is that the variance of the mean of many of these variables. So let's- of many of these random variables or written alternatively, 1 over N sum over I to the XI is equal to Sigma squared over N. And so each independent variable you factor in is decreasing the variance of your model, all right? And so the thought is that if you can factor in a number of independent sources, you can slowly decrease your variance. Okay, so, uh, so that- though this is a little bit simplistic of a way of looking at this, because really all these different things are factoring together have some amount of correlation with each other. 
And so this independence assumption is oftentimes not correct. So instead, you drop the independence assumption. So now your variables are just ID, identically distributed, right? Okay. And say we can characterize what the correlation between any two X_i's is, and we can write that down as rho. Then you can actually write out the variance of your mean as rho sigma squared, plus 1 minus rho over n, times sigma squared - that is, Var(mean) = rho sigma^2 + ((1 - rho) / n) sigma^2, okay? And so you can sort of see that if your correlation- if they're fully correlated, then this second term will drop to 0 and you'll just have sigma squared again, because averaging a bunch of fully correlated variables is just gonna give you the original variable's variance. Versus if they're completely decorrelated, then the first term drops to 0 and you just end up with sigma squared over n, which gives you the initial, uh, independent identically distributed equation. And so in this case, really what you wanna do- the name of the game is, you wanna have as many different models that you're averaging as possible, to increase this n, which drives the second term down. And then on the other hand, you also want to make sure those models are as decorrelated as possible so that your rho goes down and the first term goes down as well, okay? And so this gives rise to a number of different ways to ensemble. And one way you could think about doing this is you just use different algorithms, right? This is actually what a lot of people in Kaggle, for example, will do: they'll just take a neural network, a random forest, an SVM, average them all together, and generally that actually works pretty well. But then you sort of have to spend your time implementing all these separate algorithms, which is oftentimes not the most efficient use of your time. Another one that people would like to do is just use different training sets. Okay. And again, in this case, like you probably spent a lot of effort collecting your initial training set; you don't want your machine learning person to just come and recommend to you that, just go collect a whole second training set or something like that to improve your performance. Like that's generally not the most helpful recommendation, okay? And so then, what we're gonna cover now are these two other methods that we use to do ensembling. And one of them is called bagging, which is sort of trying to approximate having different training sets. We'll get into that quickly. And then you also have boosting. And just so that you guys will have a little bit of context, we're gonna be using decision trees to talk a lot about these models; and so for bagging, you might have heard of random forests, that's a variant of bagging for decision trees. And then for boosting, you might have heard of things like AdaBoost, or XGBoost, which are variants of boosting for decision trees. Okay, so that sort of covers at a high level what we would wanna do. These first two are very nice because they would give us much less correlated variables. But generally, we end up doing these latter two because we don't want to collect new training sets or train entirely new algorithms. Okay, so let's cover bagging first. Okay, so bagging really stands for this thing: it's called bootstrap aggregation, okay? Um, and so- first, let's just break down this term. So bootstrap, what that is, is it's typically this method used in statistics to measure the uncertainty of your estimate, okay? 
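A quick numeric sketch of that variance formula, plugging in assumed values of sigma squared, rho, and n just to show the two limiting cases discussed above.

```python
def var_of_mean(sigma2, rho, n):
    # Variance of the mean of n identically distributed variables with pairwise correlation rho.
    return rho * sigma2 + (1 - rho) * sigma2 / n

sigma2 = 4.0
print(var_of_mean(sigma2, rho=1.0, n=50))   # fully correlated: sigma^2 = 4.0
print(var_of_mean(sigma2, rho=0.0, n=50))   # independent: sigma^2 / n = 0.08
print(var_of_mean(sigma2, rho=0.3, n=50))   # in between: 1.256
```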
And so what- what is useful to define in this case for when you're talking about bagging is you can say that you have a true population P, okay? And your training set- training set S is sampled from P, right? So you just start drawing a bunch of examples from P and that's what forms your training set at some level. And so ideally, like for example, this different training set's approach. What you do is, you just draw S1, S2, S3, S4, and then train your model in each one of those separately. Unfortunately, you generally don't have the time to do that. And so what ba- what Bootstrapping does, is you assume basically that your population is your training sample, okay? So you assume that your population is your training sample. And so now that you have this S is approximating your P, then you can draw new samples from your population by just drawing samples from S instead, okay? So you have bootstrap samples, is what they're called. Z sampled from S. And so how that works is you basically just take your train- your- your training sample, okay? Say it's of like cardinality N or something. And you just sample N times from S and this is important, you do it with replacement. Because they're pretending that this is the population, and so doing it with replacement sort of makes that assumption hold that you're sampling from it as a population. Okay, so that's bootstrapping. So you generate all these different bootstrap samples Z on your- from your training set. And what you can do is you can take your model and train it on all these separate bootstrap samples, and then you can sort of look at the variability in the predictions that your model ends up making based on these different bootstrap samples. And that gives you sort of a measure of uncertainty. I'm not gonna go into too much detail now because that's not actually what we're gonna use Bootstrapping for. What we want to use bootstrapping for is we wanna aggregate these two Bootstrap samples. And so at a very high level, what that means is we're gonna take a bunch of Bootstrap samples, train separate models on each and then average their outputs, okay? So let's make that a little bit more formal. [NOISE] So you have bootstrap samples Z_1 through Z_M say, okay, capital M. That's just say how many bootstrap samples you're going to take. Okay, you train [NOISE] a model, G_M, okay, on Z_M, okay? Then all you're doing is you're just defining this new sort of meta model. I'm not putting a subscript on this one to show that it's a meta model, T of M, which is just the sum of your predictions of your individual models, divided by the total number of models you have, all right? And this is just me writing out what I was sort of talking about right up there for bagging. If you're taking these bootstrap samples and then you're training separate models, and then you're just aggregating them all together to get this bagging approach. So if we just do a little bit of analysis from the bias-variance perspective on this, we can sort of see why this kind of thing might work. [NOISE] And so you recall we had this equation up here, right? The va- variance of the mean is rho sigma squared, plus 1 minus rho, over n of sigma squared. So let me just write that out here. [NOISE] And in this case, our M is actually really uh, just the number of bootstrap samples. So we'll just use big M in this case. And what you're doing is by taking these bootstrap samples, you're sort of decorrelating the models you're training. 
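A small Python sketch of bootstrap sampling and the bagged average; the model objects are assumed to be scikit-learn-style estimators with fit and predict methods:

```python
import numpy as np

def bootstrap_sample(X, y, rng):
    """Draw len(y) examples from the training set WITH replacement (treat S as the population)."""
    idx = rng.integers(0, len(y), size=len(y))
    return X[idx], y[idx]

def bag(models, X, y, seed=0):
    """Fit each model on its own bootstrap sample Z_m and return the fitted list."""
    rng = np.random.default_rng(seed)
    for model in models:
        Xb, yb = bootstrap_sample(X, y, rng)
        model.fit(Xb, yb)
    return models

def bag_predict(models, X):
    """The aggregated 'meta model': average of the M individual predictions (regression case)."""
    return np.mean([m.predict(X) for m in models], axis=0)
```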
Your bootstrapping [NOISE] is driving down [NOISE] rho. Okay. And so by driving this down, you're sort of making this term get smaller and smaller. And then your question might be okay, what about this term here? And it turns out that basically you can take as many bootstrap samples as you want, and that will slowly drive down- it increases M and drive down this second term. And it turns out that one nice thing about bootstrapping, is that increasing the number of bootstrap models in your training, doesn't actually cause you to overfit anymore than you were beforehand. Because all you're doing, is you're driving down this term here. So more M [NOISE] and it's just less in variance. [NOISE] All you're doing is driving down the second term as much as possible when you're getting more and more bootstrap samples. So generally, it only improves performance. And so generally what people will do is they'll sample more and more models until they see that their error stops going down. Because that means they basically eliminated this term over here. So this seems kinda nice, right? You're decreasing the variance, where is the trade-off coming in? Oh, there is a question there. [inaudible]. Yeah, there's definitely a bound, right? Because um, I'm not going to define one formally right now. Oh, the question is can you define a bound on how much you decrease rho by? Uh, I'm not- yeah, so there's definitely a lower bound [NOISE] or, oh yeah, a lower bound on how far you can decrease rho. Basically it comes down to your bootstrap samples are still fairly highly correlated with one another, all right. Because they're still just drawing it from the same sample set S. Really, your Z is gonna end up containing about two- each Z is going to contain about two thirds of S. And so your Zs are still gonna be fairly highly correlated with each other. And no, I don't have a formal equation to write down as to exactly how much that decreases rho by, or how much that bounds rho by, you can sort of see intuitively that there is a bound there and that you can't just magically decrease rho all the way down to 0 and achieve 0 variance. [NOISE] All right. So saying that you decrease variance, that seems very nice. One issue that comes up with, with uh, bootstrapping is that in fact you're actually slightly increasing the bias of your models when you're doing this. And the reasoning for that [NOISE] is because of this sub-sampling that I was talking about here. Each one of your Zs is now about two-thirds of the original S. So you're training on less data um, and so your models are becoming slightly less uh, you know, complex and so that increases your bias in this case. Yes. [inaudible] Yeah, for sure. Um, so the question is, can you explain the difference between a random variable and an algorithm in this case, right? And so you could sorta- at a, at a very high level, you can think of an algorithm as a classifier- as a function that's taking in some data and making a prediction. Right? And if you sort of see those- that whole setup as sort of like, the probability, the algorithm is giving some sort of output in the probabilistic perspective, you can sort of see the algorithm as like a random variable in the case- in this case. Sort of like, you're basically considering, sort of the space of possible predictions that your algorithm can make and that you can sort of see as a distribution of possible predictions and that you can approximate that as a random variable. 
I mean it is a random variable at some level, because it's sort of like based on what's training sample you end up with, your predictions of your output model are gonna change. And so since you're sampling sort of these random samples from your population set, you can consider your algorithm as sort of based on that random sample and therefore a random variable itself. Okay. So yeah, your bias is slightly increased because [NOISE] of random subsampling, [NOISE] but generally, the decrease in variance that you get from doing this, is much larger than the slight increase in bias you get from, from doing this randomized subsampling. So in a lot of cases, bagging is quite nice. [NOISE] Okay? So I've talked a bit about ba- about bagging, uh, let's talk about decision trees plus bagging now. Okay. So you recall that decision trees are high [NOISE] variance, low bias okay? And this right here sort of explains why they're a pretty good fit for bagging. Okay? Because bagging what you're doing, is you're decreasing the variance of your models for a slight increase in bias. And since most of your error from your decision trees is coming from the high variance side of things, by sort of driving down that variance, you get a lot more benefit than for a, a model that would be on the reverse high bias and low variance. Okay? So, so this makes this like an ideal fit [NOISE] for bagging. [NOISE] Okay. [NOISE] So now, um, this is sort of decision trees plus bagging. I said that random forests are sort of a version of decision trees plus bagging. And so what I've described here is actually almost a random forest at this point. The one key point we're still missing is that random forests actually introduce even more randomization into each individual decision tree. And the idea behind that is that- as I had a question from before is this Rho, you can only drive it down so far through just pure bootstrapping. But if you can further decorrelate your different random variables, then you can drive down that variance even further, okay? Um, and so the idea there is that basically for- at each split for random forests, at each split, you consider only a fraction of your total features, right? So it's sort of like, for that ski example, maybe like for the first split, I only let it look at latitude, and then for the second split, I only let it look at, uh, the time of the year. [NOISE] And so this might seem a little bit unintuitive at first, but you can sort of get the intuition from two ways. One is that you're decreasing Rho and then the other one is you can think that- say you have a classification example, where you have one very strong predictor that gets you very good performance on its own. And regardless of what bootstrap sample you select, your model is probably gonna use that predictor as its first split. That's gonna cause all your models to be very highly correlated right at that first split, for example, and by instead forcing it to, to sample from different features. Instead, that's going to increase the, uh, or decrease the correlation between your models. And so it's all about decorrelating your models in this case. [NOISE] Okay. And that sort of brings to a close a lot of our discussion of bagging. Are there any questions regarding bagging? Okay. Now, I've covered bagging. Let's get a little bit into boosting. [NOISE] And I'll make this quick. 
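Before moving on to boosting, here is one concrete way to get bagged trees with random feature subsets at each split, using scikit-learn's RandomForestClassifier; the hyperparameter values are illustrative only:

```python
# Random forest = bagged decision trees where each split only considers a random
# subset of the features (the max_features argument below).
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=200,     # M: number of bootstrapped trees; more rarely hurts, just slower
    max_features="sqrt",  # only a random subset of features is considered at EACH split
    bootstrap=True,       # each tree is trained on its own bootstrap sample of S
    random_state=0,
)
# rf.fit(X_train, y_train); rf.predict(X_test)   # assuming X_train, y_train, X_test exist
```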
But basically, whereas bagging we sort of saw in the intuition that we were decreasing variance, boosting is sort of actually more of the opposite where you're decreasing the bias of your models, okay? So- [NOISE] and also it- it's basically, um, more additive in, um, in how it's doing things. So versus- [NOISE] you'll recall that for bagging, you were taking the average of a number of variables. In boosting, what happens, you train one model and then you add that prediction into your ensemble. And then when you train a new model, you just add that in as a prediction. And so- and that's a little bit handwavy right now. So let me actually make that clear through an example. [NOISE] So say you have a dataset, again, with axes X1 and X2, and you have some data points, maybe some- that's actually- just call them pluses and minuses. So you have some more pluses here, and then maybe a couple of minuses and some pluses here, okay? And what you- say you're training a size one decision tree. So decision stumps is what we call them. And so you only get to ask one question at a time. And the reason behind this, just really quickly, is that because boosting decreases bias, by restricting your trees to be only depth 1 you basically are increasing their amount of bias and decreasing their amount of variance, which makes them a better fit for boosting kind of methods. And say that you come up with a, a decision boundary, okay? Say this one here, okay? And what you're gonna do is, on this side you predict positive, right? And on this side you predict negative. There's like a reasonable like line that you could draw here, but it's not perfect, right? You've made some mistakes. And in fact, what you can do is you can sort of identify these mistakes. Now, if we draw this in red, right? You've got- made these guys as mistakes. And what boosting does is basically it increases the weights of the mistakes you've made. And then for the next out- uh, decision stump that you train, it's now trained on this modified set. Which we could, let's just draw it over here. One. [NOISE] And so now you- these positives, I'll just draw them much bigger. You know, you've got big positives here and some small negatives, and some small positives, some big negatives here. And so now your model, to try and get these right, might pick a decision boundary like this, right? And this is also basically recursive in that each step, right? You're gonna be reweighting each of the examples based on how many of your previous ones have gotten it wrong or right in the past. [NOISE] And so basically what you're doing is you can sort of weight each one of these classifiers. You can determine [NOISE] for classifier Gm, a weight Alpha m, which is proportional to how many examples you got wrong or right. So a better classifier, you wanna give it more weight, um, and, uh, a bad classifier you wanna give it less weight proportionally. And, uh, I think that the exact equation used in AdaBoost, for example, is just log of 1 minus the error of your m-th model divided by that error, so basically the log odds, okay? And then your total classifier is just F of- or let's just call it G of x again. G of x is just the sum over m of Alpha m times G m of x, right? And then each G m is trained on a weighted- on a reweighted, actually, reweighted training set. And so I've glossed over a lot of the details here in interest of time, but the specifics of an algorithm like this will be in the lecture notes. And this algorithm is actually known as AdaBoost.
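A hedged sketch of that reweighting step for labels in {-1, +1}; the exact constants differ slightly across formulations of AdaBoost, so treat this as illustrative rather than the precise algorithm from the lecture notes:

```python
import numpy as np

def adaboost_round(weights, y_true, y_pred):
    """One AdaBoost-style round: weight the stump by the log odds of its weighted error,
    then upweight the examples it got wrong (and downweight the ones it got right)."""
    miss = (y_pred != y_true)
    err = np.sum(weights * miss) / np.sum(weights)                  # weighted error of this stump
    alpha = 0.5 * np.log((1.0 - err) / err)                         # log-odds weight for the stump
    weights = weights * np.exp(alpha * np.where(miss, 1.0, -1.0))   # mistakes get heavier
    return weights / weights.sum(), alpha

# Final ensemble: G(x) = sign( sum_m alpha_m * G_m(x) ), each G_m trained on reweighted data.
```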
[NOISE] And basically through similar techniques, you can derive algorithms such as XGBoost or gradient boosting machines that also allow you to basically reweight the examples you're getting right or wrong in this sort of dynamic fashion and slowly adding them in this additive fashion to your composite model. [NOISE] And that about finishes it for today. Uh, thanks for coming. Um, yeah, a great rest of your week. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Stanford_CS229_Machine_Learning_Linear_Regression_and_Gradient_Descent_Lecture_2_Autumn_2018.txt | Morning and welcome back. So what we'll see today in class is the first in-depth discussion of a learning algorithm, linear regression, and in particular, over the next, what, hour and a bit you'll see linear regression, batch and stochastic gradient descent as algorithms for fitting linear regression models, and then the normal equations, um, uh, as a way of- as a very efficient way to let you fit linear models. Um, and we're going to define notation, and a few concepts today that will lay the foundation for a lot of the work that we'll see the rest of this quarter. Um, so to- to motivate linear regression, it's gonna be, uh, maybe the- maybe the simplest, one of the simplest learning algorithms. Um, you remember the ALVINN video, the autonomous driving video that I had shown in class on Monday, um, the self-driving car video, that was a supervised learning problem. And the term supervised learning [NOISE] meant that you were given Xs which was a picture of what's in front of the car, and the algorithm [NOISE] had to map that to an output Y which was the steering direction. And that was a regression problem, [NOISE] because the output Y that you want is a continuous value, right? As opposed to a classification problem where Y is discrete. And we'll talk about classification, um, next Monday, but supervised learning regression. So I think the simplest, maybe the simplest possible learning algorithm, a supervised learning regression problem, is linear regression. And to motivate that, rather than using a self-driving car example which is quite complicated, we'll- we'll build up a supervised learning algorithm using a simpler example. Um, so let's say you want to predict or estimate the prices of houses. So [NOISE] the way you build a learning algorithm is start by collecting a data-set of houses, and their prices. Um, so this is a data-set that we collected off Craigslist a little bit back. This is data from Portland, Oregon. [NOISE] But so there's the size of a house in square feet, [NOISE] um, and there's the price of a house in thousands of dollars, [NOISE] right? And so there's a house that is 2,104 square feet whose asking price was $400,000. Um, [NOISE] house with, uh, that size, with that price, [NOISE] and so on. Okay? Um, and maybe more conventionally if you plot this data, there's a size, there's a price. So you have some dataset like that. And what we'll end up doing today is fit a straight line to this data, right? [NOISE] And go through how to do that. So in supervised learning, um, the [NOISE] process of supervised learning is that you have a training set such as the data-set that I drew on the left, and you feed this to a learning algorithm, [NOISE] right? And the job of the learning algorithm is to output a function, uh, to make predictions about housing prices. And by convention, um, I'm gonna call this function that it outputs a hypothesis, [NOISE] right? And the job of the hypothesis is, [NOISE] you know, it will- it can input the size of a new house, or the size of a different house that you haven't seen yet, [NOISE] and will output the estimated [NOISE] price. Okay? Um, so the job of the learning algorithm is to input a training set, and output a hypothesis.
The job of the hypothesis is to take as input, any size of a house, and try to tell you what it thinks should be the price of that house. Now, when designing a learning algorithm, um, and- and, you know, even though linear regression, right? You may have seen it in a linear algebra class before, or some other class before, um, the way you go about structuring a machine learning algorithm is important. And design choices of, you know, what is the workflow? What is the data-set? What is the hypothesis? How does this represent the hypothesis? These are the key decisions you have to make in pretty much every supervised learning, every machine learning algorithm's design. So, uh, as we go through linear regression, I will try to describe the concepts clearly as well because they'll lay the foundation for the rest of the algorithms. Sometimes it's much more complicated with the algorithms you'll see later this quarter. So when designing a learning algorithm the first thing we need to ask is, um, [NOISE] how- how do you represent the hypothesis, H, right? And in linear regression, for the purpose of this lecture, [NOISE] we're going to say that, um, the hypothesis is going to be [NOISE] that. Right? That the input, uh, size X, and output a number as a- as a linear function, um, of the size X, okay? And then, the mathematicians in the room, you'll say technically this is an affine function. It was a linear function, there's no theta 0, technically, you know, but- but the machine learning sometimes just calls this a linear function, but technically it's an affine function. Doesn't- doesn't matter. Um, so more generally in- in this example we have just one input feature X. More generally, if you have multiple input features, so if you have more data, more information about these houses, such as number of bedrooms [NOISE] Excuse me, my handwriting is not big. That's the word bedrooms, [NOISE] right? Then, I guess- [NOISE] All right. Yeah. Cool. My- my- my father-in-law lives a little bit outside Portland, uh, and he's actually really into real estate. So this is actually a real data-set from Portland. [LAUGHTER] Um, so more generally, uh, if you know the size, as well as the number of bedrooms in these houses, then you may have two input features [NOISE] where X1 is the size, and X2 is the number of bedrooms. [NOISE] Um, I'm using the pound sign bedrooms to denote number of bedrooms, and you might say that you estimate the size of a house as, um, h of x equals, theta 0 plus theta 1, [NOISE] X1, plus theta 2, X2, where X1 is the size of the house, and X2 is- [NOISE] is the number of bedrooms. Okay? Um, so in order to- [NOISE] so in order to simplify the notation, [NOISE] um, [NOISE] in order to make that notation a little bit more compact, um, I'm also gonna introduce this other notation where, um, we want to write a hypothesis, as sum from J equals 0-2 of theta JXJ, so this is the summation, where for conciseness we define X0 to be equal to 1, okay? See we define- if we define X0 to be a dummy feature that always takes on the value of 1, then you can write the hypothesis h of x this way, sum from J equals 0-2, or just theta JXJ, okay? It's the same with that equation that you saw to the upper right. And so here theta becomes a three-dimensional parameter, theta 0, theta 1, theta 2. This index starting from 0, and the features become a three dimensional feature vector X0, X1, X2, where X0 is always 1, X1 is the size of the house, and X2 is the number of bedrooms of the house, okay? 
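A tiny numpy sketch of this hypothesis with the dummy feature x0 = 1; the bedroom counts below are made up for illustration, since only the sizes appear in the lecture:

```python
import numpy as np

# Hypothesis h_theta(x) = sum_j theta_j * x_j, with the dummy feature x0 = 1 as the first column.
X = np.array([[1.0, 2104.0, 3.0],     # [x0, size in sq ft, # bedrooms] -- bedrooms are illustrative
              [1.0, 1416.0, 2.0]])
theta = np.array([0.0, 0.0, 0.0])     # [theta_0, theta_1, theta_2]

def h(theta, X):
    return X @ theta                  # one predicted price (in $1000s) per row of X

print(h(theta, X))
```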
So, um, to introduce a bit more terminology. Theta [NOISE] is called the parameters, um, of the learning algorithm, and the job [NOISE] of the learning algorithm is to choose parameters theta, that allows you to make good predictions about your prices of houses, right? Um, and just to lay out some more notation that we're gonna use throughout this quarter. We're gonna use a standard that, uh, M, we'll define as the number of training examples. So M is going to be the number of rows, [NOISE] right, in the table above, um, where, you know, each house you have in your training set. This one training example. Um, you've already seen [NOISE] me use X to denote the inputs, um, and often the inputs are called features. Um, you know, I think, I don't know, as- as- as a- as a emerging discipline grows up, right, notation kind of emerges depending on what different scientists use for the first time when you write a paper. So I think that, I don't know, I think that the fact that we call these things hypotheses, frankly, I don't think that's a great name. But- but I think someone many decades ago wrote a few papers calling it a hypothesis, and then others followed, and we're kind of stuck with some of this terminology. But X is called input features, or sometimes input attributes, um, and Y [NOISE] is the output, right? And sometimes we call this the target variable. [NOISE] Okay. Uh, so x, y is, uh, one training example. [NOISE] Um, and, uh, I'm going to use this notation, um, x_i, y_i in parentheses to denote the i_th training example. Okay. So the superscript parentheses i, that's not exponentiation. Uh, I think that as suppo- uh, this is- um, this notation x_i, y_ i, this is just a way of, uh, writing an index into the table of training examples above. Okay. So, so maybe, for example, if the first training example is, uh, the size- the house of size 2104, so x_1_1 would be equal to 2104, right, because this is the size of the first house in the training example. And I guess, uh, x, um, second example, feature one would be 1416 with our example above. So the superscript in parentheses is just some, uh, uh, yes, it's, it's just the, um, index into the different training examples where i- superscript i here would run from 1 through m, 1 through the number of training examples you have. Um, and then one last bit of notation, um, I'm going to use n to denote the number of features you have for the supervised learning problem. So in this example, uh, n is equal to 2, right? Because we have two features which is, um, the size of house and the number of bedrooms, so two features. Which is why you can take this and, and write this, um, as the sum from j equals 0 to n. Um, and so here, x and Theta are n plus 1 dimensional because we added the extra, um, x_0 and Theta_0. Okay. So- so we have two features then these are three-dimensional vectors. And more generally, if you have n features, uh, you, you end up with x and Theta being n plus 1 dimensional features. All right. And, you know, uh, you see this notation in multiple times, in multiple algorithms throughout this quarter. So if you, you know, don't manage to memorize all these symbols right now, don't worry about it. You'll see them over and over and they become familiar. All right. So, um, given the data set and given that this is the way you define the hypothesis, how do you choose the parameters, right? So you- the learning algorithm's job is to choose values for the parameters Theta so that it can output a hypothesis. 
So how do you choose parameters Theta? Well, what we'll do, um, is let's choose Theta such that h of x is close to y, uh, for the training examples. Okay. So, um, and I think the final bit of notation, um, I've been writing h of x as a function of the features of the house, as a function of the size and number of bedrooms of the house. [NOISE] Um, sometimes we emphasize that h depends both on the parameters Theta and on the input features x. Um, we're going to use h_Theta of x to emphasize that the hypothesis depends both on the parameters and on the, you know, input features x, right? But, uh, sometimes for notational convenience, I just write this as h of x, sometimes I include the Theta there, and they mean the same thing. It's just, uh, maybe a abbreviation in notation. Okay. But so in order to, um, learn a set of parameters, what we'll want to do is choose a parameters Theta so that at least for the houses whose prices you know, that, you know, the learning algorithm outputs prices that are close to what you know where the correct price is for that set of houses, what the correct asking price is for those houses. And so more formally, um, in the linear regression algorithm, also called ordinary least squares. With linear regression, um, we will want to minimize, I'm gonna build out this equation one piece at a time, okay? Minimize the square difference between what the hypothesis outputs, h_Theta of x minus y squared, right? So let's say we wanna minimize the squared difference between the prediction, which is h of x and y, which is the correct price. Um, and so what we want to do is choose values of Theta that minimizes that. Um, to fill this out, you know, you have m training examples. So I'm going to sum from i equals 1 through m of that square difference. So this is sum over i equals 1 through all, say, 50 examples you have, right? Um, of the square difference between what your algorithm predicts and what the true price of the house is. Um, and then finally, by convention, we put a one-half there- put a one-half constant there because, uh, when we take derivatives to minimize this later, putting a one-half there would make some of the math a little bit simpler. So, you know, changing one- adding a one-half. Minimizing that formula should give you the same as minimizing one-half of that but we often put a one-half there so to make the math a little bit simpler later, okay? And so in linear regression, we're gonna define the cost function J of Theta to be equal to that. And, uh, [NOISE] we'll find parameters Theta that minimizes the cost function J of Theta, okay? Um, and, the questions I've often gotten is, you know, why squared error? Why not absolute error, or this error to the power of 4? We'll talk more about that when we talk about, um, uh, when, when, when we talk about the generalization of, uh, linear regression. Um, when we talk about generalized linear models, which we'll do next week, you'll see that, um, uh, linear regression is a special case of a bigger family of algorithms called generalizing your models. And that, uh, using squared error corresponds to a Gaussian, but the- we, we, we'll justify maybe a little bit more why squared error rather than absolute error or errors to the power of 4, uh, next week. So, um, let me just check, see if any questions, [NOISE] at this point. No, okay. Cool. All right. So, um, so let's- next let's see how you can implement an algorithm to find the value of Theta that minimizes J of Theta. 
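Before getting to the algorithm, here is the cost function itself as a tiny numpy sketch:

```python
import numpy as np

def J(theta, X, y):
    """Least-squares cost from the lecture: J(theta) = 1/2 * sum_i (h_theta(x_i) - y_i)^2,
    with X the (m, n+1) design matrix (x0 = 1 column included) and y the known prices."""
    residuals = X @ theta - y
    return 0.5 * np.sum(residuals ** 2)
```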
That- that minimizes the cost function J of Theta. Um, we're going to use an algorithm called gradient descent. And, um, you know, this is my first time teaching in this classroom, so trying to figure out logistics like this. All right. Let's get rid of the chair. Cool, um, all right. And so with, uh, gradient descent we are going to start with some value of Theta, um, and it could be, you know, Theta equals the vector of all zeros would be a reasonable default. We can initialize it randomly, the count doesn't really matter. But, uh, Theta is this three-dimensional vector. And I'm writing 0 with an arrow on top to denote the vector of all 0s. So 0 with an arrow on top that's a vector that says 0, 0, 0, everywhere, right. So, um, uh, so sought to some, you know, initial value of Theta and we're going to keep changing Theta, um, to reduce J of Theta, okay? So let me show you a, um- vi- vis- let me show you a visualization of gradient descent, and then- and then we'll write out the math. [NOISE] Um, so- all right. Let's say you want to minimize some function J of Theta and, uh, it's important to get the axes right in this diagram, right? So in this diagram the horizontal axes are Theta 0 and Theta 1. And what you want to do is find values for Theta 0 and Theta 1. In our- I- I- In our example it's actually Theta 0, Theta 1, Theta 2 because Theta's 3-dimensional but I can't plot that. So I'm just using Theta 0 and Theta 1. But what you want to do is find values for Theta 0 and Theta 1, right? That's the, um, uh, right, you wanna find values of Theta 0 and Theta 1 that minimizes the height of the surface j of Theta. So maybe this- this looks like a good- pretty good point or something, okay? Um, and so in gradient descent you, you know, start off at some point on this surface and you do that by initializing Theta 0 and Theta 1 either randomly or to the value of all zeros or something doesn't- doesn't matter too much. And, um, what you do is, uh, im- imagine that you are standing on this lower hill, right standing at the point of that little x or that little cross. Um, what you do in gradient descent is, uh, turn on- turn around all 360 degrees and look all around you and see if you were to take a tiny little step, you know, take a tiny little baby step, in what direction should you take a little step to go downhill as fast as possible because you're trying to go downhill which is- goes to the lowest possible elevation, goes to the lowest possible point of J of Theta, okay? So what gradient descent will do is, uh, stand at that point look around, look all- all around you and say, well, what direction should I take a little step in to go downhill as quickly as possible because you want to minimize, uh, J of Theta. You wanna minim- reduce the value of J of Theta, you know, go to the lowest possible elevation on this hill. Um, and so gradient descent will take that little baby step, right? And then- and then repeat. Uh, now you're a little bit lower on the surface. So you again take a look all around you and say oh it looks like that hill, that- that little direction is the steepest direction or the steepest gradient downhill. So you take another little step, take another step- another step and so on, until, um, uh, until you- until you get to a hopefully a local optimum. Now one property of gradient descent is that, um, uh, depend on where you initialize parameters, you can get to local diff- different points, right? So previously, you had started it at that lower point x. 
But imagine if, uh, you had started it, you know, just a few steps over to the right, right? At that- at that new x instead of the one on the left. If you had run gradient descent from that new point then, uh, that would have been the first step, that would be the second step and so on. And you would have gotten to a different local optimum- to a different local minima, okay? Um, it turns out that when you run gradient descents on linear regression, it turns out that, uh, uh, uh, there will not be local optimum but we'll talk about that in a little bit, okay? So let's formalize the [NOISE] gradient descent algorithm. In gradient descent, um, each step of gradient descent, uh, is implemented as follows. So- so remember, in- in this example, the training set is fixed, right? You- You know you've collected the data set of housing prices from Portland, Oregon so you just have that stored in your computer memory. And so the data set is fixed. The cost function J is a fixed function there's function of parameters Theta, and the only thing you're gonna do is tweak or modify the parameters Theta. One step of gradient descent, um, can be implemented as follows, which is Theta j gets updated as Theta j minus, I'll just write this out, okay? Um, so bit more notation, I'm gonna use colon equals, I'm gonna use this notation to denote assignment. So what this means is, we're gonna take the value on the right and assign it to Theta on the left, right? And so, um, so in other words, in the notation we'll use this quarter, you know, a colon equals a plus 1. This means increment the value of a by 1. Um, whereas, you know, a equals b, if I write a equals b I'm asserting a statement of fact, right? I'm asserting that the value of a is equal to the value of b. Um, and hopefully, I won't ever write a equals a plus 1, right because- cos that is rarely true, okay? Um, all right. So, uh, in each step of gradient descent, you're going to- for each value of j, so you're gonna do this for j equals 0, 1 ,2 or 0, 1, up to n, where n is the number of features. For each value of j takes either j and update it according to Theta j minus Alpha. Um, which is called the learning rate. Um, Alpha the learning rate times this formula. And this formula is the partial derivative of the cost function J of Theta with respect to the parameter, um, Theta j, okay? In- and this partial derivative notation. Uh, for those of you that, um, haven't seen calculus for a while or haven't seen, you know, some of their prerequisites for a while. We'll- we'll- we'll go over some more of this in a little bit greater detail in discussion section, but I'll- I'll- I'll do this, um, quickly now. But, um, I don't know. If, if you've taken a calculus class a while back, you may remember that the derivative of a function is, you know, defines the direction of steepest descent. So it defines the direction that allows you to go downhill as steeply as possible, uh, on the, on the hill like that. There's a question. How do you determine the learning rate? How do you determine the learning rate? Ah, let me get back to that. It's a good question. Uh, for now, um, uh, you know, there's a theory and there's a practice. Uh, in practice, you set to 0.01. [LAUGHTER]. [LAUGHTER] Let me say a bit more about that later. [NOISE]. Uh, if- if you actually- if- if you scale all the features between 0 and 1, you know, minus 1 and plus 1 or something like that, then, then, yeah. 
Then, then try- you can try a few values and see what lets you minimize the function best, but if the feature is scaled to plus minus 1, I usually start with 0.01 and then, and then try increasing and decreasing it. Say, say a little bit more about that. [NOISE] Um, uh, all right, cool. So, um, let's see. Let me just quickly [NOISE] show how the derivative calculation is done. Um, and you know, I'm, I'm gonna do a few more equations in this lecture, uh, and then, and then over time I think. Um, all, all of these, all of these definitions and derivations are written out in full detail in the lecture notes, uh, posted on the course website. So sometimes, I'll do more math in class when, um, we want you to see the steps of the derivation and sometimes to save time in class, we'll gloss over the mathematical details and leave you to read over, the full details in the lecture notes on the CS229 you know, course website. Um, so partial derivative with respect to J of Theta, that's the partial derivative with respect to that of one-half H of Theta of X minus Y squared. Uh, and so I'm going to do a slightly simpler version assuming we have just one training example, right? The, the actual deriva- definition of J of Theta has a sum over I from 1 to M over all the training examples. So I'm just forgetting that sum for now. So if you have only one training example. Um, and so from calculus, if you take the derivative of a square, you know, the 2 comes down and so that cancels out with the half. So 2 times 1.5 times, um, uh, the thing inside, right? Uh, and then by the, uh, chain rule of, uh, derivatives. Uh, that's times the partial derivative of Theta J of X Theta X minus Y, right? So if you take the derivative of a square, the two comes down and then you take the derivative of what's inside and multiply that, right? [NOISE] Um, and so the two and one-half cancel out. So this leaves you with H minus Y times partial derivative respect to Theta J of Theta 0X0 plus Theta 1X1 plus th- th- that plus Theta NXN minus Y, right? Where I just took the definition of H of X and expanded it out to that, um, sum, right? Because, uh, H of X is just equal to that. So if you look at the partial derivative of each of these terms with respect to Theta J, the partial derivative of every one of these terms with respect to Theta J is going to z- be 0 except for, uh, the term corresponding to J, right? Because, uh, if J was equal to 1, say, right? Then this term doesn't depend on Theta 1. Uh, this term, this term, all of them do not even depend on Theta 1. The only term that depends on Theta 1 is this term over there. Um, and the partial derivative of this term with respect to Theta 1 will be just X1, right? And so, um, when you take the partial derivative of this big sum with respect to say the J, uh, in- in- in- instead of just J equals 1 and with respect to Theta J in general, then the only term that even depends on Theta J is the term Theta JXJ. And so the partial derivative of all the other terms end up being 0 and partial derivative of this term with respect to Theta J is equal to XJ, okay? And so this ends up being H Theta X minus Y times XJ, okay? Um, and again, listen, if you haven't, if you haven't played with calculus for awhile, if you- you know, don't quite remember what a partial derivative is, or don't quite get what we just said. Don't worry too much about it. 
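For reference, the single-example derivative just described, written out compactly:

```latex
% Single training example; the chain rule brings the 2 down to cancel the 1/2:
\frac{\partial}{\partial \theta_j}\,\frac{1}{2}\bigl(h_\theta(x)-y\bigr)^2
  = \bigl(h_\theta(x)-y\bigr)\,
    \frac{\partial}{\partial \theta_j}\Bigl(\textstyle\sum_{k=0}^{n}\theta_k x_k - y\Bigr)
  = \bigl(h_\theta(x)-y\bigr)\,x_j
```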
We'll go over a bit more in the section and we- and, and then also read through the lecture notes which kind of goes over this in, in, in, um, in more detail and more slowly than, than, uh, we might do in class, okay? [NOISE] So, um, so plugging this- let's see. So we've just calculated that this partial derivative, right, is equal to this, and so plugging it back into that formula, one step of gradient descent is, um, is the following, which is that we will- that Theta J be updated according to Theta J minus the learning rate times H of X minus Y times XJ, okay? Now, I'm, I'm gonna just add a few more things in this equation. Um, so I did this with one training example, but, uh, this was- I kind of used the definition of the cost function J of Theta defined using just one single training example, but you actually have M training examples. And so, um, the, the correct formula for the derivative is actually if you take this thing and sum it over all M training examples, um, the derivative of- you know, the derivative of a sum is the sum of the derivatives, right? So, um, so you actually- If, if, if you redo this derivation, you know, summing with the correct definition of J of Theta which sums over all M training examples. If you just redo that little derivation, you end up with, uh, a sum over I equals 1 through M of that, right? Where remember XI is the Ith training example's input features, YI is the target label, is the, uh, price in the Ith training example, okay? Um, and so this is the actual correct formula for the partial derivative with respect to Theta J of the cost function J of Theta when it's defined using, um, uh, all of the, um, [NOISE] uh, on- when it's defined using all of the training examples, okay? And so the gradient descent algorithm is to- [NOISE] Repeat until convergence, carry out this update, and in each iteration of gradient descent, uh, you do this update for j equals, uh, 0, 1 up to n. Uh, where n is the number of features. So n was 2 in our example. Okay. Um, and if you do this then, uh, uh, you know, actually let me see. Then what will happen is, um, [NOISE] well, I'll show you the animation. As you fit- hopefully, you find a pretty good value of the parameters Theta. Okay. So, um, it turns out that when you plot the cost function j of Theta for a linear regression model, um, it turns out that, unlike the earlier diagram I had shown which has local optima, it turns out that if j of Theta is defined the way that, you know, we just defined it for linear regression, as the sum of squared terms, um, then j of Theta turns out to be a quadratic function, right? It's a sum of these squares of terms, and so, j of Theta will always look like, look like a big bowl like this. Okay. Um, another way to look at this, uh, uh, and so and so j of Theta does not have local optima, um, or the only local optimum is also the global optimum. The other way to look at the function like this is to look at the contours of this plot, right? So you plot the contours by looking at the big bowl and taking horizontal slices and plotting where the, where the edges of the horizontal slice are. So the contours of a big bowl, or I guess of this quadratic function, will be ellipses, um, like these ovals or these ellipses like this. And so if you run gradient descent on this algorithm, um, let's say I initialize, uh, my parameters at that little x, uh, shown over here, right.
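A minimal numpy sketch of the batch update just written out; it assumes the design matrix already has the x0 = 1 column and roughly scaled features:

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for linear regression, as written on the board:
    repeat { theta_j := theta_j - alpha * sum_i (h_theta(x_i) - y_i) * x_ij } for every j.
    X is the (m, n+1) design matrix with the x0 = 1 column; y holds the m target prices."""
    theta = np.zeros(X.shape[1])
    for _ in range(num_iters):
        gradient = X.T @ (X @ theta - y)   # all n+1 partial derivatives at once
        theta = theta - alpha * gradient
    return theta
```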
And usually you initialize Theta to be 0, but but, you know, but it doesn't matter too much. So let's reinitialize over there. Then, um, with one step of gradient descent, the algorithm will take that step downhill, uh, and then with a second step, it'll take that step downhill where, by the way, fun fact, uh, if you- if you think about the contours of the function, it turns out that the direction of steepest descent is always at 90 degrees, is always orthogonal, uh, to the contour direction, right. So, I don't know, yeah. I seem to remember that from my high-school something, I think it's true. All right. And so as you, as you take steps downhill, uh, uh, because there's only one global minimum, um, this algorithm will eventually converge to the global minimum. Okay. And so the question just now about the choice of the learning rate Alpha. Um, if you set Alpha to be very very large, to be too large, then you can overshoot, right. The steps you take can be too large and you can run past the minimum. Uh, if you set it to be too small, then you need a lot of iterations and the algorithm will be slow. And so what happens in practice is, uh, usually you try a few values and and and see what value of the learning rate allows you to most efficiently, you know, drive down the value of j of Theta. Um, and if you see j of Theta increasing rather than decreasing, you see the cost function increasing rather than decreasing, then, there's a very strong sign that the learning rate is, uh, too large, and so, um. [NOISE] Actually what what I often do is actually try out multiple values of, um, the learning rate Alpha, and, uh, uh, and and usually try them on an exponential scale. So you try 0.01, 0.02, 0.04, 0.08, kinda like a doubling scale or some- uh, uh, or doubling or tripling scale and try a few values and see what value allows you to drive down j of Theta fastest. Okay. Um, let me just. So I just want to visualize this in one other way, um, which is with the data. So, uh, this is this is the actual dataset. Uh, they're, um, there are actually 49 points in this dataset. So m the number of training examples is 49, and so if you initialize the parameters to 0, that means, initializing your hypothesis or initializing your straight line fit to the data to be that horizontal line, right? So, if you initialize Theta 0 equals 0, Theta 1 equals 0, then your hypothesis is, you know, for any input size of house or price, the estimated price is 0, right? And so your hypothesis starts off with a horizontal line, that is whatever the input x the output y is 0. And what you're doing, um, as you run gradient descent is you're changing the parameters Theta, right? So the parameters went from this value to this value to this value to this value and so on. And so, the other way of visualizing gradient descent is, if gradient descent starts off with this hypothesis, with each iteration of gradient descent, you are trying to find different values of the parameters Theta, uh, that allow this straight line to fit the data better. So after one iteration of gradient descent, this is the new hypothesis, you now have different values of Theta 0 and Theta 1 that fit the data a little bit better. Um, after two iterations, you end up with that hypothesis, uh, and with each iteration of gradient descent it's trying to minimize j of Theta. It's trying to minimize one half of the sum of squared errors of the hypothesis or predictions on the different examples, right?
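A sketch of that learning-rate recipe, reusing the batch_gradient_descent and J sketches above; X and y are assumed to exist with roughly scaled features:

```python
# Try a few learning rates on a doubling scale and watch J(theta).
for alpha in [0.01, 0.02, 0.04, 0.08]:
    theta = batch_gradient_descent(X, y, alpha=alpha, num_iters=200)
    print(f"alpha={alpha:<5} J(theta)={J(theta, X, y):.3f}")
# If J(theta) goes up rather than down for some alpha, that learning rate is too large.
```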
With three iterations of gradient descent, um, uh, four iterations and so on. And then and then a bunch more iterations, uh, and eventually it converges to that hypothesis, which is a pretty, pretty decent straight line fit to the data. Okay. Is there a question? Yeah, go for it. [inaudible] Uh, sure. Maybe, uh, just to repeat the question. Why is the- why are you subtracting Alpha times the gradient rather than adding Alpha times the gradient? Um, let me suggest, actually let me raise the screen. Um, [NOISE] so let me suggest you work through one example. Um, uh, it turns out that if you add multiple times in a gradient, you'll be going uphill rather than going downhill, and maybe one way to see that would be if, um, you know, take a quadratic function, um, excuse me. Right. If you are here, the gradient is a positive direction and you want to reduce, so this would be Theta and this will be j I guess. So you want Theta to decrease, so the gradient is positive. You wanna decrease Theta, so you want to subtract a multiple times the gradient. Um, I think maybe the best way to see that would be the work through an example yourself. Uh, uh, set j of Theta equals Theta squared and set Theta equals 1. So here at the quadratic function of the derivative is equal to 1. So you want to subtract the value from Theta rather than add. Okay? Cool. Um. All right. Great. So you've now seen your first learning algorithm, um, and, you know, gradient descent and linear regression is definitely still one of the most widely used learning algorithms in the world today, and if you implement this- if you, if you, if you implement this today, right, you could use this for, for some actually pretty, pretty decent purposes. Um, now, I wanna I give this algorithm one other name. Uh, so our gradient descent algorithm here, um, calculates this derivative by summing over your entire training set m. And so sometimes this version of gradient descent, has another name, which is batch gradient descent. Oops. All right and the term batch, um, you know- and again- I think in machine learning, uh, our whole committee, we just make up names and stuff and sometimes the names aren't great. But the- the term batch gradient descent refers to that, you look at the entire training set, all 49 examples in the example I just had on, uh, PowerPoint. You know, you- you think of all for 49 examples as one batch of data, I'm gonna process all the data as a batch, so hence the name batch gradient descent. The disadvantage of batch gradient descent is that if you have a giant dataset, if you have, um, and- and in the era of big data we're really, moving to larger and larger datasets, so I've used, you know, we train machine learning models of like hundreds of millions of examples. And- and if you are trying to- if you have, uh, if you download the US census database, if your data, the United States census, that's a very large dataset. And you wanna predict housing prices, from all across the United States, um, that- that- that may have a dataset with many- many millions of examples. And the disadvantage of batch gradient descent is that, in order to make one update, to your parameters, in order to even take a single step of gradient descent, you need to calculate, this sum. And if m is say a million or 10 million or 100 million, you need to scan through your entire database, scan through your entire dataset and calculate this for, you know, 100 million examples and sum it up. 
And so every single step of gradient descent becomes very slow because you're scanning over, you're reading over, right, like 100 million training examples, uh, uh, and uh, uh, before you can even, you know, make one tiny little step of gradient descent. Okay, um, yeah, and by the way, I think- I feel like in today's era of big data people start to lose intuitions about what's a big data-set. I think even by today's standards, like a hundred million examples is still very big, right, I- I rarely- only rarely use a hundred million examples. Um, I don't know, maybe in a few years we'll look back on a hundred million examples and say that was really small, but at least today. Uh, yeah. So the main disadvantage of batch gradient descent is, every single step of gradient descent requires that you read through, you know, your entire data-set, maybe terabytes of data-sets maybe- maybe- maybe, uh, tens or hundreds of terabytes of data, uh, before you can even update the parameters just once. And if gradient descent needs, you know, hundreds of iterations to converge, then you'll be scanning through your entire data-set hundreds of times. Right, or-or and then sometimes we train, our algorithms with thousands or tens of thousands of iterations. And so- so this- this gets expensive. So there's an alternative to batch gradient descent. Um, and let me just write out the algorithm here that we can talk about it, which is going to repeatedly do this. [NOISE] Oops, okay. Um, so this algorithm, which is called stochastic gradient descent. [NOISE] Um, instead of scanning through all million examples before you update the parameters theta even a little bit, in stochastic gradient descent, instead, in the inner loop of the algorithm, you loop through j equals 1 through m of taking a gradient descent step using, the derivative of just one single example of just that, uh, one example, ah, oh, excuse me it's through i, right. Yeah, so let i go from 1 to m, and update theta j for every j. So you update this for j equals 1 through n, update theta j, using this derivative that when now this derivative is taken just with respect to one training example- example I. Okay, um, I'll- I'll just- alright and I guess you update this for every j. Okay, and so, let me just draw a picture of what this algorithm is doing. If um, this is the contour, like the one you saw just now. So the axes are, uh, theta 0 and theta 1, and the height of the surface right, denote the contours as j of theta. With stochastic gradient descent, what you do is you initialize the parameters somewhere. And then you will look at your first training example. Hey, lets just look at one house, and see if you can predict that house as better, and you modify the parameters to increase the accuracy where you predict the price of that one house. And because you're fitting the data just for one house, um, you know, maybe you end up improving the parameters a little bit, but not quite going in the most direct direction downhill. And you go look at the second house and say, hey, let's try to fit that house better. And then you update the parameters. And you look at third house, fourth house. Right, and so as you run stochastic gradient descent, it takes a slightly noisy, slightly random path. Uh, but on average, it's headed toward the global minimum, okay. So as you run stochastic gradient descent- stochastic gradient descent will actually, never quite converge. 
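A minimal sketch of the stochastic update; shuffling the examples each pass is a small practical addition, not something stated in the lecture:

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha=0.01, num_epochs=10, seed=0):
    """Stochastic gradient descent: update theta after looking at ONE example at a time,
    theta_j := theta_j - alpha * (h_theta(x_i) - y_i) * x_ij, for every j."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(num_epochs):
        for i in rng.permutation(X.shape[0]):
            error = X[i] @ theta - y[i]          # prediction error on this single house
            theta = theta - alpha * error * X[i]
    return theta
```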
In- with- with batch gradient descent, it kind of went to the global minimum and stopped right, uh, with stochastic gradient descent even as you won't run it, the parameters will oscillate and won't ever quite converge because you're always running around looking at different houses and trying to do better than just that one house- and that one house- and that one house. Uh, but when you have a very large data-set, stochastic gradient descent, allows your implementation- allows you algorithm to make much faster progress. Uh, and so, um, uh, uh- and so when you have very large data-sets, stochastic gradient descent is used much more in practice than batch gradient descent. [BACKGROUND] Uh, yeah, is it possible to start with stochastic gradient descent and then switch over to batch gradient descent? Yes, it is. So, uh, boy, something that wasn't talked about in this class, it's talked about in CS230 is Mini-batch gradient descent, where, um, you don't- where you use say a hundred examples at a time rather than one example at a time. And so- uh, so that's another algorithm that I should use more often in practice. I think people rarely- actually, so- so in practice, you know, when your dataset is large, we rarely, ever switch to batch gradient descent, because batch gradient descent is just so slow, right. So, I-I know I'm thinking through concrete examples of problems I've worked on. And I think that what- maybe actually maybe- I think that uh, for a lot of- for- for modern machine learning, where you have- if you have very- very large data sets, right so you know, whether- if you're building a speech recognition system, you might have like a terabyte of data, right, and so, um, it's so expensive to scan through a terabyte of data just reading it from disk, right it's so expensive that you would probably never even run one iteration of batch gradient descent. Uh, and it turns out the- the- there's one- one huge saving grace of stochastic gradient descent is, um, let's say you run stochastic gradient descent, right, and, you know, you end up with this parameter and that's the parameter you use, for your machine learning system, rather than the global optimum. It turns out that parameter is actually not that bad, right, you- you probably make perfectly fine predictions even if you don't quite get to the like the global- global minimum. So, uh, what you said I think it's a fine thing to do, no harm trying it. Although in practice uh, uh, in practice we don't bother, I think in practice we usually use stochastic gradient descent. The thing that actually is more common, is to slowly decrease the learning rate. So just keep using stochastic gradient descent, but reduce the learning rate over time. So it takes smaller and smaller steps. So if you do that, then what happens is the size of the oscillations would decrease. Uh, and so you end up oscillating or bouncing around the smaller region. So wherever you end up, may not be the global- global minimum, but at least it'll- be it'll be closer to the global minimum. Yeah, so decreasing the learning rate is used much more often. Cool. Question. Yeah. [BACKGROUND] Oh sure, when do you stop with stochastic gradient descent? Uh, uh, plot to j of theta, uh, over time. So j of theta is a cost function that you're trying to drag down. So monitor j of theta as, you know, is going down over time, and then if it looks like it stopped going down, then you can say, "Oh, it looks like it looks like it stopped going down," then you stop training. 
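A tiny sketch of the "keep using stochastic gradient descent but decrease the learning rate over time" idea; the particular decay schedule below is just one illustrative choice, and X and y are assumed to exist as before:

```python
import numpy as np

alpha0, decay = 0.01, 1e-3
theta, t = np.zeros(X.shape[1]), 0
for epoch in range(10):
    for i in range(X.shape[0]):
        alpha_t = alpha0 / (1.0 + decay * t)                    # smaller and smaller steps
        theta = theta - alpha_t * (X[i] @ theta - y[i]) * X[i]  # shrinking oscillations
        t += 1
```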
Although- and then- ah, uh, you know, one nice thing about linear regression is that it has no local optimum and so, um, uh, it- you run into these convergence debugging types of issues less often. Where you're training highly non-linear things like neural networks, we should talk about later in CS229 as well. Uh, these issues become more acute. Cool. Okay, great. So, um, uh, yeah. [BACKGROUND]. Oh, would your learning rate be 1 over n times linear regressions then? Not really, it's usually much bigger than that. Uh, uh, yeah, because if your learning rate was 1 over n times that of what you'd use with batch gradient descent then it would end up being as slow as batch gradient descent, so it's usually much bigger. Okay. So, um, so that's stochastic gradient descent and- and- so I'll tell you what I do. If- if you have a relatively small dataset, you know, if you have- if you have, I don't know, like hundreds of examples maybe thousands of examples where, uh, it's computationally efficient to do batch gradient descent. If batch gradient descent doesn't cost too much, I would almost always just use batch gradient descent because it's one less thing to fiddle with, right? It's just one less thing to have to worry about, uh, the parameters oscillating, but your dataset is too large that batch gradient descent becomes prohibit- prohibitively slow, then almost everyone will use, you know, stochastic gradient descent or whatever more like stochastic gradient descent, okay? All right, so, um, gradient descent, both batch gradient descent and stochastic gradient descent is an iterative algorithm meaning that you have to take multiple steps to get to, you know, get near hopefully the global optimum. It turns out there is another algorithm, uh, and- and, um, for many other algorithms we'll talk about in this course including generalized linear models and neural networks and a few other algorithms, uh, you will have to use gradient descent and so- and so we'll see gradient descent, you know, as we develop multiple different algorithms later this quarter. It turns out that for the special case of linear regression, uh, uh, and I mean linear regression but not the algorithm we'll talk about next Monday, not the algorithm we'll talk about next Wednesday, but if the algorithm you're using is linear regression and exactly linear regression. It turns out that there's a way to, uh, solve for the optimal value of the parameters theta to just jump in one step to the global optimum without needing to use an iterative algorithm, right, and this- this one I'm gonna present next is called the normal equation. It works only for linear regression, doesn't work for any of the other algorithms I talk about later this quarter. But [NOISE] um, uh, let me quickly show you the derivation of that. And, um, what I want to do is, uh, give you a flavor of how to derive the normal equation and where you end up with is you know, wha- what- what I hope to do is end up with a formula that lets you say theta equals some stuff where you just set theta equals to that and in one step with a few matrix multiplications you end up with the optimal value of theta that lands you right at the global optimum, right, just like that, just in one step. Okay. Um, and if you've taken, you know, advanced linear algebra courses before or something, you may have seen, um, this formula for linear regression. 
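The closed form this is heading toward is the standard least-squares normal equation; a minimal numpy sketch, stated here ahead of the derivation:

```python
import numpy as np

def normal_equation(X, y):
    """One-step solution theta = (X^T X)^(-1) X^T y; X is the (m, n+1) design matrix with x0 = 1."""
    return np.linalg.solve(X.T @ X, X.T @ y)   # more stable than forming the inverse explicitly
```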
Wha- what a lot of linear algebra classes do is, what some linear algebra classes do is cover the board with, you know, pages and pages of matrix derivatives. Um, what I wanna do is describe to you a matrix derivative notation that allows you to derive the normal equation in roughly four lines of linear algebra, uh, rather than some pages and pages of linear algebra and in the work I've done in machine learning you know, sometimes notation really matters, right. If you have the right notation you can solve some problems much more easily and what I wanna do is, um, uh, define this uh, matrix linear algebra notation and then I don't wanna do all the steps of the derivation, I wanna give you- give you a sense of the flavor of what it looks like and then, um, I'll ask you to, uh, uh, get a lot of details yourself, um, in the- in the lecture notes where we work out everything in more detail than I want to do algebra in class. And, um, in problem set one you'll get to practice using this yourself to- to- to-, you know, derive some additional things. I've- I've found this notation really convenient, right, for deriving learning algorithms. Okay. So, um, I'm going to use the following notation. Um, so J, right. There's a function mapping from parameters to the real numbers. So I'm going to define this- this is the derivative of J of theta with respect to theta, where- remember theta is a three-dimensional vector says R3, or actually it's R n+1, right. If you have, uh, two features to the house if n=2, then theta was 3 dimensional, it's n+1 dimensional so it's a vector. And so I'm gonna define the derivative with respect to theta of J of theta as follows. Um, this is going to be itself a 3 by 1 vector [NOISE]. Okay, so I hope this notation is clear. So this is a three-dimensional vector with, uh, three components. Alright so that's what I guess I'm. So that's the first component is a vector, there's a second and there's a third. It's the partial derivative of J with respect to each of the three elements. Um, and more generally, in the notation we'll use, um, let me give you an example. Um, uh, let's say that a is a matrix. So let's say that a is a two-by-two matrix. Then, um, you can have a function, right, so let's say a is, you know, A1-1, A1-2, A2-1 and A2-2, right. So A is a two-by-two matrix. Then you might have some function um, of a matrix A right, then that's a real number. So maybe f maps from A 2-by-2 to, uh, excuse me, R 2-by-2, it's a real number. So, um, uh, and so for example, if f of A equals A11 plus A12 squared, then f of, you know, 5, 6, 7, 8 would be equal to I guess 5 plus 6 squared, right. So as we derive this, we'll be working a little bit with functions that map from matrices to real numbers and this is just one made up example of a function that inputs a matrix and maps the matrix, maps the values of matrix to a real number. And when you have a matrix function like this, I'm going to define the derivative with respect to A of f of A to be equal to itself a matrix where the derivative of f of A with respect to the matrix A. Uh, this itself will be a matrix with the same dimension of a and the elements of this are the derivative with respect to the individual elements. Actually, let me just write it like this. [NOISE] Okay. 
So if A was a 2-by-2 matrix then the derivative of F of A with respect to A is itself a 2-by-2 matrix and you compute this 2-by-2 matrix just by looking at F and taking, uh, derivatives with respect to the different elements and plugging them into the different, the different elements of this matrix. Okay. Um, and so in this particular example, I guess the derivative respect to A of F of A. This would be, um, [NOISE] right, it would be- it would be that. Ah and I got these four numbers by taking, um, the definition of F and taking the derivative with respect to A_1, 1 and plugging that here. Ah, taking the derivative with respect to A_1,2 and plugging that here and taking the derivative with respect to the remaining elements and plugging them here which- which was 0. Okay? So that's the definition of a matrix derivative. Yeah? [inaudible]. Oh, yes. We're just using the definition for a vector. Ah, N by 1 or N by 1 matrix. Yes. And in fact that definition and this definition for the derivative of J with respect to Theta these are consistent. So if you apply that definition to a column vector, treating a column vector as an N by 1 matrix or N, I guess it would be N plus 1 by 1 matrix then that- that specializes to what we described here. [NOISE] All right. So, um, let's see. Okay. So, um, I want to leave the details of the lecture notes because there's more lines of algebra which I won't do but it'll give you an overview [NOISE] of what the derivation of the normal equation looks like. Um, so onto this definition of a derivative of a- of a matrix, um, the broad outline of what we're going to do is we're going to take J of Theta. Right. That's the cost function. Um, take the derivative with respect to Theta. Right. Ah, since Theta is a vector so you want to take the derivative with respect to Theta and you know well, how do you minimize a function? You take derivatives with [NOISE] respect to Theta and set it equal to 0. And then you solve for the value of Theta so that the derivative is 0. Right. The- the minimum, you know, the maximum and minimum of a function is where the derivative is equal to 0. So- so how you derive the normal equation is take this vector. Ah, so J of Theta maps from a vector to a real number. So we'll, take the derivatives respect to Theta set it to 0,0 and solve for Theta and then we end up with a formula for Theta that lets you just, um, ah, you know, immediately go to the global minimum of the- of the cost function J of Theta. And, and a lot of the build up, a lot of this notation is, you know, is there- what does this mean and is there an easy way to compute the derivative of J of Theta? Okay? So, um, ah, so I hope you understand the lecture notes when hopefully you take a look at them, ah, just a couple other derivations. Um, if A [NOISE] is a square matrix. So let's say A is a [NOISE] N, N by N matrix. So number of rows equals number of columns. Um, I'm going to denote the trace of A [NOISE] to be equal to [NOISE] the sum of the diagonal entries. [NOISE] So sum of i of A_ii. And this is pronounced the trace of A, um, and, ah, and- and you can- you can also write this as trace operator like the trace function applied to A but by convention we often write trace of A without the parentheses. And so this is called the trace of A. [NOISE] So trace just means sum of diagonal entries and, um, some facts about the trace of a matrix. 
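Before getting to those trace facts, here is a small numerical check of the matrix-derivative example above: f of A equals A11 plus A12 squared, evaluated at the made-up matrix [[5, 6], [7, 8]]. The finite-difference helper is my own illustration, not something from the lecture.

# Sketch: checking the matrix-derivative example numerically.
# f(A) = A[0,0] + A[0,1]**2, so df/dA should be [[1, 2*A[0,1]], [0, 0]].
import numpy as np

def f(A):
    return A[0, 0] + A[0, 1] ** 2

def numerical_grad(f, A, eps=1e-6):
    G = np.zeros_like(A, dtype=float)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            E = np.zeros_like(A, dtype=float)
            E[i, j] = eps
            G[i, j] = (f(A + E) - f(A - E)) / (2 * eps)
    return G

A = np.array([[5.0, 6.0], [7.0, 8.0]])
print(f(A))                  # 5 + 6^2 = 41
print(numerical_grad(f, A))  # approximately [[1, 12], [0, 0]]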
You know, the trace of A is equal to the trace of A transpose, because if you transpose a matrix you're just flipping it along the 45-degree axis, and so the diagonal entries stay the same when you transpose the matrix. So the trace of A is equal to the trace of A transpose. There are some other useful properties of the trace operator. Here's one that I won't prove, but that you could go home and prove yourself with a little bit of work: if you define f of A equals trace of A times B, where B is some fixed matrix - so what f of A does is multiply A and B and then take the sum of the diagonal entries - then it turns out that the derivative with respect to A of f of A is equal to B transpose. For any matrix B, if f of A is defined this way, the derivative is equal to B transpose. The trace operator has other interesting properties. The trace of AB is equal to the trace of BA; you can prove this from first principles - it's a little bit of work, but if you expand out the definitions of A and B you can prove it. And the trace of A times B times C is equal to the trace of C times A times B. This is a cyclic permutation property: if you multiply several matrices together, you can always take one from the end and move it to the front, and the trace will remain the same. And another one that is a little bit harder to prove is that the derivative with respect to A of the trace of A A transpose C is C A plus C transpose A. Just as in ordinary calculus we know the derivative of x squared is 2x - we've all used that so much that we don't re-derive it every time - this is a little bit like that: loosely, the derivative of the trace of "A squared" times C is like two times CA. So think of this as analogous to d by dA of A squared C equals 2AC; it's the matrix version of that. All right. So finally, what I'd like to do is take J of Theta and express it in this matrix-vector notation, so we can take derivatives with respect to Theta, set the derivatives equal to 0, and just solve for the value of Theta. So let me write out the definition of J of Theta. J of Theta was one-half the sum from i equals 1 through m of h of x i minus y i, squared. And it turns out that if you define a matrix capital X as follows - I'm going to take the matrix capital X and stack up the training examples we have in rows; we have m training examples, and the x's were column vectors, so I'm taking transposes to stack the training examples up as rows. Let me call this the design matrix. And it turns out that if you define X this way, then X times Theta - the way a matrix-vector multiplication works is that your Theta is a column vector, Theta_0, Theta_1, Theta_2.
So the way that, um, matrix-vector multiplication works is you multiply this column vector with each of these in in turn. And so this ends up being X1 transpose Theta, X2 transpose Theta, down to X_m transpose Theta, which is of course just the vector of all of the predictions of the algorithm. And so if, um, now let me also define a vector y to be taking all of the, uh, labels from your training example, and stacking them up into a big column vector, right. Let me define y that way. Um, it turns out that, um, J of Theta can then be written as one-half of X Theta minus y transpose X Theta minus y. Okay. Um, and let me see. Yeah. Let me just, uh, uh, outline the proof, but I won't do this in great detail. So X Theta minus y is going to be, right, so this is X Theta, this is y. So, you know, X Theta minus y is going to be this vector of h of x1 minus y1 down to h of xm minus ym, right. So it's just all the errors your learning algorithm is making on the m examples. It's the difference between predictions and the actual labels. And if you you remember, so Z transpose Z is equal to sum over i Z squared, right. A vector transpose itself is a sum of squares of elements. And so this vector transpose itself is the sum of squares of the elements, right. So so which is why, uh, so so the cost function J of Theta is computed by taking the sum of squares of all of these elements, of all of these errors, and and the way you do that is to take this vector, your X Theta minus y transpose itself, is the sum of squares of these, which is exactly the error. So that's why you end up with a, this as the sum of squares of the, those error terms. Okay. And, um, if some of the steps don't quite make sense, really don't worry about it. All this is written out more slowly and carefully in the lecture notes. But I wanted you to have a sense of the, uh, broad arc of the of the big picture of their derivation before you go through them yourself in greater detail in the lecture notes elsewhere. So finally, what we want to do is take the derivative with respect to Theta of J of Theta, and set that to 0. And so this is going to be equal to the derivative of one-half X Theta minus y transpose X Theta minus y. Um, and so I'm gonna, I'm gonna do the steps really quickly, right. So the steps require some of the little properties of traces and matrix derivatives I wrote down briefly just now. But so I'm gonna do these very quickly without getting into the details, but, uh, so this is equal to one-half derivative of Theta of, um. So take transposes of these things. So this becomes Theta transpose X transpose minus y transpose. Right. Um, and then, uh, kind of like expanding out a quadratic function, right. This is, you know, A minus B times C minus D. So you can just AC minus AD this and so on. So I'll just write this out. All right. And so, uh, what I just did here this is similar to how, you know, ax minus b times ax minus b, is equal to a squared x squared minus axb minus bax plus b squared. Is it's kind of, it's just expanding out a quadratic function. Um, and then the final step is, yeah, go ahead. [BACKGROUND] Oh, is that right? Oh yes, thank you. Thank you. Um, and then the final step is, you know, for each of these four terms; first, second, third, and fourth terms, to take the derivative with respect to Theta. 
And if you use some of the formulas I was alluding to over there, you find that the derivative - which I don't want to show the derivation of - turns out to be one-half of X transpose X Theta, plus X transpose X Theta, minus X transpose y, minus X transpose y, and this simplifies to X transpose X Theta minus X transpose y. And so, as described earlier, I'm going to set this derivative to 0. How you go from this step to that step uses the matrix derivatives explained in more detail in the lecture notes. And so the final step is, having set this to 0, this implies that X transpose X Theta equals X transpose y. So this is called the normal equations. And the optimal value for Theta is Theta equals X transpose X, inverse, times X transpose y. And if you implement this, then you can, in basically one step, get the value of Theta that corresponds to the global minimum. Okay. And again, a common question I get is: well, what if X transpose X is non-invertible? What that usually means is that you have redundant features - your features are linearly dependent. If you use something called the pseudo-inverse, you still get the right answer in that case, although I think the even more right answer is that if you have linearly dependent features, it probably means you have the same feature repeated twice, and I would usually go and figure out which features are actually repeated, leading to this problem. Okay. All right. Any last questions before - so that's the normal equations. I hope you read through the detailed derivations in the lecture notes. Any last questions before we break? Okay. [BACKGROUND] Oh, yeah - how do you choose a learning rate? It's quite empirical, I think. Most people try different values and then just pick one. All right. I think let's break. If people have more questions, when the TAs come up, we'll keep taking questions. Let's break for the day. Thanks everyone. |
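Pulling the pieces of that derivation together, here is a minimal runnable sketch: a made-up design matrix of housing examples, the vectorized cost one-half (X Theta minus y) transpose (X Theta minus y), and the normal-equation solution. The numbers and the pseudo-inverse fallback are illustrative assumptions, not figures from the lecture.

# Sketch: design matrix, vectorized cost, and the normal equations X^T X theta = X^T y.
import numpy as np

X = np.array([[1.0, 2104, 3],     # each row is [1, size, #bedrooms]; the 1 is the intercept term
              [1.0, 1600, 3],
              [1.0, 2400, 4],
              [1.0, 1416, 2],
              [1.0, 3000, 4]])
y = np.array([400.0, 330.0, 369.0, 232.0, 540.0])

def J(theta):
    r = X @ theta - y                  # vector of h(x_i) - y_i for all i
    return 0.5 * r @ r                 # 1/2 (X theta - y)^T (X theta - y)

# Prefer solve() over explicitly forming the inverse of X^T X.
theta = np.linalg.solve(X.T @ X, X.T @ y)
# If X^T X were singular (e.g. a feature repeated twice), the pseudo-inverse still works:
theta_pinv = np.linalg.pinv(X) @ y

print(theta, J(theta))
print(np.allclose(theta, theta_pinv))  # True here, since X^T X is invertible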
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_8_Data_Splits_Models_CrossValidation_Stanford_CS229_Machine_Learning_Autumn_2018.txt | Hey guys. Um, let's get started. So over the last several weeks, you've learned a lot about many different learning algorithms from linear regression, to logistic regression, to generalizing models, generative algorithms like GDA and Naive Bayes to most recently support-vector machines. Um, what I'd like to do today is to start talking about advice for applying learning algorithms. To teach a little bit about the theory behind, um, how to make good decisions of what to do, how to actually apply these algorithms. And so today, um, I wanna discuss bias and variance. Um, and it turns out, you know, I've, I've built quite a lot of machine learning systems, um, and it turns out that bias and variance is one of those concepts. It's, sort of, easy to understand, but hard to master. Uh, uh, what does it- lots of those, was it all these board games or sometimes, uh, uh, smartphone games, say easy to learn, hard to master or something like that? So bias and variance is actually one of those things, where I've had PhD students that worked with me for several years and then graduated, and worked in the industry for a couple years after that. And, and they actually tell me that, you know, when they took, um, machine learning at Stanford, they learned bias and variance, but as they progressed for many years their understanding of bias and variance continues to deepen. So I'm gonna try to accelerate your learning, um, uh, uh, of, of bias and variance because I find that people that understand this concept, um, are much more efficient in terms of how you develop learning algorithms and make your algorithms work. So we'll talk about this today, and it'll be a recurring theme that'll come up again a few times in the next several weeks as well. Um, then we'll discuss regularization, um, uh, and talk about, um, how to reduce variance in learning algorithms, talk about train, dev, test splits, uh, and then also talk about a few model selection and cross-validation algorithms. Um, oh, let's see, reminders for today. Uh, Problem Set 1 is due tonight, uh, uh, 11:59 P.M. Uh, and, uh, if you are not yet ready to submit it today, uh, late submissions are accepted until Saturday evening. Saturday 11:59 P.M, with the details of late submissions, uh, written according to the late day policy written on the course website. So, so I definitely encourage you to submit your homework on time today. If for some reason you're not able to the late submission, which we don't encourage anyone to take advantage of, but it is written, uh, on the course website. And Problem Set 2 will be released, uh shortly. Actually I think, uh, it was already posted online, um, uh, and is due two weeks from now. Um, yeah. Right. And so, uh, okay. So, um, and, and what I'm going to do today is talk about the conceptual aspects of this. Uh, and if you want to see even more math between these so the conceptual concepts, uh, at this Friday's discussion section, we'll be covering, um, some of the the, uh, mathematical aspects of learning theories such as error decomposition, uniform convergence, and VC dimension. You know, one, one interesting thing I've learned is, um, really watching the evolution of machine learning over many years is that, that machine learning as a discipline has actually become less mathematical over the years, right? 
So I remember when machine learning people used to worry about computing the normal equations - solving x transpose x theta equals x transpose y - and about how numerically stable your solver was for inverting a matrix or solving that linear system. But because numerical linear algebra has advanced tremendously, now we just call a linear algebra routine to invert a matrix or solve a system of linear equations and don't worry about whether it's numerically stable or not. Once upon a time, a lot of my friends in machine learning were reading textbooks on numerical optimization to figure out whether their formula for inverting a matrix or solving a linear system was numerically stable. And so one of the trends I've seen is that, until three or four years ago, there was a certain mathematical theory that was crucial to understanding bias and variance, and I used to teach it in CS229. But we're constantly trying to improve this class, and I decided that that mathematical theory is actually less crucial today if your main goal is to make these algorithms work. So we still teach it, but we do it in the Friday discussion section, and that leaves more time in the main lecture to talk about the conceptual things that I think will help you build learning algorithms, as well as the newer topics - we'll talk about decision trees, random forests, and neural networks next week. So here we go. Okay. So let's talk about bias and variance. Let's say you have this dataset; I'm going to draw the same dataset three times. Say you have a housing price prediction problem, where this axis is the size of the house and this is the price of the house. It looks like if you fit a straight line to this data, maybe it's not too bad, but it also looks like the dataset goes up and then curves downward a little bit. So maybe a slightly better model - if the straight line is a linear function Theta 0 plus Theta 1 x - is to fit a quadratic model, which maybe fits the dataset a little bit better. Or you could fit a high-order polynomial: there are one, two, three, four, five, six examples here, so if you fit a fifth-order polynomial, say up through Theta 5 x to the 5th, you can actually fit a function that passes through all the points perfectly. But that doesn't seem like a great model for this data either. So, to name this phenomenon: the one in the middle is what we like - fitting a quadratic function is maybe pretty good; let's call it just right. Whereas the example on the left underfits the data, in that it is not capturing the trend that is maybe semi-evident in the data, and we say this algorithm has high bias. Now, the term bias actually has multiple meanings in the English language. We as a society want to avoid racial bias, gender bias, and discrimination against people's orientation, and things like that. The term bias in machine learning has a completely separate meaning: it just means that this learning algorithm had very strong preconceptions that the data could be fit by linear functions.
This algorithm had a very strong bias, a very strong preconception, that the relationship between the price and the size of the house is linear, and this bias turns out not to be true. So this is a different sense of bias than the other, undesirable kinds of bias we want to avoid in society - which, interestingly, come up in machine learning as well in other contexts; we want our learning algorithms to avoid those biases too - but that's a different use of the term. In contrast, for the example on the right, we say that it is overfitting the data, and that this algorithm has high variance. The term high variance comes from this intuition: you happened to get these six examples, but if a friend of yours were to collect a slightly different set of six houses - rerun the data collection and get a slightly different dataset just due to random noise - then this algorithm would fit some totally different, wildly varying function, and so your predictions would have very high variance if you think of averaging over different random draws of the data. So we say this algorithm has very high variance: there's a lot of variability in the predictions it will make. Now, when you train a learning algorithm, it almost never works the first time. So when I'm developing learning algorithms, my standard workflow is often to train something quick and dirty, and then try to understand whether the algorithm has a problem of high bias or high variance - whether it's underfitting or overfitting the data - and use that insight to decide how to improve the learning algorithm. And I'll say a lot more about how to improve it; we have a menu of tools we'll talk about in the next couple of weeks for how to reduce bias or reduce variance. I should also mention that the problems of bias and variance hold true for classification problems as well. So say this is a binary classification problem. If you fit a logistic regression model to this - a straight-line fit to the data - maybe that's not great. If instead you fit a logistic regression model with a few nonlinear features - so instead of using x_1 and x_2 as the features, you use additional features like x_1 squared, x_2 squared, x_1 times x_2, and so on, and this is Phi of x, a small set of features you choose by hand, or you could use an SVM with a kernel for this problem - then if you have too many features, you might actually end up with a learning algorithm that fits a decision boundary that looks like that. That learning algorithm gets perfect performance on the training set, but it overfits. (Excuse me, I meant to keep the colors consistent - I meant to use red - but you get what I mean.)
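To make the regression version of this picture concrete, here is a small sketch on made-up "housing" data: six points generated from a gentle curve plus noise, fit with degree-1, degree-2, and degree-5 polynomials. The data and numbers are invented for illustration only.

# Sketch: underfit / just right / overfit on six synthetic points.
import numpy as np

rng = np.random.default_rng(3)
size = np.sort(rng.uniform(1.0, 3.0, size=6))             # house size (say, in 1000 sq ft)
price = 100 + 120 * size - 20 * size**2 + rng.normal(scale=5, size=6)

for degree in (1, 2, 5):
    coeffs = np.polyfit(size, price, deg=degree)
    train_err = np.mean((np.polyval(coeffs, size) - price) ** 2)
    print(f"degree {degree}: training MSE = {train_err:.3f}")
# Degree 5 drives training error to ~0 (it passes through all six points), but that says
# nothing about generalization; degree 1 underfits; degree 2 is about right here.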
And only if you choose something in between do you get a fit that seems much better - the green line seems to be a pretty good way of separating the positive and negative examples; it's sort of just right. So, similar to the colors here (I messed them up slightly before): the blue line underfits, because it's not capturing trends that are pretty apparent in the data; the orange line overfits - it's much too complicated a hypothesis; and the green line is just right, okay? So it turns out that in the era of GPU computing and the ability to train models with a lot of features, by building a big enough model - take a support vector machine: if you add enough features to it, if you have a high enough dimensional feature space, or if you take a linear regression or logistic regression model and just add enough features to it - you can often overfit the data. And it turns out that one of the most effective ways to prevent overfitting is regularization. So let me describe what that is. (Excuse me, just finding my notes - I reworked today's lecture, so some of this is material I haven't presented before.) Okay, cool. Regularization is one of those techniques that won't take long to explain and will sound deceptively simple, but it's one of the techniques I use most often - I feel like I use regularization in many, many models. So just because it doesn't sound complicated, and won't even take that long to explain today, don't underestimate how widely used it is. It's not used in every single machine learning model, but it's used very, very often. So here's the idea. Let's take linear regression; that's the optimization objective for linear regression. If you want to add regularization, you just add one extra term here: Lambda times the norm of Theta squared - sometimes you write Lambda over two, to make some of the derivations come out easier. What this does is take your cost function for linear regression, which you try to minimize - minimize the squared-error fit to the data - and add an incentive for the algorithm to make the parameters Theta smaller, okay? So this is called a regularization term. And it turns out that, taking the linear regression overfitting example: if you set Lambda equal to 0, then it's just linear regression on the fifth-order polynomial features. As you increase Lambda to some intermediate value - depending on the scale of the data, let's say Lambda equals 1 - then when you solve this augmented minimization problem for the value of Theta, this term penalizes the parameters for being too big, and you end up with a fit that looks a little bit better - maybe it looks like that, okay? By preventing the parameters Theta from being too big, you make it harder for the learning algorithm to overfit the data - and it turns out that fitting a very high-order polynomial like that tends to result in values of Theta that are very large. And then if you set Lambda to be too large, you actually end up in an underfitting regime, okay?
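As a concrete sketch of that objective - one-half the squared error plus Lambda over two times the norm of Theta squared - here is a small example on synthetic data with fifth-order polynomial features, sweeping Lambda from 0 to a huge value. The data, the closed-form ridge solution, and the specific Lambda values are my illustrative choices, not the lecture's.

# Sketch: regularized linear regression (ridge) with 5th-order polynomial features.
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, size=6))
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=0.05, size=6)
X = np.vander(x, N=6, increasing=True)          # columns 1, x, x^2, ..., x^5

for lam in (0.0, 1.0, 1e6):
    # closed-form minimizer of 1/2*||X theta - y||^2 + (lam/2)*||theta||^2
    # (for simplicity this sketch also penalizes the intercept term)
    theta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    print(f"lambda = {lam:g}: theta = {np.round(theta, 3)}")
# lambda = 0 can wildly overfit; a huge lambda forces theta toward 0 (underfit);
# an intermediate lambda usually gives the best trade-off.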
So there will usually be some optimal value of Lambda. If Lambda equals 0, you're not using any regularization, so you may be overfitting. If Lambda is way too big, then you're forcing all the parameters to be too close to 0 - in fact, if you think about it, if Lambda were equal to 10 to the 100 or some ridiculously large number, then you're really forcing all the Thetas to be essentially 0, and if all the Thetas are 0 then you're basically fitting a flat line: this very simple function, the function 0, h of x approximately equal to 0. That's what you get if you set Lambda very large. And by dialing Lambda between a far-too-large value like 10 to the 100 and a far-too-small value like 0, you smoothly interpolate between this much too simple function, h equals 0, and a much too complex function, okay? So that's pretty much it for regularization in terms of what you need to implement: if you feel like your learning algorithm may be overfitting, add this term to your model and solve this optimization problem, and it will help relieve overfitting. More generally, if you have, say, a logistic regression problem where this is your cost function - then to add regularization, instead of a min this is a max, right; if you're applying logistic regression, this was the original cost function, and you subtract Lambda, or Lambda over 2 (it just depends on the scaling of Lambda), times the norm of Theta squared. There's a minus here because for logistic regression we're maximizing rather than minimizing. Or this could be the arg max for any member of the generalized linear model family as well. But by subtracting Lambda times the norm of Theta squared, this allows you to also regularize a classification algorithm such as logistic regression. Okay? Now, it turns out - and I'll make an analogy here where all the math details are true, but we won't go through all of them - that this is one of the reasons the support vector machine doesn't overfit too badly even though it is, in effect, working in an infinite-dimensional feature space. So why doesn't a support vector machine just overfit like crazy? We showed on Monday that by using kernels it is, in a sense, using an infinite-dimensional feature space - so why doesn't it always fit these crazily complicated functions and overfit the dataset like crazy? It turns out, and the theory is complicated, that the optimization objective of the support vector machine was to minimize the norm of w squared; this corresponds to maximizing the geometric margin of the SVM, and it's actually possible to prove that this has a similar effect to the regularization term here. That is why the support vector machine, despite sometimes working in an infinite-dimensional feature space, has difficulty overfitting the data too much: it forces the parameters to be small. Okay? The theory to actually show this is quite complicated.
Roughly, the argument is to show that the class of functions where the norm of w is small cannot be too complicated, and so cannot overfit too badly - that's why SVMs can work in infinite-dimensional feature spaces. Yeah? [inaudible] Oh, sure - do you ever regularize per element of the parameters? Not really, and the problem with that is - well, let me give one more specific example first and then come back to that. So we talked about Naive Bayes as a text classification algorithm. Take a text classification problem - classify spam versus non-spam, or classify the positive or negative sentiment of a tweet or something - and let's say you have 100 examples, but you have 10,000-dimensional features: your features are indexed by the dictionary, a, aardvark, and so on. It turns out that if you fit logistic regression to this type of data, where you have 10,000 parameters and 100 examples, it will probably badly overfit the data. But it turns out that if you use logistic regression with regularization, this is actually a pretty good algorithm for text classification - because this is logistic regression, you need to implement gradient descent or something to solve for the parameters, but in terms of classification accuracy, logistic regression with regularization will usually outperform Naive Bayes. Without regularization, logistic regression will badly overfit this data. And to explain a bit more: imagine you have a three-dimensional feature space where you have only two examples; then all you're doing is fitting a hyperplane to separate those two examples. So one rule of thumb for logistic regression is that, if you do not use regularization, it's nice if the number of examples is at least on the order of the number of parameters you want to fit - in fact, I personally tend to want the number of examples to be maybe 10x bigger than the number of parameters, because that's roughly what you need to have enough information to fit good choices for all those parameters. But that's if you're not using regularization. If you are using regularization, then you can fit even 10,000 parameters with only 100 examples, and this will be a pretty decent text classification algorithm. Okay? Now, the question you had just now: why don't we regularize per parameter? So instead of Lambda times the norm of Theta squared, it would be a sum over j of Lambda j times Theta j squared. The reason we don't do this is that if you have 10,000 parameters here, you end up with another 10,000 parameters over there, and choosing all those 10,000 Lambdas is as difficult as choosing all the parameters in the first place. So we don't have a good way to do this.
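Going back to that text-classification setup (many more features than examples), here is a minimal sketch of regularized logistic regression trained by gradient ascent on synthetic 0/1 word features. The data, the choice of lambda, and the training loop are illustrative assumptions, not the lecture's numbers.

# Sketch: regularized logistic regression with far more features than examples
# (100 examples, 1,000 binary "word present" features; all data is synthetic).
import numpy as np

rng = np.random.default_rng(7)
m, n = 100, 1000
X = (rng.random((m, n)) < 0.05).astype(float)        # sparse 0/1 word-occurrence features
theta_true = np.zeros(n)
theta_true[:10] = 3.0                                 # only a few words actually matter
y = (rng.random(m) < 1 / (1 + np.exp(-(X @ theta_true - 1.0)))).astype(float)

lam, alpha = 1.0, 0.1
theta = np.zeros(n)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ theta)))
    # gradient of [log-likelihood - (lam/2)*||theta||^2]; we take a gradient *ascent* step
    grad = X.T @ (y - p) - lam * theta
    theta += alpha * grad / m
print("train accuracy:", np.mean((X @ theta > 0) == (y == 1)))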
Whereas when we talk about cross-validation and model selection in a little bit, we'll talk about how to choose maybe one parameter Lambda - but those techniques won't work for choosing 10,000 parameters Lambda j. You've got a question? [inaudible] You're absolutely right. Yes. Thank you. So, in order to make sure the different features are on a similar scale, a common pre-processing step when using learning algorithms is to take your different features - for text classification, where all the features are 0/1, you can just leave the features alone; but for a housing application, if feature one is the size of the house, which ranges from, I don't know, a few hundred square feet up to 10,000 square feet (10,000 square feet is really, really big for a house, I guess), and feature x2 is the number of bedrooms, which probably ranges from one to about five (there are some houses with a ton of bedrooms, but most houses have at most five or so) - then these features are on very different scales. Normalizing them to all be on a similar scale - subtract out the mean and divide by the standard deviation, so everything is scaled to roughly between 0 and 1, or minus 1 to 1 - would be a good pre-processing step before applying these methods. It turns out this will make gradient descent run faster as well; it's a common pre-processing step to scale each individual feature to a similar range of values. All right. Yeah? At the back? Uh, can we quickly go back to the support vector machine? So, just to repeat the question: why don't support vector machines suffer too badly from overfitting - is it because of the small number of support vectors, or because of minimizing the penalty on w? I would say the formal argument relies more on the latter. It turns out that if you look at the class of functions that separate the data with a large margin, that class has low complexity, formalized by low VC dimension - which you'll learn about in Friday's discussion section if you want to come to that. And so the class of all functions that separate the data with a large margin is a relatively simple class of functions - and by simple, I mean it has low VC dimension; we'll talk about this Friday - and thus any function within that class is not too likely to overfit. So it is convenient that the support vector machine ends up with a relatively low number of support vectors, but you could imagine other algorithms with a very large number of support vectors, and separating the data with a large margin would still be a low-complexity class. All right, next question. I'm sorry, say that again. [inaudible] Oh, sure, yes. So is it possible that - so, yes. In general, models that have high bias tend to underfit, and models that have high variance tend to overfit. We use these terms overfit / high variance and underfit / high bias with not quite identical, but very similar, meanings - to a first approximation, assume they mean the same thing. One thing we'll see later, two weeks from now, is that we'll talk about algorithms with high bias and high variance.
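Returning to the feature-scaling point for a moment, here is a tiny sketch of the "subtract the mean, divide by the standard deviation" pre-processing step on made-up housing features.

# Sketch: standardizing features that live on very different scales.
import numpy as np

X = np.array([[2104.0, 3],      # size in sq ft, #bedrooms
              [1600.0, 3],
              [2400.0, 4],
              [1416.0, 2]])

mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_scaled = (X - mu) / sigma      # each column now has mean ~0 and standard deviation ~1
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))
# Remember to apply the *same* mu and sigma to dev/test data and at prediction time.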
So - and actually, one way to think of high bias and high variance, which we'll talk about more later, is: if you have a dataset that looks like this, and somehow your classifier has very high complexity - it's a very, very complicated function - but for some reason it's still not fitting your data well, that would be one way to have high bias and high variance, which does happen. All right. Cool. So, to wrap up the discussion on regularization: mechanically, the way you implement regularization is by adding that penalty on the norm of the parameters - that's what you actually implement. But it turns out there's another way to think about regularization. Remember when we talked about linear regression, we talked about minimizing squared error, and later on we saw that linear regression was maximum likelihood estimation on a certain generalized linear model, using the Gaussian distribution as the choice of exponential-family member. It turns out there's a similar point of view you can take on the regularization algorithm we just saw. Which is: let's say S is the training set. Given a training set, you want to find the most likely value of Theta. By Bayes' rule, P of Theta given S is P of S given Theta times P of Theta, divided by P of S. And so if you want to pick the value of Theta that is most likely given the data you saw, then - because the denominator is just a constant - this is the arg max over Theta of P of S given Theta times P of Theta. If you're using logistic regression, then the first term is this - the logistic regression model, say, or any generalized linear model - and the second term is P of Theta. And it turns out that if you assume P of Theta is Gaussian - the prior probability on Theta is Gaussian with mean 0 and covariance tau squared times the identity, in other words P of Theta is 1 over (2 pi) to the n over 2 times the square root of the determinant of tau squared I, times e to the minus one-half Theta transpose (tau squared I) inverse Theta, the usual Gaussian density - it turns out that if this is your prior distribution for Theta and you plug it in here, take logs, compute the arg max and so on, then you end up with exactly the regularization technique that we found just now (there's a short derivation sketch below). Okay. Now, in everything we've been doing so far, we've been taking a frequentist interpretation. The two main schools of statistics are the frequentist school and the Bayesian school, and there used to be some titanic academic debates about which is the right one, but I think statisticians have gotten together and more or less made peace, and people go freely between the two more and more these days - maybe not all the time. In the frequentist school of statistics, we say there is some data and we want to find the value of Theta that makes the data as likely as possible, and that's where we got maximum likelihood estimation. In the frequentist school, we view there as being some true value of Theta out in the world that is unknown - some true value of Theta that generated all these housing prices - and our goal is to estimate this true parameter.
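Here is the "plug in the Gaussian prior and take logs" step written out as a short derivation sketch (the arg max below is what gets named the MAP estimate in the next paragraph); identifying Lambda with 1 over tau squared assumes the log-likelihood term is left unscaled.

\hat{\theta} \;=\; \arg\max_{\theta}\; p(S \mid \theta)\, p(\theta)
            \;=\; \arg\max_{\theta}\; \log p(S \mid \theta) + \log p(\theta),
\qquad \theta \sim \mathcal{N}(0,\, \tau^{2} I).

\log p(\theta) \;=\; -\tfrac{1}{2\tau^{2}}\,\|\theta\|^{2} + \text{const}
\quad\Longrightarrow\quad
\hat{\theta} \;=\; \arg\max_{\theta}\; \log p(S \mid \theta) \;-\; \tfrac{1}{2\tau^{2}}\,\|\theta\|^{2},

which is the regularized objective from before with \lambda = 1/\tau^{2}.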
In the Bayesian school of statistics, we say that Theta is unknown, but before you see any data you already have some prior beliefs about how housing prices are generated out in the world, and your prior beliefs are captured in a probability distribution P of Theta - so this is called the Gaussian prior. And if you look at this Gaussian prior, it's quite reasonable. It's saying that before you've seen any data, on average I think the parameters Theta have mean 0, because I don't know whether each Theta is positive or negative, so giving them mean 0 seems reasonable; and we model many things in the world as Gaussians, so we just assume the prior on Theta is Gaussian. You could debate whether this is the right assumption, but it's not totally unreasonable, right? For the next linear regression problem I'm going to work on next week - and I have no idea what I'll be applying linear regression to next week - it's actually not too bad an assumption to say my prior is Gaussian. And in the Bayesian view of the world, our goal is to find the value of Theta that is most likely after we have seen the data. Okay. And so this is called MAP estimation, which stands for maximum a posteriori estimation - the arg max of this is the MAP, or maximum a posteriori, estimate of Theta. That means: look at the data, compute the Bayesian posterior distribution over Theta, and pick the value of Theta that's most likely. Okay. And one of the things you do in the problem set that was just released is actually show this equivalence, as well as plug in a different prior for Theta other than the Gaussian prior: you experiment with P of Theta being the Laplace prior, and derive a different MAP estimation algorithm. Okay. All right, good. Yeah, question? [inaudible] Sorry, can you say that again? [inaudible] Uh, yes. [inaudible] Oh I see, yes - can the difference between these two be seen as regularized versus non-regularized? Yes. So MLE here corresponds to the original, un-regularized version, and this procedure here corresponds to adding regularization. It turns out that frequentist statisticians can also use regularization; it's just that they don't try to justify it through a Bayesian prior. If you're a frequentist statistician, your job is to wake up and come up with a procedure to estimate this true value of Theta that exists out in the world, and you can come up with any procedure you want - and to inspire your procedure, you can add a regularization term. I think a lot of these debates between frequentists and Bayesians are more philosophical. As a machine learning person, as an engineer, I think the philosophical debates are lovely, but I just like my stuff to work. So frequentists can also use regularization; it's just that they say it is part of the algorithm they invented, rather than derived from a Bayesian prior. All right, cool. So, continuing our discussion of regularization and choosing the degree of polynomial - let's say I plot a chart where on the horizontal axis I plot model complexity. So, how complicated is your model?
So for example, uh, to the right of this curve could be a very high degree polynomial. [NOISE] Right. Um, and what you find is that as you increase model complexity your training error- if you do not regularize, right? So if, if you fit a linear function, cosine function, cubic function and so on. You find that the higher the degree of your polynomial the better your training error because you know, a fifth-order polynomial always fits the data better than a fourth-order polynomial. If you, if you do not regularize. But what we saw with the original picture was that the ability of the algorithm [NOISE] to generalize kind of goes down and then starts to go back up, right? And so if you were to have a separate test set and evaluate your classifier on a set of data that the algorithm hasn't seen so far, so measure how well the algorithm generalizes to a different novel set of data, then if you fit a linear function then this underfits [NOISE]. If you fit a fifth-order polynomial this overfits, [NOISE] and there is somewhere in between right, that is just right. Okay? And um, this curve is true for regularization as well. So say you apply linear regression with 10,000 features to a very small training example. If lambda was much too big then they will um, underfit. If [NOISE] lambda was 0 so, you're not regularizing at all then they will overfit, and there will be some intermediate value of lambda that is not too big and not too small that you know, balances overfitting and underfitting. Okay. So, um, what I'd like to do next is describe uh, a mechanistic. A few different mechanistic procedures for trying to find this point in the middle, right? And so [NOISE] um, so given a data set [NOISE] what we'll often do is um, take your data set and split it into different subsets, uh, and a, a, a good hygiene is to take a data in the trained set- train, dev and test sets, um, So if you have say, 10,000 examples, all right, and you're trying to carry out this model selection problem. So for example, let's say you're trying to decide what order polynomial you want to fit, [NOISE] right. Or you're trying to choose the value of lambda, um, or you're trying to choose the value of tau, that was the bandwidth parameter in uh, locally weighted regression that you saw in the problem set- that we saw with, uh, locally weighted regression, all right? So, um, or you're trying to choose a value C in a support vector machine. So remember, the SVM objective was actually this, right. With the you know, subject to some other things but for the O unknown soft margin that we talked about on Wednesday- uh, talked about on Monday. You're trying to minimize the normal W and then there was this additional parameter C that trains off how much you insist on classifying every training example perfectly. All right. So whether you're trying to make- which of these decisions you are trying to make, um, how do you, uh, you know, choose a polynomial size or choose lambda or choose tau or choose parameter C which also has this bias-variance trade-off. There'll be some values of C that are too large and some values of C that are too small. [NOISE] So here's one thing you can do which is um, uh, let's see, so split your training data S into a subset which I'm gonna call the uh, raw training set as subscript train, um, and then some subset which we wanna call S subscript dev. And dev stands for uh, development, [NOISE] um, and then later we'll talk about [NOISE] a separate test set. 
And so what you can do is train each model, and by model I mean, um, option [NOISE] for the degree of polynomial [NOISE] on S train. Um, so you're evaluating a menu of models, right? So let's say, this is model 1, model 2, and so on up to model 5, up to some number. They can train each of these models, uh, on the first subset of your data [NOISE] and then get some hypothesis. Let's call it h_i, [NOISE] um, and then, [NOISE] measure the error on S dev which is a second subset of your data called the development set. And pick the one- [NOISE] Okay. So rather than- and- and- uh, I wanna contrast this with an alternative procedure, right? So the two sets of the da- two subsets of the data, some test set data, training set, and development set. And uh, after training, uh, first order polynomial, second order polynomial, third order polynomial on the training set, we evaluate all of these different models on the separate held-out developments sets and then pick the one with the lowest error on the development set. Okay, um, but the one thing to not do would be to evaluate all these algorithms instead on the training set and then pick the one with the lowest error on the training set, right. Why not- wha- what goes wrong when you do that? Yeah. [BACKGROUND] [inaudible] Yeah, right, you just over-fit. How- why- why will you over-fit? [BACKGROUND] Parts of the error, what you want to remain so don't want- [inaudible] Yeah. Yep, cool, right. So if you use this procedure, you'll always end up picking the fifth order polynomial, right. Because the more complex algorithm will always do better on the training set. So if you do this, this will always cause you to say, let's use the fifth order polynomial or the- or the highest possible order polynomial. So this won't help you realize in the housing price prediction example to the second order polynomial is the benefit of the data, right. Does it make sense? Um, and that's why for this procedure, um, if you evaluate your, uh, model's error on a separate development set that the algorithm did not see during training, this allows you to hopefully pick a model that neither over-fits nor underfits. And in this example, hopefully, you find that uh, there will be the second-order polynomial, right, that the one that's just right in between that actually does best on your development set, okay. Um, all right. Now, uh, and then, um, you know if- if you are, uh, if- if you are publishing an academic paper on machine learning um, then, this procedure has looked at the training set as well as the development set, right. So this- this procedure, this piece of code is, you know, is two in these decisions. Uh, it's two in the parameters, the training set, and it's two in the decision on the degree of polynomial to the dev set. And so if you want to know, if you want to publish a paper that say, oh, my algorithm achieves 90% accuracy on this dataset um, it's not valid to report the results on the dev set because the algorithm has already been optimized to the dev set. In particular, information about what's the most- um, what's the best uh, degree of polynomial to choose was derived from the dev set from the development set. And so if you're publishing a paper or you want to report an unbiased result, um, evaluate the algorithm on a separate test set, S test and report that error, okay. 
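Here is a minimal sketch of the simple hold-out procedure just described: train each candidate model (each polynomial degree) on S_train only, score each on S_dev, and pick the lowest dev-set error. The synthetic data and the 70/30 split are illustrative choices.

# Sketch: simple hold-out cross-validation for choosing the polynomial degree.
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0, 1, size=100)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=0.1, size=100)

# split into S_train and S_dev
idx = rng.permutation(100)
tr, dev = idx[:70], idx[70:]

results = {}
for degree in range(1, 8):                         # the "menu" of candidate models
    c = np.polyfit(x[tr], y[tr], deg=degree)       # train each model on S_train only
    results[degree] = np.mean((np.polyval(c, x[dev]) - y[dev]) ** 2)  # error on S_dev

best = min(results, key=results.get)               # pick the lowest dev-set error
print(results, "-> chosen degree:", best)
# (Never pick the model by training-set error: that always favors the most complex model.)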
And so if you're publishing a paper, it's considered good hygiene to um, uh, report the error on a completely separate test set that you did not in any way shape or form look at during the development of your model, during the training procedure, okay. Clear with things? Oh, yeah. Are dev and test [inaudible] is uh, generally different by much? Um, a dev and test set's error isn't strictly different by much. It depends on the size of- it depends on the size of the dataset. Um, uh, and so it turns out that um, actu- let- let- actually let me- let me give an example, actually. So let's say you're trying to fit a degree of polynomial, right. Um, and you want to choose uh, uh, write the dev error. So we can fill the first, second, third, fourth, fifth degree polynomial. And so um, after fitting all of these, lets say that the square error right, to use round numbers is 10, um, 5.1, 5.0, 4.9, you know, um, 7, 10, and so on, okay. Just to- just to use round numbers for illustrative purposes. If you're using the dev error to pick the best hypothesis, to pick the best hot spot, you would say that uh, using the fifth order polynomial gets you 4.9 squared error, right. But did you really earn that 4.9 square error or did you just get lucky? Because there is some noise and so maybe all of these actually have error that close to 5.0. But some are just higher, some are just lower, and you just got a little bit lucky that on the dev set this did better. Which is why, if you look at your dev set error, your dev set error is a biased estimate, right. And so where's your very large test set? If it's a very large test set, maybe the true numbers are 10, 5, 5, 5, 7, 10 are your actual expected squared errors. It's just that um, because of a little bit of noise you got lucky and reported 4.9. And so this would be a bad thing to do in an academic paper, right. Because it's uh, what you earned was an error of 5.0 you didn't earn an error of 4.9. It's just that- because you're over-fitting a little bit in the dev set. Um, you chose the thing that looked best for the dev set, but your algorithm didn't actually achieve that error, it's just because of noise, okay. So- so um, now in- in so- so it's considered a good practice to report um, uh, uh, so reporting on the dev error isn't- isn't- isn't really a valid unbiased procedure. And- and uh, um, yeah. Do you have a question? [BACKGROUND] [inaudible] Yeah. Ye- so- so one of the just to we say, I- I yes, you're right. One of the problems with some of the machine learning benchmarks that people worked on for a long time is this is unavoidable mental over-fitting. The people'd gotten to use the dataset and everyone's working the same trying to publish the best numbers from the same test set. So the academic committee on machine learning does have some amount of over-fitting uh, to the standard benchmarks that people have worked on for a long time. And this is an unfortunate result uh, when the test is very- very large, the amounts of over-fitting is probably smaller, but when the test set is not big enough then the over-fitting result can cause um, sometimes even research papers to uh, to publish results that are uh, probably over-fit to the data set, right. um, uh, and so I think there is actually uh, one standard academic benchmark because there's a dataset called CIFAR, it's quite small. 
It's actually the very same research paper uh, uh, analyzing um, results on CIFAR uh, arguing that some fraction of their progress that was made was actually perhaps uh, researchers uni- unintentionally over-fitting to this dataset. Okay. Oh and by the way um, one thing I do when I'm building you know, production machine learning systems. So when I'm- when I'm shipping a product, right. I just don't build a speech recognition system and just make it work. I just wanna, and not- and if I'm not trying to publish a paper, I'm not trying to make some claim. Sometimes I don't bother to have a test set, right. So and uh, and it means I don't know the true error of the system sometimes uh, but I'm very conscious of that. If I don't have a lot of data, sometimes I'm may decide to just not have a test set and it means I just don't try to report the test set number. I can report that dev set number which I know is biased and I just don't report the test set number. Don't do this when you're publishing your academic paper, right. This is not good if you're publishing a paper or making claims on the outside but all we're doing is building a product and not writing a paper out, this is- this is actually okay. Uh, yeah. [inaudible] Yeah. Okay, good. Uh, that's- lemme, lemme get to that. Good. So, um, the next topic about setting up the train dev test split is, how do you decide how much data should go into each of these three subsets? Um, so uh, uh, I can tell- so, so let me just tell you the historical perspective and then a modern perspective. Um, historically, the rule of thumb was you take a training set, right? Take your training set S and then you would send- here, one rule of thumb that you see a lot of people referring to is, uh, 70% training, right? 30% test. [NOISE] This is one common rule of thumb that you just hear a lot. Uh, or maybe you have- if you- if you don't have a dev set, if- if- if you're not doing model selection, if you just- if you've already picked the model and now you're revising. Or maybe you have people use, you know, 60% train, 20% dev, 20% test. Right? So these are rules of thumb that people use to give. Um, and these are decent rules of thumb when you don't have a massive dataset. So you may have 100- 100 examples, maybe you have 1,000 examples, maybe several thousand examples, I think these rules of thumb are perfectly fine. [NOISE] Um, what I'm seeing is that as you move to machine learning problems with really, really giant datasets, the percentage of data you send to dev and test are shrinking. Right? And, and so, here's what I mean. Um, let's say you have 10 million examples. Um, you know, yeah, decent size, not giant but like a reasonable size. Um, so le- let's, let's take this- this is actually a pretty good rule of thumb if you have a small dataset. If you have a thou- if you have, you know, 5 million examples, this is a perfectly fine rule of thumb to use. Um, but if you have 10 million examples, then, you know, you have 6 million, [NOISE] 2 million, [NOISE] 2 million, right, train, dev, test. [NOISE] And the question is, do you really need 2 million examples to estimate the performance of your final classifier? Uh, sometimes you do if you're working on online advertising, you know, which I have done, and you're trying to increase your ad click-through rate by 0.1%, because it turns out increasing ad click-through rate by 0.1%, which I've done multiple times, uh, turns out to be very lucrative. 
[LAUGHTER] Uh, uh, then you actually need a very large dataset to measure these very, very small improvements because to- to- to increasing ad click-through rate by 0.1, you might have actually a lot of projects. You might have 10 projects, each of which increases ad click-through rate by 0.01%, right? And so to measure these very different- small differences in, algorithm one does 0.01% better than algorithm b by- so you need a lot of data to tease out that very small difference. So if you're in the business of teasing out these very small differences, you actually need very large test sets. But if you're comparing different algorithms and one algorithm is, you know, 2% better or even 1% better than the other algorithm, then with 1,000 examples maybe, right, 1,000 examples may be enough for you to distinguish between these much larger differences. Um, so my recommendation for choosing the dev and test sets is choose them to be big enough, um, that you have enough data to make meaningful comparisons between different algorithms. Uh, and if you suspect your algorithms will vary in performance by 0.01%, you just need a lot of data to distinguish that, right? So, so if you have 100 examples, then, you know, if, if one algorithm has 90% accuracy and one algorithm has 90.01% accuracy, then unless you have at least 1,000 examples and maybe 10,000 or more, you just can't see this very small difference, right? If you have 100 examples, you just can't measure this very small difference. So my, my advice is, uh, choose your dev and test sets to be big enough that, um, uh, you could see the differences in the performance of algorithms that you, uh, tha- that you roughly expect to see. Um, and then you don't need to make your dev and test sets much larger than that. And I would usually then just put the data. You don't need the dev and sets back in the training set. So when you're working with very large datasets, say, you know, a million or 10 million or 100 million examples, what you see is that the percentage of data that goes into dev and test tends to be much smaller. So it might be, um, uh, so you see for example, maybe 90% train, you know, 5% dev, and 5% test, right? Or, or, or even smaller, or even 1%, 1% depending on how much data you really need. To measure to the level of accuracy you need the differences in the performance of your algorithms. Okay? Cool. All right, um, just to give this whole procedure a name, um, what we just did here between the train and dev set, this procedure that we have is called hold-out cross validation. [NOISE] And sometimes, to distinguish this from other cross validation procedures we'll talk about in a minute, sometimes this is called simple hold-out cross validation. We'll talk about some other hold-out cross validation procedures in a second. Um, and, uh, uh, and the dev set, um, is sometimes also called the cross validation set. Okay, right? Uh, so sometimes you- people use- sometimes, you hear people say, you know, we're gonna use a cross validation set. That means roughly the same thing as a, as a dev set. Okay? So in the normal workflow of developing a learning algorithm, uh, when you're given the dataset, I would split it into a training set and a dev set. Oh, and I used to say cross-validation set, but cross-validation is just a mouthful. So I think just motivated by the reducing number of syllables, because you're using this classifier so often, more and more people just call it the dev set, but it means roughly the same thing. Right? 
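One rough way to turn "big enough to see the differences you expect" into a number is the standard error of an accuracy estimate; this is only a back-of-the-envelope sketch I am adding for intuition, not a formal rule from the lecture:

```python
import math

def detectable_accuracy_gap(n_examples, base_accuracy=0.9):
    """Very rough rule of thumb: the standard error of an accuracy estimate on
    n examples is sqrt(p(1-p)/n); gaps much smaller than about two standard
    errors are hard to distinguish from noise on a dev/test set of that size."""
    se = math.sqrt(base_accuracy * (1 - base_accuracy) / n_examples)
    return 2 * se

for n in (100, 1_000, 10_000, 1_000_000):
    print(n, round(detectable_accuracy_gap(n), 5))
# With 100 examples you can only resolve gaps of several percent; with around
# a million examples, differences on the order of 0.06% start to be measurable.
```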
So, so when I'm, uh, building a machine learning system, I'll often take the dataset, split into train and dev, and if you need a test set, then also a test set, um, and then, uh, keep on fitting the parameters to the training set and, uh, evaluating the performance of your algorithm on the dev set and using that to come up with new features, choose the model size, choose the regularization parameter Lambda, um, really try out lots of different things and spend, you know, several days or weeks, uh, to optimize the performance on the dev set. Um, and then, uh, when you want to know how well is your algorithm performing, to then evaluate the model on the test set. Right? And- and the thing to be careful not to do is to make any decisions about your model using the test set, right? Because then- then your scientific data to the test set is no longer an unbiased estimate. Uh, one- so- and- and o- one thing that is actually okay to do is, um, if you have a team that's working on the problem, if every week they measure the performance on the test set and report out on a chart, right? You know, uh, the, the performance on the test set, that's actually okay. You can evaluate the model multiple times on the test set. You can actually give out a weekly report, saying, this week, for our online advertising system, we have this result on the test set. One week later, we have this result on test set, one week later, this result on the test set. It's actually okay to evaluate your algorithm repeatedly on the test set. Uh, what's not okay is to use those evaluations to make any decisions about your learning algorithm. So for example, if one day you notice that your model is doing worse this week than last week on the test set, if you use that to revert back to an older model, then you've just made a decision that's based on the test set, and- and your test set is no longer biased. But if all you do is report out the result but not make any decisions based on the test set performance such as whether to revert to an earlier model, then you can- I- I- it's actually legitimate it's actually okay to keep on, you know, use, uh, use the same test set to track your- your team's performance over time. Okay. All right. Good. Um, so when you have very large data sets, this is the procedure if you're developing for defining the train dev and test sets and this procedure can be used to choose the model of polynomial. It can also be used to choose the regularization parameter Lambda or or the parameter C or- or- or the parameter tau from now locally weighted regression. Um, now, whenever you have a very small data-set, right? Um, [NOISE] So it turns out that, uh, so I'm gonna leave out the test set for now. Le- let's just assume there is some separate test set. I'm not gonna worry about that for now. Um, but let's say you have 100 examples, right? Um, if you're going to split this into, you know, 70 in the training set in S subscript train and 30 in S dev. Then you're training your algorithm on 70 examples instead of 100 examples. And so I've actually worked on a few healthcare problems. Oh, actually, mo- most of my PhD students, uh, including Annan, work, doing a lot of work on, uh, machine learning applied to health care. 
And so we actually worked on a few data-sets in healthcare where, you know, every training example corresponded to some patient that sometimes that, uh, you know, unfortunate disease or- or- or if every- if you're working- or if, um, every example corresponded to injecting a patient with a drug and seeing what happens to the patient right sometimes there's literally a lot of blood and pain that goes into collecting every example. And if you have 100 examples to hold out 30 of them, um, for the purpose of model selection using only 70 examples and 100 examples. It seems like you're wasting a lot of data that was collected through a lot of, you know, literal pain, right? Um, so is there a way to say do model selection such as choose the degree of polynomial without, "Slightly wasting so much of the data." There is a procedure that you should use only if you have a small data-set, only if you're worried about the size of- oh, and the other disadvantage of this is, you evaluate your model only on 30 examples, and that seems really small. Right? You know can you- can you just find more data to evaluate your models as well. So there's a procedure that you should use [NOISE] only if you have a small data-set, uh, called k-fold cross-validation, or k-fold CV. And this is, uh, in contrast to simple cross validation. Um, but this is the idea which is- let's say this is your training set S, so you have, you know, X 1, Y 1 down to X say 100, Y 100. [NOISE] What we're going to do is take the training set and, uh, divide it into k pieces. Um, so for the purpose of illustration, I'm gonna use k equals 5. When I'm, just to make the- the writing on the board sane. Uh, k equals 10, uh, is typical. I guess, uh, for illustration. [NOISE] But so what you do is, um, take your data-set and divide it into five different subsets of- in this example, you would have 20 examples. 100- 100 examples divided into five subsets, so there are 20 examples in each subset. And, um, what you do is, for i equals 1 to k train i.e, fit parameters on k minus 1 pieces. And then test on the remaining [NOISE] one piece, and then you average. Right? So in other words, uh, when k is equals 5, we're going to loop through five times. In the first iteration, we're going to train on these and test on the last one fifth of the data. Um, so we'll hold out the last one fifth of the data, train on the rest and test on that. And then in the second iteration, through this for loop, we'll train on pieces 1 ,2, 3 and 5 and test on piece number 4, and we get the number. Um, and then you hold out this third piece, train on the others, test on this, and so on. So you do it five times, where on each time, you leave out one fifth of the data, train on the remaining four-fifths and you evaluate the model on that final one fifth. Okay? And so, um, if you're trying to choose a degree of polynomial, what you would do is- I guess for, you know, D equals 1 through 5. Right? So you do this procedure for a first order polynomial, uh, fit- you fit a linear regression model five times, each time on four-fifths of the model and test on the remaining one fifth, and you repeat this whole procedure for the quadratic function. Repeat this whole procedure for the cubic function, and so on. And after doing this for every order polynomial from 1, 2, 3, 4, 5 you would then, uh, pick the degree of polynomial that, um, sorry and then for each of these models, you then average the five S's you have for- for S error. Okay? 
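Here is a minimal sketch of the k-fold procedure just described, again using np.polyfit as a stand-in for the learning algorithm and assuming numpy arrays x, y (hypothetical names):

```python
import numpy as np

def kfold_error(x, y, degree, k=5, seed=0):
    """k-fold CV estimate of the error of a degree-`degree` polynomial:
    for i = 1..k, fit on k-1 pieces, test on the remaining piece, then average."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), k)   # divide data into k pieces
    errors = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train_idx], y[train_idx], deg=degree)   # fit on k-1 pieces
        pred = np.polyval(coeffs, x[test_idx])                        # test on held-out piece
        errors.append(np.mean((pred - y[test_idx]) ** 2))
    return float(np.mean(errors))                                     # average the k errors

# Model selection: repeat the whole procedure once per candidate degree, e.g.
# cv_errors = {d: kfold_error(x, y, d, k=5) for d in range(1, 6)}
```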
And then after doing this, you would pick the degree of polynomial that did best according to this- according to this metric. Right? And then maybe you'll find that the second-order polynomial does best. Um, and now you actually end up with, uh, five classifiers. Right? Because you have five classifiers, each one fits on four-fifths of the data, uh, and then, uh, and- and there's a- there's a final optional step, which is to refit the model on all 100% of the data. Right? So if you want, you can keep five classifiers around and output their predictions, but then you're keeping five classifiers around this. Uh- uh maybe a bit more common to- now that you've chosen to use a second-order polynomial to just refit the model once on all 100% of the data. Okay? Um, and so the advantage of, uh, k-fold cross validation is that, instead of leaving out 30% of your data for your dev set on each iteration, you're only leaving out 1 over k of your data. I use k equals 5 for illustration, but in practice, k equals 10 is by far the most common choice that we use. I've sometimes seen people use k equals 20, but quite rarely, but, um, uh, if you use k equals 10, then on each iteration, you're leaving out just one tenth of the data. 10% of the data rather than 30% of the data. Okay? Um, and so this procedure compared to simple cross-validation, it makes more efficient use of the data, because you're holding out you know only 10% of the data on each iteration. Uh, the disadvantage of this is computationally very expensive, that you're now fitting each model 10 times instead of just once. Okay? But- but- but when you have a small data-set, this- this is actually a better procedure than simple cross validation. If you don't mind the computational expense of fitting each model 10 times. This- this- this actually lets you get away with holding on this data. [NOISE] And then, um, there's one even more extreme version of this, which you should use, if you have very very small datasets. So sometimes you might have an even smaller dataset. You know, if you're doing a class project with 20 examples this- that's- that's small even by today's machine learning standards. So, uh, there's- there's an extreme version of k-fold cross-validation, called leave-one-out cross validation, which is if you set k equals m. Right? So in other words, here's your training set, maybe 20 examples. So you're gonna divide this into as many pieces as you have training examples. And what you do is leave out one example, train on the other 19, and test on the one example you held out. And then leave out the second example, train on the other 19 and test to the one example you held out, and do that 20 times, and then you average this over the 20 outcomes to evaluate how good different orders of polynomial are. Um, the huge downside of this is just is completely very- very expensive, because now you need to change your algorithm m times. So you, kind of, never do this unless m is really small. Uh, I personally have- I pretty much never use this procedure unless m is 100 or less. I guess, you- your- if your model isn't too complicated, you can afford to fit a linear regression model 100 times, like it's not too bad. Right? So- so if- if- if m is, uh, less than 100, you could consider this procedure. But- but if m is 1000, fitting a linear model- fitting a model 1000 times, it seems like a lot of work, then you usually use k-fold cross-validation instead. 
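Continuing that sketch, leave-one-out cross-validation is just the k = m special case, and the optional final step refits on 100% of the data; this snippet assumes kfold_error and the arrays x, y from the sketch above:

```python
import numpy as np

# Leave-one-out CV: set k equal to the number of training examples m.
m = len(x)
best_degree = min(range(1, 6), key=lambda d: kfold_error(x, y, degree=d, k=m))
final_coeffs = np.polyfit(x, y, deg=best_degree)   # optional refit on all of the data
```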
Uh, but if you do have 20 examples, then you know, I- I would then- then- if you have 20 examples, I would probably use this procedure and somewhere between 20 and 50s maybe when I switch over from leave-one-out to k-fold cross-validation. Okay? Yeah. In 10- fold cross validation should we use [inaudible] k times to go. Yeah. So, um, right. So since you have k estimates, say 10- 10 estimates, we're using 10-fold cross-validation. Can you measure the variance on those 10 estimates? Um, it turns out that those 10 estimates are correlated because each of the 10 classifiers, eight-eight-eight-eight out of nine of the sets of data they trained on overlap. So, um, there were some very interesting results, uh-uh there's some research papers written by Michael Kearns, actually, um, it's like a long time ago, uh, trying to understand how correlated are these 10 estimates. And from a theoretical point of view, the- we- the- as far as I know, the latest error result shows that this is not a worse estimate in training error, but note- but- but maybe it's showing us in practice is not- you could measure it, but, uh, we don't really trust that estimate of variance, because we think all 10 estimates are highly correlated, or- or at least somewhat correlated. Yeah. Go ahead [inaudible] Whether we're gonna find using k-fold cross-validation in deep learning? Um, if you have a very small training set, then maybe yes. But deep learning algorithms depend on the details. Right? Sometimes it takes so long to train, that training- training- training on a neural network 20 times, you know, seems like a pain unless- unless you have enough data. Unless, um, your neural network was quite small. Right? Um, so it's rarely done with a deep learning algorithm. But if you have so- frank- frankly if you have so little data, if you have 20 training examples, uh - uh, you know there are other techniques that you probably need to use to boost performance. Such as transfer learning, or just more heterogeneity of input features, or something else. Right? Um, yeah. [inaudible] Sorry say again. [inaudible]. Sorry thank you for asking that, uh, this average set no. Um, I meant, um, averaging the test errors. So, uh, here, you will have trained 10 classifiers and, you know, when you evaluate it on the left at 110 for the data, you get it wrong- you get a number. Right? So you're looping 10 times, hold at one part, train on the others. Test on this part you left out. And so that will give you a number, and they go say oh when you test on test on this part you left out, the squared error was 5.0 and then you do it again, squared error was 5.7, squared error was 2.8. So by average I meant average those numbers. And the average of those numbers is your estimate of the error of a, you know, third order polynomial for this problem. So this is an averaging the set of real numbers that you got from this- so- so this loop gives you k real numbers, uh, and so this is averaging those k real numbers to estimate for this outer loop, how good a classifier with that degree polynomial is. Okay? Wow, actually, a lot of questions, there's one thing I want to cover, go ahead and this last two go ahead. [inaudible] I see. Sure. Yes, using something other than F1 score would it just mean other than average? Uh, yes, it would. Having F1 score is complicated. Yes. Uh, I think, I think we'll talk- actually, um, so this week, Friday, we'll talk about learning theory. Next week- next Friday we're talking more about performance evaluation metrics. 
So actually we will talk about F1 score? Uh, Mitch, one last question? How do you sample the data in the, in these sets? Oh, sure. How do you sample the data in these sets? Um, so for the purposes of this class, assuming all your data comes through the same distribution, uh I, I, I, I would usually randomly shuffle. Uh, again, in the era of machine learning and big data, there's one other interesting trend is which, which wasn't true 10 years ago which is we're increasingly trying to train and test on different sets. Uh, uh, we're trying to, you know, train on data, uh, collect it in one context and apply it to a totally different context. Uh, such as, um, we're trying to, uh, uh, you know, train on, on speech collected on your cellphone because you have all that data, and trying to apply it to a, um, uh, uh to a smart speaker where it was collected on a different microphone, in your cellphone or something. So, uh, if you are doing that and the way you set your train dev test split is a bit more complicated. Um, I wasn't going to talk about it in this class. If you want to learn more, uh, ah, I think at the start of the class, I mentioned I was working on this book, Machine Learning Yearning. So that book is finished. And if you go to this website, you can get a copy of it for free. Uh, uh, that talks about that. Uh, and I also talk about this more in CS230 which, which goes more into the big data. But you can, you can go, go and learn machine- you can also read all about it in, in Machine Learning Yearning. Um, if the train and test sets are a different distribution. Uh, yeah, but random shuffling would be a good default if you think you're training dev test on two different, right? All right. Just one last thing I want to cover real quick which is, um, feature selection. And so, um, so let me just describe what- so sometimes you have a lot of features. Um, so, so actually let's take text classification. You might have 10,000 features corresponding to 10,000 words, but you might suspect that a lot of the features are not important, right? You know the word the, whether the word the is called a stop word, whether the word the appears in e-mail or not, doesn't really tell you if it's spam or not spam because the word the, a, of, you know, these are called stop words. They don't tell you much about the content of the email. Um, but so if a lot of features, uh, sometimes one way to reduce overfitting is to try to find a small subset of the features that are most useful for your task, right? And so, um, this takes judgment. There are some problems like computer vision where you have a lot of features corresponding to there being a lot of pixels in every image. But probably, every pixel is somewhat relevant. So you don't want to select a subset of pixels for most computer vision tasks. But there are some other problems where you might have lot of features when you suspect the way to prevent overfitting is to find a small subset of the most relevant features for your task. Um, so feature selection is a special case of model selection that applies to when you suspect that even though you have 10,000 features, maybe only 50 of them are highly relevant, right? And so, um, uh one example, if you are measuring a lot of things going on in a truck, uh, in order to figure out if the truck is about to break down, right. 
You, you might, uh, for, for preventive maintenance, you might measure hundreds of variables or many hundreds of variables, but you might secretly suspect that there are only a few things that, you know, predict when this truck is about to go down, so you can do preventive maintenance. So if you suspect that's the case, then feature selection would be a reasonable approach to try, right? And so, um, here's the- I'll, I'll just write out one algorithm, uh, which is start with- this is script f equals the empty set of features. And then you repeatedly try adding each feature i to f and see which single feature addition most improves the dev set performance, right? And then Step 2 is go ahead and connect to add that feature to f, okay? So let me illustrate this with pictures. So let's say you have, um, five features, x1 through x5, and in practice it's usually more like x1 through x500 or x1 through 10,000, but I'll just use 5. So start off with an empty set of features and, you know, train a linear classifier with no feature. So the model is, um, h of x equals theta 0, right? With no features. Uh, so this won't be a very good model. But see how well this does on your dev set. Uh, so this way you average the ys, right? So it's not for your model. Next- so this is step one. In the second iteration, you would then take each of these features and add it to the empty sets. You can try the empty set plus x1, empty set plus x2, empty set plus x5. And for each of these, you would fit a corresponding model. So for this one you fit h of x equals theta 0 plus theta 1 x5. So try adding one feature to your model, and see which model best improves your performance on the dev set, right? And let's say you find that adding feature two is the best choice. So now, what we'll do is set the set of features to be x2. For the next step, you would then consider starting of x2 and adding x1, or x3, or x4, or x5. So if your model is already using the feature x2, what's the other feature, what additional feature most helps your algorithm? Um, and let's say it is x4, right? So you fit three or four models, see which one does best. And now you would commit to using the features x2 and x4. Um, and you kind of keep on doing this, keep on adding features greedily, keep on adding features one at a time to see which single feature addition, um, helps improve your algorithm the most. Um, and, and, and you can keep iterating until adding more features now hurts performance. Uh, and then pick what- whichever feature subset allows you to have the best possible performance of dev set, okay? So this is a special case of model selection called forward search. It's called forward search because we started with a empty set of features, and adding features one at a time. There's a procedure called backwards search which we'll read about that. We install all the features and remove features one at a time. But this would be a reasonable, uh, uh feature selection algorithm. The disadvantage of this is it is quite computationally expensive, uh, but this can help you select a decent set of features, okay? Um, so we're running a little bit late, uh, let's break. Oh, so I think, uh, I was meant to be on the road next week but, uh, because [inaudible] is still unable to teach, I think we will have, uh, Rafael, uh, uh, teach, uh, decision trees next week, and then also Kian will talk about neural networks next week, okay? So let's break for today, um, and, and maybe we'll see some of you at the Friday discussion session. |
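Returning to the forward-search procedure described just before the close of the lecture, here is a minimal sketch assuming a 2-D numpy feature matrix and using least-squares linear regression as the model; dev_error_with_features and forward_search are hypothetical helper names, not anything from the lecture notes:

```python
import numpy as np

def dev_error_with_features(feature_idx, X_train, y_train, X_dev, y_dev):
    """Fit least-squares linear regression using only the given feature columns
    (plus an intercept) and return the squared error on the dev set."""
    def design(X):
        cols = [np.ones(len(X))] + [X[:, j] for j in feature_idx]
        return np.column_stack(cols)
    theta, *_ = np.linalg.lstsq(design(X_train), y_train, rcond=None)
    pred = design(X_dev) @ theta
    return float(np.mean((pred - y_dev) ** 2))

def forward_search(X_train, y_train, X_dev, y_dev, max_features=None):
    """Greedy forward search: start with the empty feature set and repeatedly
    add the single feature that most improves dev-set performance."""
    n_features = X_train.shape[1]
    max_features = max_features or n_features
    selected = []
    best_err = dev_error_with_features([], X_train, y_train, X_dev, y_dev)  # intercept-only model
    while len(selected) < max_features:
        candidates = [j for j in range(n_features) if j not in selected]
        errs = {j: dev_error_with_features(selected + [j], X_train, y_train, X_dev, y_dev)
                for j in candidates}
        j_best = min(errs, key=errs.get)
        if errs[j_best] >= best_err:        # stop once adding features no longer helps
            break
        selected.append(j_best)             # commit to adding that feature
        best_err = errs[j_best]
    return selected, best_err
```

Backward search would start instead from the full feature set and greedily remove one feature at a time; the skeleton is the same.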
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | RL_Debugging_and_Diagnostics_Stanford_CS229_Machine_Learning_Andrew_Ng_Lecture_20_Autumn_2018.txt | All right. Hey, everyone, actually started a little bit late. So welcome to the, uh, final lecture of, uh, CS229 of this quarter or I guess, uh, to the home viewers, welcome to the season finale. So what I'd like to do today is, um, wrap up our discussion on reinforcement learning and then, um, and then we'll conclude the class. Um, so I think you know, over the last, uh, few lectures you saw a lot of, uh, uh, we- we saw a lot of NAV. So maybe as a brief interlude here are some videos. Um, so self-autonomous helicopter, um, you know, this is a project that, uh, I know Pieter Abbeel, Adam Coates, uh, some- some former students here, now some of the machine learning greats worked on when they were, um, PhD students here. Uh, and- and- and I think, uh, using algorithms similar to the ones you learned in this class, how do you make a helicopter fly? So just for fun, this is a video shot on top of one of the Stanford, uh, soccer fields. I was actually the camera man that day [LAUGHTER] , um, and zooming out the camera. See the trees touching the sky. [BACKGROUND]. Say, uh, um, it- it- it turns out- that's a small radio-controlled helicopter. It turns out that, uh, when you're very far away you can't tell if this is a small radio-controlled helicopter or if there's like a helicopter with people sitting in it [LAUGHTER]. So, um, uh, there was- actually there's, uh, you know, foot is on, uh, a kind of a soccer field, the big, uh, grass field off San Hill Road and turns out across San Hill Road, um, one of the high rises there was a- there was an elderly lady that lives in one of those apartments. And when she saw that, she would call 9-1-1 and say, "Hey, there's a helicopter about to crash." [LAUGHTER] And then the- the firemen would come out, so, [LAUGHTER] I had to tell them that and I- I think they were partly relieved, partly disappointed that there was no one for us for- for them to save. And, uh, um, and so- and- and I think, uh, let's see. Uh, uh, one of the things I promised to do, um, in the debugging learning algorithms lecture was just go over the um, reinforcement learning example again. So let me just do that now but, uh, with notation that I think you now understand compared to- oh, yes. Why is the helicopter flying upside down? Oh, uh, it was an aerobatic stunt. Uh, yeah, I- I don't think there's any good reason for flying a helicopter upside down [LAUGHTER] , uh, other than that you can. Uh, there- there a lot of videos of self-autonomous helicopters flying all sorts of stunts, go to heli.stanford.edu, heli.stanford.edu and the Stanford autonomous helicopter did- did- did a lot more than flying upside down. Uh, it could, I mean, make some maneuvers that looked aerodynamically impossible such as the helicopter that looks like it's tumbling, just spinning randomly but staying same place in the air, right? Um, it's called a chaos maneuver and if you look you go, wow. 
This helicopter was turning upside down, spinning around the air in every single direction but it was just staying right there in the air not crashing, and so there are maneuvers like that- that, um, the very best human pilots in the world can fly with helicopters and I think, uh, this was just, uh, um, uh, a demonstration I guess, uh, and I think a lot of this work wound up influencing some of the later work on the quadcopter drones in a few research labs and. Yeah, I think, uh, it was a difficult control problem and it was, uh, it was one of those things you do when you're, you- you when you're a university and you want to solve the hardest problems around. But I wanted to step through a few of the debugging process that we went through as we were building a helicopter like this. So, uh, when you're trying to get the helicopter to fly upside down, fly stunts, you don't want to crash too often. So step one is build a model or build a simulator of a helicopter, right? Much- much as you saw, um, when we start to talk about fitted value iteration and then, um, choose a reward function, uh, like that, and it turns out that specific reward function for staying in place is not that high, you know, like the quadratic function like that works okay. But if you want the helicopter to fly aggressive maneuvers it's actually quite tricky to specify what is a good turn for a helicopter, right? Um, and then what you do is you run reinforcement learning algorithm, um, to try to maximize say the finite horizon MDP formulation and maximize sum of rewards over T timesteps, so you get a policy Pi. And then whenever you do this, the first time you do this, you find that the resulting controller does much worse than the human pilot, and the question is what do you do next, right? This is- by the way- this is almost- I think this is almost exactly the slide I showed you last time except I cleaned up the slide using reinforcement learning notation rather than the slightly simplified notation you saw before [NOISE] you learned about reinforcement learning. And so the question is, um, and- and again if you're working on the reinforcement learning problem yourself, you know, uh, there's a good chance you have to answer this question yourself for whatever robot or other reinforcement learning or factory automation or stock trading system or whatever it is, um, you are trying to get to work in reinforcement learning. But do you want to improve the model sim- model or do you want to modify the reward function or do you want to, uh, modify the reinforcement learning algorithm. All right. And modifying the reinforcement learning algorithm includes things like, uh, playing with the discretization that you're using. Um, if you're taking a continuous state MDP and discretizing it to solve over finite state MDP formulation or modifying the reinforcement learning algorithm includes also maybe choosing new features to use in fitted value iteration, right? There are a lot of things you could try. Or maybe instead of using a linear function approximator, instead of fitting a linear function for fitted value iteration. Maybe you want to use a bigger, you know, deep neural network, right? Um, but so which of these steps is the most useful thing to do? So this is the analysis of those three things, uh, you know, if, I'll give you a second to read this, right? But if these three statements are true, then the learn controller should have flown well on the helicopter. Right? 
Um, and so those three sentences correspond to the three things in yellow that you could work on, um, there's a problem that, you know, um, statement 1 is false, that the simulator isn't good enough, there's a problem that statement 2 is false. That, um, ah, oh, sorry I think actually two or three are reversed. But, uh, the three statements corresponds to the three things in yellow. I think the two and three are in, uh, are in, uh, opposite order, right? Ah, as the RL algorithm maximizing some rewards is a reward function, actually the right thing to maximize. And so here are the diagnostics you could use, um, to see if this helicopter simulator is accurate, uh, well, first check if, um, the policy flies well in simulation. If your policy flies well in simulation but not in real life, then this shows that the problem is with your simulator and you should try to learn a better model for your helicopter, right? And, and if you're using a linear model this with the matrices a and b, um, if, you know, st plus 1 equals ast plus bat, if you're [inaudible] try, try getting more data or maybe try a non-linear model, but if you find that the problem's not your simulator, if you find that, uh, your policy is flying poorly in simulation and flying poorly in real life, right, then this is the diagnostic I would use. Um, so I shall show these two lines. So let human be the human control policy, so hire a human pilot, right? Which, which we did. We're fortunate to have one of the best- one, one of, um, America's top, you know, aerobatic helicopter pilots working with us, and he, using his control sticks and radio control, can make a helicopter fly upside-down, tumble, do flips, loops, rolls. So we had a very good human pilot, um, help us, uh, fly the helicopter manually. So what you can do is, um, test whether or not the, uh- so this, this thing here, right? That's just a pay off of the, um, learn policy as measured on your reward function. So check if, um, the learn policy achieves a better or worse pay off than a human pilot can, right? And so that means, you know, go ahead and let the learn policy fly the helicopter and we get the human to fly the helicopter and compute the sum of rewards on the sequence of states that these two systems take the helicopter through and just see whether the human or the learn policy achieves a higher payoff, achieves a higher sum of rewards. And if, um, the payoff achieved by the learning algorithm is less than the payoff achieved by the human, then this shows that, um, the learn policy's not actually maximizing the sum of rewards, right? Because whether the human is doing, you know, he or she is doing a better job, maximizes the sum of rewards then the learn policy. So this means that you should, you know, consider working on the reinforcement learning algorithm to try to make it do a better job maximizing the sum of rewards, right? Um, and then on the flip side, this inequality goes the other way, right? Uh, so if pa- if, if the payoff or the RL algorithm is greater than the payoff of the human, then what that means is that, you know, RL algorithm is actually doing a better job, maximizing the sum of rewards, but it's still flying worse. 
So what this tells you is that, doing a really good job maximizing the sum of rewards does not correspond to how you actually want the helicopter to fly and so that means that maybe you should work on, um, improving the reward function, that the reward function is not capturing what's actually most important to fly a helicopter well and then, then you modify the reward function, right? So in a typical workflow, uh, hoping to describe to you what, what it feels like to work on a machine learning project like this, and this was a big multi-year machine learning project, but when you're working on a big complicated machine learning project like this, um, the bottleneck moves around meaning that you build a helicopter, you get a human pilot to fly it, you know, gets in the work, they run these diagnostics and maybe the first time you do this you'll find, wow, the simulation's really inaccurate, then you are going to work on improving the simulator for a couple months. And then, you know, and every now and then you come back and rerun this diagnostic and maybe for the first two months of the project, you keep on saying, "Yup, simulator is not good enough, simulator is not good enough, simulator is not good enough." After working on the simulator for a couple months you, you may find that, um, item 1 is no longer the problem, you might then find that, um, item 3 is the problem, the simulator's now good enough, but when you run this diagnostic, two months into the project, you might say, "Wow, looks like your RL algorithm, uh, is maximizing the reward function but this is not good flying." So now I think the biggest problem for the project or the biggest bottleneck for the project is that the ref- the reward function is not good enough, and then you might spend, you know, another one or two, or three, or, or, or sometimes longer months working to try to improve the reward function, then you might do that for a while, and then when the reward function is good enough then that exposes the next problem in your system which might be that the RL algorithm isn't good enough. And so the problem you should be working on actually moves around and it's different in different phases of the project. And, um, when you're working on this it feels like every time you solve the current problem that exposes the next most important problem to work on and then you work on that and you solve that then this helps you identify and expose the next most important problem to work on and you kind of keep doing that or you keep iterating, and keep solving problems until hopefully, you get a helicopter that does what you want it to, make sense? Okay. Um, but I think [NOISE] teams that have the discipline to, um, prioritize according to diagnostics like this, uh, tend to be much more efficient, the teams that kind of go by gut feeling in terms of selecting, you know, what to, what to spend the time on. All right, um, any, any questions about this? [inaudible]. Oh, sorry, say that again. [inaudible] the simulator's accurate [inaudible]. Yeah, uh, I, I kind of wanna say yes, um, let me think. Yeah, I would usually check step 1 first and then if I think simulator is okay then look at steps 2 and 3. Um, maybe one, one other thing, uh, er, about, when you work on these projects there is some judgment involved so I think I'm presenting these things as though- as a rigid mathematical formula, that's cut and dry, this formula says, now work on step 1, then this one says, now work on step 3. 
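Taken completely literally, that cut-and-dry version of the three diagnostics might look like the sketch below; it is schematic only, the inputs (flies_well_in_sim, payoff_rl, and so on) are hypothetical quantities you would have to measure yourself, and the caveat about judgment that follows still applies:

```python
def rl_diagnostic(flies_well_in_sim, flies_well_in_real, payoff_rl, payoff_human):
    """Schematic version of the helicopter diagnostics described above.
    payoff_rl and payoff_human are sums of rewards J(pi) measured under the
    same reward function, for the learned policy and the human pilot."""
    if flies_well_in_sim and not flies_well_in_real:
        return "Simulator is inaccurate: work on the model / simulator."
    if payoff_rl < payoff_human:
        return ("Learned policy is not maximizing the sum of rewards: "
                "work on the RL algorithm (features, discretization, optimizer).")
    if payoff_rl >= payoff_human and not flies_well_in_real:
        return ("Maximizing this reward does not give good flying: "
                "work on the reward function.")
    return "No obvious bottleneck from these diagnostics."
```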
Um, there is, there is, um, more judgment involved because when you run these diagnostics you might say, well, it looks like the simulators not that good but it's kinda good, it's little bit ambiguous, and oh it looks like, you know, uh, and so that's what it often feels like. And so a team would get together, look at the evidence from all three steps and then say, you know, "Well, maybe the simulator is not that good but it's maybe good enough and but both the reinforcement- the, the reward function is really bad, let's focus on that." So there is some, um- so rather than a hard and fast rule there, there is some judgment needed to, to make these decisions, uh, but having a, um- so when leading machine learning teams often my teams will, you know, run these diagnostics, get together and look at the evidence and then discuss and debate what's the best way to move forward, but I think the process in making sure that discussion and the debate is much better than the alternative, which is, you know, someone just picks something kind of at random and, and the team does that, right? Yeah, okay. Cool. Um- All right, cool. So, um, just, uh, yeah maybe you, while I have the laptop up, you know, a little bit for fun but a little bit because I'm, uh, to illustrate fitted value iteration. Um, let me just show another, um, reinforcement learning video. Um, oh, by the way, one of the- I- I think if I look at the future of AI, the future of machine learning, you know, there's a lot of hype about reinforcement learning for game playing which is fine. You know, we all like- we all love, uh, computers playing computer games, like that's a great thing I think or something, er. But- but I think that some of the most exciting applications of reinforcement learning coming down the pipe I think will be robotics. So I think over the next few years, even though there are only a few success stories of reinforcement learning applied to robotics. There are more and more right now. One of the trends I see, you know, when you look at, uh, the academic publications and some of the things making their way into industrial environments is I think in the next several years, just based on the stuff I see, my friends in many different companies, in many different institutes working on, I think there will be a rise of, uh, reinforcement learning algorithms applied to robotics. I think this would be one important area to- to- to watch out for. All right. Uh, but, uh, uh, so, you know, uh, uh, this is another Stanford video, this is again just using reinforcement learning to get a robot dog, um, to climb over obstacles like these. Uh, my friends that were less generous, um [NOISE] , uh, uh, did not want to think of this as a robot dog. Uh, they thought it was more like a robot cockroach, uh, [LAUGHTER]. But I think cockroaches don't have four legs, right, cockroaches have six legs [LAUGHTER]. Um, yeah but so, uh, how do you program a robot dog like this, right, to, uh, climb over terrain? So one of the key components, this is work by, um, Zico Kolter, uh, now a Carnegie Mellon professor, uh, another one of the machine learning greats, uh, is, ah, ah, a key part of this was, ah, value function of approximation, uh, where it- dog starts on the left and it goes get to the right then, uh, the approximate value function kind of, um, ah, I- I'm- I'm sort of finding a little bit, right? 
But- but the approximate value function tells it, uh, given the 3D shape of the terrain, uh, the middle plot is a height map where the different shades tell you how- how- how tall is the terrain, uh, but given the shape of the terrain, the dog, uh, learns a value function that tells it what is the cost of putting his feet on different locations to the terrain and it learns among other things, you know, not to put his feet at the edge of a cliff because then it's likely to slip off the edge of a cliff and fall over, right? And so, um, but- but hopefully this gives a visualization of whether, uh, learn value function for a very complicated function they'll say. And- and the state is very high-dimensional, this is all kind of projected onto a 2D space so you can visualize it. But- but this is what, uh, a simplified value function looks like for a robot like this. Okay. All right. So with that, um, let me return to the white board [BACKGROUND] um. So, um, there's just one class of algorithms I want to describe to you today which are called policy search algorithms. And uh, sometimes, uh, policy searches are also called, uh, direct policy search. And, um, to explain what this means, so far our approach to reinforcement learning has been to first learn or approximate the value function, you know, approximate V star and then use that to learn or at least hopefully approximate Pi star, right? So we have- you saw value iteration, top, we had policy iteration. But philosophy to reinforcement learning was to estimate the value function and then use that, you know, that equation with the arg max to figure out what is Pi star. So this is an indirect way of getting a policy because we- we first try to figure out what's the value function. In direct policy search, um, we try to find a good policy directly, hence the term direct policy search because you don't- you go straight for trying to find a good policy without the intermediate step of finding an approximation to the value function. So, um, let's see. I'm going to use, uh, as the motivating example the inverted pendulum. Right. So that is that thing with a free hinge here, and let's say your actions are to accelerate left or to accelerate right, right? And then you can have- and you can have states to accelerate strong, accelerate less strong, accelerate right. You got more than two actions but let's just say you've inverted pendulum with, um, two actions. So, um, if you want to- I- I'll- I'll talk about pros and cons of direct policy search later. But if you want to apply polic- direct policy search, you want to apply policy search, the first step is to, um, come up with the class of policies you'll entertain or come up with the set of functions you use to approximate the policy. So, um, again to make an analogy, when, uh, you saw logistic regression for the first time, you know, we kind of said that we would approximate y as the hypothesis, um, right, whose form was governed by this sigmoid function. And you remember in week 2 when, uh, I first described logistic regression, I kind of pulled this out of a hat, right, and said, "Oh yeah, trust me, let's use the logistic function," and- and then later, we saw this was a special case of the generalized linear model. Um, but, you know, we just had to write down some form for how we will predict y as a function of x. So in direct policy search, we will have to come up with a form for Pi, right? So we have to just come up with a function for algorithms in h. 
Um, in direct policy search, we'll have to come up with a way for how we approximate the policy Pi. Right? And so, you know, one thing we have to do is say, well, maybe the action were approximate with some policy Pi, um, maybe parameterized by Theta and is now a function of the state, and maybe it'll be 1 over 1 plus e to the negative Theta transpose, you know, to state vector. Right? Where the same vector maybe something like, um, x, x dot, uh, and- and the angle- and the angle dot right if- if this angle is Phi and maybe add an intercept there. Okay. And- and I- I switch this from Theta to Phi to avoid, uh, conflict in the notation. Okay. Um, this isn't really the formative policy we'll write. So let me- let me make one more definition and then I'll, um, show you a form of a specific form of policy you can use, but it's actually not quite this. We'll- we'll need to tweak this a little bit. So, uh, the direct policy search algorithm we'll use, will use a stochastic policy. So this is a new definition. Um, so stochastic policy is a function. Right. Um, so we're going to use, um, for the direct policy search algorithm that you see today, we are going to use stochastic policies meaning that, um, on every time step, uh, the policy will tell you what's the chance you want to accelerate left versus what's the chance you want to accelerate right, and then you use a random gen- number generator to select either left or right to accelerate on the inverted pendulum depending on the policies- no, depending on the probabilities output by this policy. Okay. Um, and so here's one example. Um, let's see which is you can have [BACKGROUND]. So, you know, continuing with the inverted pendulum, here's one policy that, um, [BACKGROUND] might be reasonable, uh, where you say that, um, let's see. Right. So, you know, in a state s, the chance that you take the accelerate right action is given by this sigmoid function. And the chance that in a state s, you take the accelerate left action is given by that. Okay. Um, and here's one example for why this might be a reasonable policy. So let's say the state vector s is 1, x, x dot phi, phi dot, um, where, you know, this angle of the inverted pendulum, um, is the angle phi. And let's say for the sake of argument that we set the parameter of this policy phi to be, um, 0, 0, 0, 1, 0. So in this case, this is saying that, um, let's see, so theta transpose s is just equal to phi, right? And so in this case, uh, right, because, you know, theta transpose s is just 1 times phi, everything else gets multiplied by 0. And so in this case is saying that the chance you accelerate to the right is equal to 1 over 1 plus e to the negative, how far is the pole tilted over to the right. Um, and so this policy gives you the effect that the further the pole is tilted to the right, the more aggressively you want to accelerate to the right, okay? So this is a very simple policy, it's not a great policy, but it's not a totally unreasonable policy, which is well, look at how far the pole is tilted to the left or the right, apply sigmoid function, and then accelerate to the left or right, you know, depending on how far it's tilted to the right. Um, now, uh, and, and, and because this is the, right, so this is really the chance of taking the accelerate right action as a function of the, um, pole angle Pi, right? Now, this is not the best policy because it ignores all the features other than phi. 
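A small sketch of this two-action stochastic policy and of sampling an action from it, with a hypothetical state vector chosen purely for illustration:

```python
import numpy as np

def pi_theta(theta, s):
    """Stochastic policy for the two-action inverted pendulum:
    returns P(accelerate right | s); P(accelerate left | s) is 1 minus this."""
    return 1.0 / (1.0 + np.exp(-theta @ s))

def sample_action(theta, s, rng):
    """Draw 'right' or 'left' according to the probabilities the policy outputs."""
    return "right" if rng.random() < pi_theta(theta, s) else "left"

# State vector s = [1, x, x_dot, phi, phi_dot]; with theta = [0, 0, 0, 1, 0] the
# policy accelerates right more aggressively the further the pole tilts right.
rng = np.random.default_rng(0)
theta = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
s = np.array([1.0, 0.3, 0.0, 0.2, -0.1])          # hypothetical state
print(pi_theta(theta, s), sample_action(theta, s, rng))
```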
Um, but if you were to set theta equals, you know, 0, negative 0.5, 0, 1, 0, then this policy, um, the negative 0.5 now multiplies into the x position. Right. Uh, now this new policy if you have this value of theta, it takes into account how far is your cart is already to the right, um, where I guess this is the x distance, right? And the further your cart is already, I guess if, if your cart is on a set of rails, right, is on a set of railway track. And you don't want to fall off the rail- and you want to keep the cart kind of centered, you don't want it to fall off the end of your table. But this now says the further this is to the right already well, the less likely you should be to accelerate to the right. Okay? And so maybe this is a slightly better policy than with this set of parameters. And more generally, what you would like is to come up with five numbers that tells you how to trade off, how much you should accelerate to the right based on the position, velocity, angle, and angular velocity, um, of the current state of the cart- of the, of the inverted pendulum. And what a direct policy search algorithm will do is, um, help you come up with a set of numbers that results in hopefully a reasonable policy for controlling the inverted pendulum. Hope- and in a policy that hopefully result in a appropriate set of probabilities that cause it to accelerate to the right whenever it's good to do so and accelerate to the left, you know, more often when it's good to do so. Okay. So, um, all right. So our goal is to find the parame- find parameters theta so that when we execute pi of s, a, um, we maximize, well, max over theta the expected value of R of s_0 is 0 plus dot, dot, dot, plus, okay? Um, and so the reward function could be negative 1 whenever the inverted pendulum falls over, uh, and 9 whenever it stays up that of, of, of, whatever, or something that measures how well your inverted pendulum is doing. But the goal of a direct policy search algorithm is to choose a set of parameters theta so that we execute the policy, you maximize your expected payoff. And I'm gonna use the finite horizon setting, um, for the algorithm that we'll talk about today. Okay? Uh, and then one, one other difference between policy search compared to, um, estimating the value function is that in direct policy search here s_0 is, um, a fixed initial state, okay? Um, it turns out that when we were estimating the value function v-star, um, you found the best possible policy for starting from any state. Right. And there's kind of no matter what state you start from is simultaneously the best possible policy for all states. In direct policy search, we assume that either there's a fixed start state- fixed initial state s0 or there's a fixed distribution over initial state. So I'm gonna try to maximize the expected reward with respect to your initial state or respect to your initial probability distribution over what is the initial state. Okay. So that's, that's one other, um, difference. So, um, let me think how I'm going to do this. All right. So let's write this out. The goal is to maximize overall theta, the expected value of R of s_0, a_0, plus R of s_1, a_1, plus dot, dot, dot up to R of sc, aT um, you know, given pi theta. And, um, in order to simplify the math we'll write on this board today, um, I'm just going to set T equals 1 to simplify the math, uh, in order to not carry such a long summation. 
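Written out in one place, the objective being set up here is, in the notation used on the board (stated for general T; the derivation below takes T = 1):

```latex
\max_{\theta}\;\;
\mathbb{E}\!\left[\, R(s_0,a_0) + R(s_1,a_1) + \cdots + R(s_T,a_T) \;\middle|\; \pi_\theta \right],
\qquad a_t \sim \pi_\theta(s_t,\cdot), \quad s_{t+1} \sim P_{s_t a_t},
```

where s_0 is the fixed initial state (or is drawn from a fixed initial distribution), and the actions are drawn from the stochastic policy on each step.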
But it turns out that, um, uh, so I'm just gonna do like a 2 times set MDP, uh, just to simplify the derivation, but everything works, you know, just with a longer sum if you, uh, have a more general version of T. Okay. Um, and so this term here, the expectation is equal to sum over all possible state action sequences, right? And again, this will go up to sT and aT. But as we said T equals 1 of, um, what's the chance your MDP starts out in some state s_0? So this is your initial state distribution times the chance that in that state you take the first action a_0- oh, actually sorry. Let me just- let me write this out. Right. So the chance of your MDP going through this state action sequence, right, times, times that, right. So that's what it means to sort of compute the expected value of, uh, the payoff. Um, and so instead of writing out this sum, I'm just gonna call this the payoff, right? And so this is equal to sum of s_0, a_0, s_1, s_1, a_1 of the chance your MDP starts in state 0, times the chance that in state 0, you end up choosing the action a_0 times, um, uh, the chance governed by the state transition probabilities that you end up in state 1, uh, state s_1, times the chance at state s_1 you end up choosing, let's see, s_1 and then times the payoff, okay. And so what we're going to really do is, um, derive a gradient ascent algorithm- actually a stochastic gradient ascent algorithm as a function of theta to maximize this thing- to maximize the expected value of this thing. And that- and this is a, um, this is how we'll do direct policy search. Okay. So let me just write out the algorithm, and then we'll go through why, um, the algorithm that I write down is maximizing this expected payoff. [NOISE]. So this algorithm is called the, um, reinforce algorithm. Ah, the objective of the reinforce algorithm, um, uh, had a few other bells and whistles, but, but I'm gonna to explain the core of the idea. But the reinforcing- the reinforce algorithm, um, does the following which is you're going to run your MDP, right? And just you know run it for a trajectory of T timesteps. So, um, again, you know, I'm just gonna, [NOISE] well. Right. And and actually you would, uh, right. Technically, you would, um, run it for T timesteps but, you know, let, let's just say for now, we'll - we'll do only the thing in blue. We run it for one timestep, because we set capital T equal to 1. Um, and then you would compute the payoff, right, equals R of s0 + R of s1 and again, in the more general case, you know, plus dot dot dot plus R of st right? [NOISE] And then you perform the following update which is Theta gets updated as Theta plus the learning rate alpha, times. Right? Um, and then times the payoff. Right? And again, I'm just setting capital T equals 1. If capital T was bigger, you would just sum this all the way up to time T. Okay? So that's the algorithm. Um, that's on every iteration through the reinforce algorithm, through the reinforce algorithm, you will take your robot, take your inverted pendulum, um, run it through T timesteps, uh, executing your current policy, so choose actions randomly according to the current stochastic policy using current values of the parameters Theta, compute the total sum of rewards you receive, that's called a payoff and then update Theta using this funny formula. Right? Now, on every iteration of this algorithm, um, you're going to update Theta. And it turns out that reinforce is a stochastic gradient ascent algorithm. 
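Here is one possible rendering of that loop in code for the two-action logistic policy above, using the identity that the gradient of pi divided by pi equals the gradient of log pi; the env_reset / env_step interface is hypothetical and stands in for running the real or simulated pendulum for a trajectory:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_log_pi(theta, s, a):
    """Gradient of log pi_theta(s, a) for the two-action logistic policy:
    pi(right | s) = sigmoid(theta^T s), pi(left | s) = 1 - sigmoid(theta^T s)."""
    p_right = sigmoid(theta @ s)
    return (1.0 - p_right) * s if a == "right" else -p_right * s

def reinforce(env_reset, env_step, theta, alpha=0.01, horizon=1, episodes=1000, seed=0):
    """Sketch of the REINFORCE loop described above (env_reset/env_step are
    hypothetical): roll out one trajectory with the current stochastic policy,
    compute the payoff R(s_0,a_0) + ... + R(s_T,a_T), then update
    theta := theta + alpha * (sum_t grad log pi(s_t,a_t)) * payoff."""
    rng = np.random.default_rng(seed)
    for _ in range(episodes):
        s = env_reset()
        grads, payoff = np.zeros_like(theta), 0.0
        for _ in range(horizon + 1):                       # T + 1 timesteps
            a = "right" if rng.random() < sigmoid(theta @ s) else "left"
            grads += grad_log_pi(theta, s, a)
            s, r = env_step(s, a)                          # transition and reward
            payoff += r
        theta = theta + alpha * grads * payoff             # noisy but unbiased step
    return theta
```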
Um, and you remember when we talked about, uh, linear regression, right? You saw me draw pictures like this. It is a global minimum. Then uh, gradient descents with just, you know, take a straight path to the minimum, but stochastic gradient descent would take a more random path right towards the minimum and it kind of oscillates around then, maybe it doesn't quite converge unless you slowly decrease the learning rate alpha. So this is what we have for stochastic gradient descent, um, for linear regression. What we'll see in a minute, is that reinforce is a stochastic gradient ascent algorithm meaning that each of these updates is random, because it depends on what was this state action sequence that you just saw and what was the payoff that you just saw. But what will this show is that on expectation, the average update. You know, this- this update to Theta. This thing you are adding to theta, that on average let's see, that- that on average this update here is exactly in the direction of the, um, gradient. So that on average, um, you know, because, uh, every-every loop, every time through this loop you're making a random update to Theta and it's random and noisy because it depends on this random state sequence. Right? That and this state sequence is random because of the state transition probabilities and also because of the fact that you're choosing actions randomly. But on- but the expected value of this update, uh, you'll see in a little bit it turns out to be exactly the direction of the gradient. Um, which is why this, uh, reinforce algorithm is a gradient ascent algorithm. Okay? So let's, uh, let's show that now. Okay. So [NOISE] all right. So what we want to do is maximize the expected payoff which is a formula we derive up there and so, um, we're going to, want to take derivatives with respect to Theta of the expected pay-off. Right? Of, uh, I'm just gonna copy that formula up there [NOISE]. Okay? So there's a chance of that, you're going through that state-action sequence times the pay off. And so we want to take derivatives of this and, you know, so we can like go uphill using gradient ascent. Um, so we're going to do this in, uh, four steps. Um, now, first, um, let me remind you when you take the derivative of three, of- of a product of three things. Right? So let's say that you have, uh, three functions, f of Theta times g of Theta times h of Theta. So by the product rule of, um, you know, derivatives product rule from calculus, the derivative of the product of three things is obtained by, um, you know, taking the derivatives of each of them one at a time. Right? So this is f prime times g times h plus, um, g prime here plus h prime. Okay? So the product rule from calculus is that if you want to take derivatives of a product of three things, then you kind of take the derivatives one at a time and you end up with three sums. Right? And so we're going to apply the product rule to this where, um, we have- here we have two different terms that depend on, um, Theta, and so when we take the derivative of this thing with respect to theta, we're gonna have two terms. Uh, that correspond to taking derivative of this ones and taking the derivative of that ones. Right? And so, um, this derivative is equal to, so the first term is the sum over all the state action sequences, um, P of s0, um, and then let's see. So now we have pi of Theta, excuse me. The derivative with respect to pi Theta, s0, a0. Right? And then plus, um, [NOISE]. Right? And then times the payoff. Right? 
So the whole thing here is then multiplied by the payoff. Okay? So we just apply the product rule for calculus where, uh, for the first term in the sum, we kinda took the derivative of this first thing and then for the second term of the sum we took the derivative of this second thing. Okay? And now, um, I'm gonna make one more algebraic trick which is, I'm going to multiply and divide by that same term, and then multiply and divide by the same thing here. Right? So lots of multiply, multiply and divide by the same thing. Right? And then finally, um, if you factor out. So now, the final step is, um, I'm- I'm gonna factor out these terms I'm underlining. Ah, right? Because this terms I underlined, this is just you know, the probability of the whole state sequence. Right? And again, for the orange thing, this this orange thing. Right? These two orange things multiplied together is equal to that orange thing on that box as well. And so the final step is to factor out the orange box which is just P of s0, a0, s1, a1, right? So that's the thing I boxed-up in orange times, then those two terms involving the derivatives [NOISE]. Times the payoff [NOISE]. Okay? And I think, ah, right, where- because I guess this term goes there, [NOISE] and this term goes there, okay? And so this is just equal to, um, well- and if you look at the reinforce algorithm, right, that we wrote down, ah, this is just equal to sum over, you know, all the state action sequences times the probability of the gradient update, right. [NOISE] Because, ah, I guess I'm running out of colors. But, you know, this is a gradient update and that's just like equal to this thing, okay? So what this shows is that, um, even though on each iteration the direction of the gradient updates is random, um, the, ah, the expected value of how you update the parameters is exactly equal to the derivative of your objective, of your expected total payoff, right. So we started saying that this formula is your expected total payoff, um, so let's figure out what's the derivative of your expected total payoff, and we found that the expected- the- the derivative, your expected total payoff, the derivative of the thing you want to maximize is equal to the expected value of your gradient update. And so this proves that, um, on average, you know, if you have a very small learning rate, you end up averaging over many steps, right? But on average, the updates that reinforce is taking on every iteration is exactly in the direction of the derivative of the, um, expected total payoff that you're trying to maximize, okay makes sense? Yes, any questions about this? Yeah. [inaudible]. Oh, it is independent of the choice of its function. Um, this is true for any form of a stochastic policy, ah, where the definition is that, you know, Pi Theta of s0, [NOISE] ah, a0 has to be the chance of taking that action in that state, but this could be any function you want. Ah, it could be a softmax, it could be a logistic function of many, many different complicated features, it could be- or it has to be a continuous de- or it has to be a differential function. And actually one of the reasons we shifted to stochastic policies was because, um, previously just have two actions, is either left or right, right? 
And so you can't define a derivative over a discontinuous function like either left or right, but now we have a probability that shifts slowly between what's the probability to go left versus go right, and by making this a continuous function of Theta, you can then take derivatives and apply gradient ascent, but it does need to be a differentiable function. Yeah, go ahead. Ah, [inaudible]? Sure. So, um, ah, another way to train a, um, helicopter controller is you use supervised learning, where you have a human expert train, um, you know, so you can also actually have a human pilot demonstrate in this state, take this action, right, and then you use supervised learning to just learn directly a mapping from a state to the action. Um, I think this, I don't know, this might be okay for low speed helicopter flight, I don't think it works super well, ah, I bet you could do this and not crash a helicopter, but, ah, um, ah, ah, but to get the best results, I wouldn't use this approach, um, yeah. It turns out for some of the maneuvers it'll actually fly better than human pilots as well, um, yeah, no. Cool. All right. Um, and so, um, for other types of policies, um, let's see, right. [NOISE] So, ah, direct policy search also works, um, if you have continuous-valued actions and you don't want to discretize the actions. So here's a simple example. Let's say a is a real number, ah, such as the magnitude of the force you apply to accelerate left or right. All right. So rather than discretizing, for your inverted pendulum, you wanna output a continuous number for how hard you swerve to the left or right. Um, or for a self-driving car maybe a is the steering angle, which is a real-valued number. So a simple policy would be a equals, you know, Theta transpose S, um, and then plus [NOISE] Gaussian noise. And if, just for the purpose of training, you're willing to pretend that your policy is to apply the action Theta transpose S and add a little bit of Gaussian noise to it, then, um, the whole reinforce framework and this type of gradient ascent, ah, will, will also work, great, um, and then I guess if you're actually implementing this, you can probably turn off the Gaussian noise at deployment time, there, there are little tricks like that as well. Um, so let's see. Some pros and cons of, um, so, whe- whe- when should you use direct policy search and when should you use value iteration or a value function based type of approach? Um, so it, ah, turns out there's one setting, ah, actually there are two settings where direct policy search works much better. One is if you have a, um, POMDP, ah, PO in this case stands for partially observable. [NOISE] And that's if, for example, um, you know, for the inverted pendulum, um, the pole has an angle Phi, and you have, you have a cart and this is your position x. Um, and what this is saying is that the state space is, ah, x, x dot, Phi, Phi dot. All right? [NOISE] But let's say that, um, you have sensors on this inverted pendulum that allow you to measure only the position and only the angle of the inverted pendulum. [NOISE] Uh, so you might have an angle sensor, you know, down here and you may have a position sensor for your inverted pendulum, but maybe you don't know the velocity or you don't know the angular velocity, right. So this is an example of a partially observable Markov decision process because, ah, and what this means is that on every step, you do not get to see the whole state, because you, you don't have enough sensors to tell you exactly what is the state of the entire system, okay?
So in a partially observable MDP, um, at each step, you get a partial and potentially noisy measurement of the state, right, and then have to take actions, or, have to choose an action a, [NOISE] using these partial and potentially noisy measurements, right? Which is, uh, maybe you only observe the position and the angle, but your sensors aren't even totally accurate. So you get a slightly noisy, you know, estimate of the position. You get a slightly noisy estimate of the angle, but you just have to choose an action based on your noisy estimates of just two of the four state variables, right? Um, it turns out that there's been a lot of academic literature trying to generalize value function based approaches to POMDPs, ah, and there are very complicated algorithms in the literature on trying to apply value function based approaches to POMDPs. But those algorithms, despite their very high level of complexity, you know, are not- are not widely in production, right? Um, but if you use the direct policy search algorithm, then there's actually very little problem. Oh, let me just write this out. So let's say the observation is, on every timestep you observe y equals x, Phi plus noise, right? So you just don't get to see the whole state. And in a POMDP it's hard to even approximate a value function. And even if you knew what V star was, right, you can't compute the action from Pi star because- uh, even if you could somehow compute V star and Pi star, if you don't know what the state is, you can't apply Pi star to the state. So- so how do you choose an action? Um, if you're using direct policy search, then here's one thing you could do. Which is, you can say that, uh, Pi of, um, given an observation, the chance of going to the right given your current observation is equal to 1 over 1 plus e to the negative Theta transpose y, where I guess y would be 1, x plus noise, Phi plus noise, right? And so you could run reinforce using just the observations you have to, um, try to- stochastically try to randomly choose an action, and nothing in the framework we talked about prevents this algorithm from working. And so direct policy search just works very naturally even if you have only partial observations of the state. Um, and more generally, instead of plugging in the direct observations, this can be any set of features, right? I'll just make a side comment for those who don't know what Kalman filters are. Don't worry if you don't. But one common- one common way of, uh, uh, using direct policy search would be to use some state estimator such as a Kalman filter, or a probabilistic graphical model or something, to use your historical estimates. Look, don't, don't just look at your one, uh, set of measurements now, but look at all the historical meas- measurements. And then there are algorithms such as something we call the Kalman filter that let you estimate what the current state is, the full state vector. You can plug that full state vector estimate into the features you use to choose- to choose an action. That's a common design paradigm. If you don't know what the Kalman filter is, don't worry about it. Ah, you can take- take one of Stephen Boyd's classes or something, I don't know. Yeah, right. But, but that's one common paradigm where you can use your partial observations to estimate the full state and plug that as a feature into the policy search, okay? So that's one setting where direct policy search works.
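Written out, the observation model and observation-based policy just described are roughly

$$
y_t = \big(1,\; x_t + \text{noise},\; \phi_t + \text{noise}\big), \qquad
\pi_\theta(a = \text{right} \mid y_t) = \frac{1}{1 + e^{-\theta^\top y_t}},
$$

and reinforce is then run with the noisy partial observation y_t (or a Kalman-filter style estimate of the full state) plugged in wherever the full state would have gone, which is why direct policy search carries over to the partially observable setting so directly.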
Um, just, just applies in a way that value function approximation is very difficult to even get to apply. Um, now one last thing is, uh, one last consideration so should you apply search policy search algorithm or a value function approximation algorithm? Oh, it turns out, um, the reinforce algorithm is, is actually very inefficient. Ah, as in, ah, you end up, you know, whe- when, when you look at research papers on the reinforce algorithm, it's not unusual for people that run the reinforce algorithm for like a million iterations, or 10 million iterations. So you just have to train. It turns out that gradient estimates for the reinforce algorithm even though the expected value is right, it's actually very noisy. And so if you train the reinforce algorithm, you end up just running it for a very, very, very long time, right? It does work but it is a pretty inefficient algorithm. So that's one disadvantage of the reinforce algorithm is that the gradient estimates on expectation are exactly what you want it to be, but there's a lot of variance in the gradient. So you have to run it for a long time for a very small learning range. Um, but one other reason to use, um, direct policy search is, is kind of ask yourself, do you think Pi star is simpler? Or is V star simpler, right? And so, um, here's what I mean, there are, ah, ah, ah, there- in, in robotics, there's sometimes what we call low-level control tasks. And, uh, one way to think of low-level control task is flying a helicopter. Hovering a helicopter is example of a low-level control task. And one way to inform of when you think of low-level control task is kind of a really skilled human, um, you know, holding a joystick. Control this thing, making seat of the pants decisions, right? So those are kind of almost instinctual, in a tiny fraction of a second, almost by feel you could control the thing. Those, those are- tend to be low-level control tasks. Those are seat of the pants, holding a joystick, a skilled person could balance the inverted pendulum or, you know, steer a helicopter. Those are low-level control tasks. In contrast, um, playing chess is not a low-level control task. You know, because for the most part, a very good chess player is not really a seat of the pants, you know, take that- make a decision in like- in, in 0.1 seconds, right. You kind of have to think multiple steps ahead. Um, and in low-level control tasks, there's usually some control policy that is quite simple. A very fun- simple function mapping some state actions, that's pretty good. And so that allows you to specify a relatively simple class of functions of Pi star and direct policy search would be relatively promising for tasks like those. Whereas in contrast, if you want to play chess or play Go, or do these things where we have multiple steps of reasoning, um, I think that, if you're driving a car on a straight road, that's a low-level control task. Where you just look at the road and you just, you know, you know turn the steering a little bit to stay on the road. So that's a low-level control task. But if you are planning how to, um, you know, overtake this car and avoid that other car, or there's a pedestrian and a bicycle is along the way, then that's less of a low level control task. Um, and that requires more multi-step reasoning, right? I guess depends on how aggressive of a driver you are, right? Driving on the highway, you know, may require more or less multistep reasoning. 
Where you want to, ah, overtake this car before the truck comes in this lane. So that- that type of thing is, um, more multi-step reasoning. Um, and approaches like that tend to be difficult for a very simple like a linear function to be a good policy. And for those things in playing chess, playing Go, playing checkers, um, a value function approximation approach may be more promising. Okay, um, cool. So any, um, questions about the- oh, and so, um, okay for, for, ah, autonomous helicopter flight, ah, actually, my first attempt for flying helicopters were actually a direct policy search because flying helicopters, are actually a seat of the pants thing. Ah, but then when you try to fly more complex maneuvers, then you end up using something maybe closer to value function approximation methods if you want to fly a very complicated maneuver, right? Um, oh, so the video you saw just now, of the helicopter flying upside down, the algorithm implemented on, you know, for that particular video that was a direct policy search algorithm, right? Not, not exactly this one, a little bit different. But that was a direct policy search algorithm. But if you want the helicopter to fly a very complicated maneuver, then you need something maybe closer to a value function approximator. And so the- and there is exciting research on how to blend direct policy search approaches together with value function approximation approaches. So actually AlphaGo. Ah, ah, ah, ah, one of the reasons AlphaGo works was, um, sorry, ah, you know Go playing program, right, by DeepMind, ah, was, was a blend of ideas from both of these types of literature which enabled it to scale to a much bigger system to play Go and, you know clearly at a very, very impressive level. All right. Any questions about this, anyone? [NOISE] All right. Um, so just final application examples, um, you know, reinforcement learning today, um, is, uh, making strong- let's see. So there's a lot of work on reinforcement learning for game playing, Checkers, Chess, um, uh, Go. That is exciting, um, reinforcement learning today is used in, uh, is used in a growing number of robotics applications, um, I think for controlling a lot of robots. Um, there is a, uh- if you've go to the robotics conferences, if you look at some of the projects being done by some of the very large companies that make very large machines, right. Uh, I have many friends in multiple, you know, large companies making large machines that are increasingly using reinforcement learning to control them. Um, there is fascinating work, uh, using reinforcement learning for optimizing, um, entire factory deployments. Um, there is, uh, academic research work, uh, still in research for a class, I know, actually may- maybe Science to be deployed on using reinforcement learning to build chatbots. Um, uh, uh, and actually, on, on, on using reinforcement learning to, uh, build a, uh, AI-based guidance counselor, for example, right, where, uh, the actions you take up, of what you say to students, and then, and then the reward is, you know, do you manage to help a student navigate their coursework and navigate their career. Uh, there is, uh, uh, and that's also starting to be applied to healthcare, where- one of the keys of reinforcement learning is, this is a sequential decision making process, right? Where, do you have to take a sequence of decisions that may affect your reward over time? 
And I think um, uh, and, uh, in, in healthcare, there is work on medical planning, where, um, the goal is not to, you know, send you to get, uh, a blood test and then we're done, right? In, in, in complicated, um, medical procedures, we might essentially get a blood test, then based on the outcome of the blood test, we might send you to get a biopsy or not, or we might ask you to take a drug and then come back in two weeks. But this is a very complicated sequential decision-making process for a treatment of complicated healthcare conditions, and so there's fascinating work on trying to apply reinforcement learning to this set of multi-step reasoning, where it's not about, well, we'll send you for a treatment and then you'll never see again for the rest your life. It's about here's the first thing you do then come back, let's see what state you get to after taking this blood test, or let's see what- state you get to after trying a drug, and then coming back in a week to see what has happened to symptoms. But I think that, um, these are all sectors where reinforcement learning, uh, is making inroads, um, or, or even actually, stock trading. Okay, maybe not the most inspiring one, but one of my friends, um, on the East Coast was, uh, uh, in, in- was, uh- and just actually, if, if you or your parents, uh, invest in mutual funds, this may be being used to, um, buy and sell shares with them today, depending on what bank they are investing. I know what bank is doing this, but I won't say it out loud. Uh, but, um, uh, uh, but, uh, if you want to buy or sell, you know, say, a million shares of stock, a, a very large volume of stock, you may not want to do it in a very public way because that will affect the price of the shares, right? So everyone knows that a very large investor is about to buy a million shares or buy 10 million shares or whatever, that will, um, cause the price to increase, uh, and this, this is, this disadvantages the person wanting to buy shares. But so that's been very interesting work on using reinforcement learning to, um, decide how to sequence out your, your buy, how to buy the stock in small lots, and this trading market is called dark pools. You could Google if you're curious. Actually, don't bother, uh, uh, uh, to, to try to, um, buy a very large lot of shares. Also, sell a very large lot of shares without affecting the market price too much because the way you affect the market price always breaks against, you know, is always against you, it's always bad, right. Um, so there's work laid down as well. So anyway, I think, um, uh, many applications- I personally think that one of the most exciting areas for reinforcement learning will be robotics, but, uh, we'll, we'll see what, what happens over the next few years, right? Okay. All right. So let's see, well, just five more minutes. Um, and, and just to wrap up, I think, you know, um, uh, we've gone through quite a lot of stuff. I guess, uh, for supervised learning to, uh, learning theory, and advice for applying learning algorithms to unsupervised learning, although- was it, K-means, PCA, uh, EM mixture of Gaussians, uh, factor analysis, and PrinCo analysis to most recently, reinforcement learning with the value function approaches, fitted value iteration, policy search. So, um, feels like we did, feels like, feels like- I- feels like you've seen a lot of learning algorithms. Um, go ahead. [inaudible] How is reinforcement learning compared to adversarial learning? I think of those as pretty distinguished literatures. 
Uh, uh, yeah, yeah, so I think, uh, and again, actually, I, I, I know a lot of non-publicly known facts about the machine learning world, but, uh, one of the things that I actually happen to know is that, er, uh, some of the ideas in adversarial learning, uh, uh, you know, so can you tweak a picture by, you know, very little bit, by tweaking a bunch of pixel values that are not visible to human eye, that fools the learning algorithm into thinking that this picture is actually a cat, when it's clearly not a cat or whatever. So I actually know that there are attackers out in the world today using techniques like that to attack, you know, websites, to try to fool, um, uh, ah, you know- some of the websites I'm pretty sure you guys use, and fool their anti-spam, anti-fraud, anti-undermining democracy types of algorithms into, um, [LAUGHTER] into making poor decisions. Uh, so, so it's an exciting to time to do machine learning, right? [LAUGHTER] That, that, we get to fight battles like these [LAUGHTER]. Yeah, I'm sorry. Um, uh, okay, um, and, and I think, you know, I think with, with- really, I think that with the things that you guys have learned in machine learning, I think all of you, um, uh, are now very knowledgeable, right? I think all of you are experts in all the ideas of core machine learning, and I hope that, um- I, I think you- when you look around the world, there are so many worthwhile projects you could do with machine learning and the number of you that know these techniques is so small that I hope that, um, you take these skills. Um, and some of you will go, you know, build businesses and make a lot of money, that's great. Some of you will take these ideas and, uh, help drive basic research at Stanford or at other institutions. I think that's fantastic. But I think whatever you are doing, the number of worthwhile projects on the planet is so large and the number of you that actually know how to use these techniques is so small that I hope that, um, you take these skills you're learning from this course and go and do something meaningful, go and do something that helps other people. Um, I think we are seeing in the Silicon Valley that there a lot of ways, you know, to build very valuable businesses, uh, and some of you do that and that's great, but I hope that you do it in a way that helps other people. Um, uh, I think, er, over the past few years we've seen, uh, um- I think that, er, in Silicon Valley, maybe 10 years ago, the contract we had with society was that people would trust us with their data and then we'll use their data to help them. But I think in the past year, that contract feels like it has been broken and the world's faith in Silicon Valley has been shaken up, but I think that places even more pressure on all of us, on all of you, to make sure that, um, the work you go out in the world to do is work that actually is respectful of individuals, respectful of individual's privacy, is transparent and open, and ultimately is, uh, helping drive forward, um, uh, humanity, or helping people, or helping drive forward basic research, or building products that actually help people, rather than, um, exploit their foibles for profit but to their own harm. So I hope that all of you will take your superpowers that you now have, and, um, go out to, to, to do meaningful work. Um, and let's see, um, and I think, uh- oh, and, and lastly, just, just on a personal note, I want, to, you know, thank all of you. 
On behalf of the TAs and the whole teaching team and myself, I want to thank all of you for your hard work. Uh, sometimes, they go over the homework problems. They look at probably some of these problems and go, "Wow, there she got that problem, I thought that was really hard," or the project milestones go, "Hey, that's really cool, look forward to seeing your final project results at the final poster session." So, um, I know that all of you have worked really hard. Uh, uh, and if you didn't, don't tell me, but I think almost all of you have [LAUGHTER] but, but, but, I will make sure you know there's a- I think it wasn't that long ago that I was a student, you know, working late at night on homework problems and, and I know that many of you have been doing that, uh, for the homework, for studying for the midterm, um, for working on your final term project. So, um I want to make sure, um, you know I'm very grateful for the hard work you put into this course, and I hope that, um- I hope that, uh, your, your hard earned skills will also reward you very well in the future, and also help you do work that, that you find as meaningful, so thank you very much [APPLAUSE]. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_18_Continous_State_MDP_Model_Simulation_Stanford_CS229_Machine_Learning_Autumn_2018.txt | All right. Hey, everyone. Welcome back. Um, so let's continue our discussion today of reinforcement learning and MDPs. And specifically, what I hope you learned from today is how to apply reinforcement learning, um, even to continuous state or infinite state MDPs. Um, so we'll talk about discretization, model-based RL, we'll talk about models/simulation and fitted value iteration is the main algorithm, um, I want to lead up to for today. Um, just to recap. Because we're gonna build on what we had learned, uh, in the last two lectures, I wanna make sure that you have the notation fresh in your mind. Um, MDP was states, actions, transition probabilities, discount factor reward. That was an example. Um, V Pi was the value function for a policy Pi which is the expected payoff. If you execute that policy starting from a state S and V star was the optimal value function. And last time, we figured out that if you know what is V star, then uh, Pi star, the optimal policy or the optimal action for a given state, can be computed as the argmax of that, right? Um, and, uh, one, one, one thing though we'll come back to later is, uh, an equivalent way of writing that formula, is that this is the expectation with respect to S prime drawn from P_sa of V star of S prime, right? So when we go to, uh, er, er, we've been, we have been working with discrete state MDPs with an 11 state MDP. So this is the sum over all the states S prime. But when we have- when we go to continuous state MDPs, the generalization of this or what this becomes- this is the expected value with respect to S prime drawn from the state transition probabilities here with- of, uh, index Pi_sa, covers state, covers that action of the value that you attain in the future. So V star of S prime. Okay? Um, and then we saw the value iteration algorithm. We're also- so we talked about valuation policy iteration. But today, uh, we're built on value iteration, but the value iteration algorithm uses Bellman's equations, uh, which says, take the left-hand side, set it to the right-hand side, right? And, uh, for, for V star, if V was equal to V star of the left-hand side is equal to the right-hand side, that was, um, oh, I'm sorry. It's missing a max there. Right? Um, for- if V was equal to V star, then the left-hand side and the right-hand side will be equal to each other. But, uh, what value iteration does is an algorithm that initializes V of S as 0 and repeatedly carries its update until V converges to V star. And after that, you can then, um, compute Pi star or find for every state find, the optimal action A. Okay? So um, because we're gonna build on this notation and this set of ideas today, I just want to make sure all this makes sense, right? Any questions about that before we move on? No? Okay. Cool. All right. So, um, no? No. Okay. So everything we've done so far was built on the MDP having a finite set of states. Right? So with the 11 state MDP, S was a discrete set of states. Um, last time on Monday, I think somebody asked, "Well, how do you deal with continuous states?" So we'll, we'll, we'll work on that today. But, uh, let's say you want to build a, um, [NOISE] uh, let me draw a car. Right? Let's say you want to build a ar, you know, maybe a self-driving car. Right? Um, ah, the state space of a car is, um, let's see. 
I'm gonna- well, instead of taking the- my artistic side view of the car, if you take a top-down view of a car. Right? So this is from the satellite imagery, you know, top-down view of a car where we have two views of the car heading this way. Um, how do you model the state of a car, right? Well, a common way to model the state of a car that's driving around on planet Earth, is that you need to know the position. Right? Um, and so that can be represented as x, y, uh, two numbers represent, you know, longitude or latitude or or something. Right? Um, you probably want to know the, uh, orientation of a car by maybe measured relative to North, you know, what's the orientation of [NOISE] the car? Um, and then it turns out if you're driving at very low speeds, this is fine. But if you're driving at anything other than very low speeds, then, um, [NOISE] we'll often include in the state-space, also the velocities and angular velocity. So x dot is the velocity in the x direction. So x dot is dx, dt. Right? Or it's the velocity and acceleration, y dot. So velocity in y direction and Theta dot is the angular velocity, the rate at which your car is turning. Okay? And it's sort of, um, up to you, how you want to model the car, is it important to model the current angle of the steering wheel, is it important to model how worn down is your front left tire as opposed to how worn down is your, your right tire. So depending on the application you are building, is up to you to decide what is the, um, state-based- state-space you want to use to model this car, um, and I guess- and, and if you are building a car to, to, to race in a racetrack, maybe it is important to model what is the temperature of the [NOISE] engine and how, you know, worn down is each of your four tires separately. But for a lot of normal driving, uh, this would be, uh, uh, you know, sufficient level of detail to model the state-space. Okay? Um, but so this is a, uh, six-dimensional, uh, um, so this is a, uh, six-dimensional state-space representation. Oh, and for those that work in robotics, uh, that would be called the kinematic model of the car, and that would be the dynamics model of the car. Right? If, if you want to model their velocities as well. Um, or let's see actually, all right. How about a helicopter? Right? I don't know. I hope this is a helicopter. All right. Ah, the states- how, how, how do you model a state-space of a helicopter? Helicopter flies around in 3D rather than drives around in 2D. And so common way to model the state-space helicopter would be to model it as- [NOISE] having a position x, y, z. Um, and then also, a 3D orientation of a helicopter is usually modeled with, uh, three numbers which we sometimes call the roll, pitch, and yaw. Right? So you're- if, if, if you're in an airplane roll is that you are rolling to left or right, pitch is are you pitching up and down, and yaw is, you know, are you facing North, South, East or West, right? So there's one way to turn the three dimensional orientation of an object like an airplane or a helicopter into three numbers. So, so, uh, er, the, the details aren't important. And if you actually work on a helicopter, you would figure this out. But for today's purposes it's just- right, uh, I- I guess the [NOISE] roll, pitch, yaw. Right? But that, uh, um, to represent the, uh, orientation of a three-dimensional object flying around, this is conventionally represented with three numbers, uh, such as roll, pitch and yaw. 
Um, and then [NOISE] x dot, y dot, z dot, Phi dot, Theta dot, Psi dot. They're linear velocity and the, um, angular velocity. Okay? Um, [NOISE] maybe just one last example. So it turns out in, in, in reinforcement learning, uh, maybe early, early history of reinforcement learning, one of the problems that a lot of people just happened to work on, um, uh, and, and, and therefore, you'd see in a lot reinforcement learning textbooks, there's something called the inverted pendulum problem. But what that is, is a little toy, um, which is a little cart, that's on wheels, that's on a track, um, and you have a little pole that is attached to this cart and there's a free [NOISE] swivel there, right? Uh, and so this pole just flops over or this poll just swings freely and there's no motor, uh, there's no motor at this- at this little hinge there. [NOISE] And so the inverted pendulum problem is- see that I've prepared this. Right? Is- [LAUGHTER] no. I've always prepared this.. If, if, if you have, uh, if you have a free pole and if this is your cart moving left and right, the inverted pendulum problem is, you know, can you, if you see it swivel, can you kind of balance that, right? [LAUGHTER] Um, and so, uh, one of the- so common textbook examples of, uh, um, reinforcement learning is, uh, can you choose actions over time to move this left and right so as to keep the pole oriented upward? Right? And so for a problem like this, um, if you have a linear rail just a one-dimensional, you know, like a railway track that this cart is on, the state-space would be x which is the, uh, position of the cart. Um, [NOISE] Theta which is the, ah, orientation of the pole as was x dot and Theta dot. Right? So this would be a four-dimensional state-space for the- for the inverted pendulum if, if it's like running left and right on a railway track- on a one-dimensional railway track, right? Um, all right. Cool. So, uh, for all of these problems, if you want to build, you know, a self-driving car and have it do something or, um, build an autonomous helicopter [NOISE] and have it either hover stably or fly a trajectory or keep the pole upright in inverted pendulum. These are examples of robotics problems where you would model the state space as a continuous state-space. So what I wanna do today is focus on problems where the state-space [NOISE] is, um, R_n. So n-dimensional set of row numbers. And in these examples, I guess n would be 4 or 6 or 12. Right? Oh, and, uh, again, for the- for the mathematicians in this class, technically, uh, angles are not real numbers because they wrap around, and they only go to 360, and then they wrap around to 0. But I think for the purposes of today, uh, that's not important. So we just treat this as R_n. Oh, yeah. Okay. [NOISE]. [NOISE] So [NOISE] all right. Um, so the most straight- straightforward way- [NOISE] the most straightforward way to work with a continuous state space is discretization where, um, you know, you might have in this example a two-dimensional state-space, maybe, ah, x and Theta for the inverted pendulum. And then you just lay down the cell or grid values, right? And discretize it back to a- a discrete state problem. Ah, and so, you know, so you can give the states a set of names, one, two, three, four, whatever and anywhere within that little square you just pretend that your MDP in the robot is in state number 1. So this takes a continuous state problem and turns it back to a discrete state problem. Um, this is such a simple straightforward way to do it. 
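As a rough sketch of what that grid discretization might look like in code (the state bounds and the number of buckets per dimension here are arbitrary, illustrative choices):

```python
import numpy as np

def discretize(s, lows, highs, k):
    """Map a continuous state s in R^n to one of k^n grid cells, returned as a single integer index."""
    s = np.clip(s, lows, highs)
    # which of the k buckets each dimension of the state falls into
    idx = np.floor((s - lows) / (highs - lows) * k).astype(int)
    idx = np.minimum(idx, k - 1)                       # keep the upper boundary inside the last bucket
    # flatten the per-dimension bucket indices into one index in {0, ..., k^n - 1}
    return int(np.ravel_multi_index(tuple(idx), (k,) * len(s)))

# e.g. a 2D (x, theta) state with 10 buckets per dimension gives 10^2 = 100 discrete states
cell = discretize(np.array([0.3, -0.1]), lows=np.array([-2.4, -0.5]), highs=np.array([2.4, 0.5]), k=10)
```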
Ah, this is actually reasonable to do for small problems. Um, and if you have a relatively small, low-dimensional state MDP like an inverted pendulum problem, you know, four-dimensional, it's actually perfectly fine to discretize the state space and solve it this way. Ah, let me describe some disadvantages of discretization first. And then- and then we chat a little bit about when you should just use discretization, because even though it's not the best algorithm, it works fine for smaller problems. But for bigger problems, we'll have to go to more sophisticated algorithms like fitted value iteration, okay? But, um, so what are the problems with discretization, right? Well first, actually this marker is- [NOISE] this is a very- you know, it's kind of a naive representation, ah, for, ah, V_star, ah, and Pi_star, right? Which is- you know, remember the very first problem we talked about, ah, predicting housing prices? Um, imagine if x was the size of a house, and the vertical axis was the price of a house, ah, and you have a dataset that looked like this, [NOISE] all right? The- the- the discretization equivalent of trying to fit a function to this data would be to take the input feature and, um, you know, let's discretize it into five values. And for each of these little buckets, in- in each of these five intervals, let's fit a constant function, right, or something like that, right? So this staircase would be how, you know, discretization will represent the price of a house as a function of the size. Um, and the analogy is that what we're doing in reinforcement learning is we want to approximate the value function. And if you were to discretize it, then, um, on the x-axis is maybe the state. And now, I'm down to a one-dimensional state, right? 'Cause that's what I can plot. And you are saying that, well, let's approximate the value function, you know, as a- as a staircase function, as a function of the set of states, right? And you know- and this is not terrible. If you have a lot of data and very few input features, you can get away with this. This will work, okay? But, it- it- it- it doesn't, it doesn't seem to allow you to fit a smoother function, right? Um, so that's one downside. It's just not a very good representation. Um, and the second downside is the, ah, dimensionality. All right. The somewhat fancifully named curse of dimensionality, which, ah, Richard Bellman had given this name, and it's a cool sounding name. But what it means is that if, um, the state space is in R_n, um, and you discretize, you know, each dimension into k values, then you get k^n discrete states, right? So if we discretize, ah, position and orientation into 10 values, which is quite small, then you end up with, you know, 10^n states, which grows exponentially in the dimension of the state space n. So, um, discretization works fine if you have relatively low dimensional problems: like two dimensions, no problem, four dimensions maybe it's okay. But for very high-dimensional state spaces, ah, this is- this is not a good- this is not a good representation, right? And, um, it turns out the curse of dimensionality- to take a slight aside from continuous state spaces- the curse of dimensionality also applies for very large discrete state MDPs. So for example, one of the places people have applied reinforcement learning is in factory optimization, right?
So we have a factory with 100 machines in a factory, and if every machine in the factory is doing something slightly different, um, then if you have 100 machines in a giant factory, ah, each- and each machine can be in k different states, then the total number of states of your factory is, um, k to the power of 100, right? And so even if- so- so the curse of dimensionality also applies to very large discrete state spaces, such as if you have a factory with 100 machines, and then your total state space becomes k to the 100. Um, and it turns out that even for this type of discrete state space, ah, fitted value iteration can be a much better algorithm as well. We'll get to fitted value iteration in a little bit, okay? So, um, let's see. So some practical- so, ah, now despite all this criticism of discretization, if you have a small state space it's a simple method, ah, to try to apply, you know. And- and if- if you have a very small state space, go ahead and discretize it if you want quick things to try and just get something working. Ah, so let me share with you some, maybe, guidelines. Ah, this is- this is how I do it, I guess, right? If you have a, you know, two-dimensional state space or three-dimensional state space, it's no problem, just discretize. Usually, for a lot of problems, uh, it's just fine. Um, if you have maybe a 4-6 dimensional state space, um, you know, I would think about it, ah, and it will still often work. So for the inverted pendulum, which is a four-dimensional state space, it works just fine. Um, I've had some friends work on, ah, trying to, ah, drive an autonomous bicycle, right? Which you can model as a six-dimensional state space, ah, and you know, discretization, it- it kind of works- it- it- it works if you put some work into it. Ah, one of the tricks you want to use as you approach the 4-6 dimensional state space range is, ah, choose your discretization more carefully. So for example, if the state variable S2 is really important- so if you think the- the actions you need to take, or the value, or the performance is really sensitive to the state variable S2 and less to the state variable S1, then, um, in this range people end up designing, um, unequal discretizations where you might discretize S2 much more finely than S1, right? And- and the reason you do that is, ah, the number of states, the number of discrete states, is now blowing up exponentially. Something to the power of 4, something to the power of 6. And these tricks allow you to just reduce a little bit the number of discrete states you end up having to model. Um, I think, you know, if you have a 7-8 dimensional problem, ah, I- that- that's pushing it. That's when I would kind of be nervous, and- and, you know, be increasingly inclined to not use discretization. I personally rarely use discretization for problems that are eight-dimensional. Ah, and then for problems that are even higher-dimensional than this, you know, like 9, 10, and higher, then I would very seriously consider, um, ah, an algorithm that does not discretize. Very rare, um, to use discretization for- for problems this high. Even seven to eight is quite rare. I've seen it done on rare occasions but- but- and- and- and these things get worse exponentially, right, with the number of dimensions. So maybe that's a set of guidelines for when to use discretization and when to seriously consider doing something else. All right. So, um, in the alternative approach that you'll see today, ah, what you will be able to do is to approximate V star directly without resorting to discretization, okay?
And, um, uh, there'll be an analogy that we'll make later, uh, just, you know, alluding to this plot again. Right, so there's this analogy between linear regression, where you're trying to approximate y as a function of x, and value iteration, where you're trying to learn or approximate V as a function of s. Right, that's V star. Which is that in linear regression, um, you say let's approximate y as a linear function of x, right, um, or if you don't want to use the raw features x, ah, what you can do is, um, use, you know, theta transpose- theta transpose phi of x- oh, I'm sorry, got that totally mixed up. Right, where phi of x is the features of x, ah, so, um, ah, right? So this is what linear regression does, where if x describes your house, then maybe phi of x is equal to, you know, x_1, x_2, x_1 squared, x_1 x_2 and so on, right? So that's how, that's how you can use linear regression to approximate the price of a house, either as a function of the raw features or as a function of some, you know, slightly more sophisticated, slightly more complex set of features of the house. And what we will- what, what you'll see in, um, fitted value iteration is a model where we will approximate V star of s as, um, theta transpose phi of s, a linear function of features of the state. Okay? So that's the algorithm we'll build up to. And, uh, um, uh, yeah, we're going to try to use linear regression with a lot of modifications to approximate the value function. Okay? And, and, and again, in reinforcement learning, in value iteration, um, the, the- your goal is to find a good approximation to the value function, because once you have that, you can then use, you know, the equation we had earlier to compute the optimal action for every state, right? So, so we just focus on computing the value function. Now, in order to derive the fitted value iteration algorithm, um, it turns out that, uh, um, fitted value iteration, um, works best with a model or simulator of the MDP. So let me describe what that means and how you get the model, and then we'll talk about how you can actually implement the fitted value iteration algorithm and have it work on these types of problems. Okay? All right. So, um, what a model or a simulator of your robot is- is just a function that takes as input a state, takes as input an action, and it outputs the next state S prime drawn from the state transition probabilities. Okay? Um, and the way that a model is built, um, is that, um, uh, the states and the actions, uh, above, uh, uh, and, and let's see, and the way the model is built is, the state is just a real-valued vector. Okay? Oh, and, um, I think for simplicity, uh, for now let's assume that the action space is discrete. Um, it turns out that for a lot of MDPs, the state space can be very high dimensional, and the action space is much lower-dimensional than the state space. Uh, so for example for a car, you know, S is, uh, uh, six-dimensional, but the space of actions is just two-dimensional, right? The steering and braking. Uh, it turns out for a helicopter, you know, the state space is 12-dimensional. And I guess you probably mostly- I wouldn't expect you to know how a helicopter flies, but it turns out there you have, uh, four-dimensional actions in a helicopter. The way you fly a helicopter is you have two control sticks, so your left hand and your right hand, and each stick you can move, uh, uh, has two dimensions of control. And for the inverted pendulum, I guess, the state space is 4D and the action space is just 1D, right? You move left or right.
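In code, the model or simulator interface just described is nothing more than a function of this shape; the body below is only a placeholder, since how you actually implement it is exactly what the following discussion is about:

```python
import numpy as np

def simulator(s, a):
    """Sketch of the model/simulator interface: current state s (a real-valued vector) and
    action a (assumed to come from a small discrete set) in, next state s' out, sampled from
    (or approximating) the state transition probabilities P_sa."""
    s_prime = s + 0.1 * np.random.randn(len(s))   # placeholder dynamics, not a real model
    return s_prime
```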
So you actually see in a lot, um, reinforcing learning problems that it's quite common for the state-space to be much higher dimensional than the action space. And so, um, let's say for now that we do not want to discretize the state space because it's too high dimensional. But just for the sake of simplicity let's say we discretize the action space for now, right? Which is, which is usually much easier to do. But I think as we develop fitted value integration as well, uh, we'll- we'll you might- you'll get hints of when maybe you don't need to discretize your action space either, but let's just say we have a discrete, discrete action space for now. Okay? So, all right so how do you get a model, right? Um, one way to build a model is to use a physics simulator. So, um, you know in the case of an inverted pendulum, right? It turns out that, uh, uh, well the action is what's the acceleration you apply to either positive or negative or to the, to accelerate to the left or the right. Then it turns out that, um, uh, let's see, so the state space is four-dimensional, right, and it turns out that, um, if you sort of flip open a- a physics textbook Newtonian mechanics, uh, if you know the weight of the car, the weight of the pole, um, uh, uh, yeah I think that's it actually. If you know the mass of the car and the mass of the pole, uh, and the length of the pole, it turns out you can derive equations about what is the velocity, right? So it turns out S dot is equal, you know, don't- don't worry about this. Think of the math as decoration rather than something you need to learn where, you know, L is the length of the pole, M is the mass of one of these things actually don't worry about it. M is the pole mass, uh, A is the force extended and so on. Um, uh and, and a conventional physics textbook will, will, kind of let you derive these equations, uh, or, or rather than trying to derive this yourself using, uh, uh, you know, either yourself using Newtonian mechanics or finding the help of a physicist friend, uh, there are also a lot of, um, uh, open source, uh, physics simulators and software packages. Where you can download an open source simulator plug in the dimensions and mass and so on of your system, and then they'll spit out of the simulator like this. It tells you how the state evolves from one time step to another time step. All right, and so- but so in this example the simulator would say that, um, S prime is equal to S plus, you know, Delta t times I guess, uh, times S dot, where Delta t could be lets say 0.1 seconds, right? So you want to simulate this at 10 hertz, uh, so that 10, 10 updates per second so that the time difference between the current state and the next state is one-tenth of a second. Then you write a simulator like this. Okay? Um, and, but- and, and really, the most common way to do this is not to actually derive the, um, uh, physics update equations and the most common way to do this is to just download one or the open source physics engines, right? So, um, this will work okay for, uh, problems like the inverted pendulum. Um, I once used a sort of physics engines to build a simulator for a four-legged robot and manage to used reinforcement learning to get a four-legged robot to walk around, right? So it, it, it works. Although um, um, the second way to get a model is to learn it from data. All right, and I, I personally end up using this much more often. So, um, here's what I mean. 
There actually- let's say you want to build a, uh, controller for an autonomous helicopter, right? So, so really, this is a case study. And what I'm describing is real, like this will actually work, right? Uh, let's say you wanna build, uh, uh, let's say you haven't- let's say you have a helicopter and you want to build an autonomous controller for it. What you can do is, um, start your helicopter off in some state S0, right? So with, uh, GPS accelerometers, magnetic compass, you can just measure the position and orientation of the helicopter and then have a human pilot, fly the helicopter around. So the human pilot, [NOISE] you know, using control sticks, will move the helicopter. They'll, they'll, they'll command the helicopter with some action A0, and then a 10th of a second later, [NOISE] the helicopter will get to some slightly different position and orientation as one. And then the human pilot, you know, will just keep on moving the control sticks, uh, and rec- so you record down what actions they are taken, A1. And based on that, the helicopter [NOISE] will get to some new state S2, and then they will [NOISE] take some action A2, [NOISE] that get to some state S3, [NOISE] and so on. And, and [NOISE] let me just write this as S_T, right? So in other words, what you do is, uh, take the helicopter out to the field and hire a human pilot to fly this thing for a while and record the position of the helicopter 10 times a second, and also record all the actions that human pilot was taking on the control stick. Okay. Um, and then do this not just one time, but do this m times. So let me use, uh, superscript 1. [NOISE] Or you get the idea. All that, great. Uh, to denote the first, uh, trajectory. So you do this a second time, [NOISE] right? And so on and, and, uh, maybe do this m times, right? So ba- thi- this is just a lot of math that's saying fly the helicopter around, you know, m times, right? And then record everything that happened. And now, um, your goal is to apply, [NOISE] uh, supervised learning, [NOISE] right? To estimate S_t plus 1 as a function of S_t [NOISE] and A_t, right? So the job of the model- the job of the simulator is to take as input the current state and the current action, [NOISE] and tell you where the helicopter is gonna go, you know, the- like 0.1 seconds later. And so, um, given all this data, what you can do is apply a supervised learning algorithm to predict wha- what is the next state S prime as a function of the current state and action, right? And, and the other notation as [NOISE] in, in when I drew that box for the simulator above, I was using S prime to denote S_t plus 1 and, uh, S and a, right? So that's the mapping between the notations. Um, and so [NOISE] if you use the linear regression version [NOISE] of this idea, um, you will say, [NOISE] let's approximate S_t plus 1 as a linear function of the previous state, plus another linear function of the previous state. Um, and it turns out this actually works okay for helicopters flying at slow speeds. This is actually not a terrible model, if, uh, if your helicopter is moving slowly, uh, and, and, and not flying upside down. If, if your helicopter is flying in a relatively level way at kind of a slow speed, this model is not too bad. [NOISE] Um, if you're flying a helicopter in a highly dynamic situations, flying very fast, making a very fast aggressive turn, then this is not a great model but this is actually okay for slow speeds, right? 
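Here is a rough sketch of how that supervised learning step could be set up, assuming the recorded trajectories have been collected into a list of (s_t, a_t, s_t+1) triples; it fits the matrices A and B of the linear model S_t+1 ≈ A S_t + B a_t by ordinary least squares:

```python
import numpy as np

def fit_linear_model(transitions):
    """Fit s_{t+1} ~= A s_t + B a_t from recorded (s, a, s_next) triples by least squares."""
    X = np.array([np.concatenate([s, a]) for s, a, _ in transitions])   # inputs [s_t ; a_t]
    Y = np.array([s_next for _, _, s_next in transitions])              # targets s_{t+1}
    # solve min_W ||X W - Y||^2, where W stacks A^T on top of B^T
    W, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    n = Y.shape[1]                      # state dimension
    A = W[:n, :].T                      # n x n
    B = W[n:, :].T                      # n x (action dimension)
    return A, B
```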
Um, and so I guess A here will be, uh, an n by n matrix because, uh, the state space is n-dimensional, you know, so A is a square matrix, and B, um, will usually be a tall skinny matrix I guess, where the dimension of B is the dimension of the state space by the dimension of the action space. Okay? [NOISE] And so, um, in order to fit the parameters A and B, [NOISE] you would minimize with respect to the parameters A and B of this, [NOISE] uh, okay. [NOISE] So you wanna approximate S_t plus 1 as a function of that, and so, you know, it's pretty natural to fit the parameters of this linear model in a way that minimizes the squared difference between the left-hand side and the right-hand side. Wait, did I screw up? Yes. Go ahead. [inaudible]. Uh, say that again. [inaudible]. Oh, sure. Uh, what's the difference between flying a helicopter m times versus flying a helicopter once, for a very, very long time. Uh, in this example, it makes no difference. Yeah. This, this is fine either way. Uh, unless, um, uh- yeah, for practical purposes, it doesn't matter. Uh, sorry. Uh, for the purposes of this class, it doesn't matter. For practical purposes, if you fly the helicopter m times, it turns out the fuel burns down slowly, and so the weight of the helicopter changes slowly, and you wanna average over how much fuel you have or wind conditions, so this is what is actually done. But for the purposes of understanding this algorithm, flying a single time for a long time, you know, works just fine as well. Okay? Um, so this is the linear regression version of this, and, uh, we actually talk about, uh, some other methods later, uh, called LQR and LQG. Uh, you, you see this linear regression version of a model there as well, just a linear model of the dynamics, right? Uh, we'll come back to linear models of the dynamics later, uh, next week. But it turns out that, um, if you want to use a nonlinear model, uh, you can also plug in, right, Phi of S, you know, and maybe Phi prime of A as well, if you want to have a non-linear model. Um, and, and this will work even better depending on your choice of features. Okay? Now, um, [NOISE] finally, having run this little linear regression thing- and it's not quite linear regression because A and B are matrices, but, uh, you can minimize this objective. And it turns out this turns out to be equivalent to running linear regression n times. Um, so say S has 12 dimensions; this turns out to be equivalent to running linear regression 12 times, once per state variable, to predict the first state variable, the second state variable, the third state variable, and so on, right? That's what this is equivalent to. But having done this, you now have a choice of two possible models. One model would be to just [NOISE] say my model will set S_t plus 1 equal to A S_t [NOISE] plus B A_t. Uh, another version [NOISE] would be to set S_t plus 1 equal to A S_t plus B A_t plus Epsilon_t, where Epsilon_t is distributed, [NOISE] uh, maybe from, uh, from a Gaussian- from a Gaussian density. Okay? Um, and so this first model would be a deterministic model, and this second model would be a stochastic model, right? And, um, if you use a stochastic model, then that's saying that- [NOISE] right.
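Backing up to the fitting step for a moment, a minimal sketch of that least-squares fit (illustrative names, assuming NumPy); note it is the same as running one linear regression per state dimension.

```python
import numpy as np

def fit_linear_dynamics(S, U, S_next):
    # S: (T, n) stacked states s_t, U: (T, d) stacked actions a_t,
    # S_next: (T, n) stacked next states s_{t+1}.
    # Solve min_{A,B} sum_t || s_{t+1} - (A s_t + B a_t) ||^2 by least squares.
    X = np.hstack([S, U])                             # (T, n + d)
    W, *_ = np.linalg.lstsq(X, S_next, rcond=None)    # (n + d, n)
    n = S.shape[1]
    A = W[:n, :].T   # (n, n) square matrix acting on the state
    B = W[n:, :].T   # (n, d) "tall skinny" matrix acting on the action
    return A, B
```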
When you're running your simulator, when you're running the model, um, every time you generate St plus one, you would be sampling this Epsilon from a Gaussian vector, and adding it to the prediction of your linear model, and, and if you use a stochastic model, what that means is that, you know, if you simulate your helicopter flying around, your simulator will generate random noise that adds and subtracts a little bit to the state space of helicopter as if there were little wind gusts blowing it, blowing the helicopter around, okay? Um, and this is, uh, uh, uh, yeah, right. So, um, right. So it turn- and, uh, in, um, in most cases, when you're building reinforcement learning models- oh, and so the, the approach we're taking here, this is called model-based reinforcement learning where you're going to build a model of your robot, and then let's train the reinforcement learning algorithm in the simulator, and then take the policy you learn, take the policy of how you learned in simulation and apply it back on your real robot, right? So this, this, this approach we're taking is called model-based RL. [NOISE] Um, there is an alternative called model-free RL, which is you just run your reinforcement learning algorithm on the robot directly, and let the robot bash the robot around and so on, and let it learn. I think that in terms of robotics applications, uh, um, I think model-based RL has been taking off faster. A lot of the most promising approaches are model-based RL because if you have a physical robot, you know, you just can't afford to have a reinforcement learning algorithm bash your robot around for too long, or how many helicopters do you want to crash before your learning algorithm figures it out? Um, model-free RL works fine if you want to play video games because if you're trying to get a computer or, or, or, or play chess, or Othello or Go, right? Because, um, you have a perfect simulator for the video game which is a video game itself, and so your, your, your RL algorithm can, I don't know, blow up hundreds of millions of times in a video game, and, and that's fine, uh, for so- for, for playing video games or for playing, um, like, uh, you know, traditional games, model-free approaches can work fine, but I- most of the, um, a lot of the, uh, uh, uh, success applications of reinforcement learning robots have been model-based. Although again, the field is evolving quickly so there's this very interesting work at the intersection of model-based and model-free, that that, that, gets more complicated, right? But I- I- I want to say, if you want to use something tried and true, you know, for robotics problems seriously because they're using model-based RL, because you can then fly a helicopter in simulation, let it crash a million times, right? And no one's hurt, there's no physical damage anywhere in the world. It was just, uh, uh, Okay. And, uh, um, and- oh, and just one last tip. One thing we learned, um, uh, building these, uh, reinforcement learning algorithms for a lot of robots is that, um, you know, having run this model, you might ask, well, how do I choose the distribution for this noise, right? Uh, there- how, how, how do you model the distribution for the noise? Um, one thing you could do is estimate it from data. But as a practical matter, what happens is so long as you remember to inject- so let's see. 
It turns out if you use a deterministic simulator, uh, a lot of reinforcement learning algorithms will learn a very brittle model, uh, that works in your simulator but doesn't actually work when you put it into your real robot, right? And so if you- if you actually look on YouTube or Twitter, um, in the last year or two, there have been a lot of cool-looking videos. There are people using reinforcement learning to control various weirdly-configured robots, like a snake robot or some five-legged thing or some- whatever. it's just a cool random, I- I- this is- I- I- I'm not good at drawing this but, you know, if you build a five-legged robot, I don't even know what has five legs, right? How do you control that? It turns out that if you have a deterministic simulator, um, using these methods, it's not that hard to generate a cool-looking video of your reinforcement learning algorithm, supposedly controlling a five-legged thing or some crazy, you know, a worm with, uh, two legs or something, these crazy robots that you can build in a simulator. But it turns out that, um, uh, even those easy, it's, uh, well, not easy. Even though you can generate those types of videos in the deterministic simulator, um, if you use a deterministic model of a robot, uh, and you ever actually tried to build a physical robot, and you take that policy from your physics simulator to the real robot, uh, the, the odds of it working on the real robot are quite low, if you use a deterministic simulator, right? Because the problem with simulators is that your simulator is never 100% accurate, right? You know, it's always just a little bit off. And one of the lessons we learned, uh, that we've- I hope you learned, uh, [NOISE] uh, applying RL to a lot of robots is that if you want your model-based RL work to work not just in simulation and generate a cool video, but you want it to actually work on a physical robot, like a physical helicopter that you own, that is really important to add some noise to your simulator. Because if the policy you learn is, um, robust to a slightly stochastic simulator, then the odds of it generalizing, um, uh, you know, to the, to the real world, to the physical real world is much higher than if you had a completely deterministic simulator. So I think whenever I'm building a robot, right? I- I- I pretty much- actually, you know, I don't think I- oh, with one exception- okay, I [inaudible] will talk about that next week, but with one, with one very narrow exception, I pretty much never use deterministic simulators, uh, when, when working on robotic control problems, unless- uh, uh, assuming, assuming I want it to work in the real world as well, right? Um, and, uh, and again, you know, tips and tricks. Uh, so, uh, the most important thing is to add some noise, and then, uh, sometimes the exact distribution of noise. Yeah, go ahead and try to pick something realistic, but the exact distribution of noise actually matters less, I want to say than just the fact of remembering to add some noise. Okay. [NOISE] By the way, I- you guys really don't know this, but my PhD thesis, uh, was, um, using reinforcement learning to fly helicopters. So, so I'm trying to, I don't know, so, so you're telling me someone has crashed a bunch of helicopters [LAUGHTER] model helicopters, and has lived through the pain and the joys of seeing this stuff work or not work. [LAUGHTER] [NOISE] All right. 
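A minimal sketch of the kind of noisy simulator step being recommended, assuming NumPy and the A, B fit above; the noise covariance is something you would pick or estimate, and zeroing it recovers the deterministic model.

```python
import numpy as np

def stochastic_step(s, a, A, B, noise_cov, rng):
    # Stochastic model: s_{t+1} = A s_t + B a_t + eps_t, with eps_t ~ N(0, noise_cov).
    # With noise_cov = 0 this reduces to the deterministic model.
    eps = rng.multivariate_normal(np.zeros(len(s)), noise_cov)
    return A @ s + B @ a + eps

rng = np.random.default_rng(0)
# example usage: s_next = stochastic_step(s, a, A, B, 0.01 * np.eye(len(s)), rng)
```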
So now that you have built a model, built a simulator, uh, for your helicopter, for your four-legged robot or for your car, um, how do you, um, how do you approximate the value function, right? So, um, in order to apply, um, fitted value iteration, the first step is to choose features of the state s. Right. And then, um, we approximate v of s. You know, we approximate v-star using a function v of s, which is going to be Theta transpose Phi of s. Um, and so, I don't know. And so, uh, you know, in the case of, uh, uh, in, in, the case of the, um, uh, inverted pendulum, right? Then Phi of s, maybe you have x, x-dot, maybe you've x squared or x times x-dot or x, uh, times the polar orientation, and so on. So take, take your state to s, and think up some nonlinear features that, that you think might be useful for representing the value. Um, and remember that what the value is, the value of a state is your expected payoff from that state, expected sum of discounted rewards. So the value function captures, if your robot starts off in that state, you know, how well is it gonna do if it starts here? So when you're designing features pick a bunch of features that you think hope convey, um, how well is your robot doing. That makes sense? And so, uh, maybe for the inverted pendulum, for example, if the pole is way over to the right, then maybe the pole will fall over given a reward of minus 1 when the pole falls over. Right? Uh, but so, sorry. I'm overloading the notation a bit. Theta is both the angle of the pole as well as the parameters. But, but, but if the pole is falling way over that looks extreme pretty badly, unless, um, x-dot is very large and positive, right? And so maybe there's interaction between Phi and x-dot. So you might say, "Well, let me have a new feature, which is the angle of the pole multiplied by the velocity." Right? Because then-, uh, because it seems like these two variables kind of depend on each other. Um, so, so, so just as when you are trying to predict the price of a house, you would say, "Well, what are the most useful features predicting the price of a house?" Uh, um, you would do something similar, um, for fitted evaluation. And one nice thing about-, um, uh, one nice thing about model-based RL is that once- model-based reinforcement learning, is that once you have built a model, you see a little bit that you can collect an essentially infinite amount of data from your model. Right? And so with a lot of data, you can usually afford to choose a larger number of features, because you can generate a ton of data with which to fit this linear function. And so, you know, you- you're, you're usually not super constrained in terms of, uh, needing to be really careful not to choose too many features because of fear of overfitting. You could get so much data from your simulator that, you know, you can usually make up quite a lot of features, uh, and then some of the features end up not being useful, it's okay. Because you can get enough data for running your simulator for the algorithm to still fit a pretty good set of parameters Theta, even if you have a lot of features. Because you can have a lot- you can generate a lot of data to fit this function. Okay. So, um, let's talk through the fitted value iteration algorithm. Let's see. All right. You know what? This is a long algorithm. Let me just use a fresh board for this. [NOISE]. All right. So, uh, let me just write down the original value iteration algorithm for these v states. 
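Before the algorithm, one concrete example of what a feature map Phi might look like for a four-dimensional cart-pole state; the specific features chosen here (squares and an angle-times-angular-velocity interaction) are purely illustrative, not the ones used in the lecture.

```python
import numpy as np

def phi(s):
    # Hand-designed features for s = [x, x_dot, theta, theta_dot],
    # so that V(s) is approximated by theta_params^T phi(s).
    x, x_dot, theta, theta_dot = s
    return np.array([1.0, x, x_dot, theta, theta_dot,
                     x * x_dot, theta * theta_dot, x**2, theta**2])
```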
Uh, so what we had previously was we would update V of s according to R of s, plus Gamma, max over a, right? So this is what we had, um, last Monday. And, uh, I said at the start of today's lecture that you can also write this as this. [NOISE]. Okay. So let's take that and generalize it to a fitted value iteration. [NOISE]. All right. Um, so first, let's choose a set of states randomly, and let's initialize the parameters to equal 0, okay? Um, and what we're going to do is where-,uh, so, so let's see. In linear regression, you learn a mapping from x-y, and you have a discrete set of examples for x, and you fit a function mapping from x and y. So and what we're going to do here, we're going to learn the mapping from s to v of s. And we are going to take a discrete set of examples for s, and try to figure out what is v of s for them, and then for the straight line, you know, to try to model this relationship, right. So, so just as you have a finite set of examples, a finite set of houses that you see a certain set of values of x in your training set for predicting housing prices. We're gonna see, you know, a certain set of states, and then use that finite set of examples to use linear regression to fit v of s. Right? So that's what this initial sample is meant to do. And so, um, this is the outermost loop of value iteration- of fitted value iteration. And then for i equals 1 [NOISE] through m. [NOISE] Let's see, [NOISE] uh. All right. So, um, what we're going to do is, um, go over each of these m states, uh, go over each of these m states, right, and for each one of them, um, we're going to- and for each one of those states of each one of those actions, we're going to take a sample of k things in order to estimate that expected value. Right. And so this expectation is over S prime drawn from this state transition distribution. They say, you know, from this state, if you take this action where you get to the next. And so, uh, these two loops this for i equals 1 through m. And for each action a this is just looping over every state and every action, and taking k samples. Sampling k samples of where you get to if you take an action a in a certain status. Right. And so [NOISE] um, uh, and by taking that k examples and computing this average q a, right, is your estimate of that expectation. Okay. So, so all we've done so far is, uh, take k samples, you know, from this distribution of with S prime is drawn and average V of s. Oh, actually, uh, oh, I'm sorry. And, uh, if I move R of s inside, sorry, then that's q of a. Yeah. Okay, that makes sense? [NOISE] Sorry. Let me just rewrite this to move R of s inside [NOISE]. Fix this up a little bit. So this is written as Gamma. If you write this as max over a, of R of s plus Gamma, uh, [NOISE] Yeah. Okay. Yes, sorry. So we move the max and expectation out, then this is, this is q of a. Okay? Um, next, let's set y i equals max over a of q of a [NOISE]. And so by taking the max over a of q of a, um, that's what y i is. Is your estimate at the right-hand side of value iteration. Okay. [NOISE] And so y i is your estimate for, um, for this quantity, for the right hand side of value iteration. Now, in the original value iteration algorithm, um, I'm, I'm just using VI to approximate that to abbreviate value iteration. In the original algorithm, what we did was we set V of S i to be equal to y i, right? 
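In symbols, the two equivalent forms of the update being referred to are:

```latex
V(s) := R(s) + \gamma \max_{a} \sum_{s'} P_{sa}(s')\, V(s')
      \;=\; R(s) + \gamma \max_{a}\; \mathbb{E}_{s' \sim P_{sa}}\!\left[ V(s') \right]
```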
In the original value iteration algorithm, we would compute the right hand side, this purple thing, and then set V of s equals to that, right, just set right-hand side equal to- I set the left hand side equal to the right-hand side. But in, um, fitted value iteration, you know, V of s is now approximated by a linear function. So you can't just go into a linear function, and set the value of the points individually. So what we're going to do instead is in fitted Vi, we're going to use linear regression to make V of Si as close as possible to yi. But V of Si is now represented as a linear function of the state. So a linear function of the features of state. So V of Si is Theta transpose Phi of Si, and you want that to be close to yi. And so the final step is run linear regression to choose the parameters Theta that minimizes the squared error, okay? [NOISE] Does that make sense? Okay, um, oh, yes. Let me just make my curly braces match. Yeah. Okay, okay. So that's fitted. Uh, go ahead, question? [inaudible]. Oh, this one? Oh, this one? Oh, no, the, the m is used differently. Uh, so when we were learning a model m was just how many times you fly the helicopter in order to build a model. And the number of times you fly the helicopter in order to build a physics model, to build a model, the helicopter dynamics has, has nothing to do with this m, which is the number of states you use in order to, sort of, anchor, or in order to, uh, uh, so I think I'm actually- so the, the, the way to think about this is, is you want to learn a mapping from states to B of S. And so, uh, this sample, this m states is- we're gonna choose m states on the x axis, right? So, uh, and that m is the number of points you choose on the x axis. And then in each, uh, iteration, the value iteration we're gonna go through this procedure. So you have sub S1 up to Sm. Right. And then for each of these, you're going to compute some value yi using this procedure. And then you fit a straight line to the sample of yi's. [inaudible]. Uh, think of this- think of the way you build a model and the way you apply fitted value evaluation as two completely separate operations. So, um, you can have one team of ten engineers flying a helicopter around 1,000 times, build a model, run the linear regression and then they have a model and then they could publish the model on the Internet and a totally different team could download their model and do this and the second team does not need to talk to the first team at all, other than downloading the model off the Internet. There is a question. [inaudible] Oh, yes. Good question. You mean they're sampling, they're sampling k times, right? Yeah. That's a great question, yes. That was a- yes. That was one my next points which is the reason you sample from this distribution is because you're using- so you should do this if you are using a stochastic simulator, right? And then actually it does. Actually, I just wanted to ask you guys what should you do? How can you simplify this algorithm if you use a deterministic simulator instead of a stochastic simulator? Oh, well, let's see. So if you use a determinic- deterministic simulator then, you know, given a certain state at a certain action it will always map to the exact same S-prime right? So how can you simplify the algorithm? [inaudible] action instead of drawing k times, you only need to draw once. Yeah, yeah, cool. Great. Yes. 
So if you're a deterministic simulator, you can set k equals 1 and set the sample only once because this distribution, it always returns the same value. So all of these k samples would be exactly the same so you might as well just do this once rather than K times. Make sense? Okay cool. Yeah. [inaudible] Oh, this one? [inaudible] Oh, no. This is, um, this is actually a square bracket. Um, the thing is, um, we're trying to approximate this expectation and the way you approximate the mean is you'd sample k times if you take the average, right? Right. So- so what we've done here is in order to approximate this expectation, we're gonna draw k samples and then sum over them and divide by k. So you average over the k samples. All right, cool. Got some more question? What's the little [inaudible] how many states you'll get from K sample and [inaudible] Let's see. So how do you choose M and how do you test for overfitting and so, you know, one- once you have a model, one of the nice things about model-based RL is let's say that Phi of S, right, let's say that Phi of S has 50 features. So let's say you chose 50 features to approximate the value function of your inverted pendulum system. Then we know that- you know that you're going to be fitting linear regression, right, to this 50-dimensional state-space. I mean this step here, this is really linear regression, right? And so you can ask, if you want to run linear regression with 50 parameters, how many examples do you need to fit linear regression? And I will say you know if M was maybe 500, right, maybe you'd be okay. You have 500 examples to 50-50 parameters. But if for computational reasons, if- if it doesn't run too slowly , to even set M equals 1,000 or even 5,000, then there's no harm to letting M be bigger. So usually M, you might as well set to be as big as you feel like, subject to the program not taking too long to run because it- it, you know if- if you're, um, if you're fitting- unlike supervised learning, if you're fitting data to housing prices, um, you need to go out and, you know, collect data right off Craigslist or- or what's on Zillow or Trulia or Redfin or whatever about prices of houses. And so data is expensive to collect in the real world. But once you have a model, you can set M equals 5,000 or 10,000 or 100,000 and just- and then your algorithm will run more slowly. But as long as your algorithm doesn't run too slowly, there is no harm to setting M to be bigger. Makes sense? Um, all right cool. So, um, so I know that there's a lot going on to this algorithm but this is fitted value iteration. And if you do this, uh, this, you can get reasonable behavior on a lot of robots by choosing a set of features and learning the value function to approximate the value of the- really approximate the expected payoff of a robot starting off in different states. Okay. Um, now just a few details to wrap up, again, some practical aspects of how you do this. After you've learned all these parameters, this- you've now learned- go ahead, yeah. [inaudible] Oh, I see. Yes, thank you. Um, yes. So in this, um, expression, where do you get V of S prime j from? Yes. So you would get this from Theta transpose Phi of S prime j, using the parameters of Theta from the last iteration of fitted value iteration. Ju- just as in value iteration, this is the values from the last iteration that you use to update a new iteration. So then you use the last value of Theta to update the new one. Yeah, thank you. Cool. 
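Putting the last few pieces together, here is a minimal sketch of fitted value iteration under the assumptions above: a finite (discretized) action set, a sampled set of states, a simulator callable, and V linear in the features. All names are illustrative, and k would be set to 1 for a deterministic simulator.

```python
import numpy as np

def fitted_value_iteration(sample_states, actions, simulate, reward, phi,
                           gamma=0.99, k=10, n_iters=50):
    # sample_states: m states sampled from the state space
    # actions:       the discretized (finite) action set
    # simulate(s,a): one draw of s' ~ P_sa (set k = 1 if it is deterministic)
    # reward(s):     R(s)
    # phi(s):        feature vector, so V(s) is approximated by theta^T phi(s)
    Phi = np.array([phi(s) for s in sample_states])   # m x d design matrix
    theta = np.zeros(Phi.shape[1])                    # initialize theta := 0
    for _ in range(n_iters):
        y = np.zeros(len(sample_states))
        for i, s in enumerate(sample_states):
            q = []
            for a in actions:
                # estimate E_{s' ~ P_sa}[V(s')] by averaging k samples,
                # using theta from the previous iteration for V
                v_next = np.mean([theta @ phi(simulate(s, a)) for _ in range(k)])
                q.append(reward(s) + gamma * v_next)
            y[i] = max(q)                             # y_i = max_a q(a)
        # supervised step: fit theta so theta^T phi(s_i) ~ y_i (least squares)
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta
```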
Oh, and, um, one- one other thing you could do which is, um, I talked about the linear regression version of this algorithm which is, you know, this whole- this whole exercise is about generating a sample of S and Y so you can apply linear regression to predict the value of Y from the values of S, right? But there's nothing in this algorithm that says you have to use linear regression. In order to- now that you've generated this dataset, that's this box that I have here, this- this is linear regression, right, but you don't have to use linear regression. In modern yo- deep reinforcement learning, um, one of the ways- well one of the ways to go from reinforcement learning to deep reinforcement learning is to just use a neural network with this step instead. Then you can- then- then you call that deep reinforcement learning where- no. But, hey, it's legit, you know. [LAUGHTER] Um, uh, but, but, you can also use locally weighted linear regression or whatever regression algorithm you want in order to estimate y as a function of the state s. Yeah, and actually if you use a neural network, it relieves the need to choose features Phi as well, you can feed in the raw features. You know, your angle, your orientation and, and using neural networks, to learn that mapping in a supervised learning way. Okay, um, all right. So one last, ah, ah, important, ah, I guess practical implementational detail, which is, um, fitted VI right, uh, gives, uh, approximation to V star. And this, um, implicitly defines Pi star. Right, because the definition for Pi star is that, um, right. So, um, when you're running a robot, you know, you need to execute the policy Pi right, given the state you're gonna pick an action, given the state you're gonna pick an action. And, and having computed V star, it only implicitly defines the optimal policy Pi star. All right, um, and so ah if you're running a robo- if you're running a robot in real time, then you know actually if you fly a helicopter, you might have to choose control actions at 10 hertz meaning 10 times a second you're given the state, you have to choose an action. Uh, uh, if you're building a self-driving car, again a 10 hertz controller wo- would be pretty reasonable. I guess choose a new action and maybe 10 times a second would be pretty reasonable. Um, but how do you compute this expectation and this maximization 10 times per second? So, um, in what we use for fitted value iteration, we used right, a sample, uh, of- we use k examples to approximate the expectation. Right, but if you're running this, um, in real time on a helicopter, you know, probably you don't want to, uh, uh, uh, at least I know for my robotics implementations I have been reluctant to use a random number generator right in the inner loop of how we control a helicopter. Right it, it, it might work but I, but I think, you know, it's approximately- if you want to compute this arg max, you need to approximate this expectation and do you really want to be running a random number generator on a helicopter? And if you're really unlucky the random number gen- generator generates an unlucky value, will your helicopter do something you know, bad and crash? Oh, I, I, I would, again just emotionally I don't feel very good if, uh, your self-driving car has a random number generator and, and a loop of how it's choosing to drive. Right, um, so just as a practical matter, ah, ah, ah, there are a couple of tricks that people often use. Which is, um, the simulator is often of this form, right. 
Okay, so most simulators have this form, next state is equal to some function of the pre- uh, previous state and action plus some noise. And so one thing that is often done is, um, for your deployment or for the, you know for the, for, for, for the actual policy you implement on the robot. Um, set epsilon t equals 0 and set k equals 1. Right and so, um, so, so, so this, this is a reasonable way to make this policy run on a helicopter, which is during training you do want to add noise to the simulator because it causes a policy you learn to be much more robust. So little errors in the simulator, your simulator is always a little bit off. You know maybe it didn't quite simulate wind gusts or when you turn the helicopter does it bank exactly the right amount. Simulator is always, you know, it's in practice is always a little bit off. Um, so it's important to have noise in the simulator in model based RL. But when you're deploying this in a physical simulator, um, one thing you could do that'll be very reasonable is just get rid of the noise and set k equals 1 and so what you would do is, um, uh, let's see. Um, whenever you're in the state s, pick the action a according to arg max over a of v of s, a. Right so, uh, this f is this f from here. So this is the simulator with the noise removed. Okay and so what you would do is actually, and, and, you know, computers are now fast enough you can- you could do this 10 times a second. Right if you want to control a helicopter or a self car at 10 hertz, you could actually easily do this, you know, at, at, at 10 times a second, which is your car or your helicopter is in some physical state in the world. So you know what is S and so you can quickly for every possible action a that you could take, use a simulator to simulate where your helicopter will go, um, if you were to take that action. So go ahead and run your simulator, you know, once for each possible action you could take. Right, computers are actually fast enough to do this in real time. Um, and then for each of the possible next actions you could get to, compute v apply to that. Uh, so, so this is really right S prime, um, uh, drawn from P_sa, uh, but with a deterministic simulator, right. Right, so every 10th of a second you could in your simulator try out every single possible action, use your simulator to figure out where you would go under each and every single possible action, and apply your value function to see of all of these possible actions, which one gets my helicopter, you know, in the next one-tenth of a second to the state that looks best according to the value functions you've learned from fitted value iteration. Okay, um, and it turns out if you do this then you can, this is how you actually implement something that runs in real time. And, oh, and I just mentioned, you know, the, the, the idea of a training with stochastic simulator and just setting the noise to zero, it's one of those things that's not very vigorously justified but in practice this, this works well. Yes, question? [inaudible]. Oh yes. Ah, so, so, um, for the purposes of this, you can assume you have a discretized action space, ah, and it turns out that for a self-driving car it's actually okay to discretize the action space. Uh, for a helicopter, we tend not to discretize the action space but, um, it turns out if f is a continuous function, then you can use other methods as well. Right this is about optimizing over the, I, I didn't mean to talk about this and sorry this is getting a little bit deeper. 
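To make that deployment-time rule concrete, a minimal sketch (illustrative names): drop the noise, set k = 1, and pick the action whose simulated next state looks best under the learned value function.

```python
import numpy as np

def greedy_policy(s, actions, f, theta, phi):
    # At execution time: pick a = argmax_a V(f(s, a)), where f is the
    # deterministic part of the simulator (noise set to zero) and
    # V(s) = theta^T phi(s) comes from fitted value iteration.
    values = [theta @ phi(f(s, a)) for a in actions]
    return actions[int(np.argmax(values))]
```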
But, uh, even if a was a continuous thing, uh, you can actually use real time optimization algorithms, uh, to very quickly try to optimize this function even as the function that it contains actually. Uh, there's a literature on something called model predictive control which, which can actually, you can actually do these optimizations in real time and use to fly a helicopter. Just one last question. So what's your different action you'll transition from the next stage? So how do you know when you're still looking? Do you make an observation or do you use your [inaudible]? Wait oh, uh, uh say, what's the question again? So once you are, like when you have had the helicopter, once you pick an action you'll transition to the next stage so do you make an observation to set up where you are or do you use the [inaudible]. Oh, I use an observation, yeah, yes, yes. So you take an action and then your helicopter will do something, there will be some wind, your model may be off and so you would then a 10th of a second later, take another you know GPS reading, accelerometer reading, magnetic compass reading and use the helicopter sensors that tell you where you actually are. Now, cool. Okay, cool. All right, I hope, uh, yeah, hope- hopefully this was helpful. I feel like, you know, the- I think it's fascinating that the excitement about self-driving cars and flying helicopters and all that, it gives well-balanced equations like these. I, I think that's kinda cool. [LAUGHTER] Okay, that's great. Thanks, I'll see you guys next week. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_7_Kernels_Stanford_CS229_Machine_Learning_Andrew_Ng_Autumn_2018.txt | All right. Good morning. Um, let's get started. So, ah, today you'll see the Support Vector Machine Algorithm. Um, and this is one of my favorite algorithms because it's very turnkey, right? If you have a classification problem, um, you just, kind of, run it and it more or less works. So in particular, I'll talk a bit more about the optimization problem that you have to solve for the support vector machine, then talk about something called the representer theorem, and this will be a key idea to how we'll work in potentially very high-dimensional, like 100,000 dimensional, or a million dimensional, or 100 billion dimensional, or even infinite-dimensional feature spaces. And just to teach you how to represent feature vectors and how to represent parameters that may be, you know, 100 billion dimensional, or 100 trillion dimensional, or infinite dimensional. Um, and based on this we derived kernels which is the mechanism for work on these incredibly high dimensional fea- feature spaces, and then hopefully, time permitting wrap up with a few examples of concrete implementations of these ideas. So to recap, on last Wednesday, we had started to talk about the optimal margin classifier, which said that, if you have a dataset that looks like this, then you want to find the decision boundary with the greatest possible geometric margin, right? So the geometric margin, um, can be calculated by this formula, and this is just the- the- the derivations in the lecture notes. It's just, you know, measuring the distance, uh, to the nearest point, right? Um, and for now let's assume the data can be separated by a straight line. Um, and so Gamma i is- this is sort of geometry, I guess, derivation in the lecture notes. This is the formula for co- computing the distance from the example x_i, y_i, to the decision boundary governed by the parameters w and b. Um, and Gamma is the worst case geometric margin, right? You will make- so- right. Of all of your M training examples, which one has the least or has the worst possible geometric margin? And, the support vector, the optimal margin classifier, we tried to make this as big as possible. And by the way, what we'll- what you see later on is that the optimal margin classifier is basically this algorithm. And optimal margin classifier plus kernels meaning basically take this idea of pi in a 100 billion dimensional feature space that's a support vector machine, okay? So I saw- one thing I didn't have time to talk about, uh, on Wednesday was the derivation of this classification problem, so where does this optimization objective come from? So let me- let me just go over that very briefly. Um, so, the way I motivated these definitions we said that given a training set, you want to find the decision boundary parameterized by w and b, um, that maximizes the geometric margin, right? And so again, as recap, your classifier will output g equals w transpose x plus b. Um, and so you want to find premises w and b. They'll define the decision boundary where your classifications switch from positive to negative, that maximizes the geometric module. And so one way to pose this as an optimization problem is- um, let's see, is to try to find the biggest possible value of Gamma subject to that- subject to that the, um, geometric margin must be greater than or equal to Gamma, right? 
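For reference, in the notation of the lecture notes, the functional and geometric margins being recapped here, and the optimization problem being set up, are:

```latex
\hat{\gamma}^{(i)} = y^{(i)}\big(w^{T}x^{(i)} + b\big), \qquad
\gamma^{(i)} = \frac{y^{(i)}\big(w^{T}x^{(i)} + b\big)}{\lVert w\rVert}, \qquad
\gamma = \min_{i}\, \gamma^{(i)}
```

```latex
\max_{\gamma,\, w,\, b}\; \gamma
\quad \text{s.t.} \quad
\frac{y^{(i)}\big(w^{T}x^{(i)} + b\big)}{\lVert w\rVert} \;\ge\; \gamma,
\qquad i = 1, \ldots, m
```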
So, um, so, in this optimization problem, the parameters you get to fiddle with are, Gamma, w and b. And if you solve this optimization problem, then you are finding the values of w and b that defines a straight line, that defines a decision boundary, um, so that- so, so this constraint says that every example, right? So this constraint says every example has geometric margin greater than or equal to Gamma. This is- this is what they are saying. And you wanna set Gamma as big as possible, which means that you're maximizing the worst-case geometric margin. This makes sense, right? So- so if- if I- so the only way to make Gamma say 17, or 20, or whatever, is if every training example has geometric margin bigger than 17, right? And so this optimization problem was trying to find w and b to drive up Gamma as big as possible and have every example have geometric margin even bigger than Gamma. So this optimization problem maximizes the Geom- causes, um, causes you to find w and b with as big a geometric margin as poss- so as big as the worst-case geometric margin as possible, okay? Um, and so, does this make sense actually, right? Okay. Actually rai- raise your hand if this makes sense. Uh, oh, good. Okay. Well, many of you. All right. Let me see if I can explain this in a slightly different way. So let's say you have a few training examples, you know, the training examples geometric margins are, 17, 2, and 5, right? Then the geometric margin in this case is a worst-case value 2, right? And so if you are solving an optimization problem where I want every example- where I want the- the- the, uh, uh, where I want the min of i- of Gamma i to be as big as possible, one way to enforce this is to say that Gamma i must be bigger than or equal to Gamma, for every possible value of i. And then I'm going to lift Gamma up as much as possible, right? Because the only way to lift Gamma up subject to this is if every va- value of Gamma i is bigger than that. And so, lifting Gamma up, maximizing Gamma has effective maximizing the worst-case examples geometric margin, which is, which is, which is how we define this optimization problem, okay? Um, and then the last one step to turn this problem into this one on the left, is this interesting observation that, um, you might remember when we talked about the functional margin, which is the numerator here, that, you know, the functional margin you can scale w and b by any number and the decision boundary stays the same, right? And so, you know, if- if your classifier is y, so this is g of w transpose x plus b, right? So if- let's see the example I want to use, uh, 2, 1. If w was the vector 2, 1- [NOISE] Let's say that's the classifier, right? Then you can take W and B, and multiply it by any number you want. I can multiply this by 10, [NOISE] and this defines the same straight line, right? Um, so in particular, I think, uh, let's see with this 2 1x. [NOISE] This actually defines the decision boundary that looks like that. Uh, if this is X1 and this is X2, then this is the equation of the straight line where W transpose X plus B equals 0, right? Uh, that's uh, one, and two. Uh, you can- you can verify it for yourself. You plug in this point, then W transpose X plus B equals 0. 
We plug in this point, W transpose X equals 0, um and so that's the decision boundary where the, uh- as yet we'll predict positive [NOISE] everywhere here and we'll predict [NOISE] negative everywhere to the lower left, and this straight line, you know, stays the same even when you multiply these parameters by any constant, okay? Um, and so, um, to simplify this, uh, notice that you could choose anything you want for the normal W, right? Just by scaling this by a factor of 10, you can increase it, or scaling it by a factor of 1 over 10, you can decrease it. But you have the flexibility to scale the parameters W and B, you know, up or down by any fixed constant without changing the decision boundary, and so the trick to simplify this equation into that one is if you choose [NOISE] to scale the normal W to be equal to 1 over gamma. Um, uh because if you do that, then this optimization objective [NOISE] becomes- [NOISE] Um, maximize 1 over norm of W subject to- [NOISE] right? Uh, so it substitutes norm of W equals 1 of gamma, and so that cancels out, and so you end up with this optimization problem instead of maximizing 1 over norm W, you can minimize one half the norm of W squared subject to this. [NOISE] Right? Okay, and so that's a rough- I know I did this relatively quickly. Again- as usual the full derivation is written on your lecture notes but hopefully this gives you a flavor for why. If you solve this optimization problem and you're minimizing over W and B that you are solving for the parameters W and B that give you the optimal margin classifier. Okay. Now, delta margin classifier, we've been deriving this algorithm as if you know the features X I um, let's see, we've been deriving this algorithm as if the features X I are some reasonable dimensional feature X equals R2, X equals 100 or something. Um, what we will talk about later is a case where the features X I become you know, 100 trillion dimensional right? Or infinite dimensional. And um, what's- uh, what we will assume is that W, can be represented [NOISE] as a sum- as a linear combination of the training examples. Okay? So um, in order to derive the support vector machine, we're gonna make an additional restriction that the parameters W can be expressed as a linear combination of the training examples. Right? So um, and it turns out that when X I is you know, 100 trillion dimensional, doing this will let us derive algorithms that work even in these 100 trillion or these infinite-dimensional feature spaces. Now, I'm just deriving this uh, just as an assumption. It turns out that there's a theorem called the representer theorem that shows that you can make this assumption without losing any performance. Uh, the proof that represents the theorem is quite complicated. I don't wanna do this in this class, uh, it is actually written out, the proof for why you can make this assumption is also written in the lecture notes, it's a pretty long and involved proof involving primal dual optimization. Um, I don't wanna present the whole proof here but let me give you a flavor for why this is a reasonable assumption to make. Okay? And when- just to- just to make things complicated later on uh, we actually do this. Right? So Y I is always plus minus 1. So- so we're actually by- by convention, we're actually going to assume that W I can be written right? So in- in this example this is plus minus 1 right? 
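Written out, the simplified optimal margin problem and the representer-theorem form of w being assumed here are:

```latex
\min_{w,\, b}\;\; \tfrac{1}{2}\lVert w\rVert^{2}
\quad \text{s.t.} \quad
y^{(i)}\big(w^{T}x^{(i)} + b\big) \;\ge\; 1, \quad i = 1,\ldots,m,
\qquad\quad
w = \sum_{i=1}^{m} \alpha_{i}\, y^{(i)} x^{(i)}
```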
So um, this makes some of the math a little bit downstream, come out easier but it is- but it's still saying that W is- can be represented as a linear combination of the training examples. Okay? So um [NOISE] let me just describe less formally why this is a reasonable assumption, but it's actually not an assumption. The representer theorem proves that you know, this is just true at the optimal value of W. But let me convey a couple ways why um, this is a reasonable thing to do, or assume I guess. So um, maybe here's intuition number one. And I'm going to refer to logistic regression. [NOISE] Right? Where uh, suppose that you run logistic regression with uh, gradient descent, say stochastic gradient descent, then you initialize the parameters to be equal to 0 at first. And then for each iteration of stochastic gradient descent, right [NOISE] you update theta it gets updated as theta minus the learning rate times [NOISE] you know, [NOISE] times X and Y okay? And so- sorry here alpha is the learning rate, uh, nothing, this is overloaded notation, this alpha has nothing to do with that alpha. But so this is saying that on every iteration, you're updating the parameters theta as- uh, by- by adding or subtracting some constant times some training example. And so kind of proof by induction, right if theta starts out at 0, and if- if on every iteration of gradient descent you're adding a multiple of some training example, then no matter how many iterations you run gradient descent, theta is still a linear combination of your training examples. Okay. And- and again I did this with theta- the- the- it was really theta 0 theta 1 up to theta n. Right? Whereas here we have uh, B and then W1 down to WN. Wow, this pen is really bad. [NOISE] I feel like- alright um, I feel like we should throw these away so they don't keep haunting us in the future. Okay. Right, so but- but um, if you- but uh, uh, so I did this a theta rather than W, but it turns out if you work through the algebra this is the proof by induction that, you know, as you run a logistic regression after every iteration the parameters theta or the parameters W are always a linear combination of the training examples. Um, and this is also true if you use batch gradient descent. [NOISE] If you use batch gradient descent [NOISE] then the update rule is this. Um, Yeah, right, [NOISE] okay, alright. And so it turns out you can derive gradient descent for the support vector machine learning algorithm as well. You can derive gradient descent optimized W subject to this and you can have a proof by induction. You know that no matter how many iterations you run during descent, it will always be a linear combination of the training examples. So that's one intuition for how [NOISE] you might see that assuming W is a linear combination of the training examples, you know is a- is a reasonable assumption. [NOISE] I wanna present a second set of intuitions and this one will be easier if you're good at visualizing high dimensional spaces I guess. But uh, let me just give intuition number two which is um let's see. So um, so first of all let's take our example just now right? Let's say that the classifier uses this, 2, 1 [NOISE] X minus 2, right? So this is W and this is B. Then it turns out that the decision boundary is this where this is 1 and this is uh, 2 and it turns out that the vector W is always at 90 degrees to the decision boundary right? This is a factor of I guess geometry or something or linear algebra, right? Where as the vector W 2, 1. 
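As a brief aside on intuition number one before finishing intuition number two: it can be checked numerically. Here is a minimal sketch (assuming NumPy; the data and step size are made up) that runs stochastic gradient updates for logistic regression starting from parameters equal to zero and confirms the parameters stay a linear combination of the training examples.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))             # 20 examples, 5 features
y = (rng.random(20) > 0.5).astype(float)

theta = np.zeros(5)                       # initialize at 0
beta = np.zeros(20)                       # coefficient of each example inside theta
lr = 0.1
for _ in range(100):                      # stochastic gradient passes
    for i in range(20):
        g = y[i] - sigmoid(theta @ X[i])
        theta += lr * g * X[i]            # theta moves by a multiple of x^(i) ...
        beta[i] += lr * g                 # ... so just accumulate that multiple
# theta is exactly a linear combination of the training examples:
assert np.allclose(theta, X.T @ beta)
```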
So the vector W, you know, is sort of 2 to the right side and 1 up is always at- well, alright. The vector w is always at 90 degrees um, to the decision boundary and the decision boundary separates where you predict positive from where you predict negative. Okay? And so it it turns out that uh, if you have uh, to take a simple example, let's say you have um, two training examples, a positive example and a negative example. Right? Then by illus- X2 right? The linear algebra way of saying this is that the vector W lies in the span of the training examples. Okay? Oh and- and- and um, the way to picture this is that W sets the direction of the decision boundary and as you vary B then the position so you- the relative position, you know setting different values of B will move that decision boundary back and forth like this. And W uh, pins the direction of the decision boundary. Okay? Um, and just one last example for- for why this might be true um, is uh- so we're going to be working in very very high dimensional feature spaces. For this example, let's say you have uh, [NOISE] three features X1, X2, X3 right? And_ and later we'll get to where this is like 100 trillion right? Um, and let's say for the sake of illustration that all of your examples lie in the plane of X1 and X2. So let's say X3 is equal to 0. Okay, so let's say if all your training examples x equals 0, um, then the decision boundary, you know, will be- will be some sort of vertical plane that looks like this, right? So this is going to be the plane specifying, um, w transpose x plus b equals 0 when now w and x are three-dimensional. Um, and so the vector w, uh, will have a- should have W_3 equals 0 right. If- if one of the features is always 0, is always fixed then you know, W_3 should be equal to 0 and that's another way of saying that the vector w, you know, should be, um, represented as a- as a- in- in the span of just the features x1, x2, as a span of the training examples [NOISE] okay. All right, I'm not sure if- if either intuition 1 or intuition 2 convinces you, I think hopefully that's good enough. But this- the second intuition would be easier if you're used to thinking about vectors in high-dimensional feature spaces. Um, and again the formal proof of this result which is called the representation theorem is given in the lecture notes, but it's a very bizarre I don't know, it's actually- it's actually one of the most complicated- it's one- it's definitely the high end in terms of complexity of the- of the full derivation, of the formal derivation of this result. Um, so. [NOISE] All right, so let's assume that W can be written as follows. Um, so optimization problem was this, you wanna solve for w and b so that the norm of w squared is as small as possible and so that the a-this is bigger than the other one, right? Um, for every value of i. [NOISE] So let's see, norm of w squared. This is just equal to w transpose w, um, and so if you plug in this definition of W, you know, into these equations you have as the optimization objective min of one half, um, sum from i equals 1 through m. [NOISE] So this is w transpose W, um, which is equal to I guess sum of i's sum over j, alpha i, alpha j, y_i y_j. And then, um, X_i transpose X_j right? And, um, I'm going to take this. So this is an inner product between X_I and X_J. And I'm gonna use- I'm just gonna write it as this. Right, x_i this notation so x comma z, uh, equals x transpose z, uh, is the inner product between two vectors. 
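Plugging that expression for w into the objective, the norm term becomes exactly the double sum just written:

```latex
\tfrac{1}{2}\lVert w\rVert^{2}
= \tfrac{1}{2}\, w^{T} w
= \tfrac{1}{2} \sum_{i=1}^{m}\sum_{j=1}^{m}
  \alpha_{i}\alpha_{j}\, y^{(i)} y^{(j)} \,\big\langle x^{(i)},\, x^{(j)} \big\rangle
```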
This is maybe another alternative notation for writing inner products and when we derive kernels you see that, uh, expressing your algorithm in terms of inner products between features X is-is the key mathematical step needed to derive kernels and we'll use this slightly different sort of open-angle brackets close-angle brackets notation to denote the-the inner product between two different feature vectors. So that is the optimization objective, um, oh, and then this constraint it becomes something else i guess, this becomes, uh, uh, what is it, um, y_i times W which is, um, transpose x plus b is greater than 1. And again this simplifies or if you just multiply this out. [NOISE]. So just to make sure that mapping is clear, um- uh, all these pens are dying. All right I'll not [NOISE]. All right. So that becomes this and this becomes that, okay. Um, and the key property we're going to use is that, if you look at these two equations in terms of how we pose the optimization problem, the only place that the feature vectors appears is in this inner product. Right, um, and it turns out when we talked about the Kernel Trick and we talked with the application of kernels, it turns out that, um, if you can compute this very efficiently, that's when you can get away with manipulating even infinite dimensional feature vectors. We- we'll get to this in a second. But the reason we want to write the whole algorithm in terms of inner products is, uh, there'll be important cases where the feature vectors are 100 trillion dimensional but you can compute the- or even infinite dimensional but you can compute the inner product very efficiently without needing to loop over, you know, the other 100 trillion elements in an array, right? And- and we'll see exactly how to do that, um, later in- in- very shortly. Okay? [NOISE] So. All right, um, now it turns out that, uh, we've now expressed the whole, um, optimization algorithm in terms of these parameters Alpha, right? Defined here, uh, and b. So now the parameters Theta, now- now the parameter z is optimized for our Alpha, um, it turns out that by convention in the way that you see support vector machines referred to, you know, in research papers or in textbooks. It turns out there's a further simplification of that optimization problem which is that you can simplify to this, [NOISE] um, and the derivation to go from that to this is again relatively complicated. [NOISE] But it turns out you can further simplify the optimization problem I wrote there to this. Okay? And again, uh, you- you can copy this down if you want but this is also written in the lecture notes. And by convention this slightly simplified version optimization problem is called the dual optimization problem. Um, the way to simplify that optimization problem to this one that's actually done by, um, using convex optimization theory, uh, and- and- and again the derivation is written in the lecture notes but I don't want to do that here. If- if you want think of it as doing a bunch more algebra to simplify that problem to this one and consequently, you cancel out B along the way, it's a little more complicated than that but-but right, the full derivation is given in the lecture notes. 
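For reference, the dual optimization problem being referred to, as it appears in the lecture notes, is:

```latex
\max_{\alpha}\;\; W(\alpha) = \sum_{i=1}^{m} \alpha_{i}
  - \tfrac{1}{2} \sum_{i=1}^{m}\sum_{j=1}^{m}
    y^{(i)} y^{(j)}\, \alpha_{i}\alpha_{j}\, \big\langle x^{(i)},\, x^{(j)} \big\rangle
\quad \text{s.t.} \quad \alpha_{i} \ge 0,\;\; \sum_{i=1}^{m} \alpha_{i} y^{(i)} = 0
```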
Um, and so, um, finally, you know, the way you train for-the way you make a prediction, right, as you saw for the alpha i's and maybe for b, right, since you solve this optimization problem or that optimization problem for the Alpha i's and then to make a prediction, um, you need to compute h of W b of x for a new test example which is g of w transpose x plus b. Right. But because of the definition of w- w this is equal to g of, um, that's W transpose X plus b because this is w and so that's equal to g of sum over i Alpha_i y_i inner product between X_i and X plus b. And so once again, you know, once you have stored the Alphas in your computer memory, um, you can make predictions using just inner products again, right? And so the entire algorithm both the optimization objective you need to deal with during training. As well as how you make predictions is, um-uh, is expressed only in terms of inner products, okay? So we're now ready to apply kernels and sometimes in machine learning people sometimes we call this a kernel trick and let me just the other recipe for what this means, uh, step 1 is write your whole algorithm, [NOISE] um. [NOISE] In terms of X_i, X_j, in terms of inner products. Uh, and instead of carrying the superscript, you know X_i, X_j, I'm sometimes gonna write inner product between X and Z, right? Where X and Z are supposed to be proxies for two different training examples X_i and X_j but it simplifies the notation, uh, right a little bit. Two, um, let there be some mapping, um, from your original input features X to some high dimensional set of features Phi. Um, and so one example would be, let's say you try to predict the housing prices or predicting a house will be sold in the next month. So maybe X in this case is the size of the house, uh, or maybe is, uh, size and yeah, let write. Maybe X is the size of a house, and so you could, um, take this 1D feature and expand it to a high dimensional feature vector with X, X squared, X cubed, X to the 4th, right? So this would be one way of defining a high dimensional feature mapping. Or another one could be, if you have two features X_1 and X_2, uh, corresponding to the size of the house and number of bedrooms, now you can map this to different Phi X, which may be X_1, X_2, X_1 times X_2, X_1 squared X_2, uh, X_1 X_2 squared, and so on. They are kind of polynomials, set of features, or maybe another set of features as well, okay? And what we'll be able to do is, work with, um, feature mappings, Phi of X, where the original input X may be 1D or 2D or, or whatever, and Phi of X could be, you know, 100,000 dimensional or infinite dimensional. That we'll be able to do this very efficiently right. Or even infinite dimensional, okay? So I guess we will get some concrete examples of this later, but I want to give you the overall recipe. And then, what we're going to do is to find a way to compute K of X comma Z, equals Phi of X transpose Phi of Z. So this is called the kernel function. And what we're gonna do is, we'll see that there are clever tricks so that you can compute the inner product between X and Z even when Phi of X and Phi of Z are incredibly high dimensional, right? We'll see an example of this in a- in- in very very soon. And step four is, um, replace X, Z in algorithm with K of X, Z, okay? 
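A minimal sketch of that kernelized prediction rule in code; the function and argument names are illustrative, and `kernel(x1, x2)` stands for any kernel computing the inner product of phi(x1) and phi(x2) without forming phi explicitly.

```python
import numpy as np

def svm_predict(x, X_train, y_train, alpha, b, kernel):
    # h(x) = g( sum_i alpha_i * y^(i) * K(x^(i), x) + b ), with g = sign.
    s = sum(a * yi * kernel(xi, x)
            for a, yi, xi in zip(alpha, y_train, X_train)) + b
    return 1 if s >= 0 else -1
```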
Um, because if you could do this then what you're doing is, you're running the whole learning algorithm on this high dimensional set of features, um, and the problem with swapping out X for Phi of X, right, is that, it can be very computationally expensive if you're working with 100,000 dimensional feature vectors, right. I,I- even by to this standards, you know, 100,000, it's. it's not the biggest I've seen, I've seen, actually, biggest I've seen that you have a billion features, uh, but even by today's standards, 100,000 features is actually quite a lot. Um, uh, and- and if you're launching I said, just 100,000 is, is- this is a lot- lot of large number of features, I guess. Um, and the problem of using this is it's quite computationally expensive, to carry around these 100,000 or million dimensional or 100 million dimensional feature vectors or whatever. Um, but that's what you would do if you were to swap in Phi of X, you know in the naive straightforward way for X, but what we'll see is that, if you can compute K of X, Z then you could, because you've written your whole algorithm just in terms of inner products, then you don't ever need to explicitly compute Phi of X, you can always just compute these kernels. Yeah. [inaudible] Let me get to that later, you know, I will go for some kernels and I will talk about uh, bias-variance probably on Wednesday. Yeah. I think the no free lunch theorem is a fascinating theoretical concept but I think that it has been, I don't know, it's been less useful actually because I think we have inductive biases that turn out to be useful. There's a famous theorem in learning theory called no free lunch. It was like 20 years ago. That basically says that, in the worst case, learning algorithms do not work [NOISE]. For any learning algorithm, I can come up with some data distribution so that your learning algorithm sucks. That, that's roughly the no free lunch theorem, proved about like 20 years ago. But it turns out most of the world- most of the time, the universe is not that hostile toward us. So- so, yeah, so as the learning algorithms turned out okay [LAUGHTER]. Um, all right, let's go through one example of kernels. Um, so for this example, let's say that your offer is not input features was three-dimensional X_1, X_2, X_3. And let's say I'm gonna choose the feature mapping, Phi of X to be, um, o- so pair-wise, um, monomial terms. So I'm gonna choose X_1 times X_1, X_1 X_2, X_1 X_3, X_2 X_1, all. Okay. And there are a couple of duplicates so X_1 X_3 is equal to X_3 X_1 but I'll just write it out this way. And so notice that, er, if you have- if X is in R_n, right? Then Phi of X is in R_n squared, right. So got the three-dimensional features to nine dimensional. And I'm using small numbers for illustration. In practice, think of X as 1,000 dimensional and so this is now a million. Or think of this as maybe 10,000 and this is now like 100 million, okay. So n squared features is much bigger [NOISE]. Um, and then similarly, Phi of Z is going to be Z_1 Z_1, Z_1 Z_2, okay? So we've gone from n features like 10,000 features, to n squared features which, in this case, 100 million features. Um, so because there are n squared elements, right? You will need order n squared time to compute Phi of X or to compute phi X transpose Phi of Z explicitly, right? 
So if you want to compute the inner product between Phi of X and Phi of Z and you do it explicitly, in the obvious way, it will take order n squared time to compute all of these products and add them up - actually about n squared over 2, because a lot of the terms are duplicated, but it's order n squared. Let's see if we can find a better way to do that. What we want is to write out the kernel K of x, z, which is Phi of x transpose Phi of z, and what I'm going to prove is that this can be computed as x transpose z, squared. The cool thing is that, remember, x is n-dimensional and z is n-dimensional, so x transpose z squared is an order n time computation: taking x transpose z is just an inner product of two n-dimensional vectors, and then x transpose z is a real number and you just square that number. So that's an order n time computation. So let me just prove this step. X transpose z, squared, is equal to x transpose z times x transpose z - it's x transpose z times itself - which is the sum over i of x_i z_i, times the sum over j of x_j z_j. Rearranging the sums, this equals the sum from i equals 1 through n, sum from j equals 1 through n, of x_i z_i x_j z_j, which in turn is the sum over i, sum over j, of x_i x_j times z_i z_j. And so what this is doing is marching through all possible pairs of i and j, multiplying x_i x_j with the corresponding z_i z_j, and adding that up. But of course, if you were to compute Phi of x transpose Phi of z, what you would do is take this entry and multiply it with that entry and add it to the sum, then take the next pair and add it to the sum, and so on, until you've gone down both lists. So this formula is just marching down these two lists, multiplying, multiplying, multiplying, and adding it all up - which is exactly Phi of x transpose Phi of z, okay? So this proves that you've turned what was previously an order n squared time calculation into an order n time calculation. Which means that if n was 10,000 - sorry, that's my phone buzzing, this is really loud - instead of needing to manipulate the roughly 100-million-dimensional vectors Phi of x, you can do everything manipulating only 10,000-dimensional vectors, okay? Now, a few other examples of kernels. It turns out that if you take this kernel - we had K of x comma z equals x transpose z, squared - and add a plus c inside, where c is a constant, just some fixed real number, that corresponds to modifying your features as follows: instead of just the second-degree terms, the pairs x_i x_j, adding the plus c corresponds to also adding x_1, x_2, x_3 to your set of features. Technically there's a weighting on these: they come in as root 2c times x_1, root 2c times x_2, root 2c times x_3, and then there's a constant feature c in there as well. You can prove this yourself. And if this is your new definition of Phi of x, you make the same change to Phi of z - root 2c z_1, and so on.
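Here is a small numerical check (my own illustration, with made-up vectors) that the kernel (x transpose z) squared really does equal Phi of x transpose Phi of z for the pairwise-monomial feature map, and likewise for the plus-c variant with the root-2c weighted first-degree features:

```python
import numpy as np

def phi_pairs(x):
    # All n^2 pairwise products x_i * x_j, as in the feature map above.
    return np.outer(x, x).ravel()

def phi_pairs_plus_c(x, c):
    # Pairwise products, plus sqrt(2c) * x_i terms, plus the constant c.
    return np.concatenate([np.outer(x, x).ravel(),
                           np.sqrt(2 * c) * x,
                           [c]])

x = np.array([1.0, 2.0, 3.0])
z = np.array([0.5, -1.0, 2.0])
c = 1.5

print(np.dot(x, z) ** 2, np.dot(phi_pairs(x), phi_pairs(z)))        # same value
print((np.dot(x, z) + c) ** 2,
      np.dot(phi_pairs_plus_c(x, c), phi_pairs_plus_c(z, c)))       # same value
```

The left-hand side of each comparison is the order n kernel computation; the right-hand side explicitly builds the order n squared feature vectors and takes their inner product.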
Then if you take the inner product of these augmented feature vectors, it can be computed as x transpose z plus c, all squared. And so the role of the constant c is that it trades off the relative weighting between the second-degree terms, the x_i x_j, and the first-degree terms, the x_1, x_2, x_3. Other examples: if you choose K of x, z to be x transpose z plus c, raised to the power of d, notice that this is still an order n time computation - x transpose z takes order n time, you add a number to it, and you raise it to the power of d - so you can compute it in order n time. But this now corresponds to a Phi of x whose number of terms turns out to be n plus d choose d, and it turns out this Phi of x contains all the monomial features up to order d. By which I mean: if, say, d is equal to 5, then Phi of x contains all the features of the form x_1 x_2 x_5 x_17 x_29 - that's a fifth-degree term - or x_1 x_2 squared x_3 x_18, which is also a fifth-order monomial, as these are called. So if you choose this as your kernel, it corresponds to constructing Phi of x to contain all of these features - all the monomial terms up to a fifth-degree polynomial - and there are a huge number of them: it turns out there are n plus d choose d of them, which is roughly n plus d to the power of d, very roughly. So this is a very, very large number of features, but your computation doesn't blow up exponentially even as d increases, okay? So what a support vector machine is, is taking the optimal margin classifier that we derived earlier and applying the kernel trick to it: optimal margin classifier plus the kernel trick, that is the support vector machine, okay? And so if you choose one of these kernels, you can run an SVM in these very, very high-dimensional feature spaces - these 100-trillion-dimensional feature spaces - but your computational time scales only linearly, as order n, in the dimension of your input features x, rather than as a function of the 100-trillion-dimensional feature space in which you're actually building a linear classifier, okay? So why is this a good idea? Let me show a quick video to give you intuition for what this is doing. I think the projector takes a while to warm up. Any questions while we wait? Yeah? [inaudible] Uh, yes - a kernel function corresponds to one particular feature mapping, up to trivial differences: if you have a feature mapping where the features are permuted or something, the kernel function stays the same, so there are trivial transformations like that, but if you have a totally different feature mapping, you would expect to need a totally different kernel function. Cool. So I want to give you a visual picture of what this is doing. All right - this is a YouTube video that Kian Katanforoosh, who teaches CS230, found and suggested I use.
So I don't know who Udi Aharoni is, but this is a nice visualization of what a support vector machine is doing. Let's see: the learning algorithm is trying to separate the blue dots from the red dots. The blue and the red dots can't be separated by a straight line in the plane, but you take the points in the plane and use a feature mapping Phi to throw them into a much higher-dimensional space - so the points now live in a three-dimensional space. In that three-dimensional space you can then find w - w is now three-dimensional - by applying the optimal margin classifier in this three-dimensional space, and it separates the blue dots from the red dots. And if you now examine what this is doing back in the original space, your linear classifier actually defines that elliptical decision boundary. Does that make sense? So you're taking the data, mapping it to a much higher-dimensional feature space - the visualization is three-dimensional, but in practice it can be 100 trillion dimensions - and then finding a linear decision boundary in that 100-trillion-dimensional space, which is going to be a hyperplane, like a straight line or a plane. And then when you look at what you just did in the original feature space, you've found a very non-linear decision boundary, okay? So this is why - and again, you can only visualize relatively low-dimensional feature spaces, even on a display like that - but you find that if you use an SVM with a kernel, you can learn very non-linear decision boundaries like that. It is a linear decision boundary in a very high-dimensional space, but when you project it back down to 2D, you end up with a very non-linear decision boundary, okay? All right. Yeah? [inaudible] Oh sure, yes - so the question is: in this high-dimensional space represented by the feature mapping Phi of X, does the data always have to be linearly separable? So far we're pretending that it does; I'll come back and fix that assumption later today. Yeah. Okay, so now, how do you make kernels? Here's some intuition you might have about kernels. If X and Z are similar - if the two examples X and Z are close to each other, or similar to each other - then K of x, z, which is the inner product between Phi of X and Phi of Z, should presumably be large. And conversely, if X and Z are dissimilar, then K of x, z should maybe be small. Because the inner product of two very similar vectors pointing in the same direction should be large, and the inner product of two dissimilar vectors should be small, right? So this is one guiding principle behind a lot of kernels: if this is Phi of x and this is Phi of z and they point in similar directions, the inner product is large, but if they point off in sort of unrelated directions, the inner product will be small - that's just how vector inner products work. So what if we just pull a function out of thin air, say K of x, z equals e to the negative norm of x minus z, squared, over 2 sigma squared?
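For reference, here is that similarity function written out as code - a minimal sketch of my own, where sigma is the bandwidth parameter controlling how quickly the similarity falls off with distance:

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    # exp(-||x - z||^2 / (2 sigma^2)): close to 1 when x and z are near
    # each other, close to 0 when they are far apart.
    diff = np.asarray(x) - np.asarray(z)
    return np.exp(-np.dot(diff, diff) / (2 * sigma ** 2))
```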
So this is one example of a similarity measure: if you think of kernels as similarity functions, this is just another similarity function we've made up, and it does have the property that if X and Z are very close to each other then this is close to e to the 0, which is 1, and if X and Z are very far apart then this is very small. So this function does satisfy those criteria, and the question is: is it okay to use this as a kernel function? It turns out that you can use a function K of x, z as a kernel function only if there exists some Phi such that K of x, z equals Phi of x transpose Phi of z. We derived the whole algorithm assuming this to be true, and it turns out that if you plug in a kernel function for which this isn't true, then the derivation we wrote down breaks down, and the optimization problem can have very strange solutions that don't correspond to a good classifier at all. And so this puts some constraints on what kernel functions we can use. For example, one thing a valid kernel must satisfy is that K of x, x - which is Phi of x transpose Phi of x - had better be greater than or equal to 0, because the inner product of a vector with itself had better be non-negative. So if K of x, x is ever less than 0, then this is not a valid kernel function, okay? More generally, there's a theorem that characterizes when something is a valid kernel. Let me just outline one direction of the proof very briefly. Let x_1 up to x_d be any d points, and let K be the kernel matrix - sorry about the overloading of notation: K represents the kernel function, and I'm going to use K to represent the kernel matrix as well; it's sometimes also called the Gram matrix - whose entry K_ij is equal to the kernel function applied to two of those points, x_i and x_j. So you have d points, you apply the kernel function to every pair of those points, and you put the values in a big d-by-d matrix like that. It turns out that, given any vector z - I think you've seen something similar to this in problem set one - z transpose K z is the sum over i, sum over j, of z_i K_ij z_j. If K is a valid kernel function, so there is some feature mapping Phi, then this is equal to the sum over i, sum over j, of z_i times Phi of x_i transpose Phi of x_j, times z_j. And then, by a couple more steps: I'm going to expand out that inner product, Phi of x_i transpose Phi of x_j, as the sum over k of Phi of x_i, element k, times Phi of x_j, element k. And then rearranging the sums - I'm running out of whiteboard, let me do it on the next board - this is the sum over k of the sum over i, sum over j, of z_i times Phi of x_i subscript k, times Phi of x_j subscript k, times z_j, which is the sum over k of the quantity sum over i of z_i Phi of x_i subscript k, all squared. And therefore this must be greater than or equal to 0. So this proves that the kernel matrix K is positive semi-definite, okay? And more generally it turns out that this is also a sufficient condition for a function K to be a valid kernel function. Let me just write this out; this is called Mercer's theorem, M-E-R-C-E-R.
So: K is a valid kernel function - that is, there exists a Phi such that K of x, z equals Phi of x transpose Phi of z - if and only if, for any d points x_1 up to x_d, the corresponding kernel matrix is positive semi-definite, which we write as K greater-than-or-equal-to 0, okay? I proved just one direction of this implication: the proof outline shows that if it is a valid kernel function, then the kernel matrix is positive semi-definite. The outline didn't prove the opposite direction - it's an if-and-only-if, which asserts both directions, and the algebra we did just now proves one direction; I didn't prove the reverse direction - but it does turn out to be an if-and-only-if condition. And so this gives you one test for whether or not something is a valid kernel function, okay? And it turns out that the kernel I wrote up there, K of x, z equals e to the negative norm of x minus z squared over 2 sigma squared, is a valid kernel. This is called the Gaussian kernel, and it's probably the most widely used kernel. Well, actually, maybe the most widely used kernel is the linear kernel, which just uses K of x, z equals x transpose z - that is, Phi of x equals x, no high-dimensional features. The linear kernel just means you're not using a high-dimensional feature mapping, or the feature mapping is equal to the original features; it's actually a pretty commonly used kernel function, but you're not taking advantage of kernels, in other words. After the linear kernel, the Gaussian kernel is probably the most widely used one, and it corresponds to a feature space that is infinite dimensional. This particular kernel function corresponds to using all the monomial features - x_1, x_1 x_2, x_1 squared x_2, x_1 squared x_5 to the 10th, and so on, up to x_1 to the 10,000 times x_2 to the 17, whatever - going to arbitrarily high degree, but giving a smaller weighting to the very, very high-degree ones, which is part of why it is so widely used. Okay, great. Toward the end I'll give some other examples of kernels. It turns out that the kernel trick is more general than the support vector machine. It was really popularized by the support vector machine, because researchers - Vladimir Vapnik and Corinna Cortes - found that applying the kernel trick to the optimal margin classifier makes for a very effective learning algorithm. But the kernel trick is actually more general: if you have any learning algorithm that you can write in terms of inner products like this, then you can apply the kernel trick to it - and you'll play with this for a different learning algorithm in the programming assignments as well. The way to apply the kernel trick is: take a learning algorithm, write the whole thing in terms of inner products, and then replace each inner product with K of x, z for some appropriately chosen kernel function. And all of the discriminative learning algorithms we've learned so far can be written in this way, so you can apply the kernel trick to them.
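Before moving on, here is a quick numerical illustration of Mercer's theorem (my own sketch, not from the lecture): build the kernel (Gram) matrix for a candidate kernel on a handful of random points and confirm it is symmetric positive semi-definite. Passing this check on one set of points is only a necessary condition - a true proof of validity has to hold for every possible set of points.

```python
import numpy as np

def gram_matrix(kernel, points):
    # Kernel (Gram) matrix: K[i, j] = kernel(x_i, x_j).
    d = len(points)
    return np.array([[kernel(points[i], points[j]) for j in range(d)]
                     for i in range(d)])

def passes_mercer_check(kernel, points, tol=1e-8):
    # Necessary condition from Mercer's theorem: the Gram matrix on these
    # points must be symmetric and have no negative eigenvalues.
    K = gram_matrix(kernel, points)
    return np.allclose(K, K.T) and np.linalg.eigvalsh(K).min() >= -tol

pts = [np.random.randn(3) for _ in range(20)]
linear = lambda x, z: np.dot(x, z)
gaussian = lambda x, z: np.exp(-np.sum((x - z) ** 2) / 2.0)   # sigma = 1
print(passes_mercer_check(linear, pts), passes_mercer_check(gaussian, pts))
```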
So linear regression, logistic regression, everything in the generalized linear model family, the perceptron algorithm - all of those algorithms, you can actually apply the kernel trick to. Which means that you could apply linear regression in an infinite-dimensional feature space if you wished. And later in this class we'll talk about principal components analysis; it turns out that's yet another algorithm that can be written only in terms of inner products, and so there's an algorithm called kernel PCA, kernel principal component analysis. If you don't know what PCA is, don't worry about it - we'll get to it later. But a lot of algorithms can be married with the kernel trick, so that you implicitly run the algorithm even in an infinite-dimensional feature space, without needing your computer to have an infinite amount of memory or do an infinite amount of computation. Actually, the single place this is most powerfully applied is the support vector machine: in practice the kernel trick is applied all the time to support vector machines and less often to other algorithms. All right - any questions before we move on? No? Okay. So the last two things I want to do today: one is to fix the assumption we made that the data is linearly separable. You know, sometimes you don't want your learning algorithm to have zero errors on the training set. When you take low-dimensional data and map it to a very high-dimensional feature space, the data does become much more separable, but if your data set is a little bit noisy - if your data looks like this - maybe you want a decision boundary like that, and you don't want the algorithm to try so hard to separate every last example that it defines a really complicated decision boundary like that. So sometimes, either in the low-dimensional space or in the high-dimensional space Phi, you don't actually want the algorithm to separate your data perfectly - and sometimes, even in the high-dimensional feature space, your data may not be linearly separable; you don't want to insist that the algorithm have zero error on the training set. And so there's an algorithm called the L1 norm soft margin SVM, which is a modification to the basic algorithm. The basic algorithm was: minimize one-half the norm of w squared, subject to y_i times w transpose x_i plus b being greater than or equal to 1, okay? And what the L1 norm soft margin SVM does is the following. Remember, this quantity y_i times w transpose x_i plus b is the functional margin - if you divide it by the norm of w, it becomes the geometric margin - so this optimization problem was saying: make sure each example has functional margin greater than or equal to 1. In the L1 soft margin SVM we relax this: we say that this only needs to be bigger than 1 minus xi_i, where xi is a Greek letter. And then we modify the cost function to be one-half the norm of w squared plus C times the sum over i of the xi_i's, where these xi_i's are required to be greater than or equal to 0, okay? So remember, if the functional margin is greater than 0, it means the algorithm is classifying that example correctly.
So long as this quantity is greater than 0, y_i and w transpose x_i plus b have the same sign - either both positive or both negative; that's what it means for a product of two things to be greater than zero - so as long as this is bigger than 0, the algorithm is classifying that example correctly. And the SVM is asking it not just to classify correctly, but to classify correctly with a functional margin of at least 1. If you allow xi_i to be positive, then you're relaxing that constraint. But you don't want the xi_i's to be too big, which is why you add to the optimization objective a cost for making the xi_i's too big, and you optimize this as a function of w, b, and the xi's - these are Greek letters xi. And if you draw a picture: in this example, with that being the optimal decision boundary, it turns out that these three examples would be equidistant from the straight line - because if they weren't, you could fiddle with the straight line to improve the margin a little bit more - and those few examples have functional margin exactly equal to 1. This example over there has functional margin equal to 2, and the further-away examples have even bigger functional margins. What this optimization objective is saying is that it's okay if you have an example here whose functional margin is a little bit less than 1 - everything on the margin has functional margin 1, and an example in here has functional margin a little bit less than 1 - and by setting xi_i to, say, 0.5, the optimization lets itself get away with a functional margin less than 1 on that example. One other reason you might want to use the L1 norm soft margin SVM is the following. Let's say you have a data set that looks like this. It seems like that would be a pretty good decision boundary, right? You've measured a lot of examples, so there's a lot of evidence for it. But if you have just one outlier, say over here, then technically the data set is still linearly separable - sorry, I seem to be killing these pens as well - and if you really insist on separating this data set, you can choose that other decision boundary. But then the basic optimal margin classifier allows the presence of one training example to cause a dramatic swing in the position of the decision boundary. Because the original optimal margin classifier optimizes for the worst-case margin, one example - by being the worst-case training example - can have a huge impact on your decision boundary. The L1 soft margin SVM allows the SVM to keep the decision boundary closer to the blue line even when there's one outlier, so it's much more robust to outliers, okay? And then if you go through the representer theorem derivation - representing w as a function of the alphas and so on - it turns out that the problem simplifies to the following. After the whole representer theorem derivation, this is just what we had previously.
I've not changed anything so far - this is exactly what we had. And it turns out that the only change is that we end up with an additional condition on the alphas. If you go through that simplification, now that you've changed the algorithm to have this extra term, then the new form - this is called the dual form of the optimization problem - differs only in that you end up with this additional constraint: the alphas are constrained to be between 0 and C. And it turns out that today there are very good software packages that solve this optimization problem for you. Once upon a time in machine learning, you needed to worry about whether your code for inverting matrices was good enough - when code for inverting matrices was less mature, that was one more thing you had to think about - but today linear algebra packages have gotten good enough that when you invert a matrix, you just invert the matrix; you don't have to worry too much about it. Similarly, in the early days of SVMs, solving this problem was genuinely hard and you had to worry about whether your optimization package was up to it, but today there are very good numerical optimization packages that just solve this problem for you, and you can use them without worrying about the details that much. All right, so that's the L1 norm soft margin SVM. And this parameter C is something you need to choose. We'll talk on Wednesday about how to choose it, but it trades off how much you want to insist on getting the training examples right versus saying it's okay if you misclassify a few of them - we'll discuss how to choose a parameter like C on Wednesday, when we discuss bias and variance. All right. The last thing I'd like you to see today is just a few examples of SVM kernels. It turns out that the SVM with the polynomial kernel works quite well - this is K of x, z equals x transpose z to the d, which is called the polynomial kernel - and then there's the Gaussian kernel, which is really the most widely used one. In the early days of SVMs, one of the proof points for SVMs came from handwritten digit classification, which the field of machine learning was doing a lot of work on. A digit is a matrix of pixels with values that are 0 or 1, or maybe grayscale values, and if you take the pixel intensity values and list them out - 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, and so on, all the pixel intensity values - then that can be your feature vector x, and you feed it to an SVM using either of these kernels and it does not too badly as a handwritten digit classifier. There's a classic data set called MNIST, which is a classic benchmark in the history of machine learning, and it was a very surprising result many years ago that a support vector machine with a kernel like this does very well on handwritten digit classification. In the past several years we've found that deep learning algorithms, specifically convolutional neural networks, do even better than the SVM.
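As a concrete illustration of using one of those off-the-shelf solvers, here is a short scikit-learn sketch: SVC solves the box-constrained soft-margin dual above, kernel='rbf' is the Gaussian kernel, and C is the soft-margin trade-off parameter. The small 8x8 digits dataset stands in for MNIST here, and the particular values of C and gamma are illustrative choices of mine, not recommendations from the lecture.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)            # 8x8 pixel intensities, flattened
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gaussian (RBF) kernel SVM with a soft margin; C and gamma are illustrative.
clf = SVC(kernel="rbf", C=5.0, gamma=0.001)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```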
But for some time, SVMs were the best algorithm for this, and they're very easy to use - quite turnkey, without a lot of parameters to fiddle with - so that's one very nice property of them. More generally, a lot of the most innovative work on SVMs has gone into the design of kernels. Here's one example. Let's say you want a protein sequence classifier. Protein sequences are made up of amino acids - a lot of our bodies are made of proteins, and proteins are just sequences of amino acids, of which there are 20. But to simplify the description and not worry too much about the biology - I hope the biologists don't get mad at me - I'm going to pretend there are 26 amino acids, because there are 26 letters in the alphabet. So I'll use the letters A through Z to denote amino acids, even though I know there are supposed to be only 20; it's just easier to talk about 26. So a protein is a sequence of letters, and proteins can have very variable lengths - some can be very, very long and some can be very short. The question is: how do you represent the feature vector x? The goal is to take an input x and make a prediction about this particular protein - say, what is the function of this protein? So here's one way to design a feature vector: I'm going to list out all combinations of four amino acids - AAAA, AAAB, and so on; you can tell this will take a while - down to AAAZ, then AABA, and eventually BAJT, TSTA, all the way down to ZZZZ. Then I construct Phi of x according to the number of times each of these length-four sequences appears in the protein. So for example, if BAJT appears twice, I put a 2 in that position; TSTA appears once, so I put a 1 there; and there are no AAAAs, no AAABs, no AAACs, and so on, so those entries are 0, okay? So this is a 26-to-the-4 - or, with the real alphabet, 20-to-the-4 - dimensional feature vector, and 20 to the 4 is 160,000, so this is a very high-dimensional feature vector, quite expensive to compute explicitly. And it turns out that, using dynamic programming, given two amino acid sequences you can compute Phi of x transpose Phi of z - that's K of x, z - efficiently. There's a dynamic programming algorithm for doing this; the details aren't important for this class, but if any of you have taken an advanced CS algorithms course and learned about the Knuth-Morris-Pratt algorithm - that's Don Knuth, a Stanford emeritus professor - the DP algorithm is quite similar to that. And using this is actually a pretty decent way to take as input a sequence of amino acids and train a supervised learning algorithm to make a binary classification on amino acid sequences, okay?
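The dynamic-programming kernel itself is beyond what we can cover here, but this is a naive sketch of the same idea - counting length-4 substrings in a sparse dictionary so the 26-to-the-4-dimensional Phi of x is never materialized. It computes the same inner product of counts, just without the DP speedups the lecture alludes to, and the example strings are made up:

```python
from collections import Counter

def four_gram_counts(seq):
    # phi(x) as a sparse dictionary: how many times each length-4 substring
    # appears. Only nonzero entries are stored, so the full 26^4-dimensional
    # vector is never built explicitly.
    return Counter(seq[i:i + 4] for i in range(len(seq) - 3))

def string_kernel(seq_a, seq_b):
    # Inner product of the two sparse count vectors.
    ca, cb = four_gram_counts(seq_a), four_gram_counts(seq_b)
    return sum(ca[g] * cb[g] for g in ca if g in cb)

print(string_kernel("BAJTSTABAJT", "TSTABAJTQQQ"))
```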
So as you apply support vector machines, one of the things you'll see is that, depending on the input data you have, there can be innovative kernels you can use to measure the similarity of two amino acid sequences, or of two of whatever else, and then use that to build a classifier even on very strangely shaped objects that don't naturally come as feature vectors, okay? And another example: if the input x is a histogram - say you have histograms of people's demographics for two different countries - it turns out there is a kernel that takes the bin-wise minimum of the two histograms and sums it up, giving a kernel function that takes two histograms as input and measures how similar they are. So there are many different kernel functions for the many different, unique types of inputs you might want to classify, okay? So that's it for SVMs - a very useful algorithm - and what we'll do on Wednesday is continue with more advice on all of these learning algorithms; we'll talk about bias and variance to give you more advice on how to actually apply them. So let's break, and I look forward to seeing you on Wednesday. |
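Here is a short sketch of the histogram-intersection kernel mentioned just above; the two example histograms are invented demographic distributions, purely for illustration:

```python
import numpy as np

def histogram_intersection_kernel(h1, h2):
    # Sum of bin-wise minima: large when the two histograms overlap a lot,
    # small when they barely overlap.
    return float(np.sum(np.minimum(h1, h2)))

age_hist_country_a = np.array([0.10, 0.25, 0.30, 0.20, 0.15])
age_hist_country_b = np.array([0.05, 0.15, 0.35, 0.25, 0.20])
print(histogram_intersection_kernel(age_hist_country_a, age_hist_country_b))
```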
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_14_ExpectationMaximization_Algorithms_Stanford_CS229_Machine_Learning_Autumn_2018.txt | All right. [NOISE] Um, let's get started. So, um, let's see, logistical reminder, uh the class midterm, um, is this Wednesday and it's 48-hour take-home midterm. Um, and the logistical details you can find, uh, at this Piazza post, okay? So the midterm will start Wednesday evening. You have 48 hours to do it and then submit it online through Gradescope, uh, and because of the midterm, there won't be a section, uh, this Friday, okay? Oh and the midterm will cover everything up to and including EM, uh, which we'll spend most of today talking about, okay? Certainly don't look so stressed. It'll be fun. [LAUGHTER]. Maybe. All right. Um, so what I'd like to do today is start our foray into, uh, unsupervised learning. Uh, so far I've spent a lot of time on supervised learning algorithms including advice on how to apply supervised learning algorithms. These pens are great. In which you'd have, you know, positive examples and negative examples and you run logistic regression or something or SVM or something to find the line- find the decision boundary between them. Um, in unsupervised learning, you're given unlabeled data. So rather than given data with x and y, you're given only x. And so your training set now looks like X1, X2, up to Xm. And you're asked to find something interesting about the data. Uh, so the first unsupervised learning algorithm we'll talk about is clustering in which given a dataset like this, hopefully, we can have an algorithm that can figure out that this dataset has two separate clusters. Um, and so one of the most common uses of clustering is, uh, market segmentation. If you have a website, you know, selling things online, you have a huge database of many different users and run clustering to decide what are the different market segments, right? So there may be, you know, people of a certain age range, of a certain gender, people of a different age range, different level of education, people that live in the East Coast versus West Coast versus other parts of the country. But by clustering you can group people into, uh, different groups, right? So, um, I want to show you an animation of, um, really the most commonly used er, er, clustering algorithm called k-means clustering. And let me show you an animation of what k-means does and then we'll write- write out the math an- an- and tell you how you can implement it. So, um, let's say you're given data like this. So all these are unlabeled examples. Uh, so just x plotted here. And we want an algorithm to try to find maybe the two clusters here. Uh, the first step of k-means is to pick two points denoted by the two crop- two crosses called cluster centroids and, uh, the cluster centroids are your best guess for where the centers of the two clusters you're trying to find. And then k-means is an iterative algorithm and repeatedly you do two things. So first thing is, go through each of your training examples. Oh I'm sorry. Oh okay. Thank you. All right. Let me know if that happens again. Okay. Right. So, uh, you guys saw that, right? So. Right near two cluster centroids. So the first thing you do is go through each of your training examples, the green dots and for each of them you color them either red or blue depending on which is the closer cluster centroid. 
So here we've taken every dot and colored it in red or blue depending on which side it is- which cluster centroid it's closer to. And then, uh, the second thing you do, uh, is, uh, look at all the blue dots and compute the average, right? Just find the mean of all the blue dots, um, and move the blue cluster centroid there. And similarly, look at all the red dots- and look at only the red dots and find a mean- finding the- oh now what's wrong with this? Let's say- oh this thing though is very strange. Right. Apparently, if I keep moving my mouse, it doesn't do that. All right. Thank you. Uh, and then find the mean of all the red dots and move your, uh, red cluster centroid there. So let me do that, right? So the cluster centroids move as follows, um, to the mean of the red and the blue dots and this is just a standard arithmetic average, right? Uh, and then you repeat again where you, er, look at each of the dots and color it either red or blue depending on which cluster centroid is closer. So when I recolor every point based on, you know, what's closer, so that's the new set of colors, right? Um, and then the second part of the algorithm was again look at the blue dots, find the mean, look at the red dots, find the mean, and then move the cluster centroids over. [NOISE] Excuse me, uh, to that mean, okay? Um, and so, er, and it turns out if you keep running the algorithm, nothing changes. So the algorithm has converged. So if you look at this picture and you repeatedly color each point red or blue depending on which cluster centroid is closer, nothing changes. And you repeatedly look at each of the two clusters of color dots and compute a mean and move the clu- clu- clusters there, nothing changes. So this algorithm has converged even if you keep on running these two steps, okay? So um, let's see. Let's write down in math what we just did. [NOISE] All right. So this is, um, a clustering algorithm and specifically this is a k-means clustering algorithm. So your dataset now does not come with any labels. Um, and so in, uh, k-means, step one is, uh, initialize the cluster centroids, right? I'm gonna call them Mu_1 up to Mu_k, uh, randomly, okay? So this was a step where you plop down the red cross and the blue cross. Uh, and when they did it on the PowerPoints, you know, I did it as if we're just choosing these as random vectors. In practice a good way of the- actually the most common way to select a random initial cluster centroid isn't quite what I showed, is to actually pick k examples out of your training set and just set the cluster centroids to be equal to k randomly chosen the examples, right? So in a low-dimensional space like a 2D plot, you know, you can do on the diagram, it doesn't really matter but when you work with very high dimensional datasets, the more common way to initialize these is just pick, you know, k training examples and set the cluster centroids to be at exactly the location of those examples. But then low dimensionless spaces it- it- you know, it doesn't make a big difference. Um, and then next you repeat until convergence. Um, step one is- right? So this is a- well now I'll just write this down, okay? Um, so does that make sense? So the two steps you would alternate between the first one is set Ci for every value of i. So for every example, set Ci equal to, you know, either 1 or 2 depending on whether, er, that example Xi is closer to cluster centroid one or cluster centroid two, right? So ju- just take each point and color either red or blue. 
Uh, or and represent that by setting Ci equals 1 or 2, er, if you have two clusters. If k is equal to 2, right? Oh yeah. [inaudible] Oh. Er, the notes say L1 norm squared? From this morning? Uh, what notes were sent out this morning? [inaudible]. Oh that's red. It shouldn't be L1 norm. Uh, if it says L1 norm, that's a mistake. Sorry about that. Er, but usually- an- and it turns out whether you use L2 norm and L2 norm squared that gives you the same answer because the algorithm is the same either way. But it is usually- do we have a typo on the notes? [inaudible]. Oh I see. Oh got it. Oh- oh- oh okay. Let's say in notes we wrote that. Okay. Cool. But by default, when we write that norm we actually use- we mean L2 norm. Yeah, right? But by- by default this is the L2 norm of x if is unspecified. Er, if it's L1 norm, we usually write this. So L2 norm is more common and with or without the square it you get the same result. Okay. Cool. Thank you. All right. So let's color the dots. Um, paint each dot either red or blue. Uh, and then, um, uh, for this, um, this is, you know, some key examples and take all the examples assigned to a certain cluster, right? Assigned to cluster j and set Mu_j to be average of all the points assigned to that cluster J. Yeah. [inaudible] Oh sure. Er, no that does not work. Uh, you know, I don't think -I don't know if I have- all right. Now, that the black markers are working, um, this is better? All right let me try to use this? Is there a part of this that's unclear? If this part you can't see, I'll write it out more clearly. Yes. Go ahead. Let's go ahead. [inaudible]. Oh sure. How I do it, lights in front? [inaudible]. Got it. Let there be light. All right. Awesome great. That was an easy request to satisfy, great, you know, okay. I guess we'll actually look at it for another minute. All right. Is that okay? Thank you. Okay. This wasn't part of it. Okay. All right. Now, I can move it up. [NOISE]. All right. Um, so it turns out that, uh, um, this algorithm can be proven to converge. Um, the, exactly why it is written out in the lecture notes. But it turns out, if you write this as a cost function, right? Um, so the cost function for a certain set of assignments of, uh, points of examples to cluster centroids and for a certain set of positions of the cluster centroids. So, so c, these are the assignments and these are the centroids, right? So, so this cost here is sum of your training set of what's the square distance between each point, and the cluster centroid it is assigned to, right? So it turns out, um, I want to prove this, uh, little bit more detail in lecture notes but I'm going to prove this. It turns out that on every iteration, k-means would drive this cost function down, um, and so, you know, beyond a certain point this cost function, it can't go even, it can't go, uh, uh, any lower. Well, this, this can't go below 0, right? And so this shows that k-means must converge, or at least this function must converge because it's, uh, a strictly non-negative function that's going down on every iteration. So at some point, it has to stop going down, and then you could declare k-means are converged. Um, in practice, if you run k-means in a very, very large data set, then as you plot the number of iterations, uh, j may go down, and you know, and, and just because of, a lack of compute or lack of patience, you might just stop this running after a while. It is going down too slowly. 
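Putting the two alternating steps together, here is a minimal numpy sketch of k-means (my own illustration, not code from the course). It uses the initialize-to-random-training-examples scheme just described, and it also reports the distortion J that comes up next:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal k-means sketch: X has shape (m, n), k is the number of clusters."""
    rng = np.random.default_rng(seed)
    # Initialize centroids to k randomly chosen training examples.
    mu = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iters):
        # Step 1: assign each example to the closest centroid ("color the dots").
        dists = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
        c = np.argmin(dists, axis=1)
        # Step 2: move each centroid to the mean of the points assigned to it.
        new_mu = np.array([X[c == j].mean(axis=0) if np.any(c == j) else mu[j]
                           for j in range(k)])
        if np.allclose(new_mu, mu):
            break
        mu = new_mu
    # Final assignments and distortion J(c, mu): sum of squared distances
    # from each point to the centroid it is assigned to.
    c = np.argmin(np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2), axis=1)
    J = float(np.sum((X - mu[c]) ** 2))
    return c, mu, J
```

In practice you might run this from several different random seeds and keep the run with the lowest J, which is the standard guard against bad local optima discussed a bit later.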
So that's sort of k-means in practice where maybe it hasn't totally converged, we just cut it off and call it good enough. Um, now, uh, uh, the most frequently asked question I get for k-means is how do you choose k? It turns out that, um, when I use k-means, I still usually choose k by hand. And so, and, and this is why. Which is in unsupervised learning, um, sometimes it's just ambiguous, right? How many clusters there are [NOISE]. Right? Um, with this dataset, some of you will see two clusters, and some of you will see four clusters, and it's just inherently ambiguous what is the right number of clusters. So there are some formulas you can find online, the criteria like AIC and BIC for automatically choosing the number of clusters. In practice, I tend not to use them because, uh, um, I usually look at the downstream application of what you actually want to use k-means for in order to make a decision on the number of clusters. So for example, if you're doing a market segmentation, um, you know, because your marketers want to design different marketing campaigns, right? For different groups of users, then your marketers might have the bandwidth to design four separate marketing campaigns, but not 100 marketing campaigns. So that would be a good reason to choose four clusters rather than 100 clusters. So it's often, uh, uh, if you look at the purpose of what you're doing this for. Um, I think in the previous exercise, uh, in the homework, you see a, um, image compression, uh, exercise where you want to cluster, uh, colors into smaller number of clusters. You implement this. This is actually one of the most fun exercises I think. Um, uh, uh, that, uh, uh, but so there you'd, you know, be saying, well, how much do you want to compress the image to decide how many clusters to, to try to use, okay? So I usually, um, pick the number of clusters, you know, either manually or looking at what you want to use k-means cluster for. Um, when we're trying to cluster news articles, uh, the Google News example, I think I showed in the first lecture. You say, well, how many clusters is going to make sense for, for, for news articles, okay? All right. So good. So, uh, yeah? [inaudible]. Oh sure. Well, k-means get stuck on local minima. Yes, k-means gets stuck on sort of local minima sometimes. And so, if you're worried about local minima, the thing you can do is, uh, run k-means, say, 10 times, or 100 times, or 1000 times from different random initializations of the cluster centroids. And then run it, you know, say 100 times, uh, and then pick whichever run results in the lowest value for this cost function, okay? All right. Um, so you'll play with this more in, um, uh, in the programming exercise. Now, um, there's a, there's a problem that seems closely related. Um, but, but it's actually quite different ways to write the algorithms which is density estimation. So, so let me motivate this. Um, I actually have a- well, right, sometime back had some friends working on a problem which I'll simplify a little bit, um, of, uh, uh, you know, you have aircraft engines coming off the assembly line. All right. And every time an aircraft engine comes off the assembly line, you measure some features of these engines. You measure some features about the vibration, and you measure some features about the heat that the aircraft engine is producing. And, um, let's say that you get a dataset, right, that looks like this, okay? 
And, um, the anomaly detection problem is if you get a new aircraft engine that comes off the assembly line, and if the vibration feature takes on this value, and the heat feature takes on this value, is that aircraft engine an anomalous one, is it an unusual one, right? And so the application of this is, um, that as your aircraft engine comes off the assembly line, if you see a very unusual signature in terms of the vibrations and the heat the aircraft engine is generating, then probably something is wrong with this aircraft engine, and you have your people, have your, have your team inspect it further or test it further, uh, before you ship the airplane, before you ship the engine to a, to a airplane maker and then something goes wrong in the air, and there's a, there's a major accident, or major disaster, right? And so anomaly detection, uh, uh, is most commonly done, or one of the common ways to, um, implement anomaly detection is the model p of x which is given all of these blue examples, given all of these dots, can you model what is the density from which x was drawn? So then if p of x is very small, then you flag an anomaly, right? Meaning that, Gee, I think something's funny here, uh, and maybe someone should inspect this aircraft engine a little bit further. Um, so anomaly detection is used for, a task like this, for an inspection task like this. Um, it's used for, um, uh, many years ago, I was actually working with some telecoms providers, you know, uh, uh, helping out telecoms company on, um, anomaly detection to figure out if something's gone wrong with a cell tower network, right? So if one day one of the cell towers start throwing off network patterns that seem very unusual, then maybe something's wrong with that cell tower, like something's gone wrong. We sent out the technicians to fix it. Uh, it is also used for computer security. If a computer, say if a computer at Stanford starts sending out very strange, you know, um, uh, uh, network traffic, that's very unusual relative to everything it's done before, relative what this is, is a very anomalous network traffic, then maybe IT staff should have a look to see if that particular computer has been hacked. So these are some of the applications of anomaly detection. And the good way to do this is, given an unlabeled data set, model p of x. And then if you have very low probability samples, you flag that as a possible anomaly for further study. Now, given this dataset, um, uh, how do you model this? One interesting thing about this green dot is that neither the vibration nor the heat signature is actually out of range, right? You know, like there are a lot of aircraft engines with vibrations in that range. There are a lot of aircraft engines with heat in that range. So neither feature by itself is actually that unusual. It's actually the combination of the two that is unusual. Um, and so thus, thus, what I want to do is, uh, come up with an algorithm to model this. And in fact, we'll come up with an algorithm that can model, you know, maybe, maybe your data density looks like this, maybe more of an L shape like that. But how do you model p of x with the data coming from an L shape? Um, and it turns out that there is no textbook distribution, right? You know, there isn't, you know, if you look at a simple exponential family of model, the types of distributions, there is no distribution for modeling very, very complex distributions like this. 
So what we're going to talk about is, um, the mixture of Gaussians model which we look at data like this, and say, it looks like this data actually comes from two Gaussian. There's one Gaussian, maybe there's one type of aircraft engine that, that, that, you know, is drawn from a Gaussian like the one below, and a separate aircraft- type of aircraft engine that's drawn from a Gaussian like that above. And this is why there's a lot of probability [NOISE] mass in the L-shaped region, but very low probability outside that L-shaped region, right? And, and, and these ellipses I'm drawing are the contours of these two Gaussians, right? And so, um, what I'd like to do next is, uh, develop the mixture of Gaussians model, um, which is useful for anomaly detection, and, and, uh, uh, and, and then this will lead us to our second unsupervised programming algorithm, okay? So, um, in order to make the mixture of Gaussians model a bit easier to develop, let me just use a one-dimensional example where x is in R, okay? So, um, let's see. So let's say that, uh, we gather a data set that looks like this. [NOISE] Right. So it's just one row number. So it's just on num- number line I plotted a few dots. Um, so it looks like this data maybe comes from two Gaussians. Right? It looks like, you know, there's some data from this Gaussian. And there's some data from that Gaussian on the right. Um, and is- and if only we knew. Right? Which example had come from which Gaussian, if, if we knew that these examples had come from Gaussian 1, which I want to denote with crosses. And if only we knew- no, that was here. What- but actually this is fine. I'll leave that one there. If only we knew that these examples had come from Gaussian 2 which I'm going to draw with Os, then we just fit Gaussian 1 to the crosses, fit Gaussian 2 to the Os and then we'd be pretty much done. Right? Um, oh, and, and, and sorry. And so these are the two Gaussians. And so the overall density would be something like this. Right? Tha- that's the probability. A lot of probability mass on left. A lot of probability mass on the right, low, less probability mass on the, uh, in sort of in, in the middle. Okay? So the overall density I'll just draw again, would be, low high, low high something like that. Right? Um, but the reason- and, and, and if you actually had these labels. If you knew that these examples came from Gaussian 1, those examples come from Gaussian 2, then you can actually use an algorithm very similar to GDA, Gaussian discriminant analysis to fit this model. Uh, that the problem with this density estimation problem is, you just see this data and maybe the data came from two different Gaussians. But you don't know which example actually came from which Gaussian. Okay? So the EM algorithm or the expectation-maximization algorithm will allow us to, uh, fit a model despite not knowing which Gaussian each example that come from. So let me first write down the, um, mixture of Gaussians model. Uh, and then we'll describe the EM algorithm for this. So let's imagine- let's suppose that as a, um, so the term we sometimes use is latent, but latent just means hidden or unobserved. Um, random variables z. Right? And x_i, z_i, um. Okay? So- this part here. So let's imagine that, um, there's some hidden random variable z and, and the term latent just means hidden or unobserved. Right? It means that it exists but you don't get to see the value directly. So when I say latent, it just means hidden or unobserved. 
So let's imagine that there's a hidden or latent random variable z and, uh, x_i and z_i have this joint distribution. And this, this, this is very, very similar to the model you saw in Gaussian discriminant analysis. But z_i is multinomial with some set of parameters Phi. For a mixture of two Gaussians, this would just be Bernoulli with two values. But if it were a mixture of k Gaussians then z, you know, can take on values from 1 through k. [NOISE] Right? Um, and it was two Gaussians it'll just be Bernoulli. And then once you know that one example comes from, uh. Gaussian number j, then x condition that z_i is equal to j. That is drawn from a Gaussian distribution with some mean and some covariance Sigma. Okay? So the two unimportant ways. This is different than GDA. Um, one, well, I've set z to be 1 of k values instead of one of two values. And GDA, Gaussian discriminant analysis. We had z, you know, uh, why the labels y took on one of two values. Uh, and then second is, I have Sigma j instead of Sigma. So by, by convention when we fit mixture of Gaussians models, we let each Gaussian have his own covariance matrix Sigma. But you can actually force it to be the same way you want. So- but these are the trivial differences. Uh, the most significant difference is that, in Gaussian discriminant analysis, we had labeled examples x_i, y_i. Where z- y was observed. Right? And then the main difference between this and Gaussian discriminant analysis is, now we have replaced that with this latent or hidden random variable z_i that you do not get to see in the training set. Okay? So now, uh, actually you guys are right. These pens are terrible. All right. Oh, that was better. Cool. All right. So if we need the z_i's. Right? Then we can use, um, maximum likelihood estimation. All right? So if only we knew the value of the z_i's, which we don't. But if only we did, then we could use maximum likelihood estimation or MLE to estimate everything. You know. So we would write the log likelihood of the parameters. Right? Equals sum, um, log p of x_i, z_i, you know, given the parameters. Right? And then you take the derivative, set the derivatives equal to 0 and then you guys did this in problem set 1. Right? And, and then you would find that Phi j is equal to 1 over m. Right? Okay. So if only you knew the values of the z_i's, uh, then you could use maximum likelihood estimates, um, will- and, and this is what you get. And this is pretty much the formulas. Actually the- the- these two are exactly the formulas, uh, we had for, uh, Gaussian discriminant analysis. Except with replace y with z. Right? And then there's some other formula for Sigma that's written in the lecture notes. But I won't, but I won't write down here. Okay? Um, but the reason we can't use this, use these formulas is we don't actually know what are the values of z. So what we will do in the EM algorithm is two steps. Um, in the first step, we will, uh, guess the value of the z's. And in the second step we will use these equations using the values of this z's we just guessed. So let me- so, so sometimes in, um, the machine learning is something that's called- there's a bootstrap procedure where you get something that runs an algorithm. You're using your guesses and then you update your guesses and then run the algorithm again. Let me, let me make that concrete by writing this down. So the EM algorithm has two steps. The E-step, um, also called the expectation step is set to w i j. 
So w i j, um, is going to be the probability that z_i is equal to j. Okay? Um, given all the parameters. And, and much as we did with, um, generative learning algorithms, right, with generative learning algorithms, we used Bayes' rule to estimate the probability of y given x, and so to compute this, you use a similar Bayes' rule type of calculation. And so this would be [NOISE]. Oops, right, um, where, for example this term here P of x_i given z_i equals j. This would be a Gaussian density, right? This comes from a Gaussian density with mean Mu j and covariance Sigma j, right? And so this term here would be 1 over 2 Pi, to the N over 2 Sigma j, so one-half e to the negative one-half. All right. And then this term here, I guess this would be Phi j, that's just a Bernoulli probability, remember z is multinomial. Right, so z is multinomial with parameters Phi. So I guess the parameters Phi for multinomial distributions tell you, what's the chance of z being 1, 2, 3, 4, and so on up to k, and so the chance of z_i being equal to k is just- chance of z_i being equal to j is just Phi j right? It's just v to the off one of the parameters in your multinomial probability for, um, for the odds of z being different values. okay? And so, um, and similarly the terms in the denominator. This term here is from Gaussian and that second term is from the, um, multinomial probability that you have for z. And so that's how you plug in all of these numbers and use Bayes rule and use this equation to compute given- all given the position of all these Gaussians, what is the chance of w i j taking on a certain value, okay. And, and so to make this really concrete, you remember how I guess 1 or 0s, or the other way, um, If you were to look at these, uh, if you were to scan from right to left, remember how, you know, you get a sigmoid function, or the sigmoid can be this way or this way or it depends on the sign. I guess if these are positive samples these are negatives. You have a sigmoid function like this. And so w i j is just the height of this Sigma, it's just a chance of, you know, each of these examples being, coming from either the z equals 1 or z equals 0 and then you store all of these numbers in the variables w i j. Okay. So w i j is just to compute the posterior choice of this, this example coming from the left Gaussian versus the right Gaussian. You just saw that in the variable w i j. So that's the E-step, um, and you compute the w i j for every single training example i. Right? I think it's the M-step is, um, yeah. [BACKGROUND] Sorry, is this what? [BACKGROUND] Oh this one. Yes. Sorry, yes. Thank you, there we go, thank you. Yes, there's a following Gaussian. Okay. So in the- so the E-step tells us, you know, we're trying to guess the values of the z's right, when we figure out what's the probability of z being 1, 2, 3, 4 up to k was stored here. And then in the M-step, what we're going to do is use the formulas we have for maximum likelihood estimation, and I want you to compare these with the equations I had above, right. Okay. Well, I hope you see. So these equations are a lot like the equations above, except that instead of indicator z_i equals j, we replaced it with w i j, right? Which by the way is the expected value of this indicator function. Right, because the expected value of an indicator function is just equal to the probability of that thing in the middle being true. Okay? 
Um, and then, and then there's a formula for Sigma j as well that you can get from the lecture notes, but i won't, I won't write down here. Okay. So, um, one intuition of, um, this mixture of Gaussians algorithm is that it's a little bit like k-means but with soft assignment. So in k-means, in the first step we would take each point and just assign it to one of the k, k cluster centroids, right? And if it was a little bit closer to the red cluster centroid than the blue cluster centroid, we would just assign it to the red cluster centroid. So even if it was just a little bit closer to one cluster centroid than another, k means we just make what's called a hard assignment meaning, you know, whatever cluster centroid it's closest to, we just assigned it 100 percent of that ce- cluster centroid. So yeah, EM is, uh, you can think, uh, EM implements a softer way of assigning points to the different cluster centroids because instead of just picking the one closest Gaussian center and assigning it there, it uses these probabilities and gives it a waiting, in terms of how much is assigned to Gaussian 1 versus Gaussian 2. Um, and the second updates, you know, the means accordingly, right? Sum over all the x_i's to the extent they're assigned to that cluster centroid divided by the number of examples assigned to that cluster centroid. Okay? So, so, so that's one intuition be- between EM and k-means, um, and in a second, uh, uh, but, but when you run this algorithm, it turns out that this algorithm will converge with some caveats I'll get to later, and this will find a pretty decent estimate, um, of the parameters, you know, of say fitting a mixture of two Gaussians model. Okay? So this is, um, the- a- and so if you are given a dataset of say airplane engines, you can run this algorithm for the mixture of two Gaussians. And then when a new airplane engine rolls off the assembly line, um, um, so, so after you're fitting the k-means algorithm, you now have a- after fitting the EM algorithm, you now have a joint density of a P of x comma z. And so the density for x is just sum of all the values of z of P of x comma z. And so, and so a mixture of Gaussians can fit distributions that look like this, it can fit distributions that look like this, right? These are, these are both mixtures of two Gaussians. So this gives you a very rich family of models to fit very complicated distributions. And now that, um, right, and you can also fit, I don't know, something like this. So this is a mixture of two Gaussians, I guess one thin narrow Gaussian here and one much wider fatter Gaussian. So a mixture of two Gaussians can actually fit a model of different things, um, uh, can fit a lot- and a mixture of more than two Gaussians can fit even richer models. And so by doing this, you can now model P of x for many complicated densities, or including this one, right, this example I had just now. This will allow you to fit a probability density function that puts a lot of probability models on, on a region that looks like this. And so when you have a new example you can evaluate P of x, and if P of x is large, then you can say nope this looks okay and the P of x is less than Epsilon. You can flag an anomaly and say take a look- take another look at this here. Okay? So, um, I kind just wrote down this algorithm, with a little bit of a hand-wavy explanation for how it's derived, right? 
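And a matching sketch of the M-step updates above, with the indicator 1{z_i = j} replaced by the soft weight w_ij; the Sigma_j formula written here is the weighted-covariance version the lecture defers to the notes, so treat the exact form as my reconstruction.

```python
import numpy as np

def m_step(X, w):
    """M-step sketch: maximum-likelihood formulas with soft weights w[i, j]."""
    m, n = X.shape
    k = w.shape[1]
    phi = w.sum(axis=0) / m                          # phi_j = (1/m) * sum_i w_ij
    mus = (w.T @ X) / w.sum(axis=0)[:, None]         # weighted means
    Sigmas = np.zeros((k, n, n))
    for j in range(k):
        d = X - mus[j]
        Sigmas[j] = (w[:, j, None] * d).T @ d / w[:, j].sum()   # weighted covariances
    return phi, mus, Sigmas
```

Iterating e_step and m_step until the log-likelihood stops improving is the whole algorithm; for the anomaly-detection use case just mentioned, you would then evaluate p(x) = sum_j phi_j N(x; mu_j, Sigma_j) for a new engine and flag it if that value falls below your chosen epsilon.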
So like I said, if only you knew the values of C and just use maximum likelihood estimation, so let's guess the values of z and plug that into the formulas of maximum likely estimation. It turns out that hand-wavy explanation works, in the particular case of, um, the EM mixtures of Gaussians but that there is a more formal way of deriving the EM algorithm that shows that this is a maximum likelihood estimation algorithm, and that they converge at at least a local optimum. Um, and in particular, there- what we'll do is show that if your goal is, um, uh given a model P of x, z parameterized by Theta, if your goal is to maximize P of x, right? Oh, excuse me. So this is what maximum likelihood is supposed to do. That EM is exactly trying to do that, okay. So, um, I'll go on in a minute to present this more general derivation, the - the form of general derivation of the EM algorithm tha- that doesn't rely on this hand-wavy argument of I guess it's easier use maximum likelihood with the guess values. So I'll do the the rigorous derivation of EM in a minute. But before I do that, let me just pause and check if there any questions. Yeah. [inaudible]. Um, yeah, uh, maybe- let's see. Maybe I'll help to not think of them as weights. Um, yeah, I think thi- this is actually the weighting you assigned to a certain Gaussian, so there's one intuition, uh, hen - hence the weights, but, um, um, let me think, what's going to explain this? So one way to think of this as wij is how much xi is assigned to, you know, to- to- to the um, µj Gaussian. So, um, wij is the strength of how strongly you want to assign that training example xi to that cluster or to that- to that particular Gaussian. Um, and so this is the number of 2, 0, and 1 right? And, uh, the strength of all the assignments, and every point is a sign with a total strength equal to 1, because all these properties must sum up to 1. And so, when I take this point and assign it, you know, 0.8 to a more close Gaussian and 0.2 to a more distant Gaussian. And this is our guess for, you know, well there's an 80% chance that it came with that Gaussian and a 20% chance it came with the second Gaussian. That makes sense? [inaudible]. Oh I see. So let's see. Um, so when you're running the EM algorithm, you never know what are the true values of z, right? You're- you're given a data set, so you're only told the x's, and as far as we know, uh, these airplane engines were generated off, you know, two different Gaussians. Maybe there are two separate assembly processes. You know, one from the, uh, uh, one from plant number one, one from plant number two, and maybe they actually operate a little bit differently, but by the time they merge onto one, um, uh, you know, by- by the time the two suppliers of aircraft engines get to you, they've been mixed together, and so you can't tell anymore which aircraft engine came from proce- plant one and which pla- aircraft engine came from plant two. Um, and they you know there are two plants, where you just see the stream of aircraft engines, you're hypothesizing that there are two types. And so in every iteration of EM, you're taking each, uh, aircraft engine and guessing, you know, for this one, I think there's 80% chance that it came from process one, and a 20% chance it came from process two, so that's the E-step. 
And then in the M-step, you look at all the engines that you're kind of guessing were generated by process one, and you update your Gaussian to be a better model for all of the things that were- that you kind of think were generated by process one. And if there's something that you're absolutely sure came from process one, then it has a weight of one, or close to one, in this. If you think that something, you know, has only a 10% chance of coming from process one, then that example is given a lower weight when you update the mean for that Gaussian. That make sense? Cool. All right. So, [NOISE] 33 minutes. Yeah. Okay cool. All right. Well I still remember when, um, I was an undergrad doing a summer internship at AT&T Bell Labs. Um, and then someone a few offices down had learned about EM for the mixture of Gaussians for the first time, and he was running it on his computer, and he was going around to every single office saying, "Oh my God you've got to check this out, this is unbelievable, look at what this algorithm can do for a mixture of three Gaussians." So tha- that shows you, those are the type of people I hang out with [LAUGHTER]. All right. Um, so, so far this is a slightly hand-wavy argument: as in, uh, let's guess the values of the z's, let's just have these weights and plug them into maximum likelihood. Um, what I would like to do is give a more rigorous de- derivation for why the EM algorithm is a reasonable algorithm, and why it's a maximum likelihood estimation algorithm, and why we can expect it to converge. And it turns out that rather than just proving, you know, that this is a sound algorithm, what we'll see on Wednesday is that this view of EM, uh, allows us to derive EM in a- in a more correct way for other models as well, beyond the mixture of Gaussians. On, on Wednesday, we'll talk about, uh, uh a model called factor analysis, which lets you model Gaussians in extremely high dimensional spaces, where if you have 1,000 dimensional data, but only 30 examples, how do you fit a Gaussian to that? So we'll talk about that on Wednesday. And it turns out this derivation of EM we're going to go through now is crucial for, um, applying EM accurately in- in- in problems like that. Okay, so. Uh, in order to lead up to that derivation, let me describe, um, Jensen's inequality. So let f be a, a convex function. Um, to do EM, we're actually going to need a concave function, so it'll be everything with a minus sign, but we'll get to that in a second. But so, a convex function means the second derivative is greater than 0, or in other words, it looks like that, right? So that's a convex function. Uh, let x be a random variable. Then f of the expected value of x is less than or equal to the expected value of f of x. Okay. Now, [NOISE] um, [NOISE] maybe, um, here's an example. All right. So here's the, um, let's see, there's the function f of x, and let's say that these are the values 1, 2, 3, 4, 5. And suppose that X is equal to 1 with probability one-half and is equal to 5 with probability one-half, right? Just for the illustration. Then here is f of 1. Here is f of 5. Um, here is f of 3. And f of 3 is f of the expected value of X, right, because the expected value of X, and sometimes I write this without the square brackets, right, it's the average of X, is equal to 3. Um, and so the expected value, excuse me, f of the expected value of X is equal to this value, whereas the expected value of f of x is the mean of f of 1 and f of 5. All right.
So the expected value of f of x. F of x is a 50% chance of being f of 1, and a 50% chance of being f of 5. And so the expected value of f of x is equal to this value in the middle. It's really: take these two, right, take this value and this value and take their mean. So it's this value up here, and, and this value is the expected value of f of x. Okay. And so in this example the expected value of f of x is greater than f of the expected value of X, right, as, as predicted by Jensen's inequality. Um, I'm gonna just draw one illustration that may or may not help, and some of my friends like it, I sometimes use it, but if it's confusing then don't worry about it. But it turns out that if you draw a line that connects these two, then the midpoint of this line, um, has height equal to the expected value of f of x, right, so the height of this. You know, so, so given these two points, this point and this point, if you draw this line, it's called a chord, um, then the height of this point is the expected value of f of x. And this point is, um, f of the expected value of x. Right. And in any convex function, you know, really take any convex function, that's also a convex function, if you draw any chord, that green point is always higher, right, than that green point, which is why- which is another way of seeing why Jensen's inequality holds true. Okay. If this visualization doesn't help don't worry about it, but actually what a lot of my friends do is we keep on forgetting which direction Jensen's inequality goes. [LAUGHTER] Why are we not using Jensen [LAUGHTER] that's not great. So a lot of my friends don't remember, we draw this picture and draw that chord, and we quickly figure out which way the inequality goes. Um, all right. So one addendum further. If f double prime, the second derivative, is strictly greater than 0, and if this is the case, we say f is strictly convex. Then- Okay. So, um, let's see, a straight line is also a convex function, right. So this is a convex function, this is a convex function and this is a convex function. It turns out a straight line, that's also a convex function. But so this addendum is saying that if f is a strictly convex function, meaning basically it's not a straight line, right, it's not allowed to be a straight line, but if the curvature is always bending up, uh, then the only way for the left and right-hand sides to be equal is if x is a constant, meaning it's a random variable that always takes on the same value. Okay. So Jensen's inequality says that, you know, um, the left-hand side is going to be less than or equal to the right-hand side. Sorry, I think I reversed the order of these two relative to that equation up there. But so Jensen's inequality says the left-hand side is always less than or equal to the right-hand side, and the only way it's equal is if X, you know, is a random variable that always takes on the same value. Okay, yeah. What if- What if the value of f of 1 was equal to the value of f of 3. Wouldn't that? Yeah. So it turns out, what if the value of f of 1 is equal to the value of f of 3? It turns out it still works out. So let's see. So one way that [NOISE] could happen would be if the function were like that. And then if you take the- draw the chord, take the mean, it's still higher. [inaudible] The point of f of 1. [inaudible] Then it's important: if, if you kind of flatten that part here, then the function is not strictly convex. And so it's still less than or equal to, but now the two sides can be equal even if x is random. Okay. So, and we'll use this in a little bit.
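To make the board picture concrete, here is a tiny numerical check of the example above; I am using f(x) = x squared as a stand-in for the convex curve that was drawn, which is my choice of function, not the lecture's.

```python
import numpy as np

# Numerical check of Jensen's inequality for X = 1 w.p. 1/2 and X = 5 w.p. 1/2.
f = lambda x: x ** 2                 # a convex function standing in for the sketch
values = np.array([1.0, 5.0])
probs = np.array([0.5, 0.5])

f_of_EX = f(np.dot(probs, values))   # f(E[X]) = f(3) = 9
E_of_fX = np.dot(probs, f(values))   # E[f(X)] = (f(1) + f(5)) / 2 = 13

assert f_of_EX <= E_of_fX            # convex case: f(E[X]) <= E[f(X)]
# For a concave f such as np.log, the inequality flips: log(E[X]) >= E[log(X)].
```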
We'll actually end up using this. Um, and again for the strict probabilistic, you know, if those of you that, I don't know, take classes in advanced probability, the technical way of saying x is a constant is x is equal to EX with probability 1. You know, I, I think that for all practical human purposes you do not need to worry about this. But I think if you [LAUGHTER] take a class in measure theory. The Professor in measure theory will be happy if you say this and you say x is a constant but maybe, maybe none of you. Okay. Just don't worry about it. Um, oh yes. Okay. Now, um, just one more addendum, um, to this is that the form of Jensen's inequality we are going to use is actually a form for a concave function. So instead of convex, um, I'm gonna say concave. And so, you know, a concave function is just a negative of a convex function, right. If you take a convex function and take the negative of that, it becomes concave. And so the whole thing works with the- with everything flipped around the other way. Okay. And yep, so this is strictly concave. Okay. So the form of Jensen's inequality we are gonna use is actually the, um, concave form of Jensen's inequality, and we're actually going to apply it to the log function. So the log function, right. Log x looks like this. And so that's a concave function. And so the inequality we'll use would be in this direction that I have in orange. All right. So here's the density estimation problem. Meaning, density estimation means you want to estimate P of x. All right. So we have a model for P of x, z, with parameters theta. And so, you know, instead of writing out mu, sigma- mu, sigma, and phi, like we did for the mixture of Gaussians. I'm just gonna capture all the parameters you have. Whatever your parameters are, I'm just gonna capture them in one variable theta. And you only observe x. So your training set looks like that. So the, um, log likelihood of the parameters theta is equal to some of your training examples log P of x_i, parameterized by theta. Um, and this in turn is log of sum over z, P of x_i, z_i parameterized by theta, right. Because P of x, you know, is just taking the joint distribution and summing out, marginalizing out z_i. Okay. [NOISE] And so what we want is maximum likelihood estimation which is define the value of theta that maximizes this log-likelihood. And what we would like to do is derive an EM, derive an algorithm which will turn out to be an EM algorithm as an iterative algorithm for finding the maximum likelihood estimates of the parameters theta. [NOISE] So, um, let me draw a picture of that, I'd like you to keep in mind as we go through the math, which is, you know, the, the horizontal axis is the space of possible values of the parameters Theta. And so there's some function O of Theta that you try to maximize. This right. And so what EM does is, um, let's say you initialize Theta as some value, you know, maybe randomly initialize, um, sim- similar to the k-means cluster centroids. Where just randomly initialize your mu's with a mixture of Gaussian's. What the EM algorithm does is in the E-step, we're going to construct a lower bound shown in green here for the log-likelihood. And this lower bound, this green curve has two properties. One is that it is a lower bound. So everywhere you look, you know, over all values of Theta, the green curve lies below the blue curve. So this is a lower bound. And the second property that the green curve has is that it is equal to the blue curve at the current value of Theta. 
Okay. So what the E-step does, uh, which you'll see later on, and just keep this picture in mind as we go through the E-step and the M-step is, um, construct the lower bound that looks like this, right. Oh, and, and also, uh, to, uh, to foreshadow probably the derivation. Right? There- there was an addendum to Jensen's inequality where we said, well, under these conditions it holds with equality. Right. E of f of x equals f of e of x. We said, "Well, the two things are equal with under certain conditions." Um, we want things to be equal. We want the green curve to be equal to the blue curve at the old value of Theta. So we- we'll use that addendum to Jensen's inequality when we drive that. Um, so this E-step is draw the green curve. And then what the M-step does is it takes a green curve, and then it finds the maximum. Actually, certainly stroke [inaudible] so I'll draw in green. What the M-step does is it takes the green curve, and it finds the maximum. And one step of EM will then move Theta from this green value to this red value. Okay. So the E-step constructs the green curve, and the M-step, uh, finds the maximum of the green curve. And this is one iteration of EM. The second iteration of EM, now that you're at this red thing is will construct a new lower bound, and then again, you use a different lower bound. Everywhere the red curve is below the blue curve, and the values are equal at this new value. That's the E-step, and then M-step will maximize this red curve, um, and so on. Now you're here. Construct another thing, do that. Right. And you can kinda tell that as you keep running EM, this is constantly trying to increase L of Theta. Trying to increase the log-likelihood, until it converges to a local optimum. Okay. Um, the EM algorithm does converge only to local optimum. So if, you know, there was another even bigger thing there that it may never find its way over to that other- that, uh, better optimum. But the EM algorithm by repeatedly doing this, will hopefully converge to a pretty good local optimum. Okay. All right. So let's write on how we do that. Um, let me think. Actually, let me use the other board. No, I think this is okay. All right. So I've already said that our goal is to find the parameters theta that maximize this. [NOISE] All right. Uh, and so that equation we said are just now is sum over i log, sum over zi, p of xi comma zi given Theta. Okay. So this is just what we had written down, I guess, uh, on the left. What I'm going to do next is, um, divide by- [NOISE] multiply and divide by this. Okay. Um, where Qi of zi is a probability distribution, i.e., the sum over zi, Qi of zi equals 1. Okay. So I'm going to multiply and divide by some probability distribution, and we'll, we'll decide later how to come up with this probability distribution Qi, right. But, you know, I'm allowed to construct a probability distribution and multiply and divide by the same thing. Right. Now, if you look at this, all right, let's put square brackets here. If this Qi, that is the probability distribution meaning that sum over zi Qi, zi sums over- sums to 1. Then this thing inside is, um, equal to sum over i log of an expected value of zi drawn from the Qi distribution of [NOISE] right, actually, if I, let me use colors to make this clearer. Right. So the way you compute the expected value of z-, you know, some function of zi is you sum over all the possible values of zi of the probability of zi times whatever that function is. 
So this equation is just the expected value with respect to zi drawn from that Qi distribution of that thing in the square brackets, in the purple square brackets. Now, using the, um, concave form of Jensen's inequality, we have that this is greater than or equal to [NOISE]. So this is a form of Jensen's inequality where, um, f of E, x is greater than or equal to E of f of x, where here, um, this is the logarithmic function. Right. So the log function is a concave function. It looks like that. And so, um, using the, I guess here using, using the form Jensen's inequality with the signs reversed, um. Right, f of Ex is greater than equals E of fx. So you get log of expectation is greater than equal to expectation of the log, all right. And then finally, let me just take this expectation and unpack it one more time. So this is now sum over i, sum over zi. [NOISE]. Okay. So I just took this expected value and turned it back into the sum of the random variable probability, times that thing. Okay. So, um, if you remember this picture from the middle, what we wanted to do was to construct a function, construct this green curve. There's a lower bound for the blue curve. And if you view this formula here as a function of Theta right, so your x, um, x is just your data, and z is a variable you sum over. So this whole thing is the function of Theta, right? Because x's are fixed, z is just something you f- sum over. So this whole formula here, this is a function of the parameters Theta. And what we've shown is that this thing, you know, this formula here, this is a lower bound for the log-likelihood, uh, for- for, for, for this thing. I guess this is L of Theta. So- go ahead. [inaudible]. Oh, how I got to this equation? Uh, sure. Um, let me think. So let's see. What's a good way to do this? Um, uh, yeah. Let's say that z takes on values from 1 through 5, right. Let's say z takes on values from 1 through 10. So you roll a 10 sided dice. And I want to compute, um, you know, the expected value of, uh, some function of, of some function g, g of z. Right. Then the expected value of g of z is sum of all the possible values of z of the probability that you get that z, times g of z. Right. So that's, that's what's the expected value is of a function of a random variable. And, and this is- and the expected value of z is sum over z, P of z times z. That's the average of random variable. And so, um, in the notation that we have, the probability of z taking on different values is denoted by Qi of z, which is why we wind up with that formula. Does that makes sense? Does it? Okay. Is that okay? Does that make sense? Yeah. All right. If, if one of these steps doesn't make sense, let me know. Th- other questions? Okay. All right. Hope that makes sense. [NOISE]. Um. [NOISE] Now, one of the things we want when constructing this green lower bound is we want that green lower bound to be equal to the blue function at this point, right? And this is actually how you guarantee that when you optimize the green function. By improving on the green function, you're improving on the blue function. So we want this lower bound to be tight. Right, the, the two functions be equal, tangent to each other. So in other words we want this inequality to hold with equality. So we want, um, yeah, so we want the left hand side and the right hand side to be equal for the current value of Theta, right? 
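Since the boardwork is hard to follow in transcript form, here is one way to typeset the chain of steps just described, using the course's x^(i), z^(i) notation; the single inequality is the concave (log) form of Jensen's inequality, and the final expression is the lower bound, as a function of theta, that plays the role of the green curve.

```latex
\[
\begin{aligned}
\ell(\theta) &= \sum_i \log p\!\left(x^{(i)};\theta\right)
  = \sum_i \log \sum_{z^{(i)}} Q_i\!\left(z^{(i)}\right)
    \frac{p\!\left(x^{(i)}, z^{(i)};\theta\right)}{Q_i\!\left(z^{(i)}\right)} \\
 &= \sum_i \log\, \mathrm{E}_{z^{(i)} \sim Q_i}\!\left[
    \frac{p\!\left(x^{(i)}, z^{(i)};\theta\right)}{Q_i\!\left(z^{(i)}\right)}\right]
  \;\ge\; \sum_i \mathrm{E}_{z^{(i)} \sim Q_i}\!\left[
    \log \frac{p\!\left(x^{(i)}, z^{(i)};\theta\right)}{Q_i\!\left(z^{(i)}\right)}\right] \\
 &= \sum_i \sum_{z^{(i)}} Q_i\!\left(z^{(i)}\right)
    \log \frac{p\!\left(x^{(i)}, z^{(i)};\theta\right)}{Q_i\!\left(z^{(i)}\right)}.
\end{aligned}
\]
```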
[NOISE] So on a given iteration of EM where the current parameters are equal to Theta, we want, we want- I know this was a lot of math but, you know, we want the left and right hand sides to be equal to each other. Right. Because that's what it means for, uh, for the lower bound to be tight, for the green color to be exactly touching the blue curve as we construct that lower bound. And so for this to be true, we need the random variable inside to be a constant. So we need P of x_i, z_i, divided by Qi of z_i to be equal to const- to, to a constant. Meaning that no matter what value of z_i you plug in, this should evaluate to the same value. In other words, the ratio between the numerator and denominator must be the same. Um, unfortunately so far, we have not yet specified, how we choose this distribution for z_i, right. So, so far the only constraint we have is that Qi has to be a probability density- has to be a probability distribution over z_i, but you could choose one of the distributions you want for z_i. And it turns out that, um, uh, we can set Qi of z_i to be proportional to p of x_i, z_i parameterized by Theta. And this means that for any value of z, you know, so z_indicates as it could from Gaussian one and Gaussian two. Right. So this means that the chance of Gaussian one is proportional to the chance of Gaussian one versus Gaussian two. Whether z_i takes on one or two is proportional to this. And I don't want to prove it but one way to ensure this, and this is proven in the lecture notes. But it turns out that one way to ensure. Um, well so the Qis need to sum to 1. So one way to ensure that this is proportional to the right-hand side is to just take the right-hand side. Sorry. Let me move here. So one- so let's see. Right. So the Qis have to sum to 1. And so one way to ensure the proportionality is to just take the right-hand side, and normalize it to sum to 1. Um, and after, after a couple of steps that are in the lecture notes but I don't want to do here, you can show that this results in sending Qi of z_i to be equal to that, that posterior probability, okay? And so, um, sorry I skipped a couple steps here. You can get from the lecture notes, but it turns out that if you want this to be a constant meaning whether you plugged in z_i equals 1 or z_i equals 2 or whatever, these evaluate to the same constant. The only way to do that is make sure the numerator and denominator are proportional to each other. And because Qi of z_i is a density that must sum to 1. One way to make sure they're proportional is to just set this to be with the right-hand side but normalize the sum to 1. Okay. And we derived this a little bit more carefully in the lecture notes. So just to summarize, this gives us the EM algorithm. Let's take all of this- everything we just did and wrap in the EM algorithm. In the E-step, we're going to set Qi of z_i equal to that. And previously this was the w_i_js. Right. So instead of- so previously, we're restoring these probabilities in the variables you call w_i_js. And then in the M-step, we're going to take that lower bound that we constructed, which is this function, and maximize it with respect to Theta. Okay. Um, and so remember in the M-step we constructed this thing on the right-hand side as a lower bound for the log-likelihood. And so for the fixed value of Q, you can maximize this with respect to Theta and that updates the Theta, you know, maximizing the green lower boundary, that's what the M-step does. 
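The couple of steps deferred to the lecture notes a moment ago amount to the following normalization, assuming I have reconstructed the intent correctly: take the right-hand side and divide by its sum over z so that Q_i sums to 1, which lands exactly on the posterior used to define w_ij in the mixture-of-Gaussians E-step.

```latex
\[
Q_i\!\left(z^{(i)}\right)
  = \frac{p\!\left(x^{(i)}, z^{(i)};\theta\right)}
         {\sum_{z} p\!\left(x^{(i)}, z;\theta\right)}
  = \frac{p\!\left(x^{(i)}, z^{(i)};\theta\right)}{p\!\left(x^{(i)};\theta\right)}
  = p\!\left(z^{(i)} \mid x^{(i)};\theta\right).
\]
```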
And if you iterate these two steps, then you find that this should converge to a local optimum. Okay. Oh, and maybe just to address the obvious question: um, why don't we just maximize with respect to Theta, uh, why aren't we trying to maximize the log-likelihood directly? It turns out that if you take the mixture of Gaussians model, try to take derivatives of this and set the derivatives equal to 0, there's no known way to solve for the value of Theta that maximizes the log-likelihood. But you find that for the mixture of Gaussians model and for many models, including factor analysis which we'll talk about on Wednesday, if you actually plug in the Gaussian density- uh, if you actually plug in that mixture of Gaussians model for P, um, and take, you know, take, take derivatives, set derivatives equal to 0 and solve, you will be able to find an analytic solution to maximize this M-step, and that'll be exactly what we had worked out in the earlier derivation of the EM algorithm. Okay. But so this derivation shows that, uh, the EM algorithm, you know, is a maximum likelihood estimation algorithm, with the optimization solved by constructing lower bounds and optimizing lower bounds, okay? All right. Um, that's it for today, and only the stuff up to here, right, so this stuff will be on the midterm, but we'll talk about factor analysis a lot on Wednesday, and it will not be on the midterm. Okay. So let's break for today, and I'll see you guys on Wednesday. |
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Locally_Weighted_Logistic_Regression_Stanford_CS229_Machine_Learning_Lecture_3_Autumn_2018.txt | What I'd like to do today is continue our discussion of supervised learning. So last Wednesday, you saw the linear regression algorithm, uh, including both gradient descent, how to formulate the problem, then gradient descent, and then the normal equations. What I'd like to do today is, um, talk about locally weighted regression which is a, a way to modify linear regressions and make it fit very non-linear functions so you aren't just fitting straight lines. And then I'll talk about a probabilistic interpretation of linear regression and that will lead us into the first classification algorithm you see in this class called logistic regression, and we'll talk about an algorithm called Newton's method for logistic regression. And so the dependency of ideas in this class is that, um, locally weighted regression will depend on what you learned in linear regression. And then, um, we're actually gonna just cover the key ideas of locally weighted regression, and let you play with some of the ideas yourself in the, um, problem set 1 which we'll release later this week. And then, um, I guess, give a probabilistic interpretation of linear regression, logistic regression depend on that, um, and Newton's method is for logistic regression, okay? To recap the notation you saw on Wednesday, we use this notation x_i, i- y_i to denote a single training example where x_i was n + 1 dimensional. So if you have two features, the size of the house and the number of bedrooms, then x_i would be 2 + 1, it would 3-dimensional because we have introduced a new, uh, sort of fake feature x_0 which was always set to the value of 1. Uh, and then yi, in the case of regression is always a real number and what's the number of training examples and what's the number of features and, uh, this was the hypothesis, right? It's a linear function of the features x, um, including this feature x_0 which is always set to 1, and, uh, j was the cost function you would minimize, you minimize this as function of j to find the parameters Theta for your straight line fit to the data, okay? So that's what you saw last Wednesday. Um, now if you have a dataset, that looks like that, where this is the size of a house and this is the price of a house. What you saw on Wednesday- last Wednesday, was an algorithm to fit a straight line, right to this data so the hypothesis was of the form Theta 0 + Theta 1 x_0- x_0- Theta 1 x_1 to be very specific. But with this dataset maybe it actually looks, you know, maybe the data looks a little bit like that and so one question that you have to address when, uh, fitting models to data is what are the features you want? Do you want to fit a straight line to this problem or do you want to fit a hypothesis of the form, um, Theta 1x + Theta 2x squared since this may be a quadratic function, right? Now the problem with quadratic functions is that quadratic functions eventually start, you know, curving back down so that will be a quadratic function. This arc starts curving back down. So maybe you don't want to fit a quadratic function. Uh, instead maybe you want, uh, to fit something like that. If- if housing prices sort of curved down a little bit but you don't want it to eventually curve back down the way a quadratic function would, right? 
Um, so- oh, and, and if you want to do this the way you would implement is is you define the first feature x_1 = x and the second feature x_2 = x squared or you define x_1 to be equal to x and x_2 = square root of x, right and by defining a new feature x_2 which would be the square root of x and square root of x. Then the machinery that you saw from Wednesday of linear regression applies to fit these types of, um, these types of functions of the data. So later this quarter you'll hear about feature selection algorithms, which is a type of algorithm for automatically deciding, do you want x squared as a feature or square root of x as a feature or maybe you want, um, log of x as a feature, right. But what set of features, um, does the best job fitting the data that you have if it's not fit well by a perfectly straight line. Um, what I would like to do today is- so, so you'll hear about feature selection later this quarter. What I want to share with you today is a different way of addressing this out- this problem of whether the data isn't just fit well by a straight line and in particular I wanna share with you an idea called, uh, locally weighted regression or locally weighted linear regression. So let me use a slightly different, um, example to illustrate this. Um, which is, uh, which is that, you know, if you have a dataset that looks like that. [NOISE] So it's pretty clear what the shape of this data is. Um, but how do you fit a curve that, you know, kind of looks like that, right? And it's, it's actually quite difficult to find features, is it square root of x, log of x, x cube, like third root of x, x the power of 2/3. But what is the set of features that lets you do this? So we'll sidestep all those problems with an algorithm called, uh, locally weighted regression. Um, okay, and to introduce a bit more machine learning terminology. Um, in machine learning we sometimes distinguish between parametric learning algorithms and non-parametric learning algorithms. But in a parametric learning algorithm does, uh, uh, you fit some fixed set of parameters such as Theta i to data and so linear regression as you saw last Wednesday is a parametric learning algorithm because there's a fixed set of parameters, the Theta i's, so you fit to data and then you're done, right. Locally weighted regression will be our first exposure to a non-parametric learning algorithm. Um, and what that means is that the amount of data/parameters, uh, you need to keep grows and in this case it grows linearly with the size of the data, with size of training set, okay? So with the parametric learning algorithm, no matter how big your training, uh, your training set is, you fit the parameters Theta i. Then you could erase a training set from your computer memory and make predictions just using the parameters Theta i and in a non-parametric learning algorithm which we'll see in a second, the amount of stuff you need to keep around in computer memory or the amount of stuff you need to store around grows linearly as a function of the training set size. Uh, and so this type of algorithm is your- may, may, may not be great if you have a really, really massive dataset because you need to keep all of the data around your- in computer memory or on disk just to make predictions, okay? So- but we'll see an example of this and one of the effects of this is that with that, it'll, it'll be able to fit that data that I drew up there, uh, quite well without you needing to fiddle manually with features. 
Um, and again you get to practice implementing locally weighted regression in the homework. So I'm gonna go over the height of ideas relatively quickly and then let you, uh, uh, gain practice, uh, in the problem set. All right. So let me redraw that dataset, it'd be something like this. [NOISE] All right. So- so say you have a dataset like this. Um, now for linear regression if you want to evaluate h at a certain value of the input, right? So to make a prediction at a certain value of x what you- for linear regression what you do is you fit theta, you know, to minimize this cost function. [NOISE] And then you return Theta transpose x, right? So you fit a straight line and then, you know, if you want to make a prediction at this value x you then return say the transpose x. For locally weighted regression, um, you do something slightly different. Which is if this is the value of x and you want to make a prediction around that value of x. What you do is you look in a lo- local neighborhood at the training examples close to that point x where you want to make a prediction. And then, um, I'll describe this informally for now but we'll- we'll formalize this in math for the second. Um, but focusing mainly on these examples and, you know, looking a little bit at further all the examples. But really focusing mainly on these examples, you try to fit a straight line like that, focusing on the training examples that are close to where you want to make a prediction. And by close I mean the values are similar, uh, on the x axis. The x values are similar. And then to actually make a prediction, you will, uh, use this green line that you just fit to make a prediction at that value of x, okay? Now if you want to make a prediction at a different point. Um, let's say that, you know, the user now says, "Hey, make a prediction for this point." Then what you would do is you focus on this local area, kinda look at those points. Um, and when I say focus say, you know, put most of the weights on these points but you kinda take a glance at the points further away, but mostly the attention is on these for the straight line to that, and then you use that straight line to make a prediction, okay. Um, and so to formalize this in locally weighted regression, um, you will fit Theta to minimize a modified cost function [NOISE] Where wi is a weight function. Um, and so a good- well the default choice, a common choice for wi will be this. [NOISE] Right, um, I'm gonna add something to this equation a little bit later. But, uh, wi is a weighting function where, notice that this, this formula has a defining property, right? If xi - x is small, then the weight will be close to 1. Because, uh, if xi x- so x is the location where you want to make a prediction and xi is the input x for your ith training example. So wi is a weighting function, um, that's a value between 0 and 1 that tells you how much should you pay attention to the values of xi, yi when fitting say this green line or that red line. And so if xi - x is small so that's a training example that is close to where you want to make the prediction for x. Then this is about e to the 0, right, e to the -0 if the- if the numerator here is small and e to the 0 is close to 1. Right, um, and conversely if xi - x is large, then wi is close to 0. And so if xi is very far away so let's see if it's fitting this green line. 
And this is your example, xi yi then it's saying, give this example all the way out there if you're fitting the green line where you look at this first x saying that example should have weight fairly close to 0, okay? Um, and so if you, um, look at the cost function, the main modification to the cost function we've made is that we've added this weighting term, right? And so what locally weighted regression does is the same. If an example xi is far from where you wanna make a prediction multiply that error term by 0 or by a constant very close to 0. Um, whereas if it's close to where you wanna make a prediction multiply that error term by 1. And so the net effect of this is that this is summing if, if, you know, the terms multiplied by 0 disappear, right? So the net effect of this is that the sums over essentially only the terms, uh, for the squared error for the examples that are close to the value, close to the value of x where you want to make a prediction, okay? Um, and that's why when you fit Theta to minimize this, you end up paying attention only to the points, only to the examples close to where you wanna make a prediction and fitting a line like a green line over there, okay? Um, so let me draw a couple more pictures to- to- to illustrate this. Um, so if- let me draw a slightly smaller data set just to make this easier to illustrate. Um, so that's your training set. So there's your examples x1, x2, x3, x4. And if you want to make a prediction here, right, at that point x, then, um, this curve here looks the- the- the shape of this curve is actually like this, right? Um, and this is the shape of a Gaussian bell curve. But this has nothing to do with a Gaussian density, right, so this thing does not integrate to 1. So- so it's just sometimes you ask well, is this- is this using a Gaussian density? The answer is no. Uh, this is just a function that, um, is shaped a lot like a Gaussian but, you know, Gaussian densities, probability density functions have to integrate to 1 and this does not. So there's nothing to do with a Gaussian probability density. Question? So how- how do you choose the width of the- Oh, so how do you choose the width, lemmi get back to that. Yeah. Um, and so for this example this height here says give this example a weight equal to the height of that thing. Give this example a weight to the height of this, height of this, height of that, right? Which is why if you actually- if you have an example this way out there, you know, is given a weight that's essentially 0. Which is why it's weighting only the nearby examples when trying to fit a straight line, right, uh, for the- for making predictions close to this, okay? Um, now so one last thing that I wanna mention which is, um, the- the- the question just now which is how do you choose the width of this Gaussian density, right? How fat it is or how thin should it be? Um, and this decides how big a neighborhood should you look in order to decide what's the neighborhood of points you should use to fit this, you know, local straight line. And so, um, for Gaussian function like this, uh, this- I'm gonna call this the, um, bandwidth parameter tau, right? And this is a parameter or a hyper-parameter of the algorithm. And, uh, depending on the choice of tau, um, uh, you can choose a fatter or a thinner bell-shaped curve, which causes you to look in a bigger or a narrower window in order to decide, um, you know, how many nearby examples to use in order to fit the straight line, okay? 
And it turns out that, um, and I wanna leave- I wanna leave you to discover this yourself in the problem set. Um, if- if you've taken a little bit of machine learning elsewhere I've heard of the terms [inaudible] Test. It's on? Okay, good. It was on. Good. It turns out that, um, the choice of the bandwidth tau has an effect on, uh, overfitting and underfitting. If you don't know what those terms mean don't worry about it, we'll define them later in this course. But, uh, what you get to do in the problem set is, uh, play with tau yourself and see why, um, if tau is too broad, you end up fitting, um, you end up over-smoothing the data and if tau is too thin you end up fitting a very jagged fit to the data. And if any of these things don't make sense yet don't worry about it they'll make sense after you play a bit in the- in the problem set, okay? Um, so yeah, since- since you- you play with the varying tau in the problem set and see for yourself the net impact of that, okay? Question? Is tau raised to power there or is that just a- just a- [NOISE] Thank you, uh, this is tau squared. Yeah. Yeah. So- so what happens if you need to invert, uh, the [inaudible] What happens if you need to infer the value of h outside the scope of the dataset? It turns out that you can still use this algorithm. It's just that, um, its results may not be very good. Yeah. It- it- it depends I guess. Um, locally linear regression is usually not greater than extrapolation, but then most- many learning algorithms are not good at extrapolation. So all- all the formulas still work, you can still implement this. But, um, yeah. You can also try- you can also try a linear problem set and see what happens. Yeah. One last question. Is it possible to have like a vertical tau depending on whether some parts of your data have lots of- Yeah. Yes, this is mostly for the variable tau depending- Uh, uh, yes, it is, uh, and there are quite complicated ways to choose tau based on how many points there on the local region and so on. Yes. There's a huge literature on different formulas actually for example instead of this Gaussian bump thing, uh, there's, uh, sometimes people use that triangle shape function. So it actually goes to zero outside some small rings. So there are, there are many versions of this algorithm. Um, so I tend to use, uh, a locally weighted linear regression when, uh, you have a relatively low dimensional data set. So when the number of features is not too big, right? So when n is quite small like 2 or 3 or something and we have a lot of data. And you don't wanna think about what features to use, right. So- so that's the scenario. So if, if you actually have a data set that looks like these up in drawing, you know, locally weighted linear regression is, is a, is a pretty good algorithm. Um, just one last question. Then we're moving on. When you have a lot of data like this, does it usually complicate the question, since you're [BACKGROUND] Oh, sure. Yes, if you have a lot of data, that wants to be computationally expensive, yes, it would be. Uh, I guess a lot of data is relative. Uh, yes we have, you know, 2, 3, 4 dimensional data and hundreds of examples, I mean, thousands of examples. Uh, it turns out the computation needed to fit the minimization is, uh, similar to the normal equations, and so you- it involves solving a linear system of equations of dimension equal to the number of training examples you have. So, if that's, you know, like a thousand or a few thousands, that's not too bad. 
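To make the weighted fit concrete, here is a minimal sketch of locally weighted linear regression at a single query point. The lecture only says the computation resembles the normal equations, so the closed-form weighted solution theta = (X^T W X)^{-1} X^T W y used below, along with the function name and single-feature setup, are my assumptions rather than the course's reference implementation.

```python
import numpy as np

def lwr_predict(x_query, X, y, tau):
    """Locally weighted linear regression at one query point.
    X is m x 2 with a leading column of 1s (the intercept feature x_0) and one
    real-valued feature; y has length m; tau is the bandwidth discussed above."""
    diffs = X[:, 1] - x_query                    # distance of each x_i from the query
    w = np.exp(-diffs ** 2 / (2 * tau ** 2))     # weights near 1 close by, near 0 far away
    WX = X * w[:, None]                          # each row of X scaled by its weight
    theta = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted normal equations
    return np.array([1.0, x_query]) @ theta      # prediction of the local line at x_query
```

Note that theta is re-fit for every query point, which is why the entire training set has to be kept around, matching the non-parametric description given earlier.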
If you have millions of examples then, then there are also multiple scaling algorithms like KD trees and much more complicated algorithms to do this when you have millions or hun- tens of millions of examples. Yeah. Okay. So you get a better sense of this algorithm when you play with it, um, in the problem set. Now, the second topic-one of- so I'm gonna put aside locally weighted regression. We won't talk about that set of ideas anymore, uh, today. But, but what I wanna do today is, uh, on last Wednesday I had said that- I had promised last Wednesday that today I'll give a justification for why we use the squared error, right. Why the squared error and why not to the fourth power or absolute value? Um, and so, um, what I want to show you today- now is a probabilistic interpretation of linear regression and this probabilistic interpretation will put us in good standing as we go on to logistic regression today, uh, and then generalize linear models later this week. We're going to keep up to-keep the notation there so we could continue to refer to it. So why these squares? Why squared error? Um, I'm gonna present a set of assumptions under which these squares using squared error falls out very naturally. Which is let's say for housing price prediction. Let's assume that there's a true price of every house y i which is x transpose, um, say there i, plus epsilon i. Where epsilon i is an error term. That includes, um, unmodeled effects, you know, and just random noise. Okay. So let's assume that the way, you know, housing prices truly work is that every house's price is a linear function of the size of the house and number of bedrooms, plus an error term that captures unmodeled effects such as maybe one day that seller is in an unusually good mood or an unusually bad mood and so that makes the price go higher or lower. We just don't model that, um, as well as random noise, right. Or, or maybe the model will skew this street, you know, preset to persistent capture, that's one of the features, but other things have an impact on housing prices. Um, and we're going to assume that, uh, epsilon i is distributed Gaussian would mean 0 and co-variance sigma squared. So I'm going to use this notation to mean- so the way you read this notation is epsilon i this twiddle you pronounce as, it's distributed. And then stripped n parens 0, sigma squared. This is a normal distribution also called the Gaussian Distribution, same thing. Normal distribution and Gaussian distribution mean the same thing. The normal distribution would mean 0 and, um, a variance sigma squared. Okay. Um, and what this means is that the probability density of epsilon i is- this is the Gaussian density, 1 over root 2 pi sigma e to the negative epsilon i squared over 2 sigma squared. Okay. And unlike the Bell state-the bell-shaped curve I used earlier for locally weighted linear regression, this thing does integrate to 1, right. This-this function integrates to 1. Uh, and so this is a Gaussian density, this is a prob-prob-probability density function. Um, and this is the familiar, you know, Gaussian bell-shaped curve with mean 0 and co-variance- and variance, uh, uh, sigma squared where sigma kinda controls the width of this Gaussian. Okay? Uh, and if you haven't seen Gaussian's for a while we'll go over some of the, er, probability, probability pre-reqs as well in the classes, Friday discussion sections. So, in other words, um, we assume that the way housing prices are determined is that, first is a true price theta transpose x. 
And then, you know, some random force of nature. Right, the mood of the seller or, I-I-I don't know-I don't have other factors, right. Perturbs it from this true value, theta transpose xi. Um, and the huge assumption we're gonna make is that the epsilon I's these error terms are IID. And IID from statistics stands for Independently and Identically Distributed. And what that means is that the error term for one house is independent, uh, as the error term for a different house. Which is actually not a true assumption. Right. Because, you know, if, if one house is priced on one street is unusually high, probably a price on a different house on the same street will also be unusually high. And so- but, uh, this assumption that these epsilon I's are IID since they're independently and identically distributed. Um, is one of those assumptions that, that, you know, is probably not absolutely true, but may be good enough that if you make this assumption, you get a pretty good model. Okay. Um, and so let's see. Under these set of assumptions this implies that [NOISE] the density or the probability of y i given x i and theta this is going to be this. Um, and I'll, I'll take this and write it in another way. In other words, given x and theta, what's the density- what's the probability of a particular house's price? Well, it's going to be Gaussian with mean given by theta transpose xi or theta transpose x, and the variance is, um, given by sigma squared. Okay. Um, and so, uh, because the way that the price of a house is determined is by taking theta transpose x with the, you know, quote true price of the house and then adding noise or adding error of variance sigma squared to it. And so, um, the, the assumptions on the left imply that given x and theta, the density of y, you know, has this distribution. Which is- really this is the random variable y, and that's the mean, right, and that's the variance of the Gaussian density. Okay. Now, um, two pieces of notation. Um, I want to, one that you should get familiar with. Um, the reason I wrote the semicolon here is, uh, that- the way you read this equation is the semicolon should be read as parameterized as. Right, um, and so because, uh, uh, the, the alternative way to write this would be to say P of xi given yi, excuse me, P of y given xi comma theta. But if you were to write this notation this way, this would be conditioning on theta, but theta is not a random variable. So you shouldn't condition on theta, which is why I'm gonna write a semicolon. And so the way you read this is, the probability of yi given xi and parameterize, oh, excuse me, parameterized by theta is equal to that formula, okay? Um, if, if, if you don't understand this distinction, again, don't worry too much about it. In, in statistics there are multiple schools of statistics called Bayesian statistics, frequentist statistics, this is a frequentist interpretation. Uh, for the purposes of machine learning, don't worry about it, but I find that being more consistent with terminology prevents some of our statistician friends from getting really upset, but, but, but, you know, I'll try to follow statistics convention. Uh, so- because just only unnecessary flack I guess, um, but for the per- for practical purposes this is not that important. If you forget this notation on your homework. don't worry about it we won't penalize you, but I'll try to be consistent. 
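For reference, the board formula being pointed to ("this is going to be this") is, in my reconstruction, the Gaussian density centered at theta transpose x:

```latex
\[
p\!\left(y^{(i)} \mid x^{(i)};\theta\right)
  = \frac{1}{\sqrt{2\pi}\,\sigma}
    \exp\!\left(-\frac{\left(y^{(i)} - \theta^{T}x^{(i)}\right)^{2}}{2\sigma^{2}}\right),
\qquad
y^{(i)} \mid x^{(i)};\theta \;\sim\; \mathcal{N}\!\left(\theta^{T}x^{(i)},\; \sigma^{2}\right).
\]
```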
Um, but this just means that theta in this view is not a random variable, it's just theta is a set of parameters that parameterizes this probability distribution. Okay? Um, and the way to read the second equation is, um, when you write these equations usually don't write them with parentheses, but the way to parse this equation is to say that this thing is a random variable. The random variable y given x and parameterized by theta. This thing that I just drew in green parentheses is just a distributed Gaussian with that distribution, okay? All right. Um, any questions about this? Okay. So it turns out that [NOISE] if you are willing to make those assumptions, then linear regression, um, falls out almost naturally of the assumptions we just made. And in particular, under the assumptions we just made, um, the likelihood of the parameters theta, so this is pronounced the likelihood of the parameters theta, uh, L of theta which is defined as the probability of the data. Right? So this is probability of all the values of y of y1 up to ym given all the xs and given, uh, the parameters theta parameterized by theta. Um, this is equal to the product from I equals 1 through m of p of yi given xi parameterized by theta. Um, because we assumed the examples were- because we assume the errors are IID, right, that the error terms are independently and identically distributed to each other, so the probability of all of the observations, of all the values of y in your training set is equal to the product of the probabilities, because of the independence assumption we made. And so plugging in the definition of p of y given x parameterized by theta that we had up there, this is equal to product of that. Okay? Now, um, again, one more piece of terminology. Uh, you know, another question I've always been asked if you say, hey, Andrew, what's the difference between likelihood and probability, right? And so the likelihood of the parameters is exactly the same thing as the probability of the data, uh, but the reason we sometimes talk about likelihood, and sometimes talk of probability is, um, we think of likelihood. So this, this is some function, right? This thing is a function of the data as well as a function of the parameters theta. And if you view this number, whatever this number is, if you view this thing as a function of the parameters holding the data fixed, then we call that the likelihood. So if you think of the training set the data as a fixed thing, and then varying parameters theta, then I'm going to use the term likelihood. Whereas if you view the parameters theta as fixed and maybe varying the data, I'm gonna say probability, right? So, so you hear me use- well, I'll, I'll try to be consistent. I find I'm, I'm pretty good at being consistent but not perfect, but I'm going to try to say likelihood of the parameters, and probability of the data even though those evaluate to the same thing as just, you know, for this function, this function is a function of theta and the parameters which one are you viewing as fixed and which one are you viewing as, as variables. So when you view this as a function of theta, I'm gonna use this term likelihood. Uh, but- so, so hopefully you hear me say likelihood of the parameters. Hopefully you won't hear me say likelihood of the data, right? And, and similarly, hopefully you hear me say probability of the data and not the probability of the parameters, okay? Yeah. [inaudible]. Like other parameters. [inaudible]. Uh, okay. So probability of the data. No. 
Uh, uh, theta, I got it sorry, yes. Likelihood of theta. Got it. Yes. Sorry. Yes. Likelihood of theta. That's right. [inaudible]. Oh, uh, no. So- no. Uh, uh, so theta is a set of parameters, it's not a random variable. So we- likelihood of theta doesn't mean theta is a random variable. Right. Cool. Yeah. Thank you. Um, by the way, the, the, the stuff about what's a random variable and what's not, the semicolon versus comma thing. We explained this in more detail in the lecture notes. To me this is part of, um, uh, you know, a little bit paying homage to the- to the religion of, uh, frequentists versus Bayesians in statistics. From a- from a machine- from an applied machine learning, operational, what-you-write-code point of view, it doesn't matter that much. Uh, yeah. But theta is not a random variable; we have the likelihood of parameters, which are not random variables. Yeah. Go ahead. [inaudible]. Oh, what's the rationale for choosing, uh, oh, sure, why is epsilon i Gaussian? So, uh, uh, turns out because of the central limit theorem, uh, from statistics, uh, most error distributions are Gaussian, right? If something is- if there's an error that's made up of lots of little noise sources which are not too correlated, then by the central limit theorem it will be Gaussian. So if you think that most perturbations are, the mood of the seller, what's the school district, you know, what's the weather like, or access to transportation, and all of these sources are not too correlated, and you add them up, then the distribution will be Gaussian. Um, and, and I think- well, yeah. So you can use the central limit theorem, I think the Gaussian has become a default noise distribution. But for things where the true noise distribution is very far from Gaussian, uh, this model doesn't do as well. And in fact, when you see generalized linear models on Wednesday, you see when- how to generalize all of these algorithms to very different distributions like Poisson, and so on. All right. So, um, so we've seen the likelihood of the parameters theta. Um, so I'm gonna use lower case l to denote the log-likelihood. And the log-likelihood is just the log of the likelihood. Um, and so- well, just- right. And so, um, log of a product is equal to the sum of the logs. Uh, and so this is equal to- and so this is m log 1 over root 2 pi sigma, minus 1 over sigma squared times one-half the sum from i equals 1 through m of y i minus theta transpose x i, squared. Okay? Um. And so, um, one of the, uh, you know, well-tested methods in statistics for estimating parameters is to use maximum likelihood estimation or MLE, which means you choose theta to maximize the likelihood, right? So given the data set, how would you like to estimate theta? Well, one natural way to choose theta is to choose whatever value of theta has the highest likelihood. Or in other words, choose a value of theta so that that value of theta maximizes the probability of the data, right? And so, um, for- to simplify the algebra, rather than maximizing the likelihood capital L, it's actually easier to maximize the log likelihood. But the log is a strictly monotonically increasing function. So the value of theta that maximizes the log likelihood should be the same as the value of theta that maximizes the likelihood. And if you simplify the log likelihood, um, we conclude that if you're using maximum likelihood estimation, what you'd like to do is choose a value of theta that maximizes this thing, right? But, uh, this first term is just a constant, theta doesn't even appear in this first term.
And so what you'd like to do is choose the value of theta that maximizes this second term. Ah, notice there's a minus sign there. And so what you'd like to do is, uh, uh, i.e, you know, choose theta to minimize this term. Right. Also, sigma squared is just a constant. Right. No matter what sigma squared is, you know, so, so, uh, so if you want to minimize this term, excuse me, if you want to maximize this term, negative of this thing, that's the same as minimizing this term. Uh, but this is just J of theta. The cost function you saw earlier for linear regression. Okay? So this little proof shows that, um, choosing the value of theta to minimize the least squares errors, like you saw last Wednesday, that's just finding the maximum likelihood estimate for the parameters theta under this set of assumptions we made, that the error terms are Gaussian and IID. Okay, go ahead. Oh, thank you. Yes. Great. Thanks. Go ahead. [inaudible]. Oh, is there a situation where using this formula instead of least squares cost function will be a good idea? No. So this- I think this derivation shows that this- this is completely equivalent to least squares. Right. That if- if you want- if you're willing to assume that the error terms are Gaussian and IID and if you want to use Maximum Likelihood Estimation which is a very natural procedure in statistics, then, you know, then you should use least squares. Right. So yeah. If you know for some reason that the errors are not IID, like, is there a better way to figure out a better cost function? If you know for some reason errors are not IID, could you figure out a better cost function? Yes and no. I think that, um, you know, when building learners algorithms, ah, often we make model- we make assumptions about the world that we just know are not 100% true because it leads to algorithms that are computationally efficient. Um, and so if you knew that your- if you knew that your training set was very very non IID, there are- there're more sophisticated models you could build. But, um, ah, ah, yeah. But- but very often we wouldn't bother I think. Yeah. More often than not we might not bother. Ah, I can think of a few special cases where you would bother there but only if you think the assumption is really really bad. Ah, if you don't have enough data or something- something. Quite- quite rare. All right. Um, lemme think why, all right. I want to move on to make sure we get through the rest of things. Any burning questions? Yeah, okay, cool. All right. Um, so out of this machinery. Right. So- so- so what did we do here? Was we set up a set of probabilistic assumptions, we made certain assumptions about P of Y given X, where the key assumption was Gaussian errors in IID. And then through maximum likelihood estimation, we derived an algorithm which turns out to be exactly the least squares algorithm. Right? Um, what I'd like to do is take this framework, ah, and apply it to our first classification problem. Right. And so the- the key steps are, you know, one, make an assumption about P of Y given X, P of Y given X parameters theta, and then second is figure out maximum likelihood estimation. So I'd like to take this framework and apply it to a different type of problem, where the value of Y is now either 0 or 1. So is a classification problem. Okay? So, um, let's see. So the classification problem. In our first classification problem, we're going to start with binary classification. So the value of Y is either 0 or 1. 
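Looking back at the regression derivation for a moment before the classification discussion continues: a quick numerical sanity check, a sketch of mine with synthetic data and an assumed sigma, that maximizing the Gaussian log-likelihood l(theta) = m log(1/(sqrt(2 pi) sigma)) - (1/(2 sigma^2)) sum_i (y_i - theta^T x_i)^2 picks out the same theta as the least-squares / normal-equations solution.

import numpy as np

rng = np.random.default_rng(0)
m, sigma = 100, 2.0
X = np.column_stack([np.ones(m), rng.uniform(0, 10, m)])   # design matrix with intercept
true_theta = np.array([1.0, 3.0])
y = X @ true_theta + rng.normal(0, sigma, m)               # y = theta^T x + Gaussian noise

def log_likelihood(theta):
    # l(theta) = m*log(1/(sqrt(2*pi)*sigma)) - (1/(2*sigma^2)) * sum (y - theta^T x)^2
    resid = y - X @ theta
    return m * np.log(1.0 / (np.sqrt(2 * np.pi) * sigma)) - resid @ resid / (2 * sigma ** 2)

theta_ls = np.linalg.solve(X.T @ X, X.T @ y)   # least-squares / normal-equations solution
# Perturbing theta_ls in any direction should only lower the log-likelihood.
print(log_likelihood(theta_ls))
print(log_likelihood(theta_ls + np.array([0.1, 0.0])))
print(log_likelihood(theta_ls + np.array([0.0, -0.1])))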
And sometimes we call this binary classification because there are two classes. Right. Um, and so right- so that's a data set where I guess this is X and this is Y. Um, so something that's not a good idea is to apply linear regression to this data set. Some- sometimes you will do it and maybe you'll get away with it, but I wouldn't do it, and here's why. Which is, um, is- is tempting to just fit a straight line to this data and then take the straight line and threshold it at 0.5, and then say, oh, if it's above 0.5 round it off to 1, if it's below 0.5 round it off to 0. But it turns out that this, um, is not a good idea, uh, for classification problems. And- and here's why. Which is- for this data set it's really obvious what the- what the pattern is. Right? Everything to the left of this point, predict 0. Everything to the right of that point, predict 1. But let's say we now change the data set to just add one more example there. Right. And the pattern is still really obvious. It says everything to the left of this, predict 0, everything to the right of that, predict 1. But if you fit a straight line to this data set with this extra one point there- and it's not even an outlier, it's really obvious that this point way out there should be labeled one. But with this extra example, um, if we fit a straight line to the data, you end up with maybe something like that. Um, and somehow adding this one example, it really didn't change anything, right? But somehow the straight line fit moved from the green line to the, uh, moved from the blue line to the green line. And if you now threshold it at 0.5, you end up with a very different decision boundary. And so linear regression is just not a good algorithm for classification. Some people use it and sometimes they get lucky and it's not too bad, but I- I- I personally never use linear regression for classification problems. Right. Because you just don't know if you end up with a really bad fit to the data like this, okay? Um, so oh and- and- and the other unnatural thing about using linear regression for a classification problem is that, um, you know for a classification problem that the values are, you know, 0 or 1. Right. And so having it output negative values or values even greater than 1 seems- seems strange, um. So what I'd like to share with you now is really, probably by far the most commonly used classification algorithm, ah, called logistic regression. I'd say the two learning algorithms I probably use the most often are linear regression and logistic regression. Yeah, probably these two, actually. Um, and, uh, this is the algorithm. So, um, as- as we design the logistic regression algorithm, one of the things we might naturally want is for the hypothesis to output values between 0 and 1. Right. And this is mathematical notation for: the values for H of X, or H subscript theta of X, uh, lie in the set from 0 to 1. Right? This 0 to 1 square bracket is the set of all real numbers from 0 to 1. So this says, we want the hypothesis to output values, you know, between 0 and 1, so that it's in the set of all numbers from 0 to 1. Um, and so we're going to choose the following form of the hypothesis. Um, so. Okay. So we're gonna define a function, g of z, that looks like this. And this is called the sigmoid, uh, or the logistic function. Uh, these are synonyms, they mean exactly the same thing. So, uh, it can be called the sigmoid function, or the logistic function, it means exactly the same thing.
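The board formula for g isn't captured in the transcript; the standard logistic (sigmoid) function is g(z) = 1 / (1 + e^(-z)), and a minimal sketch of it and of the hypothesis h_theta(x) = g(theta^T x) might look like the following (my own illustration):

import numpy as np

def sigmoid(z):
    # The logistic (sigmoid) function g(z) = 1 / (1 + exp(-z)).
    # Output is always strictly between 0 and 1, and g(0) = 0.5.
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    # Logistic regression hypothesis: squash theta^T x into (0, 1).
    return sigmoid(theta @ x)

print(sigmoid(np.array([-10.0, 0.0, 10.0])))   # roughly 0, 0.5, and roughly 1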
But we're gonna choose a function, g of z. Uh, and this function is shaped as follows. If you plot this function, you find that it looks like this. Um, where if the horizontal axis is z, then this is g of z. And so it crosses x intercept at 0, um, and it, you know, starts off, well, really, really close to 0, rises, and then asymptotes towards 1. Okay? And so g of z output values are between 0 and 1. And, um, what logistic regression does is instead of- let's see. So previously, for linear regression, we had chosen this form for the hypothesis, right? We just made a choice that we'll say the housing prices are a linear function of the features x. And what logistic regression does is theta transpose x could be bigger than 1, it can be less than 0, which is not very natural. But instead, it's going to take theta transpose x and pass it through this sigmoid function g. So this force, the output values only between 0 and 1. Okay? Um, so you know, when designing a learning algorithm, uh, sometimes you just have to choose the form of the hypothesis. How are you gonna represent the function h, or h of- h subscript theta. And so we're making that choice here today. And if you're wondering, you know, there are lots of functions that we could have chosen, right? There are lots of why, why not, why not this function? Or why not, you know, there are lots of functions with vaguely this shape, they go between 0 and 1. So why are we choosing this specifically? It turns out that there's a broader class of algorithms called generalized linear models. You'll hear about on Wednesday, uh, of which this is a special case. So we've seen linear regression, you'll see logistic regression in a second, and on Wednesday, you'll see that both of these examples of a much bigger set of algorithms derived using a broader set of principles. So, so for now, just, you know, take my word for it tha- that we want to use the logistic function. Uh, uh, it'll turn out- you'll see on Wednesday that there's a way to derive even this function from, uh, from more basic principles, rather than just putting all this, this all out. But for now, let me just pull this out of a hat and say, that's the one we want to use. Okay. [NOISE] So, um, let's make some assumptions about the distribution of y given x parameterized by theta. So I'm going to assume that the data has the following distribution. The probability of y being 1, uh, again, from the breast cancer prediction that we had, from, uh, the first lecture. Right? It will be the chance of a tumor being cancerous, or being, um, um, malignant. Chance of y being 1, given the size of the tumor, that's the feature x parameterized by theta. That this is equal to the output of your hypothesis. So in other words, we're gonna assume that, um, what you want your learning algorithm to do is input the features and tell me what's the chance that this tumor is malignant. Right? What's the chance that y is equal to 1? Um, and by logic, I guess, because y can be only 1 or 0, the chance of y being equal to 0, this has got to be 1 minus that. Right? Because if a tumor has a 10% chance of being malignant, that means it has a 1 minus that. It means it must have a 90% chance of being benign. Right? Since these two probabilities must add up to 1. Okay? Yeah. [inaudible] Say that again. [inaudible]. Oh, can we change the parameters here? Yes, you can, but I'm not- yeah. But I think just to stick with convention in logistic regression. You, you- yeah. Sure. 
We can assume that p of y equals 1 was this, and p of y equals 1 was that, but I think either way. It's just one you call positive example, one you call a negative example. Right. So, so, uh, use this convention. Okay. Um, and now, bearing in mind that y, right? By definition, because it is a binary classification problem. But bear in mind that y can only take on two values, 0 or 1. Um, there's a nifty, sort of little algebra way to take these two equations and write them in one equation, and this will make some of the math a little bit easier. When I take these two equations, take these two assumptions and take these two facts, and compress it into one equation, which is this. [NOISE] [BACKGROUND] Okay? Oh, and I dropped the theta subscript just to simplify the notation of it. But I'm, I'm gonna be a little bit sloppy sometimes. Well, a little less formal, whether I write the theta there or not. Okay? Um, but these two definitions of p of y given x parameterized by theta, bearing in mind that y is either 0 or 1, can be compressed into one equation like this. Uh, and, and let me just say why. Right? It's because if y- use a different color. Right. If y is equal to 1, then this becomes h of x to the power of 1 times this thing to the power of 0. Right? If y is equal to 1, then, um, 1 - y is 0. And, you know, anything to the power of 0 is just equal to 1. [NOISE] And so if y is equal to 1, you end up with p of y given x parameterized by theta equals h of x. Right? Which is just what we had there. And conversely, if y is equal to 0, then, um, this thing will be 0, and this thing will be 1. And so you end up with p of y given x parameterized theta is equal to 1 minus h of x, which is just equal to that second equation. Okay? Right. Um, and so this is a nifty way to take these two equations and compress them into one line, because depending on whether y is 0 or 1, one of these two terms switches off, because it's exponentiated to the power of 0. Um, and anything to the power of 0 is just equal to 1. Right? So one of these terms is just, you know, 1. Just leaving the other term, and just selecting the, the appropriate equation, depending on whether y is 0 or 1. Okay? So with that, um, uh, so with this little, uh, on a notational trick, it will make the data derivations simpler. Okay? Um, yeah. So let me use a new board. [NOISE] I want that. All right. Actually we can reuse along with this. All right. So, uh, we're gonna use maximum likelihood estimation again. So let's write down the likelihood of the parameters. Um, so well, it's actually p of all the y's given all the x's parameterized by theta was equal to this, uh, which is now equal to product from i equals 1 through m, h of x_i to the power of y_i, times 1 minus h of x_i to the power of 1 minus y_i. Okay. Where all I did was take this definition of p of y given x parameterized by theta, uh, you know, from that, after we did that little exponentiation trick and wrote it in here. Okay. Um. [NOISE] And then, uh, with maximum likelihood estimation we'll want to find the value of theta that maximizes the likelihood, maximizes the likelihood of the parameters. And so, um, same as what we did for linear regression to make the algebra, you have to, to, to make the algebra a bit more simple, we're going to take the log of the likelihood and so compute the log likelihood. And so that's equal to, um, [NOISE] let's see, right. And so if you take the log of that, um, you end up with- you end up with that. Okay? 
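For concreteness, a small sketch (mine, with a made-up toy dataset) of the compressed expression p(y | x; theta) = h(x)^y (1 - h(x))^(1 - y) and the log-likelihood it gives, l(theta) = sum_i [ y_i log h(x_i) + (1 - y_i) log(1 - h(x_i)) ]:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood(theta, X, y):
    # l(theta) = sum_i [ y_i * log h(x_i) + (1 - y_i) * log(1 - h(x_i)) ],
    # i.e. the log of prod_i h(x_i)^y_i * (1 - h(x_i))^(1 - y_i).
    h = sigmoid(X @ theta)
    return np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))

# Tiny made-up example: X has an intercept column, y is 0/1.
X = np.array([[1.0, 0.5], [1.0, 2.0], [1.0, -1.0]])
y = np.array([1, 1, 0])
print(log_likelihood(np.array([0.0, 1.0]), X, y))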
And, um, it- so, so, in other words, uh, the last thing you want to do is, try to choose the value of theta to try to maximize L of theta. Okay. Now, so, so just, just to summarize where we are, right. Uh, if you're trying to predict, your malignancy and benign, uh, tumors, you'd have a training set with XI YI. You define the likelihood, define the log-likelihood. And then what you need to do is have an algorithm such as gradient descent, or gradient descent, talk about that in a sec to try to find the value of theta that maximizes the log-likelihood. And then having chosen the value of theta when a new patient walks into the doctor's office you would take the features of the new tumor and then use H of theta to estimate the chance of this new tumor in the new patient that walks in tomorrow to estimate the chance that this new thing is ah is- is malignant or benign. Okay? So the algorithm we're going to use to choose theta to try to maximize the log-likelihood is a gradient ascent or batch gradient ascent. And what that means is we will update the parameters theta J according to theta J plus the partial derivative with respect to the log-likelihood. Okay? Um, and the differences from what you saw that linear regression from last time is the following. Just two differences I guess. For linear regression. Last week, I have written this down, theta J gets updated as theta J minus partial with respect to theta J of J of theta, right? So you saw this on Wednesday. So the two differences between that is well, first instead of J of theta you're now trying to optimize the log-likelihood instead of this squared cost function. And the second change is, previously you were trying to minimize the squared error. That's why we had the minus. And today you're trying to maximize the log-likelihood which is why there's a plus sign. Okay? And so, um, so gradient descent you know, is trying to climb down this hill whereas gradient ascent has a, um, uh, has a- has a concave function like this. And it's trying to, like, climb up the hill rather than climb down the hill. So that's why there's a plus symbol here instead of a minus symbol because we are trying to maximize the function rather than minimize the function. So the last thing to really flesh out this algorithm which is done in the lecture notes, but I don't want to do it here today is to plug in the definition of H of theta into this equation and then take this thing. So that's the log-likelihood of theta and then through calculus and algebra you can take derivatives of this whole thing with respect to theta. This is done in detail in the lecture notes. I don't want to use this in class, but go ahead and take derivatives of this big formula with respect to the parameters theta in order to figure out what is that thing, right? What is this thing that I just circled? And it turns out that if you do so you will find that batch gradient ascent is the following. You update theta J according to- oh, actually I'm sorry, I forgot the learning rate. Yeah, it's your learning rate Alpha. Okay. Learning rate Alpha times this. Okay? Because this term here is the partial derivative respect to Theta J after log-likelihood. Okay? And the full calculus and so on derivations given the lecture notes. Okay? Um, yeah. [inaudible]. Is there a chance of local maximum in this case? No. There isn't. It turns out that this function that the log-likelihood function L of Theta full logistic regression it always looks like that. Uh, so this is a concave function. 
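A minimal batch gradient ascent sketch of the update just described, theta_j := theta_j + alpha * sum_i (y_i - h_theta(x_i)) * x_ij; the data and learning rate below are made up, and this is only an illustration rather than the course's reference implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_gradient_ascent(X, y, alpha=0.1, iterations=1000):
    # Batch gradient ascent on the log-likelihood:
    #   theta := theta + alpha * X^T (y - h(X theta)),
    # the vectorized form of theta_j := theta_j + alpha * sum_i (y_i - h(x_i)) * x_ij.
    theta = np.zeros(X.shape[1])
    for _ in range(iterations):
        theta += alpha * X.T @ (y - sigmoid(X @ theta))
    return theta

# Made-up data with an intercept column.
X = np.array([[1.0, 0.2], [1.0, 0.9], [1.0, 3.1], [1.0, 4.0]])
y = np.array([0, 0, 1, 1])
theta = fit_logistic_gradient_ascent(X, y)
print(theta, sigmoid(X @ theta))   # predicted probabilities should track the labels

Because the log-likelihood being climbed here is concave, the choice of starting point for theta matters little.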
So there are no local op. The only maximum is a global maxima. There's actually another reason why we chose the logistic function because if you choose a logistic function rather than some other function that will give you 0 to 1, you're guaranteed that the likelihood function has only one global maximum. And this, there's actually a big class about, actually what you'll see on Wednesday, this is a big class of algorithms of which linear regression is one example, logistic regression is another example and for all of the algorithms in this class there are no local optima problems when you- when you derive them this way. So you see that on Wednesday when we talk about generalized linear models. Okay? Um, so actually, but now that I think about, there's just one question for you to think about. This looks exactly the same as what we've figured out for linear regression, right? That when actually the difference for linear regression was I had a minus sign here and I reversed these two terms. I think I had H theta of XI minus YI. If you put the minus sign there and reverse these two terms, so take the minus minus, this is actually exactly the same as what we had come up with for linear regression. So why, why, why is this different, right? I started off saying, don't use linear regression for classification problems because of ah because of that problem that a single example could really you know- I started off with an example assuming that linear regression is really bad for classification and we did all this work and I came back to the same algorithm. So what happened? Just, yeah go ahead. [BACKGROUND]. Yeah. All right, cool. Awesome. Right? So what happened is the definition of H of theta is now different than before but the surface level of the equation turns out to be the same. Okay? And again it turns out that for every algorithm in this class of algorithms you'll see you on Wednesday you end up with the same thing. Actually this is a general property of a much bigger class of algorithms called generalized linear models. Although, yeah, i- i- interesting historical diverge, because of the confusion between these two algorithms in the early history of machine learning there was some debate about you know between academics saying, no, I invented that, no, I invented that. And then he goes, no, it's actually different algorithms. [LAUGHTER] Alright, any questions? Oh go ahead. [BACKGROUND]. Oh, great question. Is there a equivalent of normal equations to logistic regression? Um, short answer is no. So for linear regression the normal equations gives you like a one shot way to just find the best value of theta. There is no known way to just have a close form equation unless you find the best value of theta which is why you always have to use an algorithm, an iterative optimization algorithm such as gradient ascent or ah and we'll see in a second Newton's method. All right, cool. So, um, there's a great lead in to, um, the last topic for today which is Newton's method. [NOISE] Um, you know, gradient ascent right is a good algorithm. I use gradient ascent all the time but it takes a baby step, takes a baby step, take a baby step, it takes a lot of iterations for gradient assent to converge. Um, there's another algorithm called Newton's method which allows you to take much bigger jumps so that's theta, you know, so- so, uh, there are problems where you might need you know,say 100 iterations or 1000 iterations of gradient ascent. 
That if you run this algorithm called Newton's method you might need only 10 iterations to get a very good value of theta. But each iteration will be more expensive. We'll talk about pros and cons in a second. But, um, let's see how- let's- let's describe this algorithm which is sometimes much faster for gradient than gradient ascent for optimizing the value of theta. Okay? So what we'd like to do is, uh, all right, so let me- let me use this simplified one-dimensional problem to describe Newton's method. So I'm going to solve a slightly different problem with Newton's method which is say you have some function f, right, and you want to find a theta such that f of theta is equal to 0. Okay? So this is a problem that Newton's method solves. And the way we're going to use this later is what you really want is to maximize L of theta, right, and well at the maximum the first derivative must be 0. So i.e. you want to value where the derivative L prime of theta is equal to 0, right? And L prime is the derivative of theta because this is where L prime is another notation for the first derivative of theta. So you want to maximize a function or minimize a function. What that really means is you want to find a point where the derivative is equal to 0. So the way we're going to use Newton's method is we're going to set F of theta equal to the derivative and then try to find the point where the derivative is equal to 0. Okay? But to explain Newton's method I'm gonna, you know, work on this other problem where you have a function F and you just want to find the value of theta where F of theta is equal to 0 and then- and we'll set F equal to L prime theta and that's how we'll we'll apply this to um, logistic regression. So, let me draw in pictures how this algorithm works. Uh. [NOISE] [BACKGROUND] All right. So let's say that's the function f, and, you know, to make this drawable on a whiteboard, I'm gonna assume theta is just a real number for now. So theta is just a single, you know, like a scalar, a real number. Um, so this is how Newton's method works. Um, oh, and the goal is to find this point. Right? The goal is to find the value of theta where f of theta is equal to 0. Okay? So let's say you start off, um, right. Let's say you start off at this point. Right? At the first iteration, you have randomly initialized data, and actually theta is zero or something. But let's say you start off at that point. This is how one iteration of Newton's method will work, which is- let me use a different color. Right. Start off with theta 0, that's just a first value consideration. What we're going to do is look at the function f, and then find a line that is just tangent to f. So take the derivative of f and find a line that is just tangent to f. So take that red line. It just touches the function f. And we're gonna use, if you will, use a straight line approximation to f, and solve for where f touches the horizontal axis. So we're gonna solve for the point where this straight line touches the horizontal axis. Okay? And then we're going to set this, and that's one iteration of Newton's method. So we're gonna move from this value to this value, right? And then in the second iteration of Newton's method, we're gonna look at this point. And again, you know, take a line that is just tangent to it, and then solve for where this touches the horizontal axis, and then that's after two iterations of Newton's method. Right. And then you repeat. Take this, sometimes you can overshoot a little bit, but that's okay. Right? 
And then that's, um, there's a cycle back to red. Let's take the three, then you take this, let's take the four. [NOISE] Excuse me. So you can tell that Newton's method is actually a pretty fast algorithm. Right? When in just one, two, three, four iterations, we've gotten really really close to the point where f of theta is equal to 0. So let's write out the math for how you do this. So um, let's see. I'm going to- so let me just write out the, the derive, um, you know, how you go from theta 0 to theta 1. So I'm going to use this horizontal distance. I'm gonna denote this as, uh, delta. This triangle is uppercase Greek alphabet delta. Right? This is lowercase delta, that's uppercase delta. Right? Uh, and then the height here, well that's just f of theta 0. Right? This is the height of- it's just f of theta 0. And so, um, let's see. Right. So, uh, what we'd like to do is solve for the value of delta, because one iteration of Newton's method is a set, you know, of theta 1 is set to theta 0 minus delta. Right? So how do you solve for delta? Well, from, uh, calculus we know that the slope of the function f is the height over the run. Well, height over the width. And so we know that the derivative of del- f prime, that's the derivative of f at the point theta 0, that's equal to the height, that's f of theta, divided by the horizontal. Right? So the derivative, meaning the slope of the red line is by definition the derivative is this ratio between this height over this width. Um, and so delta is equal to f of theta 0 over f prime of theta 0. And if you plug that in, then you find that a single iteration of Newton's method is the following rule of theta t plus 1 gets updated as theta t minus f of theta t over f prime of theta t. Okay. Where instead of 0 and 1 I replaced them with t and t plus 1. Right? Um, and finally to, to- you know, the very first thing we did was let's let f of theta be equal to say L prime of theta. Right? Because we wanna find the place where the first derivative of L is 0. Then this becomes theta t plus 1, gets updated as theta t minus L prime of theta t over L double prime of theta t. So it's really, uh, the first derivative divided by the second derivative. Okay? So Newton's method is a very fast algorithm, and, uh, it has, um, Newton's method enjoys a property called quadratic convergence. Not a great name. Don't worry- don't worry too much about what it means. But informally, what it means is that, um, if on one iteration Newton's method has 0.01 error, so on the X axis, you're 0.01 away from the, from the value, from the true minimum, or the true value of f is equal to 0. Um, after one iteration, the error could go to 0.0001 error, and after two iterations it goes 0.00000001. But roughly Newton's method, um, under certain assumptions, uh, uh, that functions move not too far from quadratic, the number of significant digits that you have converged, the minimum doubles on a single iteration. So this is called quadratic convergence. Um, and so when you get near the minimum, Newton's method converges extremely rapidly. Right? So, so after a single iteration, it becomes much more accurate, after another iteration it becomes way, way, way more accurate, which is why Newton's method requires relatively few iterations. Um, and, uh, let's see. I have written out Newton's method for when theta is a real number. Um, when theta is a vector, right? 
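As a quick illustration of the scalar rule just derived (the vector case follows right after), here is a small sketch of mine with a made-up example function:

def newton_scalar(f, f_prime, theta0, iterations=10):
    # Newton's method for solving f(theta) = 0:
    #   theta_{t+1} = theta_t - f(theta_t) / f'(theta_t)
    theta = theta0
    for _ in range(iterations):
        theta = theta - f(theta) / f_prime(theta)
    return theta

# Example: find where f(theta) = theta^2 - 2 is zero, i.e. sqrt(2) ~ 1.41421.
print(newton_scalar(lambda t: t**2 - 2, lambda t: 2*t, theta0=1.0, iterations=5))

# To maximize a log-likelihood l, set f = l', so the same rule becomes
#   theta_{t+1} = theta_t - l'(theta_t) / l''(theta_t).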
Then the generalization of the rule I wrote above is the following: theta t plus 1 gets updated as theta t minus H inverse times the gradient of the log-likelihood with respect to theta, where H is the Hessian matrix. So these details are written in the lecture notes. Um, but to give you a sense, it- when theta is a vector, this is the vector of derivatives. All right, so I guess this is R n plus 1 dimensional. If theta is in R n plus 1, then this derivative with respect to theta of the log-likelihood becomes a vector of derivatives, and the Hessian matrix, this becomes a matrix that's R n plus 1 by n plus 1. So it becomes a square matrix with the dimension equal to the parameter vector theta. And the Hessian matrix is defined as the matrix of partial derivatives. Right? So um, [NOISE] and so the disadvantage of Newton's method is that in high-dimensional problems, if theta is a vector, then each step of Newton's method is much more expensive, because, um, you're, you're either solving a linear system of equations, or having to invert a pretty big matrix. So if theta is ten-dimensional, you know, this involves inverting a 10 by 10 matrix, which is fine. But if theta was 10,000 or 100,000 dimensional, then each iteration requires computing like a 100,000 by 100,000 matrix and inverting that, which is very hard. Right? It's actually very difficult to do that in very high-dimensional problems. Um, so, you know, some rules of thumb, um, if the number of parameters you have for- if the number of parameters in your problem is not too big, if you have 10 parameters, or 50 parameters, I would almost certainly- I would very likely use Newton's method, uh, because then you probably get convergence in maybe 10 iterations, or, you know, 15 iterations, or even less than 10 iterations. But if you have a very large number of parameters, if you have, you know, 10,000 parameters, then rather than dealing with a 10,000 by 10,000 matrix, or even bigger, a 50,000 by 50,000 matrix if you have 50,000 parameters, I will use, uh, gradient descent then. Okay? But if the number of parameters is not too big, so that the computational cost per iteration is manageable, then Newton's method converges in a very small number of iterations, and, and could be a much faster algorithm than gradient descent. All right. So, um, that's it for, uh, Newton's method. Um, on Wednesday, I guess we are running out of time. On Wednesday, you'll hear about generalized linear models. Um, I think unfortunately I- I promised to be in Washington DC, uh, uh, tonight, I guess through Wednesday. So, uh, you'll hear from some- I think Anand will give the lecture on Wednesday, uh, but I will be back next week. So un- unfortunately he was trying to do this, but because of his health things, he can't lecture. So Anand will do this Wednesday. Thanks everyone. See you on Wednesday. |
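As an added sketch of the vector form described above: for logistic regression the gradient of the log-likelihood is X^T (y - h) and the Hessian is H = -X^T S X with S = diag(h_i (1 - h_i)). The synthetic data below is made up, and this is an illustration of mine rather than the lecture's own code.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_newton(X, y, iterations=10):
    # Newton's method for maximizing the logistic regression log-likelihood:
    #   theta := theta - H^{-1} * grad l(theta)
    # where grad l = X^T (y - h) and H = -X^T S X, S = diag(h_i * (1 - h_i)).
    theta = np.zeros(X.shape[1])
    for _ in range(iterations):
        h = sigmoid(X @ theta)
        grad = X.T @ (y - h)
        H = -(X.T * (h * (1 - h))) @ X
        theta = theta - np.linalg.solve(H, grad)
    return theta

# Synthetic, overlapping classes so the maximum likelihood estimate is finite.
rng = np.random.default_rng(0)
m = 200
x1 = rng.normal(0, 1, m)
p = sigmoid(1.0 + 2.0 * x1)                      # labels drawn from a true logistic model
y = (rng.uniform(size=m) < p).astype(float)
X = np.column_stack([np.ones(m), x1])
print(fit_logistic_newton(X, y, iterations=8))   # should land near the true (1.0, 2.0)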
Stanford_CS229_Machine_Learning_Full_Course_taught_by_Andrew_Ng_Autumn_2018 | Lecture_6_Support_Vector_Machines_Stanford_CS229_Machine_Learning_Andrew_Ng_Autumn_2018.txt | All right. Hey, everyone. Morning and welcome back. Um, so what I'd like to do today is continue our discussion of Naive Bayes and in particular, um, we've described how to use Naive Bayes in a generative learning algorithm, to build a spam classifier that will almost work, right? And, and, and so today you see how Laplace smoothing is one other idea, uh, you need to add to the Naive Bayes algorithm we described on Monday, to really make it work, um, for, say, email spam classification, or, or for text classification. Uh, and then we'll talk about a different version of Naive Bayes that's even better than the one we've been discussing so far. Um, talk a little bit about, ah, advice for applying machine-learning algorithms. So this would be useful to you as you get started on your, ah, CS229 class projects as well. This is a strategy of how to choose an algorithm and what to do first, what to do second, uh, and then we'll start with, um, an intro to support vector machines. Okay? Um, so to recap, uh, the Naive Bayes Algorithm is a generative learning algorithm in which, given a piece of email, or Twitter message or some piece of text, um, you take a dictionary and put in zeros and ones depending on whether different words appear in a particular email, and so this becomes your feature representation for, say, an email that you're trying to classify as spam or not spam. Um, so using the indicator function notation, um, X_j-, uh, X_j- I've been trying to use the subscript j to denote the index into the features and i to index into the training examples, and you'll see I'm not being consistent with that. So X_j is the indicator for whether word j appears in an email. And so, um, to build a generative model for this, uh, we need to model these two terms, p of x given y and p of y. Uh, so Gaussian discriminant analysis models these two terms with a Gaussian and a Bernoulli respectively, and Naive Bayes uses a different model. And with Naive Bayes in particular, p of x given y is modeled as a, um, product of the conditional probabilities of the individual features given the class label y. And so the parameters of the Naive Bayes model are, um, phi subscript y, which is the class prior. What's the chance that y is equal to 1, before you've seen any features? As well as phi subscript j given y equals 0, which is the chance of that word appearing in a non-spam email, as well as phi subscript j given y equals 1, which is the chance of that word appearing in a spam email. Okay? Um, and so if you derive the maximum likelihood estimates, you will find that the maximum likelihood estimate of, you know, phi y is this. Right? Just the fraction of training examples, um, that were equal to spam, and the maximum likelihood estimate of this- and this is just an indicator function notation, a way of writing, um, look at all of your, uh, emails with label y equals 0 and count up: what fraction of them did this feature X_j appear in? Did this word X_j appear? Right? Um, and then finally at prediction time, um, let's see, you will calculate p of y equals 1 given X. This is kinda according to Bayes rule. Okay? Um, all right.
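As a side illustration of the recap above, a minimal counting sketch (mine, with a made-up three-word vocabulary) of the multivariate Bernoulli Naive Bayes fit and the Bayes-rule prediction, deliberately without Laplace smoothing yet, since that fix is discussed next:

import numpy as np

def fit_naive_bayes_bernoulli(X, y):
    # X: m x n matrix of 0/1 word-indicator features, y: m-vector of 0/1 labels.
    # Maximum likelihood estimates (no Laplace smoothing yet):
    #   phi_y        = fraction of examples with y = 1
    #   phi_j_given1 = fraction of y=1 examples containing word j
    #   phi_j_given0 = fraction of y=0 examples containing word j
    phi_y = np.mean(y == 1)
    phi_j_given1 = X[y == 1].mean(axis=0)
    phi_j_given0 = X[y == 0].mean(axis=0)
    return phi_y, phi_j_given0, phi_j_given1

def predict_proba(x, phi_y, phi_j_given0, phi_j_given1):
    # Bayes rule: p(y=1|x) = p(x|y=1) p(y=1) / [ p(x|y=1) p(y=1) + p(x|y=0) p(y=0) ],
    # with p(x|y) = prod_j phi_j^x_j * (1 - phi_j)^(1 - x_j) under the Naive Bayes assumption.
    def p_x_given(phi_j):
        return np.prod(phi_j ** x * (1 - phi_j) ** (1 - x))
    num = p_x_given(phi_j_given1) * phi_y
    den = num + p_x_given(phi_j_given0) * (1 - phi_y)
    return num / den

# Made-up 3-word vocabulary, 4 training emails (rows), labels 1 = spam.
X = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 0], [0, 0, 1]])
y = np.array([1, 1, 0, 0])
params = fit_naive_bayes_bernoulli(X, y)
print(predict_proba(np.array([1, 1, 1]), *params))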
So it turns out this algorithm will almost work and here's where it breaks down, which is, um, you know, so actually eve- every year, there are some CS229 students and some machine learning students, they will do a class project and some of you will end up submitting this to an academic conference. Right? Some, some- actually some, some of CS229 class projects get submitted, you know, as conference papers pretty much every year. One of the top machine learning conferences, is the conference NIPS. NIPS stands for Neural Information Processing Systems, um, ah, and let's say that in your dictionary, you know, you have 10,000 words in your dictionary. Let's say that the NIPS conference, the word NIPS corresponds to word number 6017, right? In your, in your 10,000 word dictionary. But up until now, presumably you've not had a lot of emails from your friends asking, "Hey, do you want to submit the paper to the NIPS Conference or not." Um, and so if you use your current, you know, email, set of emails to find these maximum likelihood estimates of parameters, you will probably estimate that, um, probability of seeing this word given that it's spam email, is probably zero. Right? Zero over the number of, ah, examples that you've labeled as spam in your email. So if, if you train up this model using your personal email, probably none of the emails you've received for the last few ones had the word NIPS in it, um, maybe. Uh, and so if you plug in this formula for maximum likelihood estimate, the numerator is 0 and so your estimate of this is probably 0. Um, and then similarly, this is also 0 over, you know, the number of non-spam emails I guess. Right. So that's what this is, is just this formula. Right? And, um, statistically it's just a bad idea to say that the chance of something is 0 just because you haven't seen it yet and where this will cause the Naive Bayes algorithm to break down is, if you use these as estimates of the, of the parameters, so this is your estimates parameter phi subscript 6017 given y equals 1. This is phi subscript 6017 given y equals 0. Yes? And if you ever calculate this probability, that is equal to a product from I equals 1 through n. Let's say you have 10,000 words appear of X_i equals 1, p of X_i given y, right? And so if, um, you train your spam classifier on the emails you've gotten up until today, and then after CS229, your project teammates sen- starts sending you emails saying, hey, you know, we like the class project. Shall we consider submitting this class project to the NIPS conference? The NIPS conference deadline is usually in, um, sort of May or June most years so, you know, finish your class project this December, work on it some more by January, February, March, April next year and then maybe submit it to the conference May or June of 2019. When you start getting emails from your friends saying, let's submit our papers to NIPS conference, then when you start to see the word NIPS in your email maybe in March of next year, um, this product of probabilities will have a 0 in it, right? And so this thing that I've just circled will evaluate to 0 because you multiply a lot numbers, one of which is 0. Um, and in the same way this, well, this is also 0, right? And this is also 0 because there'll be that one term in that product over there. And so what that means is if you train a spam classifier today using all the data you have in your email inbox so far, and if tomorrow or- or, you know, or two months from now, whenever. 
The first time you get an email from your teammates that has the word NIPS in it, your spam classifier will estimate this probability as 0 over 0 plus 0, okay? Now, apart from the divide by 0 error, uh, it turns out that, um, statistically, it's just a bad idea, right? To estimate the probability of something as 0 just because you have not seen it once yet, right? Um, so [NOISE] what I want to do is describe to you Laplace smoothing, which is a technique that helps, um, address this problem. Okay? And, um, let's- let's- In order to motivate Laplace smoothing, let me, um, use a- a- a- Yeah, Let me use a different example for now. Right? Um. Let's see. All right. So, you know, several years ago, this is- this is all the data, but several years ago- so- so let me put aside Naive Bayes, I want to talk about Laplace smoothing. We will come back to apply Laplace smoothing in Naive Bayes. So several years ago, I was tracking the progress of the Stanford football team, um, just a few years ago now. But that year on 9/12, um, our football team played to Wake Forest and, you know, actually these are all the, uh, all the stay games we played that year, right? And, um, uh, we did not win that game. Then on 10/10, we played Oregon State and we did not win that game. Arizona, we did not win that game. We played Caltech, we did not win that game. [LAUGHTER]. And the question is, these are all the away games- almost all the out of state games we played that year. And so you're, you know, Stanford football team's biggest fan. You followed them to every single out of state game and watched all these games. The question is, after this unfortunate streak, when you go on- there's actually a game on New Year's Eve, you follow them to their over home game, what's your estimate of the chances of their winning or losing? Right? Now, if you use maximum likelihood, so let's say this is the variable x, you would estimate the probability of their winning. Well, maximum likelihood is really count up the number of wins, right, and divide that by the number of wins plus the number of losses. And so in this case, um, you estimate this as 0 divided by number of wins with 0, number of losses was 4, right? Which is equal to 0, okay? Um, that's kinda mean, right? [LAUGHTER]. They lost 4 games, but you say, no, the chances of their winning is 0. Absolute certainty. And- and- and just statistically, this is not, um, this is not a good idea. Um, and so what Laplace smoothing, what we're going to do is, uh, imagine that we saw the positive outcomes, the number of wins, you know, just add 1 to the number of wins we actually saw and also the number of losses add 1. Right? So if you actually saw 0 wins, pretend you saw one and if you saw 4 losses, pretend you saw 1 more than you actually saw. And so Laplace smoothing, you're gonna end up adding 1 to the numerator and adding 2 to the denominator. And so this ends up being 1 over 6, right? And that's actually a more reasonable may- maybe it is a more reasonable estimate for the chance of, uh, them winning or losing the next game. Um, uh, and the- there's actually a cert- certain- certain set of circumstances under which there's more estimates. I didn't just make this up in thin air. Uh, Laplace, um, uh, you know, uh, it's an ancient that -- well known, uh, very influential mathematician. He actually tried to estimated the chance of the sun rising the next day. 
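A tiny sketch of the add-one rule from the football example (an illustration of mine, not lecture code); the k-outcome version described next simply adds 1 per possible outcome, i.e. k in total, to the denominator:

def laplace_estimate(counts):
    # counts: observed counts for each of the k possible outcomes.
    # Laplace smoothing: add 1 to each count, so no outcome is ever estimated as probability 0.
    total = sum(counts) + len(counts)
    return [(c + 1) / total for c in counts]

# Stanford football example: 0 wins, 4 losses observed.
print(laplace_estimate([0, 4]))   # [1/6, 5/6] instead of the maximum likelihood [0, 1]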
And the reasoning was, well, we've seen the sunrise all times and so, uh, but tha- that doesn't mean we should be absolutely certain the sun will still rise tomorrow, right? And so his reasoning was, well, we've seen the sunrise 10,000 times, you know, we can be really certain the sun will rise again tomorrow but maybe not absolutely certain because maybe something will go wrong or who- who knows what will happen in this galaxy? Um , uh, uh, and so his reasoning was- he derived the optimal estimate- way of estimating, you know, really the chance the sun will rise tomorrow. And this is actually an optimal estimate under I'll say- I'll say the same assumptions, we don't need to worry about it. But it turns out that if you assume that you are Bayesian, where the uniform Bayesian prior on the chance of the sun rising tomorrow. So if the chance the sun rising tomorrow is uniformly distributed, you know, in the unit interval anywhere from 0 to 1, then after a set of observations of this coin toss of whether the sun rises, this is actually a Bayesian optimal estimate of the chances of the sun rising tomorrow, okay? If you don't understand what I just said in the last 30 seconds, don't worry about it. Um, uh, it's taught in sort of a Bayesian statistics- advanced Bayesian statistics classes. But mechanically, what you should do is, uh, take this formula and add 1 to the number of counts you actually saw for each of the possible outcomes. Um, and more generally, uh, if y, er, excuse me. If- if you're estimating probabilities for a k way random variable, um, then you estimate the chance that X being i to be equal to, um, so- so that's the maximum likelihood estimate. And for the fast-moving, you'd add one to the numerator and, um, you add k to the denominator. Okay? So for Naive Bayes, the way this mod- modifies your parameter estimates is this. Um, I'm just gonna copy over the formula from above. Right? Um, so that's the maximum likely estimate. And with Laplace smoothing, you add one to the numerator and add two to the denominator and this means that your estimates are probably- these probabilities they're never exactly 0 or exactly 1, which takes away that problem of, you know, the 0 over 0. Okay. Um, and so if you implement this algorithm, it's not- it's not like a great spam classifier but it's not terrible either. And one nice thing about this algorithm is is so simple, right? Estimated parameters is just counting ,um, uh, uh can be done, you know, very efficiently, right, just- just by counting, uh, and then- and classification time is just multiplying a bunch probabilities together. Uh, this is very confusing first algorithm. All right. Any questions about this? Yeah. [inaudible]? Oh sorry. This is y. Er, oh yes. Thank you. Er, yes thank you. All right. Oh, by the way, I- I was actually following the Stanford football team that year so, you know, they lost. [LAUGHTER]. Because, okay, I love our football team. They're doing much better right now. That was a few years ago. [LAUGHTER]. [NOISE]. All right. Um, [NOISE] So, um, in- in the- examples we've talked about so far, the features were binary valued. Um, and so, um, actually one quick generalization, uh, when the features are multinomial valued, um, then the generalization- actually here's one example. We talked about predicting housing prices, right? That was our very first world meaning example. 
Let's say you have a classification problem instead, which is you're listing a house you want to sell, what is the chance of this house to be sold within the next 30 days? So it's a classification problem. Um, so if one of the features is the size of the house x, right, then one way to turn the feature into a discrete feature would be to choose a few buckets, assert the size is less than 400 square feet, uh, versus, you know, 400 to 800 or 800 to 1200 or greater than 1200 square feet. Then you can set the feature XI to one of four values, right? So that is how you discretize a continuous valued feature to a discrete value feature. Um, and if you want to apply Naive Bayes to this problem, then probability of x given y, this is just the same as before. Product from i equals 1 through n of p of xj given Y where now this can be a multinomial probability. Right? Where if- if X now takes on one of four values there then, um, this can be a, uh, estimators and multinomial problem. So instead of a Bernoulli distribution over two possible outcomes, this can be a probably, uh, probability mass function probably over four possible outcomes if you discretize the size of a house into four values. Um, and if you ever discretized variables, a typical rule of thumb in machine learning often we discretize variables into 10 values, into 10 buckets. Uh, just as a- it often seems to work well enough. I- I drew 4 here so I don't have to write all 10 buckets. But if you ever discretize var- variables, you know, most people will start off with discretizing things into 10 values. All right. Now, uh, right. And so this is how you can apply Naive Bayes on other problems as well including cost line, for example, if a house is likely to be sold in the next 30 days. Now, um, there's, uh, there's a different variation on Naive Bayes that I want to describe to you that is actually much better for the specific problem of text classification. Uh and so our feature representation for x so far was the following, right? With a dictionary a, aardvark, buy, So let's say you get an email that's, you know, a very spammy email that's "Drugs, buy drugs now", [LAUGHTER] This is meant as an illustrative example, I'm not telling any of you to buy drugs. [LAUGHTER] Um, so if, uh, if you have a dictionary of 10,000 words, then I guess- let's say a is worth 1, aardvark is worth 2, uh, just to, you know, make this example concrete. Let's say the word buy is word 800, drugs is word 1,600, and let's say now is the word- is the 6,200th word in your, uh, 10,000 words in the sorted dictionary. Um, then the representation for x will be, you know, 0, 0, [NOISE] right? And they put a 1 there, and a 1 there, and a 1 there. Okay? Now, one, one- so, um, one interesting thing about Naive Bayes is that it throws away the fact that the word drugs has appeared twice, right? So that's losing a little bit of information, um, uh, and, and in this feature representation, um, you know, each feature is either 0 or 1, right? And that's part of why it throws away the information that's, uh, where the one-word drugs appear twice, and maybe should be given more weight for your- in your classifier. Um, [NOISE] there's a different representation, uh, which is specific to text. And I think text data has a, has a property that they can be very long or very short. You can have a five-word email, or a 1,000-word email, um, and somehow you're taking very short or very long emails and just mapping them to a feature vector that's always the same length. 
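As a small illustration of the bucketing just described (my own sketch; the cut points echo the square-footage example above):

import numpy as np

# Bucket boundaries from the example above: <400, 400-800, 800-1200, >1200 square feet.
edges = np.array([400, 800, 1200])
sizes = np.array([350, 950, 2000, 410])
# np.digitize returns 0, 1, 2 or 3 here, i.e. which of the four buckets each size falls in;
# the discretized feature can then be modeled with a multinomial p(x_j | y).
print(np.digitize(sizes, edges))        # [0 2 3 1]
# For the commonly used 10-bucket rule of thumb, one option is decile-based edges:
print(np.quantile(sizes, np.linspace(0.1, 0.9, 9)))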
Just a different representation [NOISE] for, um, this email, which is, uh, for that email that says, "Drugs, buy drugs now", we're gonna represent it as a four-dimensional feature vector, [NOISE] right? And so this is going to be, um, n-dimensional for an email of length n. So rather than a 10,000-dimensional feature vector, we now have a four-dimensional feature vector, but now xj is, um, an index from 1 to 10,000 instead of just being 0 or 1. Okay? And, uh, n is- and I guess n varies by training example. So ni is the, uh, length of email i. So the longer email, this vector, the feature vector x will be longer, and the shorter email, this feature vector will be shorter, okay? So, um, let's see. Uh, just to give names to the algorithms we're gonna develop, these are- these are really very confusing, very horrible names. But this is what the community calls them. That the, the model we've talked about so far is sometimes called the Multivariate Bernoulli. And that model, uh, so Bernoulli means coin tosses, so multivariate means, you know, there are 10,000 Bernoulli random variables in this model whereas as a Multivariate Bernoulli event model. An event comes with statistics I guess. Um, and the new representation we're gonna talk about is called the [NOISE] Multinomial Event Model. Uh, these two names are- are- are- frankly, these two names are quite confusing. But these are the names that, uh, I think- actually, one of my friends Andrew McCallum, uh, as far as I know, wrote the paper that named these two algorithms. But- but I think these are- these are the names we seem to use. Um, and so, with this new model, um, we're gonna build a generative model, and because it's a generative model, or model p of x, y which can be factored as follows and using the Naive Bayes assumption, we're going to assume that p of x given y is product from i equals 1 through n, of j equals 1 through n, of p of xj, given y, and then times, you know, p of y. Is that second term, right? Now, one of the, uh, uh, one, one of the reasons these two models were very- were frankly actually very confusing to the machine learning community, is because this is exactly the equation [NOISE] that, you know, you saw on Monday, when we described Naive Bayes for the first time, um, that, you know, this, you know, p of x given y is part of probabilities. Right? So this is exactly, uh, so this, this equation looks cosmetically identical, but with this new model, the second model, the confusingly named Multinomial Event Model, um, the definition of xj and the definition of n is very different, right? So instead of a product from 1 through 10,000, there's a product from 1 through the number of words in the email, and this is now instead a multinomial probability. Rather than a binary or Bernoulli probability. Okay? Um, and it turns out that, uh, well, [NOISE] with this model, the parameters are same as before. Phi y is probability of y equals 1, and also, um, the other parameters of this model, phi k, given y equals 0, is a chance of xj equals k, given y equals 0. Right? And- and just to make sure you understand the notation. See if this makes sense. So this probability is the chance of word blank being blank if label y equals 0. So what goes into those two blanks? Actually, what goes in the second blank? Uh, let's see. Well- well, yeah? [inaudible]. Yes. Right. So it's the chance of the third word in the email, being the word drugs, or the chance of the second in the email being buy, or whatever. 
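To make the two representations concrete, here is a sketch of mine that encodes the "drugs buy drugs now" example both ways, using the word indices from above (converted to 0-based positions for the array):

import numpy as np

vocab_size = 10000
# Word indices from the example above (1-indexed as in the lecture): buy=800, drugs=1600, now=6200.
email = ["drugs", "buy", "drugs", "now"]
index = {"buy": 800, "drugs": 1600, "now": 6200}

# Multivariate Bernoulli representation: a fixed-length 0/1 vector over the whole vocabulary.
x_bernoulli = np.zeros(vocab_size, dtype=int)
for word in email:
    x_bernoulli[index[word] - 1] = 1          # -1 to convert to 0-based array positions
print(x_bernoulli.sum())                      # 3 distinct words set to 1; the repeat of 'drugs' is lost

# Multinomial event model representation: one index per word, so the length varies with the email.
x_multinomial = [index[word] for word in email]
print(x_multinomial)                          # [1600, 800, 1600, 6200]; 'drugs' is counted twice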
And one part of, um, why we implicitly assume, mainly why this is tricky, is that, uh, we assume that this probability doesn't depend on j, right? That for every position in the email, for the- the chance that the first word being drugs is same as chance of the second word being drugs, is same as the third word being drugs, which is why, um, on the left-hand side j doesn't actually appear on the left-hand side, right. Makes sense? Any questions about this? No? Okay. All right. Um, and so the way you calculate the probability, the way you would, um, uh, and, and so the way that, uh, given a new email, a test email, um, uh, you would calculate this probability is by, you know, plugging these parameters that you estimate from the data into this formula. Okay? [NOISE] Um, oh, and then, um, I wrote down, uh, [NOISE] right. And then, and then the other set of the parameters is this. [NOISE] Right. Kind of just with y equals 1, is that y equals 0. And then for the maximum likelihood estimate of the parameters, I'll just write out one of them. [NOISE] Your estimate of, uh, the chance of a given word is really anywhere in any position, being word k. What's the chance of some word in a non-spam email being the word drugs, let's say? Um, the chance of that is equal to [NOISE] I find that- well, this indicates a function notation. It looks complex. I'll just say in a second, uh, what this actually means. So the denominator, um, so this space means- so- and so if you figure out what the English meaning of this complicated formula is, this basically says, "Look at all the words in all of your non-spam emails, all the emails of y equals 0, and look at all of the words in all of the emails, and so all of those words, what fraction of those words is the word drugs?" And that's, uh, your estimate of the chance of the word drugs appearing in the non-spam email in some position in that email, right? And so, um, in math, the denominator is sum of your training set, indicator is not spam, times the number of words in that email. So the denominator ends up being the total number of words in all of your non-spam emails in your training set, um, and the numerator as some of your training set, sum from i equals 1 through m, indicates a y equals 0. So, you know, count up only the things for non-spam email, and for the non-spam email j equals 1 through ni, go over the words in that email and see how many words are that word k. Right. And so, uh, uh, if in your training set you have, um, uh, ah, you know, 100,000 words in your non-spam emails and 200 of them are the word drugs, that occurs, uh, you know, 200 times, then this ratio will be 200 over 100,000. Okay? Oh, and then lastly, um, [NOISE] to implement Laplace smoothing with this, you would, um, add 1 to the numerator as usual, and then, um, let's see. Actually, what- what- what- what would you add to the denominator? Uh- Uh, wait. But what is k? Not k, right? k is a variable. So k indexes into, ah, the words? What do you have? About 10,000. 10,000. Cool. How come? Why 10,000? [inaudible]. Cool. Yeah. Yeah. All right. Yeah, Right. Oh, I think I just realized why you say k I think, uh, overloading notation. When defining the possibility, I think I used k as the number of possible outcomes. Yeah, but here k is an index. Yeah. Right? So, um, uh, see I want a numerator and add to number of the possible outcomes in the denominator which in this case was there 10,000. 
So, um, so this is the probability of, um, X being equal to the value k, where k ranges from 1 to 10,000 if that's your dictionary size, if you have a list of 10,000 words you're modeling. And so the number of possible values for X is 10,000, so you add 10,000 to the denominator. Makes sense? Cool. Yeah. Question? [inaudible]. Oh, what do you do if the word's not in your dictionary? So, um, uh, there are two approaches to that. One is, um, just throw it away. Just ignore it, disregard it, that's one. Uh, the second approach is to take the rare words and map them to a special token, which traditionally is denoted UNK for unknown words. So, um, if in your training set, uh, you decide to take just the top 10,000 words into your dictionary, then everything that's not in the top 10,000 words can map to your unknown word token, or the unknown word special symbol. Yeah. [inaudible]. Oh, why did I write the 1 before? Oh, this is indicator function notation. Uh, so the indicator function- so this notation, right, means, uh- well, indicator of, you know, 2 equals 1 plus 1, this is 1, it's true. And indicator of, you know, 3 equals 5 is 0, it's false. So that's the- yeah, um, cool. Yes, uh, but this is a little formula that's either true or false depending on whether y_i is 0. Uh, I guess since y_i is 0 or 1, this is the same as not y_i, I guess, so 1 minus y_i would give us the same thing- yeah. Cool. Okay great. Um, all right. So I think both of the models, including the details of the maximum likelihood estimates, are written out in, um, more detail in the lecture notes. Um, so, you know, when would you use the Naive Bayes algorithm? It turns out the Naive Bayes algorithm is actually not very competitive with other learning algorithms. Uh, so for most problems you find that logistic regression, um, will work better in terms of delivering a higher accuracy than Naive Bayes. But the advantages of Naive Bayes are, uh, first, it's computationally very efficient, and second, it's relatively quick to implement, right? It also doesn't require an iterative gradient descent procedure, and the number of lines of code needed to implement Naive Bayes is relatively small. So if you are, uh, facing a problem where the way to go is to implement something quick and dirty, then Naive Bayes is maybe a reasonable choice. Um, and I think, um, you know, as you work on your class projects, I think some of you, probably a minority, will try to invent a new machine learning algorithm and write a research paper. Um, and I think, you know, inventing a machine learning algorithm is a great thing to do. It helps a lot of people on a lot of different applications, so that's one. Um, the majority of class projects in CS229 will try to apply a learning algorithm to a project that you care about. Apply it to a research project you're working on somewhere in Stanford, or apply it to a fun application you wanna build, or apply it to a business application for some of you taking this on SCPD, taking this remotely. And if your goal is not to invent a brand new learning algorithm, but to take the existing algorithms and apply them, then the rule of thumb that's suggested here is, um, when you get started on a machine learning project, start by implementing something quick and dirty, rather than implementing the most complicated possible learning algorithm.
Start by implementing something quickly, and, uh, train the algorithm, look at how it performs, and then use that to deep out the algorithm, and keep iterating on- on that. So I think, you know, we're- we're- that's at Stanford. So we're very good at coming up with very complicated algorithms. But if your goal is to make something, um, work for an application, rather than inventing a new learning algorithm and publishing a paper on a new technical, you know, contribution. If you- if your main goal is, uh, you're working on an application on- on understanding news better or improving the environment or estimating prices or whatever. Uh, and your primary objective is just make an algorithm work. Then rather than, uh, building a very complicated algorithm at the onset, um, I would recommend implementing something quickly, uh, so that you can then better understand how it's performing, and then do error analysis which we'll talk about later, and use that to drive your development. Um, you know one- one- one analogy I sometimes make is that, um, if you are, uh, uh, let's see. So if you're writing a new computer program with 10,000 lines of code, right? One approach is to write all 10,000 lines of code first, and then to try compiling it for the first time, right. And that's clearly a bad idea, right? And it's a, you know, you should write small modules, run it, it test it- unit testing, and then build up a program incrementally. Rather than write 10,000 lines of code, and then start to see what syntax errors you're getting for the first time. Um, and I think it's similar for machine learning. Uh, instead of building a very complicated algorithm from the get-go, um, you build a simpler algorithm, test it, and then- and then use the- see what it's doing wrong, see what it's doing wrong to improve from there. You often end up, um, uh, getting to a better performing algorithm faster. Um, so here's- here's- here's one example. This is actually something I used to work on. I- I actually started a conference on email and anti-spam. My student worked on spam classification many years ago. And, um, it turns out that when your'e starting out on a new application problem, um, it's hard to know what's the hardest part of the problem, right. So if you want to build an anti-spam classifier, there are lots of you could work on. For example, spammers will deliberately misspell words. Uh, you know, a lot of mortgage spam, right, refinance your mortgage or whatever. But instead of writing th- the words uh, mortgage spammers will write M-0-R-T-G-A-G-E. Right. Or instead of G-A-G-E, maybe, uh, slash slash, right. But all of us as people have no trouble reading this as a word mortgage but uh, this will trip up a spam filter. This might map the word to- to an unknown word. There it was off by just a letter and it hasn't seen this before, and that's the lightest way to slip by this spam filter. So that's one idea for improving, um, spam or- actually one of our PhD students [inaudible] actually wrote a paper mapping this back to words like that. So the spam filter can see the words the way that humans see them, right. So- so that's one idea. Um, another idea might be a lot of spam email spoofs email headers. [NOISE] You know, uh, spam has often tried to hide where the email truly came from, uh, by spoofing the email header that, you know, address and other information. 
Um, ah, another thing you might do is, ah, try to fetch the URLs that are referred to in the email, and then analyze the web pages that you get to. Right, there are a lot of things that you could do to improve a spam filter. And any one of these topics could easily be three months or six months of research. But when you are building, say, a new spam filter for the first time, how do you actually know which of these is the best investment of your time? So my advice to, ah, those who work on projects, if your primary goal is to just get this thing to work, is to not somewhat arbitrarily dive in and spend six months on improving this, or spend, you know, six months on trying to analyze email headers. Instead, implement a more basic algorithm first. Almost implement something quick and dirty. And then look at the examples that your learning algorithm is still misclassifying. And if, after you've implemented a quick and dirty algorithm, you find that your anti-spam algorithm is misclassifying a lot of examples with these deliberately misspelled words, it's only then that you have more evidence that it's worth spending a bunch of time solving the misspelled words, the deliberately misspelled words problem. Right. If you implement a spam filter and you see that it's not misclassifying a lot of examples with these misspelled words, then I would say don't bother. Go work on something else instead, or at least treat that as a low priority. Okay. So one thing about, um, GDA, Gaussian discriminant analysis, as well as Naive Bayes is that, uh, they're not going to be the most accurate algorithms. If you want the highest classification accuracy, there are other algorithms like logistic regression, or SVMs which we'll talk about, or neural networks we'll talk about later, which will almost always give you higher classification accuracy than these algorithms. But the advantage of Gaussian discriminant analysis and Naive Bayes is that, um, they are very quick to train, and it's non-iterative. Uh, Naive Bayes is just counting, and GDA is just computing means and covariances, right. So it's very computationally efficient, and also they are simple to implement. So they can help you implement that quick and dirty thing that helps you, um, get going more quickly. And so I think for your project as well, as you start working on your project, I would advise most of you to, um, not spend weeks designing exactly what you're going to do. Uh, if you have an applied project, instead get a data set, uh, and apply something simple. Start with logistic regression, not a neural network or something more complicated. Or start with Naive Bayes, and then see how that performs, and then go from there. Okay? All right. So that's it for, uh, Naive Bayes, um, and generative learning algorithms. The next thing I wanna do is move on to a different type of classifier, ah, which is the support vector machine. Um, let me just check, any questions about this before I move on? Yeah. [inaudible]. Sorry, can you use logistic regression with- [OVERLAPPING] Discrete variables [inaudible] Oh I see, yeah, right, yes. So one of the weaknesses of the Naive Bayes algorithm is that it treats all of the words as completely, you know, separate from each other.
And so the words one and two are quite similar, and the words, you know, like mother and father are quite similar. Uh, and so with this, uh, feature representation, it doesn't know the relationship between these words. So, um, in machine learning there are other ways of representing words, uh, there's a technique called word embeddings, um- [NOISE] in which you choose a feature representation that encodes the fact that the words one and two are quite similar to each other, uh, the words mother and father are quite similar to each other, and, yeah, the words, um, whatever, London and Tokyo are quite similar to each other because they are both city names. Uh, and so, uh, this is a technique that I was not planning to teach here but that is taught in CS 230, on neural networks, [NOISE] right, but you can also read up on word embeddings or look at some of the videos and resources from CS 230 if you want to learn about that. Uh, so the word embedding techniques- these are techniques from neural networks, really- will reduce the number of training examples you need to get a good text classifier, because the algorithm comes in with more knowledge baked in, right. Cool. Anything else? [NOISE] Cool. By the way, I do this in the other classes too. In some of the other classes, somebody's got a question and they go, no, we don't do that, we just covered that in CS 229, so [LAUGHTER]. Actually CS224N I think also covers this. Yeah, the NLP class, yeah, pretty sure, actually I am sure they do. Okay so, [NOISE] support vector machines, SVMs. Um, let's say you have a classification problem, [NOISE] right, where the data set looks like this, uh, and so you want an algorithm to find, you know, like a nonlinear decision boundary, right? So the support vector machine will be an algorithm to help us find potentially very, very non-linear decision boundaries like this. Now, one way to build a classifier like this would be to use logistic regression. But if this is X 1, and this is X 2, right, logistic regression will fit a straight line to the data, and Gaussian discriminant analysis will also end up with a straight line decision boundary. So one way to apply logistic regression to a problem like this would be to take your feature vector X 1, X 2 and map it to a higher-dimensional feature vector with, you know, X 1, X 2, X 1 squared, X 2 squared, X 1 X 2, maybe X 1 cubed, X 2 cubed and so on, and have a new feature vector which we would call phi of x that has these high-dimensional features. Right, now, um, it turns out if you do this and then apply logistic regression to this augmented feature vector, uh, then logistic regression can learn non-linear decision boundaries. Uh, with these additional features, logistic regression can actually learn a decision boundary that has, say, the shape of an ellipse, right. Um, but manually choosing these features is a little bit of a pain, right. I actually don't know what, you know, type of a set of features could get you a decision boundary like that, right, something more complex than just an ellipse. Um, and what we will see with support vector machines is that we will be able to derive an algorithm that can take, say, input features X 1, X 2, map them to a much higher dimensional set of features, uh, and then apply a linear classifier, uh, in a way similar to logistic regression but different in details, that allows you to learn very non-linear decision boundaries. Okay.
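For instance, a hand-picked feature map like the one just described (this particular choice of monomials is only an illustration of the idea, before kernels remove the need to choose features manually) might look like:

```python
import numpy as np

def phi(x):
    """Map a 2-D input [x1, x2] to the illustrative higher-dimensional features
    mentioned above: x1, x2, x1^2, x2^2, x1*x2, x1^3, x2^3."""
    x1, x2 = x
    return np.array([x1, x2, x1**2, x2**2, x1 * x2, x1**3, x2**3])

# Logistic regression on phi(x) is still linear in these features, so its decision
# boundary theta^T phi(x) = 0 can trace an ellipse or another curve in the original
# (x1, x2) plane even though the classifier itself is linear.
```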
Um, and I think, uh, you know, a support vector machine, one of the- actually one of the reasons, uh, support vector machines are used today is- is a relatively turn-key algorithm. And what I mean by that is it doesn't have too many parameters to fiddle with. Uh, even for logistic regression or for linear regression. You know you might have to tune the gradient descent parameter, uh, tune the learning rate sorry, tune the learning rate alpha. And that's just another thing to fit in with. We`ll try a few values and hope you didn't mess up how you set that value. Um, support vector machine today has a very, uh, robust, very mature software packages that you can just download to train the support vector machine on- on any on, you know, on a problem and you just run it and the algorithm will, kind of, converge without you having to worry too much about the details. Um, so I think in the grand scheme of things today I would say support vector machines are not as effective as neural networks for many problems. But, um, uh, but one great property of support vector machines is- is- is turn key. You kind of just turn the key and it works and there isn't as many parameters like the learning rate and other things that you had to fiddle with. Okay, um so the road map is, uh, we're going to develop the following set of ideas. We talked about the optimal [NOISE] margin classifier today, and, uh, we'll start with the separable case and what that means is going to start off with datasets, um, that we assume look like this and that are linearly separable. Right, and so the optimal margin classifier is the basic building block for the support vector machine, and, uh, we'll first derive an algorithm, uh, that' ll be- that will have some similarities to logistic regression but that allows us to scale, uh, in important ways that to find a linear classifier for training sets like this that we assume for now can be linearly separated. Um, so we'll do that today. And then what you'll see on Wednesday is, um, excuse me, next Monday, which is next Monday is an idea called kernels. And the kernel idea is one of the most powerful ideas in machine learning. Is, um, how do you take a feature vector x, maybe this is R 2, right, and map it to a much higher dimensional set of features. In our example there that was R 5, right, and then train an algorithm on this high dimensional set of features. And- and the cool thing about kernels is that this high dimensional set of features may not be R 5. It might be R100,000 or it might even be R infinite. Um, and so with the kernel formulation we're gonna take our original set of features that you are given for the houses you're trying to sell. For, uh, you know, medical conditions you're trying to predict and map this two-dimensional feature vector space into maybe infinite dimensional set of features. And, um, what this does is it relieves us from a lot of the burden of manually picking features, right, like do you want to have square root of X 1 or maybe X 1, X 2 to the power of two thirds. So you just don't have to fiddle with these features too much because the kernels will allow you to choose an infinitely large set of features. Okay, um, and then finally, uh, we'll talk about the inseparable case. [NOISE] So we're gonna do this today and then this next, uh, Monday okay. So [NOISE] and by the way I, you know, th-the machine learning world has become a little bit funny. 
I think that if you read in the news the media talks a lot about machine learning, the media just talks about, you know, neural networks all the time, right? And you'll hear about neural networks and deep learning a little bit later in this class. But if you look at what actually happens in practice in machine learning. Uh, the set of algorithms is actually used in practice, is actually much wider than neural networks and deep learning. So- so we do not live in a neural networks only world. We actually use many, many tools in machine learning. It's just that deep learning attracts the attention of the media in some way there's quite disproportionate to what I find useful, you know, I knew that's like- I loved that, you know but- but they're not- they're not the only thing in the world, uh, and so yeah and then late last night I was talking to an engineer, uh, about factor analysis which we'll learn about later in CS229 right, there's an unsupervised learning algorithm and there's an application, uh, that one of my teams is working on in manufacturing. Where we're gonna use factor analysis or something very similar to it. Which- which is totally not a neural network technique. Right. But still there, there are all these other techniques that including support vector machines and Naive Bayes I think do get used and are important. All right so let's start developing the optimal margin classifier. [NOISE] So, um, first, let me define the functional margin, which is, uh, informally, the functional margin of the classifier is how well- how, how confidently and accurately do you classify an example. Um, so here's what I mean. Uh, we're gonna go to binary classification, and we're gonna use logistic regression, right? So, so let's, let's start by motivating this with logistic regression [NOISE]. So this, this is a classifier H of theta equals the logistic function of pi to theta transpose x. And so, um, if you turn this into a binary classification, if, if, if you have this algorithm predict not a probability but predict 0 or 1, then what this classifier will do is, uh, predict 1. If theta transpose x is greater than 0, right? Um, and predict 0 otherwise. Okay. Because theta transpose x greater than 0, this means that, um, g of theta transpose x is greater than 0.5 [NOISE], and you can have greater than or greater than equal to, it doesn't matter. It is, it's exactly 0.5, it doesn't really matter what you do. Um, and so you predict 1 if theta transpose x is greater than equal to 0, meaning that the upper probability- the estimated probability of a class being 1 is greater than 50/50, and so you predict 1. And if theta transpose x is less than 0, then you predict that this class is 0. Okay. So this is what will happen if you have, um, logistic regression output 1 or 0 rather than output a probability, right. So in other words, this means that if y_i is equal to 1, right? Then hope or we want that theta transpose x_i is much greater than 0. Uh, this double greater than sign, it means much greater, right? Um, uh, because if the true label is 1, then if the algorithm is doing well, hopefully theta transpose x, right? Will be faster there, right? So the output probability is very, very close to 1. And if indeed theta transpose x is much greater than 0, then g of theta transpose x will be very close to 1 which means that is, it's giving a very good, very accurate prediction. Very correct and confident prediction, right? That, that equals 1. 
Um, and if y_i is equal to 0, then what we want or what we hope, is that theta transpose xi is much less than 0, right? Because, uh, if this is true, then the algorithm is doing very well on this example. Okay. So, um. So the functional margin which we'll define in a second, uh, captures [NOISE] this idea that if a classifier has a large functional margin, it means that these two statements are true, right? Um, so looking ahead a little bit, there's a different thing we'll define in a second which is called the geometric margin and that's the following. And for now, let's assume the data is, is linearly separable. Okay. Um, right. So let's say that's the data set. [NOISE] Now, [NOISE] that seems like a pretty good decision boundary for separating the positive [NOISE] and negative examples. [NOISE] Um, that's another decision boundary in red, that also separates the positive negative examples. But somehow the green line looks much better than the red line, okay? So, uh, why is that? Well, the red line comes really close to a few of the training examples, whereas the green line, you know, has a much bigger separation, right? Just has a much bigger distance from the positive and negative examples. So even though the red line and the [NOISE] green line both, you know, perfectly separate the positive and negative examples, the green line has a much bigger separation, uh, which is called the geometric margin. But there's a much bigger geometric margin meaning a physical separation from the trained examples even as it separates them. Okay. Um, and so what I'd like to do in the, uh, next several, I guess in the next, next, next 20 minutes is formalize definite functional margin, formalize definition geometric margin, and it will pose the, the, I guess the optimal margin classifier which based in the algorithm that tries to maximize the geometric margin. So what the rudimentary SVM does, what the SVM and low-dimensional spaces will do, also called the optimal margin classifier, is pose an optimization problem to try to find the green line to classify these examples, okay? So, um, [NOISE] now, um, in order to develop SVMs, I'm going to change the notation a little bit again. You know, because these algorithms have different properties, um, using slightly different notation to describe them, makes then the math a little bit easier. So when developing SVMs, we're going to use, um, minus 1 and plus 1 to denote the class labels. And, um, we're going to have a H output. So rather than having a hypothesis output a probability like you saw in logistic regression, the support vector machine will output either minus 1 or plus 1. And so, uh, g of z becomes minus 1 or 1, um, actually. So output 1 if z is greater than equal to 0, and minus 1 otherwise, okay. So instead of a smooth transition from 0 to 1, we have a hard transition, an abrupt transition from negative 1 to, um, plus 1. [NOISE] And finally, where previously we had for logistic regression, right? Where, uh, this was R N plus 1 with x_0 equals 1. For the SVM, we will have h of, I'll just write this out. Okay. Um, so for the SVM, the parameters of the SVM will be the parameters w and b. And hypothesis applied to x will be g of this, and where dropping the x_0 equals 1 [NOISE] constraint. So separate out w and b as follows. So this is a standard notation used to develop support vector machines. Um, and one way to think about this, is if the parameters are, you know, theta 0, theta 1, theta 2, theta 3, then this is a new b, and this is a new w. Okay? 
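A minimal sketch of the SVM hypothesis in this new notation (assuming numpy arrays for w and x; the function name is mine):

```python
import numpy as np

def svm_predict(w, b, x):
    """h_{w,b}(x) = g(w^T x + b), where g outputs +1 if its argument is >= 0 and -1 otherwise."""
    return 1 if np.dot(w, x) + b >= 0 else -1

# Compared with the logistic regression notation, b plays the role of the old theta_0
# and w plays the role of [theta_1, ..., theta_n]; there is no x_0 = 1 convention anymore.
```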
So you just separate out the, the, the, uh, theta 0 which was previously multiplying to x_o, right? And so um, uh, yeah, right. And so this term here becomes sum from i equals 1 through N, uh, w_i x_i plus b, right? Since we've gotten rid of [NOISE] x_0. [NOISE]. All right. So let me formalize the definition of a functional margin. [NOISE] So um, ah, so the parameters w and b are defined as linear classifier, right, so you know, wh- what- the formulas we just wrote down the parameters w and b, defines the a- a, uh, uh, re- really defines a hyperplane. Ah, but defines a line, or in high dimensions it'd be a plane or a hyperplane that defines a straight line, ah, ah, separating out the positive and negative examples. And so we're gonna say the functional margin of the, actually my hyperplane [NOISE] Okay, so the functional margin of a hyperplane defined by this with respect to one training example. We're going to write as this, um, and hyperplane just means straight line, right, but in high dimension. So this is linear classifiers, so its just, you know, functional margin of this classifier with respect to one training example, we're going to define as this. And so if you compare this with the equations we had up there, um, you know, if y equals 1 we hope for that, if y equals 0, we hope for that. So really what we hope for is for our classifier to achieve a large functional margin, right? And so, um, so if y_i equals 1 then what we want or what we hope for, um, is that w transpose x_i plus b is greater than, much greater than 0, and that the label is equal to minus 1. [NOISE] Then we want or we hope that [NOISE] this is much smaller than 0. Um, and if you, kind of, combine these two statements, if you take y_i, right, and multiply it with, er, that, [NOISE] then, you know, these two statements together is basically saying that you hope that Gamma hat i is much greater than 0, right, because y_i now is plus 1 or minus 1 and, uh, uh, and so y is equal to 1 you want this to be very, very large. If y_i is negative 1, you want this to be a very, very large negative number. Um, and so either way it's just saying that you hope this would be very large, okay? So we just hope that. [NOISE] And- and as an aside, ah, one property of this as well is that, um, so long as Gamma hat i is greater than 0, that means the algorithm, um, right, is equal to y_i. [NOISE] Ah, so- so- so long as the, um, functional margin, so long as this Gamma hat i is greater than 0, it means that, ah, either this is bigger than 0, this is less than 0 depending on the sign of the label. And it means that the algorithm gets this one example correct at least, right? And if- if much greater than 0 then it means, you know, so if it is greater than 0 it means in- in the logistic regression case it means that, the prediction is at least a little bit above 0.5, a little bit below 0.5, probably 0 so that at least gets it, right? And if it is much greater than 0 much less than 0, then that means it, you know, the probability of output in the logistic regression case is either very close to 1 or very close to 0. So one other definition, [NOISE] I'm gonna define the functional margin with respect to the training set to be Gamma hat, equals min over i of Gamma hat i, where here i [NOISE] equals ranges over your training examples. Okay. 
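As a small sketch (my own helper names, assuming numpy arrays and labels in {-1, +1}), the functional margin of (w, b) on a single example and on the whole training set would be computed as:

```python
import numpy as np

def functional_margin(w, b, x_i, y_i):
    """gamma_hat_i = y_i * (w^T x_i + b); large and positive means a confident, correct classification."""
    return y_i * (np.dot(w, x_i) + b)

def functional_margin_of_set(w, b, X, y):
    """gamma_hat = min_i gamma_hat_i: the worst-case functional margin over the training set."""
    return np.min(y * (X @ w + b))
```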
So, um, this is a worst-case notion. So this definition of the functional margin- on the left we defined the functional margin with respect to a single training example, which is how well are you doing on that one training example? And we'll define the functional margin with respect to the entire training set as: how well are you doing on the worst example in your training set? Okay, ah, this is a bit of a worst-case notion, and for now, for today, we're assuming that the training set is linearly separable. So we're gonna assume that the training set, you know, looks like this, [NOISE] and that you can separate it with a straight line. [NOISE] We'll relax this later, but because we're assuming, just for today, that the training set is, um, linearly separable, we'll use this kind of worst-case notion and define the functional margin to be the functional margin of the worst training example. Okay? [NOISE]. Now, one thing about the definition of the functional margin is, it's actually really easy to cheat and increase the functional margin, right? One thing you can do, um, in regards to this formula is, if you take w, you know, and multiply it by 2, and take b and multiply it by 2, [NOISE] then, um, everything here just multiplies by two and you've doubled the functional margin, right, but you haven't actually changed anything meaningful. Okay, so one way to cheat on the functional margin is just by scaling the parameters by 2, or instead of 2 maybe you [NOISE] multiply all your parameters by 10, and then you've actually increased the functional margin on your training examples by 10x, but, ah, this doesn't actually change the decision boundary, right? It doesn't actually change any classification, just to multiply all of your parameters by a factor of 10. Um, so one thing you could do, um, would be to normalize the length of your parameters. So for example, hypothetically you could impose a constraint that the norm of w is equal to 1, or another way to do that would be to take w and replace it with w over the norm of w, and replace b with b over the norm of w, [NOISE] right, just divide the value of the parameters through by the magnitude, by the Euclidean length of the parameter vector w. And this doesn't change any classification, it's just rescaling the parameters. Ah, but it prevents, you know, this way of cheating on the functional margin. Okay. Um, and in fact, more generally you could actually scale w and b by any other value you want and it doesn't matter, right? You can choose to replace this by w over 17 and b over 17 or any other number, right, and the classification stays the same. Okay. So we'll come back and use this property in a little bit. Okay. [NOISE] All right. So having defined the functional margin, let's define the geometric margin. And you'll see in a second how the geometric and functional margins relate to each other. Um, so let's define the, uh, geometric margin with respect to a single example. Which is, um- so let's see- let's say you have a classifier. All right, so given parameters w and b, that defines a linear classifier, and the equation w transpose x plus b equals 0 defines the equation of a straight line. Uh, so the axes here are x_1 and x_2, and then in this half of the plane, you'll have w transpose x plus b is greater than 0. And in this half, you'll have w transpose x plus b is less than 0.
And in between this- the straight line given by this equation w transpose x plus b equals 0, right. And so given parameters w and b, the upper right, that's where your cost high will predict y equals 1 and the lower left is where it'll predict y is equal to negative 1, okay. Now, let's say you have one training example here, right? So that's a training example, x_i, y_i. And, uh, let's say it's a positive example, okay? And so, um, your classifiers classify this example correctly, right? Because in the upper right half- half plane- here in this half plane w transpose x plus b is greater than 0. And so in the- in this upper-right region, uh, your classifier is predicting plus 1, right? Whereas in this lower half region would be predicting h of x equals negative 1. Right, and that's why this straight line where it switches from predicting negative to positive is the decision boundary. So what we're going to do is define this distance, um, to be that geometric margin of this training example, is the Euc- the Euclidean distance is what will define to be the geometric margin. So let me just write down what that is. [NOISE] So the geometric margin of, you know, the classifier of the hyperplane defined by w, b with respect to one example x_i, y_i. This is going to be gamma i equals w transpose x plus b over [NOISE] the normal w. [NOISE] Um, and let's see I'm not proving why this is the case, the proof is given in the lecture notes but, uh, the lecture notes shows why this is the right formula for measuring the Euclidean distance that I just drew [NOISE] in the picture up there, okay. Uh, but, and then, I'm not proving this here but the proof is given in the lecture notes but this turns out to be the way you compute the Euclidean distance between that example and uh, and the decision boundary, okay? Um, and uh, a- and this is [NOISE] for the positive example I guess. Uh, more generally, um, I'm going to define the geometric margin to be equal to this, uh, and this definition applies to positive examples and the negative examples, okay? And so the relationship between the geometric margin and the functional margin is that the geometric margin is equal to the functional margin divided by the norm of w. [NOISE] Finally, um, the geometric margin with respect to the training set is, um, where again uses worst-case notion of, uh- look through all your training examples and pick the worst possible training example, um, and that is your geometric margin on the training set. Uh, an- and so I hope the- sorry, I hope the notation is clear, right. So gamma hat was the functional margin and gamma is the geometric margin, okay? And so, um, what the optimal margin classifier does is [NOISE] , um, choose the parameters w and b to maximize the geometric margin, okay? Um, so in other words, thi- this- the optimal margin classifiers is the baby SVM, you know, it's like, a SVM for linearly separable data, uh, at least for today. [NOISE] And so the optimal margin classifier will choose that straight line, because that straight line maximizes the distance or maximizes the geometric margin to all of these examples, okay? Now, uh, how you pose this mathematically, there are a few steps of this derivations I don't want to do but I'll, I'll just describe the beginning step and the last step and leave the in bet- in between steps to the lecture notes. But it turns out that, um, one way to pose this problem is to maximize gamma w and b of gamma. So you want to maximize the geometric margin subject to that. 
Subject to: every training example, um, uh, must have geometric margin, uh, greater than or equal to gamma, right? So you want gamma to be as big as possible, subject to the constraint that every single training example must have at least that geometric margin. [NOISE] This causes you to maximize the worst-case geometric margin. And it turns out this is, um, not, in this form, a convex optimization problem. So it's difficult to solve this with, you know, gradient descent, without worrying about local optima and so on. But it turns out that via a few steps of rewriting, you can reformulate this problem, um, into the equivalent problem of minimizing the norm of w, subject to the constraint that y_i times w transpose x_i plus b is greater than or equal to 1 for every example, right. Um, so it turns out- so I hope this problem makes sense, right? So this problem is just, you know, solve for w and b to make sure that every example has a geometric margin greater than or equal to gamma, and you want gamma to be as big as possible. So this is the way to formulate the optimization problem that says, "Maximize the geometric margin." And what we show in the lecture notes is that, uh, through a few steps, uh, you can rewrite this optimization problem into the following equivalent form, which is to try to minimize the norm of w, uh, subject to this. And maybe one piece of intuition to take away is, um, uh, the smaller the norm of w is, the bigger the geometric margin, right? The less of a normalizing division effect you have, right? Uh, but the details are given in the lecture notes, okay? Um, but this turns out to be a convex optimization problem, and if you optimize this, then you will have the optimal margin classifier, and there are very good numerical optimization packages to solve this optimization problem. And if you give this a dataset, then, you know, assuming your data is separable [NOISE] - and we'll fix that assumption, uh, when we convene next week - then you have the optimal margin classifier, which is really a baby SVM, and when we add kernels to it, then you have the full complexity of the SVM. Okay? All right, let's break for today, uh, see you guys next Monday. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_10_Matrices_and_Uncertainty.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: DA, thank you. It's an excellent point. You're absolutely right. So I think both statements stand. So Compton's experiments convinced even more of the community. To win the Nobel Prize then or now you have to convince three people in Stockholm, which is hard. To convince the larger international community, which even back then included several thousand people, that's a different-- that sometimes occurs on different timescales by different processes. So you're absolutely right, and I did not emphasize this. As many of you might know, and DA reminds us, Einstein did not win the Nobel Prize for any of the work on relativity even though he had achieved kind of renown, both within the profession and beyond, as we talked about briefly a few classes ago, because of his work on relativity, in particular, for the Eddington eclipse expedition stuff, which was released to the world in November of 1919. And yet, even after that, what he actually won the Prize for was indeed his heuristic suggestion about light quantum. And so I think the idea there, DA, to make sense of it-- and I agree with you. It is a little funny timing-- is that there was growing awareness his work was of value and of interest. And then Compton's work really convinced, so to speak, the remaining skeptics, of whom there were many. In fact, even Compton's work didn't convince every single remaining skeptic, but that did a lot of the work for the larger community beyond. And you can see that they were coming out kind of around the same time, but not literally one after the other. And I think a measure of that is how quickly Compton won his Nobel Prize for that work. So Einstein's prize was something like 16 years later, and Compton's was like 3 and 1/2 years later approximately, something like that. So in terms of the way that the community recognized the value of Compton's experiments, I think that does point to a broader kind of coalescing. But you're quite right that Einstein did receive the Nobel Prize. When he won it, he won it for the light quantum hypothesis. Excellent point. To be honest, I'd have to go back and check more carefully. My sense of that had always been Einstein's paper in 1905 that introduced the light quantum itself described three or four or five separate instances in which the light quantum effect-- light quantum hypothesis would help to make sense of things. So his paper on the light quantum covered several phenomena, including things like black-body radiation and other things and not only photoelectric phenomena like from [INAUDIBLE]. So I guess I had always thought they meant the light quantum which could be applied to things like photoelectric effect and other phenomena. But it's a great question you raise, Orson. To be honest, I'm not sure. OK. So today we're going to launch into-- we're going to move into what starts being known even in real time as quantum mechanics, not this or that portion of a kind of old quantum theory. And we're going to look today-- for kind of 2/3 of today's class, we'll be looking at work most closely associated with Werner Heisenberg. Again, a name you probably already knew. One of the readings for today was his very, very brief paper published in 1925. We're not going to start with that, we're going to warm up to it. 
So in fact, what we're going to do today is have one more look, a little more kind of detailed juicy look at the last gasp of what became known as old quantum theory. Not just because it's old, but I want to talk about it because it helps us, I think, understand really what was Heisenberg practicing. This is the work he himself was pursuing as a graduate student, working with some people we'll talk about for part one-- he and many of his peers. And also, in a sense, what finally made Heisenberg want to make a break with that. So I think it's worth sitting for this first part of today's class session on work that, again, was rapidly seen as part of the old not the new. Because I do think it helps us understand what was Heisenberg self-consciously trying to do when he did start composing this new work in the spring of 1925. So let's jump in with that. A quick reminder. I think this was likely familiar for many of you even before our previous class session. But nonetheless, we looked last time at things like the Bohr model of the atom first introduced in 1913. And Bohr had kind of finally in frustration-- he had first wanted to treat very complicated arbitrary electrons with many, many-- excuse me-- arbitrary atoms with many, many electrons. And in the end, finally settled to try to tackle just the simplest kind of atom-- hydrogen atoms with a single electron. And again, he had this kind of backwards/forwards, one step forward, one step back kind of approach to start with thoroughly familiar expressions from Newtonian's mechanics, from Coulomb's electrostatics-- really what we would now call classical physics. And then he kind of attached these new so-called quantum conditions and then try to see where that would lead him. And that led him, for example, to suggest that the motion of electrons in a hydrogen atom-- the motion of a single electron in a hydrogen atom-- might be restricted to very discrete states of motion. These radii or the average distances would not take any old value across a continuum, but would snap into place-- 1 squared, 2 squared, 3 squared and so on time some unit length we now call the Bohr radius. And when he did that, he was able then to have an explanation where really none had been in hand before for empirical phenomena like the so-called Balmer series, again, shown here. This is actual real data from a spectrograph-- not taken by me, taken from Google image. But someone performed an actual more modern investigation of the emission lines, the spectral lines that get emitted from a gas of hydrogen atoms when you add some extra energy. You excite the atoms. They relax by emitting not just light of any old color, but light of very particular colors, especially those to which our eyes are sensitive here in the visible part of the spectrum. Well, that was really very exciting and got a lot of people's attention. But, of course, the work wasn't over then. Even before Bohr's atom-- model of the atom-- it had been known by the same kinds of experts who were making very precise measurements of the exact color-- the real frequencies of this line and that line and that one-- that all kinds of things can happen to the pattern of light that comes out to the very specific values of the colors of light if the gas of atoms is placed in an external magnetic field. So what would appear as a single sharp line like this kind of bluish turquoise line, or this very sharp, single red line, when there's no external magnetic field, those would split. 
They would typically split into triplets of lines. So this is my simulated data using the Paint program in PowerPoint. Imagine we took this real measurement-- this line here. This is what it would look like in the absence of an external magnetic field. In the presence of an external magnetic field, that single line would split into three very slightly different colors, but very closely spaced. So again, if you look at a kind of spectrogram, they'd be very close to each other and just marginally different in the numerical value of their frequencies when the excited atoms were placed in an external magnetic field. Many researchers had noticed that. One of the people who was most careful and precise in quantifying this effect was a Dutch physicist named Pieter Zeeman, and this became known as the Zeeman effect-- the splitting of single spectral lines into-- usually into triplets-- in the presence of an external magnetic field. So that set in motion some work by colleagues of people like Niels Bohr, who wanted to see could they make sense of this empirical information about spectral lines. One of the most significant in this was Arnold Sommerfeld, who was the senior professor, the single ordinarius professor of theoretical physics in Munich in Germany. And Sommerfeld is interesting for many, many reasons, among them he really took up the kind of mantle of Bohr's model, as we'll see for much of this first part of today's class, and he made some really, I think, very creative kind of conceptual moves we'll look at a little bit. He's also fascinating, I think, historically because he was a remarkably prolific and productive mentor of younger physicists. So coming out of the Sommerfeld school, his immediate advisees in Munich included many, many people who would go on to win the Nobel Prize, and otherwise, make a lasting impact on our understanding of matter-- people like Werner Heisenberg and Wolfgang Pauli and later Hans Bethe and others. So in Munich, Sommerfeld set up this really remarkable school to focus on what they began to call atomic structure-- Atombau und Spektrallinien. So what about these emissions of lines from excited atoms-- can they use that information in its very fine-grained quantitative detail to try to learn more about the structure of, say, electrons' motions in atoms? And they began, as we'll see, by trying to generalize Bohr's model to what they considered the more general state of motion-- elliptical orbits. And this was the first edition of what became a remarkably influential kind of research monograph-- not exactly a textbook-- but often used to train the younger generation. OK. So remember we saw last time, for simplicity when Bohr was putting together his first atomic model in 1913, he actually considered circular orbits for a single electron at a time. So he knew, as all of us would know, that in general, under a central force, whether we're talking about the motion of planets in the solar system or a lonesome electron around the positively charged nucleus of a hydrogen atom, the general state of motion will be ellipses. It will not necessarily be a circle. For simplicity, Bohr started with a circle. Well, Sommerfeld knew his celestial mechanics as well as anyone. He said the more general state of motion would be ellipses.
So again, much like Bohr had done, Sommerfeld and his students go back to the perfectly familiar totally Newtonian kind of classical mechanical expression for the energy of a body of mass, m, in an elliptical orbit-- sorry-- an elliptical orbit, in this case, with a positively charged nucleus as one focus of the orbit. So whereas Bohr had neglected this middle term, we need to include that term when we describe elliptical motion. This is proportional to the angular momentum. I should say, by the way, one of my favorite books to make sense of this-- and I'm just going to give you a little taste-- is by a friend of mine, Suman Seth, a book came out a couple of years ago called Crafting the Quantum, and does a really deep dive into Sommerfeld and his whole school. I think it's really fascinating. Some of you might enjoy learning more about that. So I'm going to give you the short version of some of the things that we learned from Suman's work. OK. So whereas, a circular orbit has only one degree of freedom-- things can vary really only by changing the radius-- an elliptical orbit, in general, has two independent degrees of freedom. We could think of them as like radius and one of these angles-- a kind of azimuthal angle. So Sommerfeld proposed basically take Bohr's prescription and just do that for every independent degree of freedom. He's thinking very formally in terms of the classical mechanics. There are ways of characterizing all the ways that this body and motion can kind of wiggle-- all the independent states that would characterize its motion for each of those, subjected to this new quantum condition. So this is what had been Bohr's original one. This is where Bohr had written down mvr equals nh, which we saw last time. In Sommerfeld's hands, that just becomes one of potential whole series of quantum conditions. There is now an independent degree of freedom related to this angular motion, and it goes by the angular momentum capital L times this accumulated angle. What if that also could only take integer values set with a scale set by Planck's constant? So if you go through the same series of exercises as Bohr had done, but now for the more general expression, you arrive at a new expression for the energy, which depends on two of these integers, not only one. It's the same series of steps. It's just Sommerfeld's starting from the more general states of motion. And these integers have to obey certain kind of relationships to each other. Now, if one follows that through, Sommerfeld continues-- he's doing his work starting in 1915-16, very soon after Bohr's work, and finally brings it together in this monograph in 1919. If one introduces that second more kind of ellipse-specific quantity-- this is really telling us how much the orbital motion deviates from being merely a circle-- subject that to only take on integer units proportional to Planck's constant-- then the projection of that vector quantity along any other direction z will also be discrete. It will be quantized. So now you have another value that typically became labeled by M-- M sub l-- we still use that notation to this day-- that we have the original Bohr principal quantum number, the one associated with the momentum that's conjugate to the radius. We have now the second quantum number Sommerfeld introduces related to this kind of ellipse to the deviation from circles. 
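In modern notation (Sommerfeld's own symbols on the slides differ somewhat, so take this as a summary rather than a transcription), the prescription just described amounts to one Bohr-type quantum condition per independent degree of freedom, and to leading order the hydrogen energy depends on the two integers only through their sum:

```latex
\begin{align}
  \oint p_r \, dr &= n_r h, \qquad \oint p_\varphi \, d\varphi = 2\pi L = n_\varphi h, \\
  E &= -\frac{m e^4}{8 \epsilon_0^2 h^2}\,\frac{1}{(n_r + n_\varphi)^2}
  \qquad \text{(the Bohr energies, with } n = n_r + n_\varphi\text{)}.
\end{align}
```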
And we have a third integer quantum number given by the projection of that kind of elliptical motion along any particular direction in space we might consider. We'll call that the z-axis. And so, for example, if you had one unit of deviation from circular motion, if this new quantum number associated with l Were equal to 1 instead of 0, then this projection could only take on one of three possible values-- either minus 1, 0, or plus 1. If you were kind of, so to speak, two units away from a pure circle, then this projection could take one of five values. The point was for any integer value n sub phi associated with that kind of ellipticity, there would be an odd number of these possible projections-- always an integer, and there will be an odd number of distinct values it could take. Some of this might be familiar to you from chemistry classes. We still use this in spectroscopy to this day. This is really where it starts to come from. So in place of only one quantum number, as in Bohr's very simple circular model, Sommerfeld and his students began considering three quantum numbers. The Bohr principle one, you might think about a kind of ellipticity one, and the projection, each of which could only take on integer values. So again, don't worry if that's brand new and seems confusing. I just want to focus briefly on what's the kind of form reasoning? What are the arguments that Sommerfeld's putting forward? Just like Bohr had been doing, he's going to start with these thoroughly familiar classical descriptions of the objects motions, familiar from things like celestial mechanics. He's not inventing that part new. He's taking it off the shelf. And then kind of midway through his calculation, he'll just snap on some new so-called quantum condition. He won't derive it. He won't really, so to speak, justify it or prove it. He's going to make-- he's going to do like Niels Bohr had done-- take classical descriptions of motion and then start adding on these kind of ad hoc constraints to force quantities that might have taken on any one of a continuum of values in the classical mechanics in the Newtonian case. Those will now snap into kind of integer values for a scale set by Planck's constant. Now, why would anyone do that? Whoever bought this crazy book, Atombau and Spektrallinien. Because Sommerfeld began giving-- showing some real payoffs for that. If one follows this seemingly, again, kind of complicated almost Baroque series of steps, then Sommerfeld could come back to the quantity he started with, the spectral lines, including things like the splitting of the Zeeman effect when these excited atoms-- gases were put into external magnetic fields. So again, he begins reasoning, begins with thoroughly familiar expressions from electromagnetism, even pre-Maxwell, going back to the days of Coulomb and Ampere and others. He goes to the early 19th century and then starts kind of bringing them into this new framework. So one of the first things that Sommerfeld and his students do with this new kind of generalized framework of the Bohr-Sommerfeld approach is to reason that any object with electric charge q-- any little body with some electric charge that's moving with some angular momentum in some orbital motion will have something called a magnetic moment. Again, that was something that the French scholars of electricity had worked out in the very early 1800s. This part was not new by the early 1900s. That's a classical effect. 
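The classical ingredient being recalled here, written compactly (a standard textbook expression, not copied from the lecture slides), is that a charge q of mass m in orbital motion with angular momentum L carries a magnetic moment:

```latex
\begin{equation}
  \vec{\mu} = \frac{q}{2m}\,\vec{L},
\end{equation}
```

so once the angular momentum is restricted to integer multiples of Planck's constant, the magnetic moment is quantized as well.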
And likewise, had been worked out often in what we now call classical electrostatics is that in an external magnetic field, such a moving electric charge, the energy of that system will actually depend on the relative orientation between this vector quantity, the magnetic moment, which is proportional to this vector related to just angular momentum, the orientation between that vector and the vector-- excuse me-- the orientation of this vector field-- the external magnetic field. So in fact, the energy of a system will be lowered when the magnetic moment lines up perfectly in parallel with the external magnetic field. Then this quantity together would be maximally positive. There's an overall minus sign in front. You would reduce the energy of the system by the largest amount. The system will be most stable when the quantity called the magnetic moment lines up perfectly in line in parallel with the applied external magnetic field. Well, now Sommerfeld says, if that's true for any kind of objects even in the macroscopic or classical world, what if we bring those ideas now and mix them together with this Bohr-Sommerfeld notion of quantized states of motion for an electron in a hydrogen atom. So one now had quantized this state of motion, the angular momentum akin to the kind of ellipticity-- So if this thing can only take certain integer values proportional to Planck's constant, then this thing, the magnetic moment, will likewise now be quantized when we think about atomic systems or parts of atoms. So now, let's do the same trick. The energy levels of an electron as it whips around its nucleus, if that entire system is now placed in some external magnetic field, the energy associated with that electron's motion will now be split. In fact, it will be split into exactly M sub l distinct levels. Remember, the projection of this thing along some orientation space will also only take some integer values given by that third quantum number that Sommerfeld had introduced. So now, these Zeeman triplets, the splitting of what had otherwise been a single frequency of emission into three closely spaced lines, Sommerfeld and his student said, here's what must be happening. It must be that there was an electron in an elliptical state of motion-- we'll say one unit of this kind of elliptical motion, one quantum unit associated with the quantity l. Therefore, the projection along any direction in space could either be exactly anti-parallel to it, perpendicular, or exactly parallel with it. There's only three options when we project this discrete quantity onto a given direction in space. And so what must be happening is there's an electron with one unit, and therefore, three projections, that's dropping down to a state where this ellipticity vanishes and drops down to a circular orbit again. And so you had three different energies for that starting point before the electron jumped down to the lower energy state. That would correspond to three slightly different amounts of energy released in the form of the radiation. The spectral lines and therefore their colors, their frequencies, would be slightly different from each other because you had an electron in a more general state of motion than Bohr had considered. That's the idea. That's why people start paying attention to any of this kind of complexity. 
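Putting the pieces together in formulas (again standard expressions; the lecture's own symbols may differ slightly):

```latex
\begin{equation}
  U = -\vec{\mu}\cdot\vec{B}, \qquad
  \Delta E = m_l \,\mu_B B, \qquad
  \mu_B = \frac{e\hbar}{2m_e}, \qquad
  m_l = -l, \dots, +l,
\end{equation}
```

so a state with l = 1 (m_l = -1, 0, +1) acquires 2l + 1 = 3 closely spaced energies, which is where the ordinary Zeeman triplets come from.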
Sommerfeld starts, again, scoring new wins empirically by trying to do the same kind of trick that Bohr had done-- thoroughly classical expressions, kind of snap on some new quantum conditions, and then reason about some empirical regularities. So now, this approach addressed what had already by this point become known as the ordinary Zeeman effect, the ordinary splitting, which was into triplets. There was also by this time, by the later 19-teens evidence of what became known as the anomalous Zeeman effect. So there was the ordinary and anomalous. For the anomalous Zeeman effect, it was not a splitting into triplets, it was a splitting into doublets. So again, with my simulated data, take that, say, that single purple line of the Balmer spectrum, in an external field sometimes these single lines will actually split not into three closely spaced lines, but is splitting into only two. And this now could not be explained by Sommerfeld's previous approach because remember, as you may-- I went by it quickly. You can check on the slides. Sommerfeld's early argument would only work for an odd number of splittings because the projections always went by 2n plus 1. There would always be an odd number of splittings. And therefore, an odd number of slightly different energy levels. So you could account for triplets or even five part splittings. You could never use that argument, as they recognized, to account for doublets. How do you get an even number of splittings-- split lines from that argument? You can't. So that became like the next big challenge for Sommerfeld and his most ambitious young students, one of whom was Wolfgang Pauli, a contemporary and a very close friend of Werner Heisenberg. They were students together in Sommerfeld's school in Munich. So Pauli finished his PhD. He took up a postdoc with Niels Bohr in Copenhagen. And this challenge was very much on his mind and many people's minds. And I love this story that Pauli recalled many years later. I'll just quote it for you. He says, a colleague who met me strolling rather aimlessly in the beautiful streets of Copenhagen, as shown here, said to me in a friendly manner, you look very unhappy. Whereupon I answered fiercely, how can one look happy when he's thinking about the anomalous Zeeman effect? With all the hard things on our minds, I'm sure we can recognize his dire frustration. Maybe we have worse things on our minds. The point is this really was just terribly frustrating, and many people were really concerned about the anomalous Zeeman effect. How could you explain doublets? As I mentioned, it was not only Sommerfeld and his very bright students like Pauli who worried about that. Many folks certainly across Central Europe who were in touch with each other, this was on many of their minds. So another set of very young students, still PhD students at the time, were also thinking hard about these anomalous effects, these doublet splittings for excited gases of atoms in an external magnetic field. So these now included students working very closely with Hendrik Lorentz. We know some of Lorentz's work. He was by now a very, very senior professor in Leiden in the Netherlands, and two young PhD students, George Uhlenbeck and Sam Goudsmit. They were working also to try to figure out this strange kind of anomaly. So they began reasoning kind of similarly to how Sommerfeld had been doing. They knew his book. His monograph had been out for years by now. This looked like the path forward. 
They realized that in analogy with the motion of planets around the sun, the Earth has two different kinds of angular momentum. It moves around the sun in its yearly orbit. That would be like the elliptical motion that Sommerfeld was building into his new kind of complicated models. But the Earth also spins on its own axis, as of course, we all know. That's what turns the day into night. So you have the orbital motion that accounts for the changing seasons over the course of the year in the analogy for the Earth. There's also the different spin, a different kind of angular momentum, of the Earth spinning on its own axis to turn day to night. So they began-- the younger students, Uhlenbeck and Goudsmit, realized-- reasoned that what if the electron also had some second kind of ingredient of angular momentum? What if it had some intrinsic spin along its own kind of axis separate from this kind of elliptical motion that Sommerfeld had focused on? And then they do the same trick that everyone was doing. If that second kind of intrinsic spin or angular momentum is associated with an electron, and if we snap the value that that spin could take into discrete values with a scale set by Planck's constant, then the electron would have an additional magnetic moment. In an external magnetic field, it would have a new vector quantity proportional to this new spin vector with a magnitude fixed by Planck's constant. So now they go back and, again, follow Sommerfeld's reasoning very, very closely. Place that entire assembly in an external magnetic field, there will again even just classically, one would expect, a shift to the energy of the entire system when the spinning charged object is placed in an external field. The energy associated with the magnetic field on the system will be minimized when the magnetic moment of the spin vector lines up perfectly in parallel with the external field. And now, Uhlenbeck and Goudsmit reasoned, that if this spin, if the intrinsic angular momentum could only ever line up exactly parallel or exactly anti-parallel, opposite direction, to that external field, then you would expect doublets. The same kind of argument that Sommerfeld and his students had made could now be applied to a two-valued offset in the energy instead of a three or an odd numbered-offset. And, in fact, they showed very compellingly that if you really do set the value of this magnitude of that intrinsic spin to be Planck's constant h divided by 4 pi, or as we saw last time, h bar, the common abbreviation, h divided by 2 pi itself divided by 2, then you would actually quantitatively match the actual values of the splittings of these frequencies. They didn't call this spin, interestingly. They call this space quantization, which tells you a bit of where their thinking was coming from. The idea was that this spin vector, this new vector S, this axis around which the electron was imagined to revolve, could only point in discrete directions in space, that it would have to snap into position either exactly parallel or exactly antiparallel to this external field. So the orientations in space along which that vector could point were quantized. They called the space quantization. And as I just mentioned, by fixing the magnitude they actually, again, could really get a remarkably close match to these very finely carefully measured optical results from the latest spectroscopy. But it wasn't so clear cut. And again, they recognized this themselves, to their credit. 
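In later notation (not the language Uhlenbeck and Goudsmit themselves used), the arithmetic of their argument comes down to assigning the electron a half-integer spin quantum number:

\[
s = \tfrac{1}{2}, \qquad |S_z| = \frac{h}{4\pi} = \frac{\hbar}{2},
\qquad m_s \in \left\{ -\tfrac{1}{2},\; +\tfrac{1}{2} \right\},
\]

so the number of allowed orientations is 2s + 1 = 2, rather than the odd counts 2l + 1 = 1, 3, 5, ... that follow from integer values of l -- which is exactly what an even, two-fold splitting requires.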
With that magnitude, by putting in this much, this amount of the kind of intrinsic spin, intrinsic angular momentum for the electron its own imagined little axis, then it started to sound pretty outlandish. A point on the electrons equator-- and they had ideas coming actually from Lorentz himself, their main advisor, about the kind typical size they would expect the electron to be-- they had a sense of a radius based on its charge distribution and all that, that if you actually-- if that little ball of a given size was spinning that quickly around its own axis, then a point on the equator would actually be spinning faster than the speed of light. By 1922 and '23 and '24, that seemed like a no-no. By that point, relativity was very widely accepted. Moreover, if you have something spinning that quickly, it looked like the mass of the whole electron would diverge. So they get some remarkable kind of clarity in terms of this very frustrating doublet of the anomalous Zeeman effect. But it starts to get harder and harder to make it all fit together in a single kind of visualizable model. And so by 1924 when these young students were pursuing that, it actually became kind of impossible at the same time to assume this is really a genuine angular momentum along some axis of rotation of a physically spinning tiny little object, and match both the empirical values they were aiming for, and be consistent with things like kinematics, like the motion of an object with mass. It became impossible to visualize. So they themselves worked that out. They talked about it with their main PhD advisor, a very senior Hendrik Lorentz. Lorentz said, actually, you know what? Let's not publish that yet. Don't publish that because it did seem to have such absurd conclusions or such unvisualizable conclusions. They had another advisor, much younger and much more brash, named Paul Ehrenfest also in Leiden. Behind their back, Ehrenfest actually submitted their paper to the journal without telling them, which is remarkable. And he supposedly told them, you're still young enough to afford a stupidity. And I have my little joke here that that's completely unethical. We would never do that today. Today, the advisor would steal their student's own work, put the advisor's name on it, and then submit it to a journal behind their back. That's a joke. We wouldn't do that. I just find this whole scenario quite amazing, that Ehrenfest was like, oh, yeah. I already sent that in. And they were like no, no. Lorentz told us not to. OK. Last little bit, and then we'll pause for some discussion for this first part. This is the longest part for today. So independent of these students in Leiden, independent of Goudsmit and Uhlenbeck, Wolfgang Pauli, that young graduate from Sommerfeld school, he was still wracking his brain against this anomalous Zeeman effect as well. He came up with what to him seemed like a really totally separate explanation. In time, people, including Pauli, began to put it together with this work which did see publication behind their backs by Goudsmit and Uhlenbeck. So this came in really just weeks after Goudsmit and Uhlenbeck's own work had been submitted quite independently. So Pauli was well-trained in the Sommerfeld school. And by this point, Pauli was trying to abstract from these very mechanical kind of visualizable models of the solar system and was trying to think about a kind of algebraic approach. A kind of what are the numbers with which we can associate abstract states of motion? 
And one of the lessons that Pauli took from Sommerfeld's work was that they had to add more and more of these integer values, some new quantum numbers n, almost always proportional to Planck's constant with which to characterize states of motion of, say, an electron in an atom. Pauli, unlike his advisor Sommerfeld, was less and less concerned about coming up with an actual physical model of is it just like the Earth around the sun? He was just more like we have all these relationships involving quantum conditions, new integers, scale set by Planck's constant. And so he argued, very abstractly, not based on a kind of visual model, that if one included yet another quantum number, this fourth quantum number, that had what he called a classically indescribable or unvisualizable double valuedness-- it was an [GERMAN]. It was literally two values, like two-faced. If this new quantity could only take on one of two values, then one could account for the Zeeman effect for the rest of the reasons very much like what Goudsmit and Uhlenbeck had argued. Pauli was not talking-- thinking at first about a spin or angular momentum. He just said maybe there's another abstract property of this state of motion. We lock it in place. We quantize it with a new integer. This one can take one of two values. Whereas, the others could take different sets of values. This became known as the Pauli exclusion principle, which is a notion that, of course, we still take very seriously today. But what Pauli was arguing was that electrons in an atom should be described by a total of four distinct quantum numbers, as we still do to this day-- the original principal quantum number-- that was when Bohr introduced; the kind of elliptical part that Sommerfeld introduced; the projection part that Sommerfeld introduced of the orbital motions; and then this new one that we would now associate with spin. Pauli at first thought Uhlenbeck and Goudsmit's idea was ridiculous because of these visualizable things. He later realized, oh, maybe these really are relatable. And so the fourth quantum number becomes now associated with the spin. The next part that Pauli put forward was not just that we characterize an electron with these four separate integers, but that no two electrons can have the exact same set of those quantum numbers at the same time. That's the exclusion principle. He later recalled that he was inspired by this kind of no two can have the same set of values by watching can-can dancers, which was all the rage in the 1920s in many cities in Europe-- famously associated with, say, the Moulin Rouge in Paris, but in Copenhagen as well. And the idea was that these dancers were so skilled that one would have just left a little physical location on the dance floor immediately when the next-- when her partner had to step into that same spot. They never collided. They were close to each other. Their motions were similar, but no two of those dancers, much like no two electrons, occupied exactly the same state at the same time. So that becomes known as the exclusion principle. So let pause there, and we can take some questions and discussion. So DA asks, doesn't a circular orbit have angular momentum? It certainly does. And so the idea, DA, is that in the elliptical orbit there are two separate degrees of freedom, is the fancy way of saying it, the two different ways with which to characterize the orbital motion. 
Whereas, in a perfectly-- a uniform, circular orbit, there's only one quantity that is required to characterize-- to completely characterize the motion. And so that's why I think of this second one Sommerfeld introduces as more like the-- it's sort of like the ellipticity. It's not exactly that, but it's related to that. If the orbital motion is deformed away from a perfect circle, then you'll need at least two numbers to characterize that motion completely, not just one. So to DA's excellent point, the electron moving in a perfect circle certainly has angular momentum, but you can characterize that completely with only one number, one parameter. Once there is a generalization, you need at least two. So that's where Sommerfeld is going with that. And Jade also is correct. And in Jade's response, you put that whole assembly in an external magnetic field, and there would be no mechanism without accounting for these other states of motion to account for these what became known as fine structure splitting. That's right. Gary says, Heisenberg, de Broglie, and Einstein most noted work happened when still young scientists. That's right. So Gary, you'll be-- as an economist, as we've got to know-- economists have actually studied this quite a bit. And there are a number of studies-- I can send you some references afterwards-- suggesting-- although other counterstudies that muddy the waters. Some early studies suggest that in some fields of study there a kind of advantage to not knowing too much. That's how the economists had characterized it. So not having to feel like you have to answer every last thing you've tried to explain your whole career. Their explanation was a kind of conceptual freedom. That's how they tried to make sense of a modest statistical signal that at least in some fields, including quantum physics, is one of the areas that I know of from one of these studies, that there was a kind of early entrance kind of bonus, we might say-- early starts. Other data or studies have muddied that relationship a bit. But certainly, there are a bunch of economists in particular who are interested in life cycle and innovation is how they would describe it today. It's an excellent question. Lucas asked about space quantization-- very different from how we talk about spin now. It certainly is. We also have modern program of trying to quantize space time. Good. No. That's a-- so Lucas, it's a great question. And that's, again, we know too much, or at least we're exposed to too much, that was-- it is-- it's really funny vocabulary. I wanted to harp or emphasize Goudsmit and Uhlenbeck's first idea. Because they really agreed this was kind of nutty sounding. It was how could a three vector only point in one of two orientations. It was as if it was-- as if its orientations in space were quantized as opposed to space [INAUDIBLE].. They agreed that the x, y, and z in which we live would be continuous and have the kind of classical or relativistic behavior. So they weren't trying to challenge that. And the notion of trying to quantize space time, that begins to kick in by the late '20s, early '30s, but not quite-- that's not what these folks were thinking about just yet. Alex asks a question about conventions. Why is it m sub l and not n sub l? Ah, very good. I don't know. I have a suspicion. 
This third quantum number is, again, some of you might know from chemistry classes as well as from physics ones, it's nowadays often called a magnetic quantum number because it really is about the orientation of the orbital motion with respect to an external magnetic field. So it's now called the magnetic quantum number. And even in German, it would be [GERMAN].. So it wouldn't surprise me if it gets renamed as an m later. That's a good question. I don't actually own a copy of Sommerfeld's first edition. It's a very expensive book now. It would be interesting to go back. It's probably on Google Books. It's probably been scanned to see back in 1919 were people using N sub l or M sub l. I don't know. I suspect it gets-- if it wasn't put in that way from the start, I imagine why we use an M these days is because it is, as I say, called the magnetic quantum number. It's an interesting question. Other questions or thoughts about any of that stuff? OK. Keep the questions coming if you want. Let me press on now and see how some very smart, young, and ambitious people are reacting to that body of work, in particular, one of Pauli's closest friends, Werner Heisenberg. So Heisenberg and Pauli were almost exact contemporaries who were almost exactly the same age. They both finished their PhDs around the age of 22-- the PhD-- not their undergraduate degree. They're both very precocious young students of Sommerfeld. And by this time, Heisenberg, we know from his letters with people like Wolfgang Pauli-- many, many of the letters have survived-- he was also growing frustrated with what had become the pattern of this so-called old quantum theory, even from their own otherwise very beloved mentor, Sommerfeld-- this idea of starting with classical descriptions of motion and then kind of tacking on what seemed like ad hoc quantum conditions. So by 1924 at the tender age of 22 or 23, Heisenberg then began his first postdoc position. He began working even more closely with Niels Bohr in Copenhagen. And that's where he introduces the paper that we had for part of our reading today. It was submitted-- a few months later, he moved to Gottingen. But he really did a lot of this work really fresh out of his PhD. So he now starts to set out a kind of program. The opening paragraph of this paper reads like a manifesto. We have to do something new, is how I read this very young author in that opening paragraph. What he's arguing for is a first principles treatment of the quantum realm, rather than what felt even then like a kind of kluge, or a grab bag, or a series of ad hoc rules of thumb. So he wanted to break this impasse of the kind of Bohr-Sommerfeld approach. And in fact, he drew very clear inspiration from Einstein in 1905. And again, we know that partly from his letters at the time, as well as his later recollections. He thought that much as Einstein had made a big deal, as we saw a few classes ago, about the Mach-like positivism-- let's only focus on objects of positive experience, otherwise we'll get ourselves tied up in knots-- Heisenberg says we have to do the same thing all over again now for understanding the behavior of atoms and parts of atoms. And so in particular, Heisenberg says, again, in his opening paragraph, he says, we can never observe an electron in its orbit within an atom. So why are we trying to calculate properties of that motion that we could never observe in the first place?
And here, again, as you've seen in the reader, he says it seems sensible to discard all hope of observing hitherto unobservable quantities, by which he means things like properties of an electron's orbit-- Keplerian or circular or otherwise. Instead, it seems more reasonable, he goes on, to try to establish a theoretical quantum mechanics-- his terms-- a theoretical quantum mechanics-- analogous to classical mechanics, but in which only relations between observable quantities appear. And that's really-- he has a kind of echo in his head of the once brash, young Albert Einstein and Mach. Previous approaches could be seriously criticized on the grounds that they contain as basic elements relationships between quantities that are apparently unobservable in principle-- it's just like textbook Ernst Mach here-- such as position and period of revolution of the electron. Supposedly, Heisenberg tells us in later recollections, he shared his excitement about this move with Einstein around this time, around 1924 or '25. I was so inspired by your work, Einstein, that kind of thing. Einstein, by this point, was quite senior and very renowned. And supposedly Einstein's retort was a good joke shouldn't be repeated too often. As we'll see, Einstein was less convinced this was the way to go by the 1920s, even though it had been so critical to his own thinking in the early parts of around 1905. OK. So what does Heisenberg do with all this? It's one thing to have an opening paragraph about a kind of guiding philosophy. What does he do? So as we saw, he argues that physicists should focus on quantities that are in principle observable or empirical. And what he has in mind as he goes through in the rest of this brief paper are things like the frequencies, the numerical values of the colors, the frequencies of these spectral lines. How do we learn about the structure of atoms by measuring the stuff that comes out-- things that we can literally observe and very carefully measure, like the frequencies of spectral lines? And in particular, he goes back to say that these spectral lines obey a law of addition. Remember, we saw last time even in the 1880s, it had been clear that there were these kind of empirical relationships between the frequencies of the color lines-- really, like how many hertz, how many cycles per second, is this blue line? And then these combinations of inverse squares of integers. That was the Balmer relationship, and Bohr gave a kind of explanation for it with his atomic model. Well, if you start from that, you realize that the sum of the frequencies of two different spectral lines associated with transitions between different states of an electron inside, say, the hydrogen atom, they will sum. So the sum of any two of these frequencies that actually appear will give you-- add them together, you'll get a frequency that will also appear somewhere in the spectrum. And so that would be like-- and by the way, you can keep going. This is not true only for adding any two of them. You could keep going for an arbitrary number of the actual numerical values of these frequencies of the spectral lines. And for all the colors that actually come out and can be measured, even beyond-- pardon me-- just the visible portion of the spectrum into the infrared, into the ultraviolet, they obey this law of addition. And so what that would be is like an electron could jump from say a principal quantum number of 6 to 1 all at once, that will give a certain color of the corresponding emission.
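As a concrete check of that law of addition, using the Balmer-Bohr form of the frequencies (the particular levels chosen here are just an illustration):

\[
\nu_{n\to m} = R\left(\frac{1}{m^{2}} - \frac{1}{n^{2}}\right)
\quad\Longrightarrow\quad
\nu_{6\to 3} + \nu_{3\to 1}
= R\left(\tfrac{1}{9} - \tfrac{1}{36}\right) + R\left(1 - \tfrac{1}{9}\right)
= R\left(1 - \tfrac{1}{36}\right)
= \nu_{6\to 1},
\]

so the frequency of the single direct jump equals the sum of the frequencies of the intermediate jumps.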
Or it could make a series of hops. It could jump from the principal quantum number 6 down to 3, 3 to 1, or 6 to 3, 3 to 2, 2 to 1, or 6 to 5 and 5, et cetera. Each of those jumps or transitions corresponds to an emission of light of a very specific frequency given by this relationship. And if you add up the frequencies of any of these jumps, you'd see they would all-- they could be arranged in this format. So Heisenberg began saying, we should be focusing on that stuff. These are our empirical objects of positive experience that we can see. We can measure. We can gain quantitative information about the frequencies of these spectral lines. So he begins trying to construct the kind of arrays of these lines that kind of go together through this law of addition. In the midst of that work-- by now, this takes him from the fall into the spring of that academic year, spring of 1925-- Heisenberg actually has a very famously-- an unusually bad attack of hay fever. He had allergies all his life. He has a particularly bad attack of hay fever. So he actually had to leave Copenhagen. He travels to the nearby island of Heligoland, which is actually in the North Sea between Denmark and Germany, as I understand it. Over the years, both countries have claimed it. So the point is it's pretty close to where he was in Copenhagen, but gets out of town. And he's actually then basically on his own on this tiny little kind of touristy island for several weeks still trying to puzzle through this stuff. So he's even more isolated than he had been just as a postdoc. He literally leaves town. It's during that short stay that he continues trying to sit with this notion of the behavior of these frequencies of emitted spectral lines. Then he goes on to think, now on his own: these frequencies, nu, refer to-- they're characterizing the light that comes out. And he knows, again, even classically the way we would characterize electromagnetic waves, waves of light, would be some amplitude and some frequency, some phase, the phase being proportional to the color we see. The point is that the quantities he's now trying to build up in these arrays, these kind of nu 1 2, nu 3 4, and so on, they appear in the exponent when you actually try to mathematically describe the light that comes out. So then he goes on to just reason on the island. If these frequencies up here are adding, if the exponents are adding, that suggests that these amplitudes should multiply. Just think about even in elementary multiplication, if you have two numbers that we write in kind of scientific notation that have a coefficient and an exponent, if we multiply those two numbers together the exponents add. That's what he saw the frequencies doing. And that goes along with the coefficients, the amplitudes, multiplying. So he says, well, maybe that's what goes on with these spectral lines. All the information we can glean about these spectral lines, the amplitude would be related to the intensity of the brightness. And so let's see if we can make similar arrays with more information we can glean of the structure of these atoms from the empirical information. So taking this analogy or this mathematical relationship, if these things add, these things should multiply. And that's what he finds all alone on Heligoland while suffering from hay fever, that somehow these arrays of the amplitudes start looking pretty funny to him.
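Written out in roughly modern notation (a reconstruction, not Heisenberg's exact symbols on Heligoland), the point is that each line carries its frequency in an exponent, so composing two lines adds the frequencies while the amplitudes multiply:

\[
a_{nk}\, e^{2\pi i \nu_{nk} t} \;\cdot\; a_{km}\, e^{2\pi i \nu_{km} t}
= a_{nk} a_{km}\, e^{2\pi i (\nu_{nk} + \nu_{km}) t}
= a_{nk} a_{km}\, e^{2\pi i \nu_{nm} t},
\]

using the combination rule for the frequencies. Summing such products over the intermediate state k gives a rule of the form c_{nm} = sum over k of a_{nk} b_{km} -- which is exactly matrix multiplication, though Heisenberg did not yet know it by that name.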
It turns out the order of multiplication changes the result, that if he just tries to multiply them, the order in which he multiplies them changes the answer. Whereas, of course, that doesn't happen if you add these things up. You can add them up, and the sum would not change. He writes up this brief paper. He submits it once he then actually moves to his new position in Gottingen back in Germany. And now, he's working not with Bohr, but most closely with the mathematical physicist, Max Born. So Born's more senior. He's more like Albert Einstein's age. Born, like Sommerfeld, was very, very well trained in classical mechanics, celestial mechanics, other mathematical techniques. And Born's reaction when this excited young Heisenberg arrives is basically saying, you're really an idiot. You're a Dummkopf, is what I imagine Born saying. You're studying matrices. The point is for the very-- even the very elite training of someone like Heisenberg and Pauli at this leading school of theoretical physics in Munich, throughout his entire training he'd never taken a course in what we would now call linear algebra, much like Einstein hadn't learned about non-Euclidean geometry, and so on. So Born, now Max Born, had to tell and/or remind young Heisenberg of the notion that with two arrays of numbers, when you multiply them, you get different answers depending on the order. That's what you should generically expect when you're handling matrices. So I've just taken two kind of random matrices just to remind ourselves 2 by 2 matrices, two arrays of numbers-- any random numbers would do, and any random components. Let's first multiply this one times that one in that order. And as I'm sure you all have learned how to do the matrix multiplication, this element here will result from the sum of this times this plus this times that. You know how to multiply matrices, I'm sure. So we can go through the exercise and multiply this matrix times that in that order. We'll get a new resultant matrix. It will also be a square matrix, in this case, 2 by 2. If we had multiplied these in the opposite order, if we multiplied B times A-- the exact same matrices multiply in the opposite order, again as all or most of you have likely seen by now, we get a different answer-- a perfectly legitimate answer. It's still a square matrix. But the actual matrix is quite different. The entries are quite different. There's no 6 over here. There's no 1 over here and so on. A times B does not equal B times A in general when you're multiplying matrices. Heisenberg didn't know that in 1925. And so Max Born, his newest kind of mentor and colleague in Gottingen, had to explain to him what most of us now learn very early in our training. Most of us learn that because of Heisenberg's work. We now have many reasons to want to be able to manipulate matrices in all kinds of investigations. So matrices-- the fancy way to say this is that matrices do not commute. That just means that the outcome of an operation like multiplication depends on the ordering. So to get-- that's, again, a bit abstract. And if you have not spent a lot of time with that, that's OK, of course. I find the following example pretty helpful. I always mess this up when I try to do this in person. So the one benefit of doing this over Zoom is I have this picture to guide me. One kind of matrix we might consider would be the rotation matrix in three dimensions. We saw rotation matrices when we talked about Minkowski and special relativity.
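Here is the 2-by-2 exercise just described, together with a numerical version of the rotation example that follows, as a minimal sketch in Python with NumPy (the particular numbers and the helper functions are mine, chosen only to illustrate the effect):

import numpy as np

# Two arbitrary 2-by-2 arrays of numbers.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [2, 3]])

print(A @ B)   # [[ 4  7]
               #  [ 8 15]]
print(B @ A)   # [[ 3  4]
               #  [11 16]]
# Both products are perfectly legitimate square matrices, but they differ:
# A times B does not equal B times A in general.

# The same order-dependence shows up for rotations in three dimensions.
def rot_x(theta):
    # Rotation by angle theta about the x-axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

def rot_z(theta):
    # Rotation by angle theta about the z-axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s, c, 0],
                     [0, 0, 1]])

book = np.array([1.0, 0.0, 0.0])   # a "book" initially pointing along +x
quarter_turn = np.pi / 2

# Rotate about x first, then about z: the book ends up along +y.
print(rot_z(quarter_turn) @ rot_x(quarter_turn) @ book)

# Rotate about z first, then about x: the book ends up along +z instead.
print(rot_x(quarter_turn) @ rot_z(quarter_turn) @ book)

The two final orientations differ, just as with the physical book demonstration described next.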
Now, imagine we want to have-- describe rotations in three dimensions of space-- x, y, and z. So imagine I have a book-- this little rectangle here. In class, I would do holding a book. And let's say I want to first rotate this book in the yz plane. I'm going to rotate it down first. Then separately, my next operation is to rotate that book in the xy plane. So I'm going to rotate it down by 90 degrees and then over by 90 degrees, and my result of applying those two matrices-- the matrix corresponding to rotation only in the yz plane by 90 degrees, then applying a separate matrix only rotation in the xy plane by 90 degrees. Here's the result. So now, the book is pointing in a certain orientation different from where it started. Let me apply the exact same two rotation matrices but in the opposite order. So let me start with the book in the same starting position. Now, I'll rotate in the xy plane first by 90 degrees and then in the yz plane by 90 degrees. You see the orientation is different. This is, for me at least, I find that helpful, especially when I don't mess it up in person, as a reminder that the outcome or the product of the multiplication of matrices depends on their order. It really can be as simple as that thinking about moving things around in an ordinary three-dimensional space, let alone any abstract matrices we might be manipulating. So this becomes known, therefore, as matrix mechanics. The work that Heisenberg introduces in 1925, and soon he works very closely with Max Born and another colleague in Gottingen, Pascual Jordan, they develop an entire first principles mechanics of the quantum realm-- a quantum mechanics built around these matrices. So this first version is called matrix mechanics dating from 1925 originally. Any questions on that? I still find it just stunning that this super hotshot young kid, Werner Heisenberg, didn't know what matrices were and hadn't realized that you could actually multiply arrays of numbers and worry about the order-- have to worry about the order. Any questions about the framework or the motivation? We can see, again, he's clearly trying to do the kind of young Einstein, Ernst Mach kind of thing. That's what sets him down that path. And he's literally removed from the conventions. He has to move to the island of Heligoland for a short break. I find that pretty interesting. No questions? It's all perfectly obvious. All of you would have done that? I wouldn't have done that. Jesus asks, did Einstein have any particular reasons for thinking Heisenberg should be careful of positive-- oh, thank you, Jesus. Great question. Really good. And I think the answer is-- well, the answer is yes. I think Einstein did have reasons. And it really tells us, so to speak, more about Einstein than about the value of that kind of argument. By 1925, Einstein's thinking was really quite different than it had been in 1905, and a lot of it had to do with his experience in between with what became known as general relativity. So we saw that in 1905, Einstein was very concerned with this kind of Machian positivism with what we might call operationalism. Tell me exactly how I measure the time-- I have my little hand on my watch-- all that kind of spelling things out empirically and kind of operationally. And he gets more and more kind of separated from that, or drifts away from that, let's say, as he begins working more and more on general relativity, where he gets more and more involved with pretty fancy, but also pretty abstract mathematics. 
He's thinking eventually not only about what would an observer see sealed up in an elevator or a spaceship, but actually about global properties of an entire universe that might be warping and so on. So by 1915, especially by 1917 and later, meaning before Heisenberg's work, Einstein has moved to thinking about things like cosmology, entire universes, certainly not all of which could be subject to the kind of Mach-like empiricism that had driven his 1905 work. So he becomes more driven by both a kind of beauty and power of mathematical reasoning, which had really not been clearly a driver for him in 1905, and also, just moving away from the kind of litmus test of, is it measurable? Can I smell it or taste it? So I think Einstein's comment-- if it ever happened. We learned that only through Heisenberg's much later recollections. It might be an apocryphal story. But it sort of adds up. Even if it didn't really happen, it's plausible because of Einstein's own kind of shift in his intellectual approaches. So I take that to be more a comment about Einstein than about is Machian positivism good or bad for science? I think we can still debate that on separate terms. I think it tells us about Einstein's thinking. I agree with Alex. It's hard to believe a time when leading scholars, let alone anyone at all, would never have heard about matrices. I agree. I think that's part of what's so kind of-- I don't know-- kind of touching about this story. Heisenberg was both like this brash kid, saying everyone-- all my teachers are wrong. That's not unique in the history of science. Young people often say that. But also, in some sense, kind of just so naive. So like, I'm going to set off on my own, literally leave town. Everything that's been done [INAUDIBLE] for these 22 years that I've been on the planet-- he's really young-- that's all just a dead end. And yet, also just be like, how are you going to do your next steps there, buddy? Have you heard about matrices? It's a remarkable kind of moment to me. And so I just find that story really compelling. Fisher asks, is there anything this class has taught me that you can never [INAUDIBLE]. Fair enough. So the course 18 rolls will now swell from this class. Taking more and more math is probably good for one's soul anyway, and certainly can be helpful in unexpected ways. By the way, we'll see this kind of thing happen even later this term. Well after the middle of the term, we'll see examples from later in the 20th century, where I think the same kind of thing happens, where some physicists kind of accidentally or for other reasons find that they get really interested in ways of classifying large groups of objects-- what's like with like-- without having studied things like group theory, which was by then a very, very well-established branch of pure mathematics, Lie groups and all the rest. So there's a not uncommon kind of pattern that's not only limited to Einstein or Heisenberg or these other folks. It feels like the flip scenario of the TRIPOS, where those students could do so much math, but didn't typically learn. OK. Yeah, the tables [INAUDIBLE]. That's right. That's a good example. So Heisenberg clearly was not a Cambridge student. I think it's an excellent thing to keep in mind. And that's not to make fun of Heisenberg. He was very well-trained. He was trained at one of Central Europe's finest academies for training young scientists.
He and so many of his cohort would went on to-- would go on to earn extraordinary recognition in the field. It was just a different kind of training-- different choices about what to prioritize even for very young students. I find that fascinating, the kind of different options that are being prioritized at different times. I think that's one lesson we can learn. We can step back and take in a kind broader view, a sample across a few different institutions. So it seems like physicists are getting younger too. He was in the Swiss Polytech as a teenager, where the Wranglers were into their 20s. Ah. So I'm not sure that's quite right, Fisher. So I think the idea that there is a lot of people will just never leave Cambridge. It's still true, actually. So the actual people competing to be Wranglers were basically undergraduates and usually on what was a kind of canonical age for that. But then a bunch of them would stay on in town. The Wranglers would stay, sometimes get fellowships to then do almost like a research fellowship, which sometimes would be like grad school and sometimes wouldn't. There's a whole tradition of never bothering to get a PhD from Cambridge. They would be called Mr. They were at this point exclusively-- almost exclusively-- men. They'd be called Mr. so-and-so, even though they were on they were paid to do research full time. Or they would kind of tutor TRIPOS students while pursuing their own later degrees. So the Wranglers-- many of them would hang around longer. And they'd have the Wrangler title for life. But the actual-- they had to sit for their exam in the Senate house when they were approximately 21 or 22, typically. So Heisenberg had skipped grades, in that sense. He was finishing his brief PhD around the age of 22. Yeah, anyway. So some of you might know-- we'll come soon actually in coming-- well, not so soon. A couple of weeks we'll come to the work of the preeminent physicist Freeman Dyson, who actually just passed away this past spring in his 90s. Dyson was one of these British guys who just never got a PhD. He was hired as a full professor at Cornell at like the age of 26, and he was Mr. Dyson. So I mean, that was a very kind typical British-- like, almost like in your face. Like, I'm so good I don't even need that stupid grad school thing. That's a separate kind of tradition we can talk about when we come back to Freeman Dyson in a few weeks. OK. Let me go. And for the last part for today, unless there are any more questions on Heisenberg and matrix mechanics. And again, it probably is already obvious to you, but just to say if parts of the kind of maneuvers that Heisenberg does in that paper that is on the reading for today, if it's hard to follow all the ins and outs of that paper, first of all, you're in good company. It's hard even for today's physicists because it's just not how we do things today. So it looks weird even to the pros, so that's OK. And for our purposes, I think what's most fascinating is really just the first opening steps, this kind of manifesto, the kind of Mach-like manifesto. And then just this concentration on frequencies-- obey this empirical relation, frequencies appear in an exponent. Let me follow that down. If that much is clear, that's what I hope we'll get out of the paper even though he goes into a little bit more detail and gets pretty-- a little more complicated after page two. Don't worry about that if it's hard to follow. 
Jesus says, it's cool how in the late 19th century in Britain, a lot of leading physics was based on complicated mathematical models. Whereas, Einstein and Heisenberg, who seemed to have cared less about math than the Wranglers-- certainly at that stage-- focused on unintuitive non-mathematical ideas as the basis for their work. I think that's true. Or at least there's a kind of spectrum. I think then, as now, there's a range of ways that have proven to be productive in trying to learn about nature. And isn't that a good thing? And now, I think we hopefully have learned that lesson even more so, if I may get on a soapbox for a second. Shouldn't we be working even harder to get even more perspectives and life experiences and people in the mix? Because there's not one way to do this right. And so shouldn't we work even harder than frankly we've been able to today-- than we have to date-- to increase the type-- the numbers of people and the types of people from whence they come with the whole grab bag of experience? Just for this lesson, even if we only look very narrowly demographically at a range of ways to succeed, doesn't that lesson suggest even broader [INAUDIBLE]? Hopefully, we'll keep that in mind not just for this term but beyond. But it's a great point. I agree, that there was more than one way to do pretty interesting and original work circa 1910, 1920. And that's a lesson, I think, again, we'll see throughout this term. OK. Good. Let me go through this last part because now we want to see what does Heisenberg do once he knows that there are things called matrices in the world? He keeps thinking about this, of course. He's not done. And so with Max Born's help, he's clarified some of the basic mathematics, what he had kind of stumbled into thinking about were these arrays of numbers, like matrices. He then had learned kind of bumblingly on his own and with more formal help from Born in Gottingen that the outcome of transformations depends on the order of operations. These matrices don't commute. Now, Heisenberg, again, is still trying to make sense of that kind of physically or conceptually. It still doesn't kind of sit with him all at once. So a few years later now in the spring of 1927, just two years after his trip to Heligoland, he returns to Bohr's Institute in Copenhagen for another visit-- basically a kind of longer-- another postdoctoral visit. And now he's trying-- he's still kind of trying to make sense of what this noncommuting matrix stuff might mean physically or conceptually. And he returns to, again, a kind of simple thought experiment, one that he can kind of picture or visualize in his mind. It comes to be known as the gamma ray microscope. And he thinks about it really in these kind of cartoon-like forms. So my cartoon here is, I think, at the level at which Heisenberg was beginning to grapple with this when he comes back to Copenhagen in spring and summer 1927. So let's imagine we're trying to identify or measure the location of an electron. Obviously, electrons are small. We can't just look at them. We have to use something like a microscope. How do we see any small things or characterize their positions? We use a microscope. So electrons are really small. And so we have to use light of a particularly small wavelength, which is to say, a very high frequency. And so we'd have to use something like gamma rays-- rays of light, little beams of light, that have particularly small wavelengths on the-- comparable to the size of the thing we're actually trying to measure.
So how does a microscope work in general? We have some object we're trying to resolve-- we're trying to make an image of to measure its location. We shine light on it and collect the scattered light in our microscope. That's what we do, sort of generally. So if we want to measure the position of this really tiny thing like an electron, we'll do that. We'll bounce light off of it. We'll collect some of that scattered light in our device. The electron's very small. So we have to be a little sensible about the kind of light we'll use. We'll have to use light of a wavelength that is at least as small as the object we're trying to measure and as we have to have reasonable resolution. We also have to keep in mind there's something called the resolving power of any actual optical instrument. It's related to diffraction, or sometimes it's called the diffraction limit. Some of you might have learned this even in kind of classical E&M or optics coursework. So any of the light that is scattered into our collecting device, into our microscope, has to fall within some kind of angular cone or aperture with some angle we call theta. And it turns out if we try to squeeze down this aperture to be smaller and smaller, if we squeeze theta to be a smaller and smaller angle, we actually-- we lose the ability to make a sharp picture. You have to get the balance right between the wavelength of light you use-- you want that to be small. So in principle, it could differentiate small features of your target. But you can't make your aperture too small or you'll get a fuzzed-out, diffraction-limited image. There's a trade-off, an inverse trade-off that was known, again, even kind of classically in optics. If that's not familiar to you, by the way, you're in good company. Heisenberg almost failed his PhD qualifying exam because he couldn't remember the diffraction limit for how actual microscopes work. So he had forgotten this quite familiar classical result. It stuck with him after that, and so he was back on his mind in 1927. There's this trade-off in what we call the resolving power proportional to the wavelength, inverse or proportional with a measure of the kind of aperture or angular size. So the aperture will collect light, the scattered light, again, with a range of scattered momentum. So not any old scattered light beam will fall into our device. There'll be momenta, whose vector quantities fall within this kind of angular cone-- a cone of angular size theta. So that means these scattered photons in order to fall into our device could have any component in this direction. I'll call this vertical direction now the x direction. We're trying to measure the position of the electron in this direction. What's the component of the scattered photon's momentum that would enable that photon to be captured by us in our device? Any component delta px that falls within the aperture, and that's just the sine theta component of our hypotenuse here. Well, now we've just-- as Compton had already learned, we've now just basically produced a collision, a kind of two-body collision, between an electron and a very high-energy photon. So there'll be some recoil of the electron. There'll be some scatter of the electron because it's just been smacked by some very high-energy photon. So whatever the uncertainty in the scattered momentum of the photon is, we'll have a comparable uncertainty in the resulting momentum of the electron. It will recoil just like in Compton's scattering. What's its recoil momentum? 
Well, we can't identify that with perfect accuracy because we've clumped together, so to speak, we've lost the ability to distinguish or differentiate among scattered momenta-- components of the scattered momenta within some cone of angular size or with theta. So we have some uncertainty in the post-scattering recoil momentum of the electron from a kind of Compton scattering argument. But now we know what that component is. We can actually fill this in. Because using Einstein's arguments from photoelectric effect and so on-- again, we saw this with the Compton scattering in the previous class, or maybe two classes ago-- the actual momentum of that photon is given by Planck's constant divided by the wavelength of the associated wave. That's how Einstein began reasoning, as we saw, combining the kind of poynting vector classical electromagnetism with his notion that photons are discrete bundles of energy with [INAUDIBLE]. So now, Heisenberg says let's combine those things. We can't make the resolving power, which is say how fine a picture or how fine a determination of the location in space that electron is-- we can't make that arbitrarily small while also making the uncertainty in the scattered momentum arbitrarily small. In fact, if we multiply them together, their simultaneous values cannot be made zero. In fact, the product is going to be roughly of order h, Planck's constant. If we make the aperture too small, then we fuzz out our location information. However, by making the aperture not small enough, we allow in scattered momenta of a sizable range of components. So we have this kind of seesaw trade-off. The wavelength appears opposite to each other between the resolving power and the scattered momentum. The effect of the aperture width is one is downstairs. One is upstairs. So we can't simultaneously make both delta x and delta p arbitrarily small at the same time. And it gets there from thinking about where operating on these things, he thinks about this inspired by the fact that his matrices don't commute. So the order in which we do things should matter. What does it mean to be acting on this electron? And he goes to this physical kind of cartoon model. So at first, Heisenberg says this is really us being clumsy. Heisenberg's own interpretation of his own uncertainty principle, which he publishes in the spring of 1927 while visiting in Copenhagen, is basically that we humans are big, clumsy animals. We can't help but disturb tiny little things like electrons. After all, we're the ones who released this kind of enormously energetic light beam on our poor little electron because we wanted to make a sharp picture. But the energy scale of the light was comparable to the kind of relevant energies or momentum of the electron. So we've intervened. We've disturbed this object. So of course, we can't make a clear picture. He's talking about this in Copenhagen with Bohr. And Bohr strongly, strongly disagrees. And here again, I want to think back to that really quite, I think, very compelling piece by Megan Shields Fermato that I put on the reader, I think, just for the previous class. I think this actually makes even more sense here. Again, this tells us about how Bohr was operating all the time-- his entire career, often with these young, sometimes very brash kind of alpha male style young assistants. But also, even more often with his very patient wife Margrethe. He's always working in dialogue. 
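In symbols, the gamma-ray microscope argument comes down to two pieces -- the classical diffraction limit and the quantum relation between wavelength and photon momentum -- with numerical factors of order one dropped (this is the usual schematic reconstruction of the estimate, not a verbatim transcription of Heisenberg's 1927 paper):

\[
\Delta x \;\sim\; \frac{\lambda}{\sin\theta}
\quad \text{(diffraction-limited resolving power)},
\qquad
\Delta p_x \;\sim\; \frac{h}{\lambda}\,\sin\theta
\quad \text{(spread of recoil momenta accepted by the aperture)},
\]

so the wavelength and the aperture angle each appear once upstairs and once downstairs, and their product is independent of both:

\[
\Delta x \, \Delta p_x \;\sim\; h .
\]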
He's always talking these things through orally and sharpening his arguments like-- it's like a championship debater. And Margrethe was in the mix from the start. Bohr's also always doing that, especially with his younger assistants, people like Heisenberg. That's what they do throughout the spring and summer of 1927. Heisenberg comes excitedly to his advisor and mentor Bohr, almost a kind of father figure for Heisenberg by this point, Niels Bohr. Look, I've learned this new thing. There's this trade-off because we humans can't treat the atomic realm with kind of enough kind of daintiness or with enough sufficient care. And Bohr says no, you've misunderstood your own work. There are shouting matches. There are crying fits, mostly Heisenberg, not Bohr. There's name-calling. This becomes intensely emotional as they continue this kind of behind the scenes discussion in Copenhagen really for weeks and weeks, not just for a day or two. It continues throughout the spring and summer. So Heisenberg and Bohr tussle over how to make sense of the exact same equation. Again, we've seen this over and over again. Bohr is not saying your math is wrong. Bohr is not saying I don't believe your expression or you're off by a factor of 2 pi. He says, you're not interpreting your own mathematics correctly. I make meaning of your expression differently than you do. And Heisenberg doesn't-- it is not a comfortable discussion. What Bohr argues instead is that this expression, what we now call the uncertainty principle, is not a result of our interventions, of our clumsy interventions with small little-- a fragile, quantum world. But Bohr says it's actually about the quantum objects independent of ourselves. It's something deep about how nature works on the scale of atoms, he's now convinced himself, that they simply do not and cannot have simultaneously sharp values for certain pairs of properties, for complementary pairs of properties. One set of those pairs is position and momentum. That's the component of momentum in that same direction-- so delta p sub x here. That that's one of these examples of a complementary pair of properties-- delta x delta p sub x. They go on together to work out-- a similar one would be the energy involved in a certain interaction, delta e, and the time over which that interaction takes place. That becomes another pair of complementary properties or quantities that can't be simultaneously identified with arbitrary precision. So there's another version of the uncertainty principle. Delta e delta t is of order or not less than roughly h. So to Bohr, he says that's something we have to learn about nature not about our own kind of clumsiness, that the electron even before we smacked it with this high-energy photon simply did not have on its own simultaneously sharp values of position and momentum along that direction even before it was smacked by the photon. So now, that begins to lead to even broader kind of sprawling discussions, much more kind of philosophically freighted discussions which they're still working out often during strolls in the gardens right near Bohr's Institute in Copenhagen. Really, again, this occupies them throughout the spring and summer of 1927. So if the uncertainty principle holds for quantum objects even on their own, not just when we interact with them, then they begin to puzzle through together there could be no such thing as a sharp trajectory-- a specific path through space and time for quantum objects. After all, what is a trajectory? 
It's a collection at every moment in time of exactly where an object is and exactly where it's going. And that's precisely what Heisenberg's uncertainty principle seems not to allow us to attribute to things like an electron. We make a trajectory by saying at this moment, the electron is right here and it's moving in this specific direction at this rate and so on. That's what a trajectory is. And the ingredients for a trajectory are, we're now told by this unfolding work, simply not available to us. Because in nature, at least according to Bohr, these things were never sharp enough at any given moment. So to Bohr at least, and eventually to Heisenberg through this kind of-- to call it a dialogue is to paper over the emotion involved and the real struggle. But eventually, Heisenberg is kind of, so to speak, converted as well to this Bohr line. It becomes known as the kind of Copenhagen interpretation because so much of it is being worked out in Bohr's Institute through these long walks in the gardens, often with Margrethe and with Heisenberg and Pauli and the younger postdocs who basically cycled through Bohr's Institute. So the real lesson of Heisenberg's own equation is that given some position of some object at some initial time, say, x at T0, like we would do even with Newtonian mechanics, we can't know its position at some later time with certainty. Because to do that, we'd have to know both its position and its simultaneous momentum to arbitrary accuracy. So that's not just interesting to Bohr. This heralds an amazing conceptual revolution. Bohr is always talking in this period in these kind of grand philosophical kind of sweeping statements. And eventually, people like Heisenberg and Pauli come along to that as well. It suggests the fall of determinism-- this grand plan that stretches back to the 18th century, that given the present state of a system and knowledge of all the things that are impacted that used to seem, at least in principle, to be sufficient to protect with certainty-- to predict with certainty what would happen in the future. And instead, what Bohr eventually convinces Heisenberg of and many others, is that that's simply no longer possible, that the whole notion of causality, or of A causing B, or of determinism-- I know this now, therefore, I can predict with certainty what's going to happen later-- that starts to fall into an enormous amount of challenge in the quantum realm given these arguments about things like the uncertainty principle. So let me stop there. We have a moment or two at least for questions. Sorry I went on a little bit long. So let's see. DA says, isn't this work really around the time of de Broglie's work? Oh, very good. So very good. So Heisenberg knew a little bit about de Broglie's work but was not terribly influenced by it. They were actually-- the first inklings were coming out independently and in parallel. So when Heisenberg was working on what becomes known as matrix mechanics, de Broglie's thesis had been submitted to the Sorbonne or whatever the Parisian university was but was not yet widely known. And in the coming months, they get in touch and they begin comparing notes. And we'll actually talk more about the impact of de Broglie's work in our next class. And meanwhile, DA, you say, Bohr's argument contrary to his earlier work? Yes, it certainly was. So Bohr was not stuck at one moment in time either. Bohr's work in 1913 depended on having very, very specific trajectories. In fact, very simple ones-- an exact uniform circular motion.
He was convinced by Heisenberg a decade later that these things are not subject to observation. That's kind of a leap that gets us into a model. So Bohr's own thinking was not kind of fixed only in 1913 either maybe, you might say, to his credit. And then he dug in even more over the course of that more recent work with people like Heisenberg to think that there was a bigger conceptual lesson that they all had missed, including himself. So he's making his own break from his own work from 1913 as well. Jesus asks, what's the competing interpretations of quantum mechanics other than Copenhagen? Yeah. Very good. OK. So let me pause on that both because there's no time. That's-- and also because we will talk a bit about some of that in the coming classes. It's a huge question, Jesus, a great question, one that's very dear to me. And we'll get a little sampling at least in a few class sessions-- a great question. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_16_Secrecy_and_Security_in_the_Nuclear_Age.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: Welcome to 8.225/ STS.042. Just a quick announcement-- hopefully, you saw the message also through the Canvas course site, as well. We're going to do a similar routine for this coming week as we had done earlier this week, which is to say students' main assignment for Monday's class session this coming Monday is to watch, on your own time, at your own schedule, at your leisure, another documentary film. This one's also about a little less than 90 minutes, so it's comparable to a class session time. This film is called Containment. It was made by Peter Galison, some of whose work we've read before, and some of his colleagues worked on the film. And it takes some of the threads we've been talking about even further up more recent in time. I think it's a fascinating, again, very engaging, at times very troubling documentary. And so the main assignment is to watch that. So for today, it'll be a more familiar style class session. I have a bunch of slides. So I want to carry the story forward a bit further in time compared to just the developments immediately during the Second World War, which we've been focusing on for several class periods in a row. And so today is looking into the period that came to be called the Cold War, in looking at some aspects of nuclear questions during the Cold War, much, much more beyond that that we can't cover in detail in class. But I want to at least introduce some things along the lines we've already been talking about. So today, as usual, almost always there's three main parts to the class. We're going to start by talking about, what do people mean when they talk about this phrase, the atomic secret? And as we'll see, that gets bound up with questions about both espionage and possible implications of espionage during the Second World War, and also about what's called proliferation, other countries besides the United States or the immediate US-UK-Canada partnership during the war that led to the Manhattan Project, how and when did some other countries begin to develop their own nuclear weapons. And that whole bundle of themes and topics we'll talk at least briefly about for the first part of class today. Then we'll talk about the US decision to actually pursue a different kind of nuclear weapon, the hydrogen bombs or fusion weapons. And again, you got some of this introduced near the end of the film, The Day After Trinity. And I'll just talk a bit more about how those decisions played out in the US context. And then the third part will be pretty quick. But I just want to make sure we all know that the policy challenges posed by nuclear weapons hardly disappeared in 1945 or 1950. So the last part will be a look at some of the international treaties that came and went over the longer period beyond just the 1950s, stretching later into the Cold War. So that's really just a much briefer sketch, just to make sure we're clear that these issues really did have a very long, a long tail. And in fact, of course, many of the issues are still relevant to this day. That's where we're heading today. 
So as we saw both in the previous lecture and also I think very evocatively in the film The Day After Trinity, the Manhattan Project was this really unprecedented, sprawling scientific and technical project, ultimately employing about 125,000 people at 30 or maybe 31 sites, distinct sites across the United States and parts of Canada. It was just enormous. It included some of the, at the time, world's largest factories ever constructed at all, some of those at the isotope separation plants in Oak Ridge, likewise, the billion cubic feet or cubic meters-- billion cubic something-- of concrete poured at the Hanford site to host these enormous, enormous nuclear reactors to do things like create kilograms of plutonium instead of only micrograms. Just a reminder, this was a project on an unprecedented, industrial scale, as well as involving lots of complicated questions for science and engineering along the way. So that work was done in secret during the war. And as I mentioned briefly, some of those sites like Oak Ridge and Hanford were literally not even on the map at the time. The existence and the location of the sites was considered too sensitive to put even on simple maps of, say, the state of Tennessee, for example, or Oak Ridge. But the secrecy was-- some part of the secrecy was removed quite dramatically at the end of the Second World War when the nuclear weapons were used against Hiroshima and Nagasaki. And suddenly, this was worldwide news, no longer only top secret. So what I was curious about a number of years ago-- and I wrote an article that was part of the readings for today-- was how did people react to what had previously been treated as very, very top secret once the fact of the weapons was no longer a secret? And so the way to think about that that I pursued was, what did people mean when they use the phrase, the atomic secret? They use that phrase all the time, especially in that first decade after the end of the war, in all kinds of settings and media. What did people mean? What did they think the term the atomic secret referred to? And one thing I want to prime us for just days before the next election here in our current days, is that a lot of these discussions, maybe not so surprisingly, were unfolding in the midst of and reflective of domestic politics, not only domestic-- there are lots of international things going, on as well-- but a lot of the rhythms of what changed, what kinds of conversations seemed to dominate at different times. I think we can make sense of that by thinking about a timeline of domestic, often electoral politics. When's the next congressional midterm, especially when's the next presidential election? How are those elements of a familiar US rhythm of public discourse? How are those wrapped up with some questions that, on the face of it, didn't seem to be tied up in an obvious way with those kinds of political jockeying? So the very first public response that got a lot of play in literally days and weeks after the news about the bombings of Hiroshima and Nagasaki, the first response was there is no secret. The response to the question, what's the atomic secret, was to say there is no secret. And that was especially advocated or promulgated by a group that came to be called the Federation of American Scientists or the FAS. Their very first title was a Federation of Atomic Scientists. 
And this included many, many folks who had been at various wartime Manhattan Project installations, including Los Alamos, but also at Oak Ridge, Tennessee, less so at Hanford, though some, and also at the Met lab in Chicago. And they very soon after the war founded a journal that's still around today, just celebrated its 75th anniversary, called The Bulletin of the Atomic Scientists. This was really a kind of lobbying effort, by their own terms. Their goal was to educate members of the public, including politicians and journalists and other policymakers, but also American citizens and voters more generally, about what they thought everyone should know about the new nuclear age. And their first response, which got a lot of media play literally starting in September 1945, was that there is no secret, which is to say, there's no way to keep these things the unique possession of only the United States, that many, many countries with advanced industrial capabilities and smart scientists can, and very likely will, develop their own weapons. Because there is no single thing you could lock down to prevent them from doing it. A related version, a kind of a variation on that theme, which again, got a lot of airplay in op eds and newspaper editorials, in magazine features, and so on, was that there had been one secret, but that's no longer secret at all. The secret was-- given what we know about, say, the behavior of uranium nuclei and slow neutrons, the secret that remained was, could such a thing ever be built? Could a nuclear weapon that released its energy from the runaway chain reaction of fissioning nuclei, was that physically possible? And now that's not a secret, because it had been demonstrated quite dramatically and very publicly that such a thing was possible. So that's a kind of variation on the theme that there is no secret. Or we'd say, well, the secret is now, if there was a secret, it was that it could be done, and that's no longer secret. And again, just to remind us, that message was being deployed not in a vacuum. It wasn't being articulated only for a sake of public information. That was part of the earnest goal of many of these organizers, but it was also in the midst of some very, very heated, politically divisive debates within US Congress among the punditry, let's say, or the commentators, about what to do in this suddenly new post-war world about atomic energy, about nuclear reactions generally. And there were two-- very quickly, like within weeks of the official Japanese surrender, there were two competing bills introduced into the US Congress with very different proposals for what to do after the war. Basically, what do you do with the Manhattan Project, is what it really came down to. Should there be-- both these bills assumed there would be some continuation of something like the Manhattan Project. An option that was not on the table was shut it all down and walk away. That was not considered in any serious way at the time. So instead, what seemed like the relevant options, what were reflected in these dueling pieces of legislation or proposed legislation, was number one, continue on something like the World War II basis, meaning explicit military control. The Manhattan Project, as you may remember, was really overseen by the War Department vis-a-vis the Army Corps of Engineers. That's why the Army General, Leslie Groves, was the ultimate head of the Manhattan Project, even though he had many scientific directors working with him. 
So option one was continue that, that there should be basically a War Department or soon a Defense Department military control of a kind of postwar version of the Manhattan Project. And the competing bill was to have a civilian agency. As I said, there was no bill that said, let's have no atomic project moving forward. So it was basically, should there be continued military or a shift to a civilian agency control of anything having to do with nuclear reactions, whether it's for reactors for power generation or for weapons. And these discussions about, is there an atomic secret or not, those are being deployed in the context of a very concerted political fight over these very different visions for the legislative future of atomic energy. So we need to be careful not to read these editorials in a vacuum. There was another phase-- and this is what I write about a bit more in that article-- that didn't just stop by saying there is no secret. And so here it was really interesting to look at what were widely circulating, often quite influential, requoted statements, either by scientists or by journalists or by policy makers, including things like congressional testimony, but also in broad circulation magazines, of people articulating other answers to that question, what is the atomic secret? And what I found really interesting-- I didn't expect this going in when I was starting this work-- was that in several years after the end of the war, really right through late 1948, there was a pretty distinct pattern to these next set of responses, that even when people said there does exist something called the atomic secret, the nature of that secret-- all these different observers seemed to agree or be consistent, the nature of the secrets had to do with industrial capacity with materials, with how do you build factories that work. It was not about-- it was explicitly not about single formulas or text-based information that could, at least in principle, be smuggled out or aid some rival nation. But it was about large-scale industrial processes like, how do you pour a billion cubic meters or cubic feet of concrete and build these enormous industrial scale reactors, I mean, that kind of stuff, as opposed to, oh, slow neutrons will trigger, have a higher reaction rate. And so three close variations of that you can cluster from this very wide number, dozens and dozens and dozens of widely circulating op eds and opinion pieces and policy proposals and so on, they say, well, yeah, there are atomic secrets. There are things that legitimately should still be kept secret. But they have nothing to do with text-based information. It was things like, how do you deal with the fact that uranium hexafluoride will burn through existing industrial gaskets, when we talk about those gaseous diffusion tanks at Oak Ridge. Or how do you scale up a reactor from a research reactor like Enrico Fermi's very first pile that he helped oversee the whole Chicago team? How do you go from that to bringing in DuPont contractors to scale up these unbelievably large-- largest in the world at the time-- reactors and related power plants? Those kinds of things were secret. These folks argued they should remain secret. And they were not in danger of being quickly replicated, because they depended on a huge material basis. 
Now, one of the things that starts to happen in the midst of these discussions was not only a debate in the US about the proper handling of these things moving forward-- military or civilian oversight of a new agency and so on-- but also that some secrets might indeed have already been shared, or something illicit might have actually slipped beyond the carefully controlled perimeters of the wartime Manhattan Project. And this was really shocking, a series of these kind of revelations, some of which became very sensational. One of the very first happened three days after the official end of the Second World War. So in the first week of September 1945-- this became more broadly known a few months later, but widening circles of people knew about this as early as the first week of September. It became international news by February of '46, so it was not too long later. So what happened was there was an employee of the Soviet embassy in Canada named Igor Gouzenko. And he and his family defected. They were about to be restationed back to the Soviet Union. They didn't want to leave. And so Gouzenko defected. He said, basically, he wanted to stay in Canada, he and his family. To help make his case, he left with troves of documents from the Soviet embassy. He had been a cipher clerk. He worked in encryption, basically, for the Soviets as a Soviet agent during the war, stationed in Canada. And he took with him out of his office file cases full of secret documents that indicated wartime Soviet espionage vis a vis both UK and Canadian projects. This was, as you can probably imagine, a big, big, big surprise when this news finally broke broadly a few months later. So here he is. This is actually Gouzenko under a hood. This was, like, constant nightly news. He was in public, but always shielded, his face was shielded. In fact, the family went into something like witness protection. So the Canadian government eventually relented, and he and his family were able to stay in Canada. And they were given new identities, much like witness protection kind of program within the United States. So he would go on TV, but be hooded. It was variegated. Again, you can imagine the sensationalism of this. And he was a kind of fixture throughout 1946. So it turns out these materials covered many forms of attempted and sometimes successful espionage. But it included, as a little minor part of all the things he took with him, clues that led to a trail that ultimately led investigators to a British physicist named Alan Nunn May, who'd been trained at Cambridge, who had spent most of the war years working as part of the Manhattan Project in Canada at what was called the Chalk River nuclear reactor site. There were many, many reactors built in many Manhattan Project installations, the biggest of which were at Hanford. There were other kind of experimental designs being worked on, including at this site near Ottawa, under the auspices of the Manhattan Project. So the British delegation had dozens and dozens, probably hundreds of physicists and engineers who spent the war years at various Manhattan Project sites. Nunn May was one of them working at the reactor. He had been previously a member of the Communist party in Britain. He had a kind of sympathy toward the Soviet Union, by his own later admissions. 
And he was very concerned that the Allies' wartime ally, meaning the Soviet Union, which was fighting on the same side by that point as Britain, Canada, and the United States, was somehow not part of this otherwise information sharing program for the Manhattan Project. The Manhattan Project was explicitly a three-nation cooperation, US, UK, and Canada, even though in the broader wartime context-- so reasons May at least-- the Soviets were our allies, as well, "our" meaning the Allied side. And yet, there was not the same kind of extension of cooperation or sharing. So he took it upon himself, as became clear from investigators following these leaked documents from Gouzenko, and ultimately, Alan Nunn May just confessed to it when confronted, that he had passed along physical samples from the reactor site to a Soviet agent and ultimately with the intention of getting them to aid the Soviet project. So Nunn May was giving literally physical samples of radioactive materials, in this case, both U-233 and enriched 234. And in his mind, this would help speed their program. They could see the levels of purity or purification or enrichment that would be needed, and so on. This was, as he knew, clearly against the rules. And yet he convinced himself this was somehow OK because of the fact that the Soviet Union was officially an ally of these other countries. So that was the first atomic espionage revelation that came from this very sensational defection from the Soviet clerk. And again, what was being traded, what was being moved around, were physical samples of material. Here's a uranium, an enriched uranium sample, a small trace amount that nuclear chemists back in the Soviet Union might be able to make progress on. It was not about paperwork. That characterization, however, began to change starting in September of 1948. So you have dozens and dozens of these articulations, what is the atomic secret until this time. And they really follow this pattern of being about material industrial things, not about texts or formulas. And this is very, I think, quite a sharp break in the characterization of the so-called atomic secret, starting with this report in September 1948. This was a very attention-grabbing report by a US congressional committee, the so-called House Committee on Un-American Activities. It was often abbreviated as HUAC for House Un-American Activities Committee. This was a group, at the time a standing committee, a permanent committee. It wasn't actually permanent, it was at the time what's called a standing committee, had been set up in the 1930s, but as many of you might know, became very active in the US soon after the Second World War and really led the charge in a kind of anti-communist series of investigations and efforts. So one of the first things they did after the war to get a lot of attention was hold some very high-profile hearings about supposed or alleged communist infiltration of Hollywood and of the broader kind of educational entertainment industry. And the allegation was that all these left-leaning communists were going to poison the minds of honest Americans by seeping propaganda into movies that people would watch without realizing they're being brainwashed. That was the level of the kind of discourse at the time. Their next big set of headline grabbing revelations was actually about so-called atomic secrets, about allegations of nuclear espionage. 
And the way they chose to characterize it over and over and over again, which is a relentless, relentless similarity, was actually about text. It was about secret formulas or single pieces of paper that could be smuggled out as if it were kind of a James Bond movie and somehow aid a rival nation-- by this point they were mostly concerned about the Soviets-- in somehow producing a weapon from text alone. And that was quite a different framework than all these emphases on industrial capacity material. So their first foray into the atomic secrets kind of landscape was this kind of blockbuster report that they released with great fanfare in September 1948, where among the many, many claims they made was that there had been a so-called Scientist X. And they made sure they knew who it was, but they weren't going to release the name. It even heightened the mystery of it all. The Scientist X during the war, an employee of the Manhattan Project, had given what they called a complicated formula to a known communist agent in spring of '43 with the express goal of helping the Soviets make their own nuclear weapon. So here's an excerpt from their report. It's a very lengthy report, but here's where they describe part of this episode in particular. They say, Scientist X read to this alleged communist agent-- he was actually a well-known labor organizer in the San Francisco Bay Area, who might indeed have been a Communist agent, I don't know. What he was known at the time was being a pro-labor activist. At the time those were often conflated in public discourse. Anyway, what they wrote was that the Scientist X read to this other person a complicated formula which this other person copied down, like F equals MA. And they'll go, let me scribble that down. Scientist X gave his reason for asking the agent to copy it down that the formula was in the handwriting of some other person, and that he, Scientist X, had to return the formula to the University of California radiation laboratories in the morning. And as I write in the article, this is like medieval. It's like literally the handwriting of the scribe is infused with nuclear power, and that one equation is enough to lead to proliferation of weapons. And therefore it has to be carefully ferreted back to the lab before anyone notices that one scrap of paper missing. If all it takes to produce a weapon is a stolen scrap of paper as opposed to a billion pounds or a billion cubic feet of concrete, then it sounds like a very scary issue indeed, doesn't it? What's interesting is to go back to the much later declassified Military Intelligence Division report on which HUAC was basing this allegation. Military Intelligence Division was a version of a kind of domestic FBI attached to the war department during the war that was conducting all kinds of surveillance during the war, including of the Manhattan Project sites. So this was originally classified. Many, many years later it was declassified. This is the portion of the transcript that seems to match closest to what HUAC was claiming. And I want to be clear-- the case of this transcript wasn't the simplest or complete truth either. I'm just saying, what was the documentary base on which these later claims were made five years later? If you go back to the 1943 intelligence reports, they describe a very different scene. This alleged communist agent, the local labor activist, had asked the scientist for copies of an article that had already been published. 
In fact, it goes on to say, a research article published in the Physical Review, which is by definition not classified, let alone some scrap of paper and handwriting. And the intelligence agents had recorded this perhaps verbatim dialogue. What shows up in quotations in the report is the scientists responding, quote, I could certainly get reprints of it. I could get you a copy of the article. But this would give the Soviets no knowledge at all that would be helpful for making bombs. After all, the details of nuclear fission had been published in the open literature. The synthesis of plutonium at University of California had been published in the open literature. And at least, as the once classified intelligence report suggests, what was being traded was paper-based, not some handwritten formula and something that the scientist himself said would be of no particular value. It wouldn't tell them anything they didn't already know. But that's certainly not how HUAC spun this, because remember, the transcript from the Intelligence Division was still highly classified. So HUAC could basically try to control the message. And so they held these very public hearings under the Klieg lights, these very famous lights. It was really all the rage. And they began what other observers at the time called a trial by newspaper. This was not a criminal proceeding, so there was no rules of evidence or cross-examination. This was a kind of public-- not even a hearing, a public testimony, let's say, not subject to legally binding protections for anyone involved. And yet the committee could then selectively and strategically leak information, which they proceeded to do for months and months. So they released their own big report and then would kind of surreptitiously, or not so subtly, leak additional information to various trusted newspaper reporters and keep this in the headlines, and likewise, not just newspapers, but very broad circulation magazines, as well. And I want to remind ourselves-- it took me a while to piece this together at first-- why would they do this in September of '48 if the Intelligence Division had already vetted this in 1943? And I realized, oh, September 1948 was right before a big and highly, highly contested presidential election in the United States, the election in early November of '48. And this was one way to try to score political points, as happens before and since, as we were quite familiar with, of one party trying to convince voters the other party shouldn't be trusted with sensitive matters, including things like nuclear energy. So this was, at least in part, a kind of political opportunity, which I don't want to suggest was unusual-- as we know, that happens before and since-- but it helps us make sense of the timing of it. This was as much about presidential election politics as it was any kind of new revelation about espionage. And what interested me was actually the shift in how HUAC chose to characterize the most dangerous or seemingly most important elements of what it takes to make nuclear weapons. It goes from industrial capacity to a single slip of paper that could be very carefully smuggled out of the country. 
And then following that very, again, sensationalistic report released in September of 1948, you see just again a huge proliferation of news media, congressional testimony, public lectures, and all the rest, all addressing the question, what is the atomic secret, but now all clustering around this kind of text, not material or industrial capacity. So again, people might disagree on what the atomic secret is, but there's a new clustering. It's no longer about hard stuff to ship overseas, but about simple stuff, like things you could write down on a piece of paper. And here's an example that I find very haunting or telling. When Albert Einstein was featured yet again on the cover of either Time magazine or Life magazine-- I can't remember which-- they put his famous equation in the mushroom cloud as if the bombs were made from that equation, as if that equation had had any particular role in designing, let alone developing, testing, and using these new kinds of technical devices. And so when people said there is an atomic secret, they would give the answer to say it's either these so-called complicated formulas, or it's things like information about the nuclear stockpile-- the US has this many weapons ready versus that many that could be smuggled out on a piece of paper; the size and shape of the bomb-- that would have implications for things like delivery systems, what kind of airplane or boat might you need; a blueprint or some kind of sketch about those very complicated implosion mechanisms we talked a bit about-- but that was the secret and you could have a single diagram that would somehow give it all away; or other kinds of so-called general principles of bomb design. So again, there's lots and lots of-- a kind of variance in what people think the atomic secret is. But they're clustering now in a very different characterization of both how science and technology works and of what's most crucial to guard. And I found that shift really very, very interesting. And once again, this is not playing out in a vacuum. Not only was there the US election season to think about, but even about a year later there were more dramatic developments that again just kept this theme in the news, really for years. So again, as some of you might know, late in August of 1949, so four years after the end of the Second World War, the Soviet Union did succeed in secretly detonating its own first nuclear weapon. It was a fission bomb, a plutonium weapon using an implosion mechanism remarkably similar to that which was tested at the Trinity test and then ultimately used in the bomb over Nagasaki. So it was a very similar kind of device that the Soviets detonated in secret in their own territory in August of '49. The US authorities nicknamed it Joe I, not really in honor, but in reference, in joking reference to Joseph Stalin who was still the leader of the Soviet Union. Then about three to three and a half weeks later, US President Harry Truman announced to great dramatic fanfare that the US had detected the Soviet test. The Soviets did not announce it. In fact, they kept it secret. The Soviets had been announcing all along until then that they had developed weapons. That turned out to have been intentional propaganda and misinformation. And when they actually did successfully detonate a bomb, they didn't announce it. Three weeks later, the US President announced it instead. And what I find super interesting is actually another really interesting book by my friend Michael Gordon, whose work I mentioned previously.
Michael wrote this book called Red Cloud at Dawn. And he does I think a really fascinating reconstruction of the decision making process within the US government about whether and how to announce that the US teams had detected the Soviet bomb. In brief, the worry was that if the US announced that the US government knew the Soviets had detonated the bomb, would that give away too much of our own espionage and surveillance infrastructure? It turns out the US gained confidence about this detonation with these kind of high altitude aircraft that were already in or at least very close to Soviet airspace that already would have been seen as problematic or maybe provocative. And these planes were equipped with certain kinds of detectors to find trace amounts of certain radioactive materials. And a few days after the actual detonation, these planes found compelling evidence for certain rare isotopes that seemed to be only associated with that kind of implosion weapon. So the question was, do you even say anything? Because then will the Soviets know that we have these planes in or near their airspace? I just find that fascinating. So not only yes, we know, but we can't tell you how we know. It's one of these again quintessential kind of Cold War episodes. You can read more about that in Michael's book. Anyway, the point is, Truman-- the balance, the decision was indeed to announce it, but with no official announcement of how the US knew. But now this became, again, really shocking in many parts of the world, including in the US. The so-called US monopoly over nuclear weapons had been shattered. Now the country that had emerged as the greatest kind of political rival in the post-war scene, meaning the Soviet Union, had its own weapons of mass destruction. And what's interesting is that until that time, US authorities kept saying that the Soviets are still five years away. The very first assessments right after the end of the war were that the Soviet Union would need at least five years to make their own weapon. That turns out to have been actually pretty close. The challenge was that every year or continuously US authorities would keep making these updated estimates, and they keep saying the Soviets are still five years away. That became, therefore, less and less accurate, as you can imagine. So the first estimates were pretty reasonable. In 1948 it was no longer reasonable to say the Soviets will need until 1953 to make their own bomb. So when so-called Joe I, when this first Soviet weapon was detected and successfully detonated, that really seemed like a big, big shock, even to the insiders in the US who, so to speak, should have known better. And so this just again ramps up the kind of Cold War jockeying and therefore the discussions about things like, is there an atomic secret, and what should we do about it? Now, soon on the heels of that, again, very surprising information for many people in the United States-- soon after that, starting in late January 1950, other news broke that there had been at least one more example of atomic espionage from the Manhattan Project during the war. This also involved a member of the British delegation, who, like Alan Nunn May, had been sent over to North America as part of the British team to work on various Manhattan Project sites. In this case, it was this individual, Klaus Fuchs. Here's his badge photo from Los Alamos. So Fuchs was actually originally from Germany. 
He was a very early anti-Nazi, and we were talking about this a little bit during office hours earlier today. There was a phrase that came into common usage in the United States after the war called prematurely antifascist. That was code for someone who's probably a communist, because in the earliest days in Germany, the groups that were most actively trying to oppose the Nazis even before the Nazis took over were the communists. This was a kind of left-right kind of battle. And so if you were against the Nazis before you had quote-unquote legitimate reasons to be against the Nazis, then it meant you were a communist. At least it seemed to imply that. Really remarkable there was no room for anything in between. So Fuchs was one of these left-leaning, perhaps indeed communist members who had been prematurely anti-fascist. When the Nazis took over, he then had to get out of Germany fast. So he emigrated to Britain as so many young physicists and mathematicians did. He then got involved with a British uranium project and then was sent over to North America. He first worked at Oak Ridge for quite some time in detail and things like isotope separation and then was relocated to Los Alamos and worked on many aspects, or at least a few aspects of the project from there. And then he went back to Britain after the war and worked on the British nuclear efforts at Harwell, not so far from Oxford. And then again, one of these intelligence investigations finally led to suspicion about Fuchs. He confessed late in January 1950 that he had indeed been sharing materials with a Soviet agent throughout the war. That confession then led to other investigations, many of them still now back in the United States. And that led to the arrest of people, including very famously Julius and Ethel Rosenberg, a husband and wife pair who were based in New York City. They weren't at Los Alamos, but Ethel Rosenberg's brother, David Greenglass, had been stationed at Los Alamos as an army machinist. He was not a trained scientist, but he was one of the many, many, many technicians who was on site at Los Alamos. He, like his sister, had, at the very least, left-leaning political sympathies. And during the war, perhaps with Fuchs' help-- or in any case, the investigation of Fuchs led to this other ring-- Greenglass had been smuggling out some things or trying to get information from Los Alamos with the intention of getting it to the Soviets. And it seems that his brother-in-law, Julius Rosenberg, helped in ferreting this information out. So when this came to light, there was a very, again, hugely sensational trial in the United States for the Rosenbergs. Greenglass agreed to testify against his family, so he was given, basically, immunity. He cut a deal. So even though Greenglass was the one who actually stole the materials, he cooperated with the prosecution. And the trial turned out to be against Julius and Ethel Rosenberg and a third defendant, Morton Sobell. And so here's an example of what was entered into evidence for the prosecution of David Greenglass, if I remember correctly, redrawing from memory the kinds of sketches he had acquired during the war and had tried to get to the Soviets. And this was a fairly crude sketch of the implosion mechanism for a plutonium bomb. So again, what's fascinating, the trials stretches out over 1951, again, daily updates in the newspapers and nightly news. 
And to aid the prosecution the Atomic Energy Commission, which was this kind of postwar successor to the Manhattan Project, they actually declassified what was then being called the single most closely guarded atomic secret, which was this implosion mechanism. So to help the prosecution gain a conviction of the Rosenbergs for espionage, the government agency literally declassified what they were at the same time claiming was the single atomic secret that presumably could be written down on a single piece of paper and smuggled out to another place. In fact, at one point-- I write about this in the paper-- when testimony about this was being given by Greenglass under oath, they partially emptied the courtroom gallery of everyone except journalists. So if you want to keep something secret, I would think you would get the journalists out and let the jury stay and let the other civilian onlookers stay. They did the opposite, so really just mixed messages about what is or is not the most potential or serious atomic secret. The point is, this is all about text-based diagrams or formulas and not about industrial capacity. And again, as you may know, the Rosenbergs were found guilty. They were ultimately executed soon thereafter. And there's lots and lots and lots that's been written about that. Many, many things were once classified and over the decades have become declassified. There's more to be said, but it seems that Julius Rosenberg very likely was guilty of at least some of the things he was accused of. It's much less clear that Ethel Rosenberg was guilty of the things for which she was convicted. And there's all this stuff that was later found out of prosecutorial strategies to go after Ethel Rosenberg hard in the hopes that Julius would confess, and they both-- neither confessed, so anyway, a really messy, messy trial. Also it looked like some not proper coordination between the prosecution and a judge. It was a mess in terms of just legal procedure. And I think no matter what side one comes down historically about the efficacy of nuclear espionage, I don't think anyone claims this was a clear or clean trial, thousands of pages written about it. But that's just the upshot. The point is it's coming on the heels of these quite dramatic revelations about wartime espionage. So once this was, or at least parts of this was becoming part of the public record, many commentators, especially the United States at the time and even since then, have reached toward these examples of espionage-- Alan Nunn May, the Greenglass-Rosenbergs connection, Klaus Fuchs, and then others that have since come to light, a handful of others, to say, well, that must explain why and how the Soviets actually did succeed in making a nuclear weapon so soon compared to estimates. Remember it wasn't so soon compared to the original estimates, so soon compared to the later estimates, which proved to have been inaccurate. And so the idea was, in a sense, that they cheated, that they only caught up with the Allied effort because they had all the benefits of espionage. Once again, I want to give a shout out to my friend and colleague Alex Wellerstein, who has a fantastic set of online materials at his blog. It's a blog with footnotes. This is actually like a scholarly blog. 
And he has a great piece, actually from a number of years ago by now, about hand-drawn diagrams from one of the heads of the Soviet nuclear weapons project, Igor Kurchatov, to the equivalent of the Leslie Groves figure, Lavrentiy Beria, from early 1946, in which you can see these handwritten Cyrillic notes that look like pretty compelling versions of an implosion design, the implication being that Kurchatov really was benefiting from some of these kinds of purloined worksheets and figures that people like Greenglass and Fuchs were able to supply. On the other hand, since the fall of the Soviet Union, many more materials from Soviet sources have become available to scholars within Russia and elsewhere. And the story, again, like most stories, gets a bit more complicated. Here are some really interesting more recent analyses by Alexey Kozhevnikov-- we've read some of his work for the paper II assignment-- and also, again, Michael Gordon's book that I mentioned, that espionage, I think it's fair to say, clearly played some role in the Soviet program. But it's not so straightforward what the efficacy was, let's say what the role was. And there's lots-- again, the more one looks, the more complicated the story gets, rather than simple, which is-- maybe that's how it often is in history. So here are some of the salient points. There's much more that these other folks go into that I find really interesting. The information that was obtained via espionage-- and some of it clearly was obtained via espionage-- within the Soviet Union was often treated as suspect because at least some of the folks there didn't know if they were being fed intentional misinformation. Was this a kind of information war after all? And so were these things being planted so that Soviet agents might be duped into getting unhelpful information? And so often these would be doled out to various internally rival groups within this very large Soviet program. It grew to be very, very big, much like the Manhattan Project. And so you have competing groups that would almost have a kind of peer review, for lack of a better term. One group would be told, this is from the American project, and we think it's real. Go with it. Another group would be told basically, we think this is garbage. Poke holes in it. And so it's not like they said, here's the answer, go do it. Likewise-- again, it became clear only decades later-- the Soviets wound up pursuing many, many routes in parallel, much as the US or the Allied project had done during the war, even with the knowledge of the efforts that had proven to be most effective in the US case. So typically for the US project, if there were four different ideas to separate fissionable uranium from the more stable isotope, the response from General Groves was usually, try all four. If there's a war on, give it all you got, try all of them. We'll go with whatever is quickest. And yet, afterwards, some would prove to have been more effective than others. It seems clear that the Soviets often did the same thing, even knowing which wound up being the most effective version in the wartime Manhattan Project. The Soviets would often set up parallel efforts instead of only trying the one that had seemed most effective.
So that's not a huge savings of time or effort either, and also even more tricky that a lot of these very, very complicated devices, these bombs like the implosion design, depended on very specific properties of non-nuclear materials, both epoxies-- kinds of glues or adhesives-- certain kinds of, literally, the wiring to try to make sure that you have sub microsecond accuracy for the firing circuits, but also these different kinds of conventional explosives that burn at different rates. So you're trying to make this shaped ingoing pressure wave. And the Soviets had different forms of TNT equivalent. So they weren't just copying-- they couldn't just copy the blueprints, even if they had rather complete blueprints. So it's not to say the espionage was irrelevant. I'd hardly think that's the case. But it's also seems pretty inaccurate to say the Soviet bomb was merely a product of copying the Manhattan Project. And so again, we get the real life experience of these complicated human endeavors that didn't fit into the neat stories that were being told then or since. So in the meantime, again, coming back to things that really caught my attention about the way espionage and so-called atomic secrets were being characterized throughout the US at this time, was after the news about Klaus Fuchs did become very broadly known-- it was major newspaper news by late January of 1950-- there was a kind of slippage that became very common. I find this actually very chilling, a slippage, not to say all German communists who came to the United States were actually bad. That might have been one conclusion people might have unfairly drawn, but that wasn't where people went. The conclusion reached often was that all theoretical physicists are suspect. Let that sink in for a second, that because of Klaus Fuchs, because of his perfidy and because of his what was often called his warped mentality, his infantile, naive sense of how the world should work as characterized by his critics, that that's a sign of physicists who have been too poorly trained in the humanities. That part I like. That's why we have GIRs here in past courses, right? The allegation was these narrow-minded scientists hadn't learned about government and history and literature and human affairs. So they have baby ideas about how the world works, including kind of idealized notions of world government or of communist utopias. And so therefore, theoretical physicists as a group are dangerous. If you think the bomb was made by equations, if you think the bomb is essentially text made real, and therefore the most dangerous things are formulas and engineering diagrams as opposed to billion cubic feet of concrete, if theories are dangerous because they make bombs, and if theorists have this kind of warped or unbalanced education where they're more susceptible to so-called communist propaganda than well-trained economists or whatever else, then that's a double danger, right? Because the bombs are allegedly made by theorists, using kind of reifying equations. I find that just stunning that people would have come to those conclusions knowing that the Manhattan Project required unprecedented industrial capacity with experts from metallurgy and chemical engineering and electrical engineering and many, many areas well beyond theoretical physics. And so you have this really strange way of thinking about scientific and technical progress, let alone thinking about individuals who majored in one field of study or another. 
And again, that doesn't die down very quickly. As late as 1956, there's a case of a federal judge sentencing a Cornell grad student to jail time. The grad student had been accused of being a member of the Communist party. The student pled the Fifth Amendment against self-incrimination in court. That was not at the time recognized as a legitimate defense. That was righted a few years later with the Warren court. So he was held in contempt and sent to jail, not only for not cooperating, but because the federal judge was convinced that younger generation of pure scientists engaged in research in physics-- by which he meant this Cornell theoretical physics grad student-- had succumbed to communistic propaganda. Again, that's a remarkable leap or slippage between bombs or formulas and theorists or commies. That seems to go well beyond the evidence at the time about some instances of espionage that indeed were serious, but might have had a range of possible outcomes. Let me pause-- oh, the last part, the last slide before we pause, this again starts to have real world implications of many kinds. One that, again, I found rather surprising is if you go back and look in the Congressional record, which is now thankfully digitized, go up and count all of the hearings in the first decade after the end of the Second World War that this House Un-American Activities Committee held that involved any academics. They held all kinds of hearings about labor organizers, about school teachers, about Hollywood script writers. But they held lots and lots of hearings about academics, again, with this fear about poisoning impressionable youth. And you count up by anything you want to count-- the number of witnesses who were subpoenaed to testify, the number of hearings on specific topics, or the number of days devoted to those hearings-- by any measure, you see that they were overwhelmingly drawing on theoretical physicists in particular, much more than, say, chemists, much more than economists or philosophers or political scientists who actually once studied things like communist world systems or demand side economics or anything else. So the people who HUAC deemed most dangerous and most in need of these kind of interrogation high profile congressional hearings were people who by then they could associate with nuclear secrets which were formulas which were dangerous. And so this begins having a long tail. And there were just dozens and dozens of careers that were really destroyed because people chose to plead the Fifth Amendment or the First Amendment or had been briefly members of a Communist Party for a study group in 1935 and found it boring and left, and suddenly that came back to haunt them and were fired from tenured, as well as tenure track positions and all the rest, a very strong amount of blacklisting that went on that, again, many scholars have since begun to document. But you see, I can only make sense of charts like this by thinking about what were the depictions of how science and technology work, let alone what's the most important part behind making nuclear weapons. So let me pause there. There's a lot to chew on. The next parts will be quicker. So Gary asks, was Nixon in the House? Yes. So part of the famous HUAC hearings did involve a very young member of Congress, Richard Nixon, before even his first run for president, let alone his later runs. And so this was a bipartisan committee. It was a standing committee of the House. 
But then as now, the majority party could appoint the chairpeople, and so on. So during this particular period it was chaired by some Republicans who clearly were out against the Democrat Harry Truman for re-election. Again, that's neither surprising nor illegal. We have a two-party system in this country, for better or worse. But the way that the committee was activated had a very, very clear and very impactful partisan slant. The Alger Hiss case was exactly part of the exact same period, exactly right, Gary. Lucas says, you seem to know a lot about Soviet espionage during the Manhattan Project. What about German or Japanese wartime espionage? Very good. I don't know of any, and that's a fascinating question, Lucas. I can't think of a single instance. I'll turn to the TAs if any of them have come across it. I don't know of any. That's really interesting. There have been a handful more, like three or four more instances of wartime espionage, again, in aid of the Soviets or intended aid of the Soviets, often from US citizens during the war from the Manhattan Project that were totally unknown at the time and came to light literally decades later, but still only a handful. And all the ones I know of were people with the intent of helping the Soviets, which again, as I remind all of us, the Soviets were a wartime ally of the US, the UK, and Canada. And I'm not saying that excuses it, but that was often highlighted as a motivation for many of these folks. I'm not endorsing that behavior, but many of them would say the Soviets are losing many more members of their army than we are on behalf of the Allied front, these kinds of arguments for which I can understand where that was coming from at least. And I don't know of a single instance of even attempted, let alone successful espionage from the US projects, either for Germany or Japan. There was a lot more in the other direction. We talked briefly about US surveillance of the German nuclear project, including kidnapping nuclear scientists and all that. Johan asks, was physics in Allied occupied Germany-- what was physics like in Allied occupied Germany and Central Europe? Oh, very good question. That's a big topic. So the short answer is there were a lot of efforts soon after the war under something that became known as the Marshall Plan to do a heavy investment by the United States in many parts of the world, including Germany, to try to help them reinvest and reestablish a kind of civil society, partly as a bulwark against further temptations, it would have been called at the time, to align with the Soviets; partly, some revisionist historians have said, to make more markets and make it easier to trade for American commercial interests-- I think that's true. I think sometimes it's overblown, but certainly was part of the calculation-- and also for these kinds of geostrategic alliance efforts. So there was a kind of strategic use of redevelopment or rebuilding aid generally. And as some of my colleagues like John Krieger and other historians have shown, a fair amount of support for basic research in the sciences in many countries of Europe was undertaken with a similar kind of aim by the US, including by private US-based foundations, as well as US government dollars. Sometimes-- we now know those foundations were a cover for CIA money.
It was basically money laundering to buy influence or buy good relations with otherwise left-leaning scientists in France, for example, or to avoid what looked like an influence of communist thinking in certain otherwise influential cultural figures, including lots of scientists. So there was often very, in terms of dollars that were spent, very generous funding on basic sciences and institutes, including Niels Bohr's Institute, but many in France, many in what would become Western Germany, many in Italy, many joint summer schools and educational efforts under NATO, as well as just the US. And again, there was frankly a mixture of incentives or kind of strategic thinking behind that on the US side, much of which came to light only decades later, including, as I say, literal money laundering, as well as, let's say, more transparent funding that nonetheless had a range of motivations behind it. So anyway, more to be said, but that's a fascinating question. Alex asks, is it possible the AEC declassified this so-called most closely guarded secret in order to draw attention away from industrial secrets? That's really interesting, Alex. Maybe. Certainly that would be consistent with their motives. But what I find interesting is that it really was-- frankly, it was a kind of doublespeak. And again, my ability to find a kind of benevolent explanation in this instance is strained, seeing just how much was going on behind the scenes improperly, and in some cases I think actually illegally, with coordination between the prosecution and the judge or between federal agencies and journalists. There was a lot of very intentional leaking of otherwise protected information for strategic purposes. And again, I don't mean to say that was brand new. That's been happening as long as humans have kept secrets. But there was a lot of, I think, very strategic kinds of flows, basically of smear campaigns. I mean, that's really what I think it could be called. And even if the people being smeared had done terrible things that deserved punishment-- and I think in many, many cases that seems like a fair conclusion-- the means by which it was done were often at least as troublesome or as worthy of careful evaluation years later. So I don't know. It's just-- honestly, it's a cesspit. When you start digging into this stuff, it doesn't inspire one about human nature. I guess very few things in history do. But a lot of this gets super messy. And again, I can appreciate what people thought were the stakes. I mean, nuclear weapons, now the Soviets have one. All these people were active in espionage. You can see this kind of drumbeat of drama. So I don't mean to suggest that cool heads would have prevailed today. I don't think they would have. But nonetheless, with the fullness of time, with fuller documentation of what was going on in many parts of the world, not only within the United States, I think we can reevaluate many of these maneuvers that were done in this kind of high fever moment and maybe try to learn from them moving forward, put it that way. Let me move on. Let me go to these next parts. They'll be quicker. So let's press on and talk about this next section on the decision in the United States to pursue a new kind of nuclear weapon, the hydrogen bomb. And again, we saw some of this in the film, The Day After Trinity.
So in October 1949, a few years into the nuclear age, the Atomic Energy Commission's General Advisory Committee, often called the GAC, weighed in. This was a civilian advisory committee to the civilian Atomic Energy Commission with many experts, some of whom were just outside consultants, many of whom were Manhattan Project veterans but otherwise not full-time involved in the nuclear effort afterwards. This advisory committee of approximately a dozen people wrote a top secret report recommending to the federal government not to pursue the development of this new kind of weapon, the hydrogen bomb. In fact, some committee members actually argued that such a weapon would be, in their words, an evil thing in any light. And we'll see more about what these weapons were in a few minutes. But they were saying that there were all kinds of reasons not to pursue them. And at least some of these members made an explicitly moral or ethical argument that these weapons are, in the parlance of today, weapons of mass destruction that could only be used against civilians. These are no longer in any sense weapons for a military theater. That was a top secret report, though it, again, angered certain powerful political figures who were very gung ho on trying to develop things like hydrogen weapons. And Oppenheimer began to stand out more and more in many of these people's minds as a kind of drag on the system, that he was perhaps intentionally trying to sway his own colleagues on the General Advisory Committee. He chaired the General Advisory Committee at the time. Was he exercising undue influence on them? Was he trying to lobby against a new kind of weapon that some people thought the nation needed in order to secure the peace? And so on. So this played a very, very large role in the eventual security hearing against Oppenheimer, the result of which was that he was stripped of his security clearance and no longer consulted for the federal government, starting in 1954. So the H-bomb decision was fraught on many, many levels over many years. That's the idea. It became closely associated with Oppenheimer because he chaired this committee. The GAC report was highly classified. In fact, the text was only declassified and widely available starting in the 1980s, decades later. Now you can find it easily online in many, many places. And if you go back and read the actual report, it does indeed argue against the development of a hydrogen weapon, but not because of ethical worries and not because the members of the GAC were striking a kind of pacifist tone or an anti-nuclear tone, by any means. In fact, the report advocates a very aggressive nuclear stance. And they argue against hydrogen weapons, as we'll see in a few minutes, for several reasons: because the committee worried that it would derail too much of what they considered a critical effort to make more and more working weapons of the World War II type, that any effort to make as yet unproven hydrogen bombs would actually be too disruptive to the arsenal of other kinds of nuclear weapons, which the group said was essential to go on full speed ahead. So not only was Oppenheimer not being some pacifist dove saying all bombs are bad, they were actually-- Oppenheimer's own committee and Oppenheimer's cover letter and so on make it clear that he individually, and the committee broadly, were advocating an aggressive expansion of the nuclear weapons capabilities.
And it was only this minority report included as an appendix-- written not by Oppenheimer, but by other members, Enrico Fermi and Isidor Rabi-- that also raises an additional reason not to pursue a hydrogen bomb, where they speak briefly but in explicitly moral tones. That wasn't Oppenheimer at all, in fact, and it doesn't seem to have reflected his views at the time. And we know that, because Oppenheimer was doing lots of consulting until his clearance was stripped. So even after the 1949 report from the General Advisory Committee, he was leading all kinds of study groups for classified advice on nuclear issues. Another example of the many, many such efforts was conducted in the summer of 1951. It was held on Caltech's campus, but again, it was all classified work sponsored by the US Army and the Air Force, called Project Vista, and it had many, many parts. But one of those had to do with nuclear strategies for various scenarios vis-a-vis the Soviet Union. And the subcommittee that Oppenheimer actually chaired recommended that hundreds of nuclear weapons should be deployed throughout Western Europe, including West Germany and Allied NATO bases all around Western Europe, with actual working tactical weapons that could be used and should be used, they argued, to repel a potential invasion from the Soviet army. The worry at this point was that the Soviet army, in terms of numbers of troops, was just so much larger than any of the NATO-affiliated armies that if it were a conventional war fought in the early '50s, the Soviets would just roll over anyone else, was the fear. So this group said, well, then we'll just use usable tactical nuclear weapons on the battlefield to repel an otherwise numerically superior Soviet invading force. So Oppenheimer's own committee pushes for a first-use nuclear policy, which was incredibly aggressive at the time, that not only should these weapons be deployed, but we should use them first, even if the other side doesn't use nukes first. By this point the Soviets had their own [INAUDIBLE]. So this wasn't greeted with fanfare by everyone who could see it. In fact, it made more enemies with the Air Force, because this seemed to cede more control over nuclear weapons to the Army than the Air Force. There's always this internal rivalry and tension. So the Air Force said, we should be the only ones who handle nuclear weapons because we have the big bombs and the big planes, and we can deliver them. So Oppenheimer made more internal enemies, but not because he was saying, get rid of nuclear weapons, because he was saying, make more of them, and be ready to use them aggressively. I find that chilling. Meanwhile, what were some more of the arguments within the actual 1949 GAC report? Why else did they say not to pursue a hydrogen weapon and instead to focus on implosion fission weapons? A year and a half after the end of the war, in April of '47, the US still only had components for seven bombs. They weren't even assembled. If there had been a sudden change in the geopolitical situation, and there was deemed to be some need to start using bombs as they'd been used at the end of the Second World War, there were only parts for seven bombs even nearly two years after the end of the war. Two years after that, or two and a half years after that, the entire stockpile was still only 235 weapons. Of course, that was a closely guarded secret at the time. That's one of the things that has since been declassified.
So that was very, very much on the minds of members of the GAC, who had security clearance to know all these things. They argue that we should be concentrating on building up a stockpile of weapons we know how to make and that we know could be militarily effective, in their estimation. Likewise, they go on to say, the delivery systems, which by that point mostly meant aircraft, would limit the size of hydrogen bombs. The idea was that any design that might work-- and there were at that point no workable designs-- would be huge. All the ideas seemed to require enormous, factory-scale associated equipment like cryogenics and so on. You'd have to ship something like a small factory, part of which would then blow up, was at least the thought. How could you possibly get these things to a military target? They're going to be too large even for the biggest bombers in the Air Force's fleet. So we don't know how to make them, they're saying, and if we could make them, they wouldn't be practical for delivery. Remember, this is long before rockets, this is long before Sputnik, and so on. So as they write in their own classified report, there appears to be no chance of hydrogen weapons being an economical alternative to fission weapons, based on what they called the strict criteria of damage area per dollar. If we want to have a kind of nuclear arsenal, they're arguing, let's make it of weapons we know how to make and that we know could be delivered. There were even more esoteric challenges that the group was wrestling with, again, in the classified report, as well as other classified briefings. So at the time that the GAC wrote this report, all the known projected hydrogen bomb designs-- none of which had been tested or built yet-- all of the prototype ideas required huge amounts of tritium. That's a heavy isotope of hydrogen. They're called hydrogen bombs, but ordinary hydrogen, it was feared, wouldn't be enough. You needed more neutrons per hydrogen nucleus. You needed a heavy isotope called tritium. Tritium, then as now, was very rare and very expensive to produce. The naturally occurring amounts are trace. So again, much like plutonium, the idea is you'd have to make tritium in reactors. And the designs called for kilograms' worth of tritium per bomb, just like they called for tens of kilograms' worth of plutonium. Producing just 10 grams-- not even a whole kilogram-- producing 10 grams of tritium at the time meant diverting reactors that otherwise would have been producing plutonium for the known design of nuclear weapons. And so they go through this calculation. The US would forego 100 or more fission bombs per hydrogen bomb, even if the designs would work, because the materials that would be needed were so rare. And it was a direct trade-off, a zero-sum game, they argued, in how you produce the fissionable materials versus the fusion fuel. So they make this report, saying that for all these tactical and strategic reasons, we advise against a crash development of a hydrogen bomb, and it was delivered to many US officials, including the president. Nonetheless, really just weeks later, on the very last day of January 1950, the US President, Harry Truman, announced publicly that he was ordering the crash development of a hydrogen bomb-- that was a public announcement-- and at the same time imposed a gag order, so anyone with information about this could no longer speak to the press.
So he makes a public announcement, then limits other public discussion. And again, looking back, you say, oh, but what about all this smart input from the GAC? They had clearance, they were experienced, they made good arguments. Again, with hindsight, I can see how Truman might have felt like his hand was forced, not only because of arguments about tritium rates of production, but also because of what else was changing in the world around him, around many people. Remember, only weeks before he had announced to the world that the Soviets now had their own nuclear weapon, that announcement about so-called Joe I. One week after that, the communists in China won militarily. They beat the nationalists. China had had an ongoing civil war. The US had backed one side. The other side won, the explicit Chinese Communist Party. So now many people read that as a kind of domino effect of further communist expansion, first the Soviets, now the Chinese. That was at least how it was often read or interpreted in the US. And then just days before Truman's announcement, Klaus Fuchs confessed to espionage, saying, yes, I really was stealing stuff from the wartime Manhattan Project to give to the Soviets. So in some sense, we can understand how Truman might have arrived at that decision, even though there were, one might say, at least understandable and I think maybe compelling reasons given on the other side. Remember, this is all before there was a single notion of how to actually make a hydrogen bomb. Truman was saying, we'll work harder at it, basically. So more than a year after his announcement, in March of 1951, under high secrecy, there was the first really important conceptual breakthrough, we now know. It wasn't known widely at the time. And that was introduced by these two individuals-- Stan Ulam, who was interviewed in The Day After Trinity, a mathematician who worked at wartime Los Alamos and stayed much of his time at Los Alamos, and the theoretical physicist Edward Teller. And the two of them were working together and kind of bouncing ideas back and forth. And together, around the same time, they helped refine this idea in secret that's now called either the Teller-Ulam or Ulam-Teller idea. There's a real fight afterwards about who deserves more credit. So Ulam does seem to have gotten to the main idea first. Even Teller seemed to grudgingly admit that. So I tend to call it Ulam-Teller. The idea, again, is physically very interesting if we think only about the laws of nature for a moment. The idea was not to try to ignite a fusion reaction only based on the very high temperatures released by a fission bomb, but actually using radiation pressure. So these fission bombs give out tons of very high energy X-ray radiation in addition to producing a lot of heat. The early efforts to try to start a fusion reaction going, fusing very light nuclei together and thereby releasing energy, had all relied on heating the fusion fuel to very high energies and using a fission bomb as a so-called trigger to heat up the fusion fuel. And none of them seemed to work. You couldn't heat it fast enough, or all these kinds of concerns. It didn't seem to be feasible. And what Ulam and Teller together really put forward was that those fission bombs do more than just give off heat. They give out unbelievable amounts of very high energy X-ray radiation. And people knew, since long before Compton scattering days, that radiation carries momentum. It can exert pressure.
So then they realized that if they could begin building a kind of cylindrical design and channel or focus that very high intensity, very high energy radiation from this trigger, this fission bomb trigger-- explode a kind of plutonium bomb at one end of a cylinder and funnel or lens the X-rays onto the fusion fuel-- then it actually might ignite the fuel, and might require much, much less tritium, because the reaction might be more likely to get started. That was still just literally a thought. That was certainly not proven, but it was the first new conceptual insight after years of thinking about hydrogen weapons. So again, the timeline I find just astonishingly fast. We saw Truman makes this announcement at the very last day of January, 1950. A little over a year later-- still, of course, top secret-- Stan Ulam and Edward Teller introduced this really quite compelling new idea to at least start working on-- will it work, won't it work? A little more than a year after that, the US establishes a second full-scale top secret nuclear weapons lab, not too far from Berkeley, in Livermore, California, largely at the urging and the lobbying of Edward Teller and other political allies. So there was Los Alamos operating at full tilt, plus the second lab at Livermore. And the kind of gallows humor, the joke that soon developed at Livermore was that the main competition was actually Los Alamos, that the Soviets weren't the competition. The second lab had to prove its worth vis-a-vis the first lab. There was a real internal rivalry, much like Army versus Air Force and Navy and all these kinds of things. And again, just remarkably quickly, in November of '52-- so a year and a half after this first kind of sketch of a new idea to proceed-- the first working hydrogen bomb device was detonated by the US in the Pacific, on Enewetak Atoll, with an explosive yield that was indeed 1,000 times more powerful than the bombs that had been used against either Hiroshima or Nagasaki. So the wartime fission bombs were measuring outputs in the tens of thousands of tons, so tens of kilotons, of conventional TNT. And these hydrogen bombs were setting the scale up by a factor of 1,000. The first one ever tested had an explosive yield of more than 10 million tons equivalent of TNT. And then again, as you probably know, that was just the beginning of an enormous race of above-ground nuclear tests between the US and the Soviet Union, and eventually other countries as well, Britain, France, and some others. We're not so clear who was doing what when, but certainly in these early years, mostly the US and the Soviet Union. This is a plot just of the number of above-ground tests, not even including their yield or their radioactive fallout or anything else. As you can see-- this is the Trinity test. That's the first. These are only test detonations. Here's the one in 1945, several more in 1946, the first Soviet test in 1949. By the time we get to the mid 1950s, the US was conducting nearly 80 above-ground nuclear tests in a single year. That's more than one per week. That's extraordinary, the pace of this. And almost all of these now were variations of the hydrogen bombs, no longer these seemingly small-- not so small, but by comparison small-- fission bombs. And then again, you see a brief pause here and then another huge expansion of the above-ground testing regime. Let me pause there, take a few questions, and I'll get to the last part real quickly. Jade asked, what happened in the pause?
Do you mean the pause in the testing? We'll come to that in a moment. So that's a great question. We'll see in a second. Yeah, so very good question-- did the notion of physicists having these warped or unbalanced educational backgrounds affect their perceived credibility? It did in some quarters. So many people who were either career politicians or career military strategists said, perhaps with good reason, why should I listen to any of these funny college professors? That's usually a fair comment. They have no experience with, say, military strategy, for example, or with procurement or the supply chain or anything like that. So there were plenty of arguments that came down to, basically, these people don't have sufficient relevant expertise. And that's a perfectly valid critique, I think. It was also often used selectively, because they didn't seem to make that critique against physicists like Edward Teller, who were arguing a position that was more in line with their own position. So again, I think that argument has merit and was also nonetheless deployed kind of selectively or strategically. So that argument was made. It's not an invalid argument. And yet again, its actual usage was, in hindsight, maybe not so clear cut as it could have been. And for example, there were plenty of university chemists and physicists who were advocating-- not just Teller-- who were advocating the aggressive development of hydrogen weapons. And they were very eagerly listened to and given credit as being the relevant experts in classified congressional testimony, for example. So the question of who has the relevant expertise was a fair one, but again, it was deployed, let's just say, in a variety of ways. Other questions there? I'm almost at time. Let me start this next part. We can talk more about this next week, as well. But just to say the US effort to develop more and more of these hydrogen bombs continued. Here was what turned out to have been the largest above-ground test the US ever conducted, in March of 1954, the so-called Castle Bravo bomb, 50% more explosive even than that first Ivy Mike shot. This one became well known for many reasons, not just because it was the record holder, but also because this was a larger explosive yield than the designers had expected, in fact, more than twice as explosive as the designers themselves had predicted. And that meant that there was more radioactive debris from it, and the fallout traveled further than expected. Also it was an unlucky break with the wind. So the effects of this above-ground test in the Pacific were much more extensive than anyone had planned on. And what happened was there was a Japanese commercial fishing boat, a private fishing boat, very far from the test site, but it turns out not far enough, given just how far, hundreds of miles, the fallout eventually began to flow. And so the fallout did fall onto this fishing boat. Many people on the boat became sick very quickly with radiation sickness, quite severely sick. And this was no longer secret. This was now people in Japan who were clearly showing ill effects from a US above-ground test. That's one of the first times that questions about fallout or the dangers of radioactivity became commonly discussed in the broader kind of mass media within the United States. As we've talked about a few times, from good questions in the class, it's not that no one knew about radioactivity or its dangers.
But it wasn't usually talked about very broadly until events that somehow couldn't be denied got media coverage, like that Japanese fishing boat accident, a very horrible accident from that 1954 test. That begins to trigger much more community activism and scientist activism against above-ground testing, because of the dangers of fallout. In the United States, one of the most successful campaigns had as its popular face, one of the figures out in front of it, Linus Pauling, who'd already won the Nobel Prize in chemistry. He was not working on the Manhattan Project. He was an outsider scientist, but a very smart one, who was also very media savvy. And he was among the first to draft these kind of large-scale petitions with many other leading public figures to stop above-ground testing. Another, I think just completely fascinating, example was started a few months later by a different group of several nuclear scientists, biomedical and public health experts, and civilians who had experience in political organizing, members of the local League of Women Voters, for example, but otherwise not scientists, in the Saint Louis area. They called themselves the Committee for Nuclear Information. And they began this very famous baby tooth survey. I don't know if any of you will have heard of it. The idea was that because of all the above-ground testing, much of it within the continental United States, not just far away from the US in these Pacific islands, but lots of tests at the Nevada test site and elsewhere, we were in a sense bombing our own population, not with the immediate blast, but with these long-term and potentially long-lived effects of radioactive fallout. And as evidence of that, the idea was that radioactive fallout would land in the grass, the cows would graze, their milk would carry radioactive elements, children would drink the milk, and it would be taken up in the bones. It would be taken up in everyone's bones, but children's bones were especially easy to access because they lose their baby teeth. So here you have this perfect reservoir of calcium-rich bone material from humans that's absorbing some of these radioactive fallout elements like strontium-90, which was already by then known to be pretty bad, and iodine isotopes as well. So the idea was brilliant-- send us your children's baby teeth. We'll send you these standardized little information cards. Send us literally the teeth, and we can test them in the Washington University laboratory in Saint Louis and see just how extensive the radiation poisoning of the continental United States has been, due to the US's own above-ground testing. That was a brilliant political maneuver. And you can see it was this mixture of-- we might today call it citizen science. But it took some real political acumen from the League of Women Voters volunteers, as well as from these concerned scientists. And then this finally led to a unilateral moratorium on US testing starting in '58, in direct response to that. The US stopped testing, while the Soviets didn't stop. But then the US went back to testing once the Soviets started testing more and more massive weapons. And the real pause came actually after the Cuban Missile Crisis in October 1962, which seemed to have really shocked enough highly placed people, both in the United States and the Soviet Union. And that led fairly swiftly to the signing of the Limited Test Ban Treaty. It was called limited because it limited above-ground testing.
It drove all testing underground, where it then continued extensively for decades. But the idea was to stop at least the previous pattern of 80 or more above-ground tests in a given year. It really took a series of citizen activism, scientists getting involved on many fronts, as well as these really, I think, quite chilling and very dangerous kind of nuclear standoffs, all of these coming together by the early 1960s, to lead to a sense that above-ground nuclear testing was to be stopped. Let me pause there just to say that these next few slides are just to say the debates didn't end with that. There were then debates about delivery mechanisms, about missiles, about how many warheads should be on a missile, should there be anti-missile, anti-ballistic missile systems, and so on. Here's a preview for that next film. Because of all of this above-ground testing before it was driven underground, there are to this day lingering ill effects, for example throughout the continental United States, though not limited to that, where 3,000 square miles of the continental United States have now been deemed officially uninhabitable by the US federal government, because the remnant radiation levels are still too high. And that's the kind of thing that this next film, called Containment, tells us much more about. So the decades-long and ultimately, in this case, thousands-of-millennia-long environmental impacts of this relatively brief period of Cold War nuclear activity leave a tail much, much longer than only the early tests and early test bans. And that's just a preview for the film. So finally, just the last slide-- but these nuclear issues have never been outside of society. I guess that should be obvious. But the scientific debates, which were difficult and sometimes conceptually very rich, were not happening in a vacuum, and nor were changing notions of what scientists' own roles can or should be. Are they subject to special scrutiny because they're somehow the wizards of dangerous materials? Are they politically naive and therefore shouldn't be trusted? Should they be speaking out because they have special insight or knowledge? We haven't resolved these questions to this day, but they get really churned up and really amplified as burning questions, I think, during this period of the early and mid Cold War. So those are things we can again continue to think about later in the term. So again, I'm sorry to run a little bit long. I'd be glad to stay late and answer questions, but that's what I want to get across for now. Iyabo asks, was there competition similar to Livermore versus Los Alamos during the Manhattan Project? No, in that case, it really was just the Manhattan Project. There were different facilities that were not doing the same thing, so in that sense, they weren't rivals. Oak Ridge was doing different things than Hanford. With Los Alamos and Livermore, both labs had overlapping missions, more directly overlapping missions. And I think that bred more of a kind of immediate rivalry. So I think that was part of a difference. That's a good question. Again, sorry to run late. I'd be glad to stay on a bit longer if people have more questions, but we'll delve more into this next time. Thanks, everyone. Stay well. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_25_String_Theory_and_the_Multiverse.txt | [SQUEAKING] [CLICKING] [RUSTLING] [CLICKING] DAVID KAISER: Welcome to our final class session for 8.225 STS042. Kind of hard to believe-- hard for me to believe at least. But we've made it to the end of the semester, this very strange and unusual semester. So I hope you're all doing OK. It's I know it's crunch time, even more than usual, I think, with lots of final projects and end of term stuff. So I hope you're all hanging in there, doing OK. There's a little while left on your paper three assignment for this class, and hopefully the other assignments for your other things are well in hand as well. Any final logistical questions about the assignments, or really anything like that? If not, that's OK. I will jump in. I do have one set of lecture slides for today and a kind of wrap-up. We won't wrap up every last question in all of physics, I don't think. But we'll explore some ideas that kind of build on, or follow the story forward in time, compared to some of the recent things we've been looking at, especially in this relatively recent overlap space-- conceptual overlap space-- between very high-energy theory, particle physics, and studies of cosmology, this area that we now call particle cosmology. And so for today, I want to talk about some things that you might well have heard about. Some of you might have already had a chance to read up on this in some detail. But there are things that are often described in popular culture, there are ideas that are still clearly speculative. These are not proven by any possible measure, but they are, at least to some members of the community, well-motivated. There's reasons why lots of people think about these things and take them seriously, even though we, the community, certainly don't have any kind of consensus yet, let alone a lot of reliable, empirical or observational information to help us sift through competing ideas. So we're in this kind of awkward state where lots of folks are pursuing interesting and sometimes very strange-sounding questions, so they get actually stranger and stranger sounding, I think. And so I wanted to end our semester this term with this taste of some small set of the open questions that members of the community are still really wrestling with and grappling with today. So I'll focus on ideas about string theory-- some ideas about string theory-- and the multiverse. And there's lots and lots and lots written about this, often for a non-technical audience. And part of also what I wanted to do with this class was just give you some pointers for some books that I really like, some accessible, really very well-written popular books, by physicists with a range of opinions about this stuff. And so some of them are written by arch rivals conceptually, and they, together, I think can help us get a reasonable sense of at least what seems to be at stake, what are some of our colleagues so exercised about, and why do they have competing visions for next big steps forward on this intellectual journey we've been marching along together this term. Not all of it is in the Complete Idiot's Guide, though I do actually like this book by my colleague, George Musser. He's an award-winning science writer and a PhD in astrophysics. He actually is, himself, not an idiot, but he wrote this fun book for the Complete Idiot's Guide series on string theory, and of course, other books as well. 
So partly what I'll do with these slides is give little shout-outs, with some book covers, for some of these books that I enjoy; you might enjoy them, after the dust has settled on this crazy semester, if you're looking for some virtual beach reading for IAP. These might be of interest. So as usual, of course, three main parts for the discussion today. I want to turn the clock back and look at this really 100-year-long quest, 100-year-long challenge, to try to find a quantum mechanical description of gravity. It's still an elusive challenge. And we'll see what were some of the earliest ideas about that, and why has this approach taken such a different turn compared to other efforts to describe fundamental forces of nature quantum mechanically. So we'll look at, in some sense, where string theory comes from, and why some people have been so excited about it, compared to other approaches. Then in the second part, we'll look again, just very briefly, at some of the collisions, some of the possible implications, of thinking about these string theory ideas in the context of cosmology, getting a little closer to work that I've been involved with, or some of my colleagues have been more involved with. So when we take this kind of particle cosmology view, and take on board, or try to take seriously, some of the ideas or possible implications of the high energy theory part-- the string theory part-- what does that lead us, at least lead some people, to wonder about in this more cosmological setting? That's part two. And then part three, just very briefly, we'll zoom out and again remind ourselves that, like all the work we've looked at together over this entire semester, none of these ideas are unfolding in a vacuum, and that I think we can make sense of some of the rhythms of change, if not the particular ideas that come forward, by asking about who's doing this work in what settings, and some elements of a larger context that might help us make sense of the twists and turns. OK. So as I mentioned, this century-long challenge, a kind of grand challenge for the field, has been to find some internally self-consistent description of gravity that would be in the framework of quantum theory. We've seen-- some of it in detail, some of it only in hints-- that over especially the second half of the 20th century, each of the other three main known physical forces of nature-- electromagnetism, the strong nuclear force, and the weak nuclear force-- has been given a quantitative treatment in terms of quantum theory, in particular in terms of quantum field theory. We looked a little bit at quantum electrodynamics, the quantum mechanical version of electromagnetism, associated with people like Tomonaga and Julian Schwinger and Feynman and others. Likewise, we looked briefly at quantum chromodynamics. There's been this remarkable success, often in very close dialogue between experimental inputs and theoretical advances. And yet gravity, the fourth known force of nature, has been this stubborn holdout. Turns out the effort has been going on for a long, long time. One of the earliest people to try to build a quantum theory of gravity was this young person shown here, Matvei Bronstein, who was working in the Soviet Union in the mid 1930s. He was quite young, as you can see in the photo. Bronstein was writing papers, mostly in Russian in some of the Russian journals-- a few of them in German.
Not very well known outside of his immediate circle at the time, but he was doing what, in hindsight, we see was really pretty advanced stuff, and it has held up, I think, quite well over time. He recognized that if one tried to make a quantum mechanical treatment of Einstein's general theory of relativity, the reigning description of gravity, then you could try to formulate it on a model of exchange of virtual particles. The virtual particles, in this case, would have a little more complicated mathematical structure. That might not surprise us, given how much we saw even Einstein's own version of general relativity was so mathematically involved, this warping spacetime. Bronstein showed that you could actually reproduce the basic structure of general relativity as arising from the exchange of a certain kind of particle, a certain kind of force-carrying particle. Instead of the photon being the force-carrying particle for a quantum mechanical version of electromagnetism-- for quantum electrodynamics-- there would be a hypothetical graviton, a particle force-carrier for gravity, and it would be kind of analogous to the photon. It would have zero mass, just as the photon has zero mass. But it would have two units of spin, whereas the photon has one unit of spin, and most ordinary matter has half-integer units of spin, like electrons and quarks, or even protons and neutrons. So one could try to reformulate Einstein's version of gravity as arising from the exchange of a particular kind of force-carrying particle, a massless spin-2 particle, dubbed the graviton. That was done mostly in obscurity. And quite tragically, Bronstein was murdered about a year later. He never had a chance to explore this, or frankly anything else. He fell afoul, as many, many people did, of Joseph Stalin's purges in the Soviet Union. He had some political ideas that suddenly put him on the outs with the reigning authorities, and he was rounded up, found guilty in a so-called show trial, and executed the very same day. There was no chance for an appeal. His quest, in many ways, ended tragically, tragically early. Many, many years later, other colleagues, including many in Western Europe and North America, kind of rediscovered insights that Bronstein had put together way back in the 1930s. Much more famously, much, much better known, especially in the US and Europe, Richard Feynman followed along very similar thought paths in the early 1960s, when he himself got interested in gravity, not only particle physics. And Feynman also gave these now famous lectures that were published many years later, his lectures on gravitation, where he also taught his students that one could capture the mathematical structure of general relativity as arising from the exchange of these particular kinds of force-carrying particles. It's just classic. Now you might be saying, well, don't we already know that gravitons exist? After all, our very good friends at LIGO have found gravitational radiation. They certainly have, using these enormous detectors in Hanford, Washington, and in Louisiana, and now similar devices throughout Europe, and others under construction elsewhere. But this is really finding classical gravitational radiation. It is not identifying individual gravitons. It's like the difference between finding classical Maxwell waves, like radio waves, electromagnetic waves that are continuous and classical and extended through space and time.
It's a similar gulf between that and finding evidence of individual photons, individual quantized force-carriers of a quantum mechanical force. So whereas now we have-- thank goodness-- really quite compelling evidence about classical, extended, wave-like features from gravity, this still is not evidence of individual gravitons. That challenge is still elusive. Now even though there are not compelling experimental inputs about the behavior of gravitons, one could still-- and people have, over the century-- try to build a kind of quantum mechanical treatment of these hypothetical force-carriers-- these gravitons, massless spin-2 particles. And that's the kind of work that Bronstein began all the way back in the mid 1930s, and people like Feynman and many of his students tried to do head on throughout the 1960s, and so on. So you can basically say there's a kind of gravitational field, a kind of warping spacetime of the sort that Einstein had described. You can imagine little perturbations around some average value. You can try to treat that as some quantum mechanical object, all in direct analogy to the treatment of, say, the Maxwell field as composed of photons. But unfortunately, as people like Bronstein were finding early on, this does not lead to a clear way forward. This approach leads to infinities, when you try to account quantitatively for the behavior of virtual gravitons, much as one would do with, say, the exchange of virtual photons. But unlike the case of QED, or as people now know, even unlike the case of quantum chromodynamics or the others, there's no way to get rid of or to self-consistently absorb these infinities. And we can get a sense for that by going all the way back to one of the very first lectures this term. Even classically, we can get a sense, at least, for why there might be a disanalogy between, say, electromagnetism and gravity, when we try to treat them each quantum mechanically. We go back to things that some of our favorite wranglers-- my favorite wranglers-- were super excited about back in the 1840s and '50s, people like William Thomson and James Clerk Maxwell. They were working out this electromagnetic-- or sorry-- this mechanical worldview, where they found these mathematical analogies between, say, electrostatics and Newtonian gravity. They can each be written in terms of some potential, some function that extends throughout space and changes over time. In the electromagnetic case, the electric potential could be understood as responding to, or as we'd say, sourced by, where the electric charges are. So we could look at the variation, the gradient squared-- the Laplacian-- of the electric potential as arising from the charge density, the charge per volume. And if we just do some dimensional analysis-- just some quick dimensions, without trying to really calculate things carefully-- we can see that the charge density goes like the electric charge per volume, so it's like some charge, whether it's the charge of an electron or some larger collection of charges, divided by some volume. Also remember, as the wranglers certainly well knew, that this all-important quantity-- the wave number-- goes inversely like some characteristic length. So wave number is inverse to wavelength. And so the charge density, in appropriate units, goes like the wave number cubed. As I mentioned at least briefly some lectures ago, the electric charge, if we choose to measure it in so-called natural or appropriate units, is dimensionless.
In fact, the charge squared goes like 1 over 137. It's just a number, not a dimensionful quantity. So dimensionally, the charge density to which the electrostatic potential is sensitive scales like the cube of the wave number. Now let's try to follow that analogy for the gravitational side, even just for Newtonian gravity; a very similar scaling holds even for Einstein's more complicated version. Remember that the gravitational potential is sourced not by the electric charges, but by masses, or eventually, in Einstein's version, by mass-energy. So we have to worry about the dimensions of this source over here, the mass density. Well, that goes like some characteristic mass divided by volume. It's a density. The volume factor, we just convinced ourselves, scales like the cube of some wave number, but the mass, unlike the electric charge, is also dimensionful. And so something that was not known to either William Thomson or James Clerk Maxwell, but has become really clear to theorists over the course of the 20th century, is that that dimensionful quantity, mass, actually matters quite a lot. Thinking just from relativity, we saw, thanks to Einstein, that energy, mass, and momentum are all basically interchangeable. The famous expression E equals mc squared, we saw briefly, is really more generally a relation between energy, mass, and momentum, and they're all interchangeable with some kind of change of units, given by the speed of light. But if we measure these quantities in appropriate units, they're all dimensionful-- energy, mass, and momentum-- and in fact, they're all really kind of interchangeable. There's not a sharp conceptual divide anymore, in relativity, between mass, energy, and momentum. That's the relativity side. Then from quantum theory, we saw, thanks to people like Louis de Broglie, and built into the Schrodinger equation and much else, that momentum and wave number themselves are directly related. In fact, they're related by a different constant of nature-- Planck's constant. So if we go back and look at the scaling-- even classically, for Newtonian gravity-- the source for these gravitational disturbances, which we might choose to quantize, is a mass density, and that actually has a different scaling with wave number, and therefore with momentum, than in the electrostatic case. There's a disanalogy between, say, quantum electrodynamics and a putative quantum theory of gravity that has everything to do with insights from relativity and quantum theory-- E equals mc squared and the de Broglie wavelength. And so what that means, when you try to just follow an analogy to these otherwise very successful quantum field theories, is that those divergences become more severe in quantum gravity than any of the analogous ones for the other forces. Because gravity is sourced by mass, this dimensionful quantity, which itself is akin to a kind of momentum, the integrals-- when we try to count up the effects of virtual gravitons-- diverge with momentum more violently than even the ones for the other forces do. And it really comes down to these very elementary notions from both relativity and quantum theory. The scaling is different. What does that mean in slightly more practical terms?
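Before turning to the practical upshot, here is that dimensional bookkeeping in compact form. This is only a minimal sketch in natural units, with constants like epsilon-zero and Newton's G suppressed; it is not anything Thomson or Maxwell would have written down, just a summary of the scaling argument above:

\nabla^2 V \sim \rho_{\rm charge}, \qquad \rho_{\rm charge} \sim \frac{e}{L^3} \sim e\,k^3, \qquad e^2 \approx \frac{1}{137}\ \text{(dimensionless)} \;\Rightarrow\; \rho_{\rm charge} \sim k^3

\nabla^2 \Phi \sim \rho_{\rm mass}, \qquad \rho_{\rm mass} \sim \frac{m}{L^3} \sim m\,k^3, \qquad m \sim E \sim p \sim k \;\Rightarrow\; \rho_{\rm mass} \sim k^4

So the gravitational source carries one extra power of wave number, and hence of momentum, compared to the electrostatic source, which is the cartoon version of why the virtual-graviton integrals grow more violently at high momentum.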
We saw-- again, going back to these insights from Sin-Itiro Tomonaga, during the war in Tokyo, and then independently rediscovered soon after the Second World War by people like Julian Schwinger and Richard Feynman-- that for quantum electrodynamics, they could always arrange their equations so that they never had a bare infinity on its own. They could always organize things as these input-output relations-- this was the heart of what became known as renormalization-- and you could always have these combinations which remained finite, even though each of these, on their own, would diverge, usually logarithmically with momentum. So each of these, if you tried to calculate them on their own, would be formally infinite, but their sum was well-defined and finite. And what people like Freeman Dyson and others showed is that that trick holds, order by order, in perturbation theory. You can make arbitrarily precise calculations, and these calculations hold all the way through. What it means for quantum gravity to have these more violent divergences, the stronger growth with momentum for these virtual graviton processes, is that you actually need an infinite set of these absorbing coefficients, that you can't arrange things just in the input-output way. You never have a finite number of these terms absorbing the infinities. So you can never absorb or cancel the divergences, at least from this first approach to quantum gravity. So this leads to a hard problem. And one of the first books that I'll recommend today is this really lovely book that came out 20 years ago by the physicist Lee Smolin. It's, I think, a very accessible, very nice book. He is a leading theoretical physicist. He's also a very gifted author. This book is called Three Roads to Quantum Gravity. And he really goes through some of these early arguments as to why the analogies failed, time and time again, over decades, for generations, when people tried to apply insights from something like quantum electrodynamics to try to just do the same thing for quantum gravity. So what we can now call, somewhat tongue-in-cheek, old school quantum gravity is not subject to renormalization. You can't get any sensible, finite answers from trying to deal with the zigzagging of virtual gravitons, because there's a stronger dependence of gravity on things like mass and momentum. OK. So read Lee's book. He'll explain it better than I do. But I think that's, in brief, some of what was going on over, really, decades in this first wave of effort to combine gravitation and quantum theory. While all that was unfolding, and the dead end was becoming clarified, in a quite different branch of physics-- now going back to ideas about the strong nuclear force, high energy nuclear physics and particle physics-- in the late 1960s, a number of theorists, some of them actually, at the time, based at MIT, were trying to make sense of the strong interactions. One of the main approaches we looked at, from the middle decades of the 20th century, for the strong force was Geoffrey Chew's so-called S-matrix program. He was flipping these Feynman diagrams around, finding these self-consistent, dynamical solutions where basically nuclear particles could make the force-carrying particles that would bring the particles together to make those force-carrying particles. Could you find one self-consistent set of relationships among the particle zoo?
And so younger theorists were trying to play with that self-consistent swapping structure, and they kept finding these extended objects-- geometrical objects-- in space, where a lot of the force among these particles would be spread out, splayed out in a cylinder or tube, a kind of string-like flux tube. And that really was, as far as they were concerned, what might explain the nuclear force that keeps things like protons and neutrons bound within atomic nuclei. That was being published starting in 1967, '68 by a small little circle of specialists. It was curious. It was interesting. And it was very quickly overshadowed, for reasons that I think we can make sense of, given what we saw together: the kind of gathering force, the kind of choppy road toward the quark model and ultimately quantum chromodynamics. So we saw, although the idea of quarks had been introduced as early as 1964, it took really a solid decade, into the mid 1970s, before even the quark model proponents themselves, like Murray Gell-Mann, were saying that these are physical entities in the world. And it was really only after the successes of this very specific quantum mechanical field theory-- quantum chromodynamics, or QCD-- where the various pieces came together and were also bolstered by some helpful new experimental inputs. So by the mid 1970s, it looked like people had this strong nuclear force well in hand. And all these kind of weird, funky, geometrical flux tube things from the Geoffrey Chew-inspired work kind of faded away. They weren't proven to be wrong, they just seemed like they were no longer the best road forward. The quark model finally seemed to be scoring success after success, and that was built, as we saw, in much closer analogy to quantum electrodynamics. You imagine elementary point particles with certain kinds of charges-- in this case, quarks and gluons-- and you trace the virtual particles among them, and all that in a framework much more like quantum electrodynamics. So the early string ideas really kind of fell away, not because they were proven wrong, but because they seemed like maybe not the best way forward to understand the strong force. Before people stopped paying attention, though, a few of this small circle recognized something curious in their equations for the Geoffrey Chew-like, string approach to the nuclear force. If you look at the low energy limit of these complicated, self-consistent, highly nonlinear equations, it looks like, at low energies, this seemingly nuclear-type interaction includes the exchange of a massless spin-2 particle. And some people said, wait a minute. I remember hearing something about that. A massless spin-2 particle maybe comes up in other contexts, like in gravity. And so a few groups who knew about the strong-force, stringy work said, maybe these insights about strings, extended 1D structures, have nothing particularly to do with the strong force; maybe that was a theory about gravity and not about nuclear particles. If so, then the typical length scale for these things would not be the size of a single nuclear particle like a proton, it wouldn't be femtometers-- 10 to the -15 meter-- it might be the Planck length, the smallest possible length, above which one should be able to ignore quantum gravitational corrections. Maybe the actual fabric of Einstein's warping spacetime consists, at these unimaginably tiny scales, of the interplay of these extended one-dimensional objects.
Maybe the strings are what give rise to the effects of gravity, not what binds quarks within a proton, for example. So because the low energy behavior of these stringy structures gave rise to just the kind of particle needed to make sense of gravity in terms of particle exchange, the ideas about string theory became very interesting to a small circle as a possible road toward quantum gravity-- quite different from the kind of particle-scattering, QED-type model. In fact, it looked even more promising, as many of these folks began to put the pieces together by the early and mid 1980s, roughly 10 years later. It looked like calculating effects-- quantum virtual effects-- in this string theory of gravity might actually avoid all these messy infinities altogether. Why was that? The infinities keep arising in these quantum field theories because people integrate up to an infinite virtual particle momentum. That's like imagining that we could model the exchange, the scattering of, say, a virtual force-carrying photon and an electron, as if they meet literally at a mathematical point. They meet in an infinitesimally tiny region of space. That suggests that they could be scattering off each other with a borrowed momentum up to infinity, up to 1 over 0. The delta r could shrink to zero. That means that the momentum exchange could, in fact, become infinite, which is why you have to integrate those integrals up to infinity. Well, these string objects are sweeping out not world lines through space and time, but actually world sheets. They are extended objects, so the analog Feynman diagrams never pinch down to a single mathematical point. In fact, you have these kind of pants-legs structures, where one kind of extended structure might blend into others, but they never pinch down into a mathematical point. So if you never have these point-like vertices in the kind of scattering among these putative strings, then you actually never integrate up to literally infinite momentum. So maybe virtual processes involving strings would naturally avoid all these divergence problems that seem to hang up the old school or classic approach to quantum gravity. So maybe you could have a finite theory of gravity that has the right particle exchange in the appropriate limit and avoids infinities. So why weren't we done in 1984? Well, because the same scholars found a pretty interesting catch. Some would call it a bug; few would call it a feature. The simplest models that could be written down, that could be applied to a gravitational model, where these strings could be dancing around at the Planck length and giving rise, in the appropriate limit, to Einstein's general theory of relativity-- they could only live mathematically in 26 spacetime dimensions. I put that in bold and italics, so we'll linger on that for a second. The earliest self-consistent models, when applied to gravity to get the right limit at low energies, to make it look like Einstein's theory at low energies-- in those models the strings weren't dancing around in three dimensions of space and one of time, XYZ plus T. They could only dance around in a mathematically self-consistent way if they were living in a 26-dimensional spacetime. It seems pretty clear that we live in a four-dimensional spacetime, so what's going on? A little while later, some of these physicists found that they could actually make mathematically self-consistent string models of gravity.
They could do it if they incorporated a particular symmetry-- a strictly hypothetical symmetry, for which even to this day there is no empirical evidence at all-- a symmetry called supersymmetry, which suggests that for every known particle, every quark, every electron, every neutrino, and all the rest, there is a partner-- a superpartner particle-- with the same mass and same charge, but differing by half a unit of spin. So for every spin one-half particle, like quarks and electrons and neutrinos, there would be integer spin superpartners, and vice versa. So for every integer spin photon, there would be a half-integer or three-halves spin superpartner, like a photino. For every quark, there would be a squark, and so on. So at the cost of doubling every single known form of matter, these superstring theories could be formulated in only 10 dimensions of spacetime instead of 26. That's getting more than halfway there, but it still means a lot more structure than what we seem to measure or just observe in our daily lives. So these early superstring theories for gravity had some unbelievable promise. These people weren't simply deluding themselves in thinking that these could be of interest. But it wasn't such a straightforward deal. So we can then go to some of the kinds of plots that I love to make and start counting stuff. So the blue curve shows all physics articles as covered by something called Physics Abstracts, a kind of worldwide consortium of physics-trained librarians in many parts of the world, trying to literally count up and categorize, by subtopic, all the articles by professional physicists in the peer-reviewed journals, in as many languages as they could master. So that's the blue curve. And you see it's growing fairly steadily over this period. And the red curve-- and I've renormalized it myself-- the red curve is then all the articles on string theory, including superstring theory, over the same time period. To fit them on one plot, I've actually divided the blue one by a factor of 100. So by the end of this period, the red curve is about 1% of all the publications in all of physics. The curves look almost equal, but I've cheated by changing the blue curve's amplitude by a factor of 100. So you see these very distinct structures in the effort expended on superstring theory, compared to global efforts on every branch of physics altogether. You see virtually no attention at all to string theory. It really is a very curious little specialty area until the mid 1980s. What proponents of this work later called the first string revolution-- which, you have to know, they only named after a second string revolution, right? No one called World War I World War I until World War II. So what later became called the first string revolution, in hindsight, was identified with this work that was really published by a few independent small groups around 1984, where they began showing that the stringy stuff could include the structure of gravity, if you imagine that string-like filament to be describing gravitational interactions at the Planck length, as opposed to nuclear physics. And indeed, there were several different, self-consistent, supersymmetric models published all in 1984, '85, and that led to an enormous burst-- an exponential rise of interest-- in string theories, coming very soon after that so-called first revolution. Then there's a kind of plateau for about a decade. And then there was what the proponents came to call the second string revolution, almost right on schedule, 10 years later.
What happened with the second string revolution, in brief, was that a few of these specialists looked again at what seemed like five totally separate, distinct, but self-consistent superstring models. They looked different from each other. They all were mathematically self-consistent, even though they looked like they were different candidates. In the second wave, people like Edward Witten and other colleagues found a kind of one-to-one mapping. So in fact, these five different models might all be ways of expressing a single shared model. These might be almost like coordinate transformations of a single underlying model, and that became known as the second revolution. So if one performed what became known as dualities, it really is analogous to coordinate transformations. You can actually map, feature by feature, those seemingly five distinct models into a single structure. That got people excited again. You see another exponential takeoff of interest. So that by the early 2000s, efforts on string theory, which had once been vanishingly tiny, occupied nearly 1% of all the physics output of all the physicists on the planet. So let me pause there, see if there are any questions on that kind of alternate approach to an effort to try to quantize gravity. Any questions on that? It's so straightforward. You're all convinced. You're all now card-carrying string theorists. No lingering questions? Julie is in. Thank you, Julie. I'll tell Andy Strominger. He'll be glad. Any other questions? If not, that's OK, of course. But I'll press on. Because as if that weren't strange enough, boy, have I got news for you. You think that's strange-- we live in 10 dimensions-- let's see where people take this, since they're now paying attention. OK. Now I very blithely told you-- and none of you seem to object, so clearly it's the end of the term-- that these superstring theories can only be formulated in a minimum of 10 spacetime dimensions. They still only involve one timelike dimension. That means nine dimensions of space-- height, width, breadth-- and six others all at right angles to them. I mean, I look at where the walls meet in my house and I can't find where the other six right angles would come from. So it's not just a failure of my own imagination, which fails all the time. This leads to some pretty strange-sounding possible implications. Let's take a very simple one, going back again, right to the very first class sessions from this semester. Think about, say, Faraday's lines of force. Unless something very strange is happening, we really can't live in a 10-dimensional world. If you imagine a single mass-- some source of gravitation, some ball, with some mass, a spherically symmetric one, to keep things simple-- set it down at the origin of our coordinate system. Then just like a single electric charge that got Michael Faraday so excited in the 1820s and 30s, we can try to imagine how lines of force would emanate, in this case very simply, very spherically symmetrically, lines of gravitational force emanating from that source, that mass M at the center. Well, we can then try to do like Faraday did and envelop, kind of surround, our source with an imagined sphere. We can change the radius of the sphere, and we can imagine how the strength of that force should fall off with distance. Now again, as you know, the surface area of some sphere will scale as one power less of the radius compared to the volume. That holds in three dimensions.
The surface area of a sphere goes like R squared, the volume goes like R cubed. That same geometrical relation holds for any number of dimensions of space, for any hypersphere-- in particular one that's in, say, nine dimensions of space, where we can self-consistently define the surface area by projecting onto an appropriate submanifold, to be all super fancy and geometrical about it. We can calculate the surface area of a nine-sphere, and the area will grow like the radius to the eighth power-- like R to the D minus 1. So if we just redo Michael Faraday's really lovely, ingenious in fact, geometrical scaling, then if we really think gravity lives in nine dimensions of space, then we would expect Newton's law to go like 1 over R to the eighth, instead of 1 over R squared. Why are we finding inverse square laws, even in the context of Einstein's general theory of relativity, to very high accuracy? Whereas if we just have gravity spilling out across nine co-equal dimensions of space, we would expect a completely different behavior for the force of gravity. So what's going on when you say that this theory of gravity is formulated in nine dimensions of space? One of the earliest efforts to address that is called compactification, which really just means make the extra dimensions super tiny. So the idea is that, for as yet unexplained or unknown reasons, these extra dimensions of space are not arbitrarily extended. They're not macroscopic. I can't take a walk in dimensions number 4 through 9, because, at least hypothetically, they've been curled up onto each other. And the way the analogy is always made, and you might have heard before, we think about a soda straw or a garden hose. Now if you're very close to that straw or hose, you can see it's an extended object. It has both a length and a width to it. But if you look at that object from very far away, it looks indistinguishable from a one-dimensional line, an arbitrarily thin line extended only in one dimension. So the idea was, what if these extra dimensions of space are not macroscopically large, but in fact, are curled up on each other? They have a kind of internal radius that never becomes large, that's controlled by some, as yet, hypothetical kinds of dynamics, so that from our clumsy human-sized scales what is actually a garden hose at an impossibly tiny radius looks to us indistinguishable from being merely a line of zero width. That's the kind of hand-waving version of how you could compactify, in this case, one extra dimension. So we might live in a universe with X, Y, and Z, macroscopic height, width, and breadth, and then W, let's say some fourth dimension of space. But that one is a tiny little circle that we don't probe at low energies, and so we never see this structure, because we're never measuring it on the right length scales. If the radius of curvature of that garden hose were of order the Planck length, we'd have no way to know it directly, because we can't measure anything with such high spatial resolution. So that's the idea to compactify these extra dimensions. That works pretty straightforwardly if you only have one extra dimension of space to get rid of. But remember, these superstring theories require at least six extra dimensions, nine dimensions of space, plus one of time. So we have three macroscopic ones that we can self-evidently move around in. So you have to compactify six extra dimensions, not just one.
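As an aside, the Faraday-style flux-counting argument above is easy to check numerically. The little sketch below is purely illustrative-- the function name and the particular radii are my own choices, not anything from the lecture-- but it shows why spreading the same gravitational flux over the surface of a hypersphere in D spatial dimensions makes the field strength fall off like 1 over R to the D minus 1: like 1 over R squared for D equals 3, and like 1 over R to the eighth for D equals 9.

import math

# Surface "area" of a sphere of radius r sitting in D spatial dimensions:
# S(r) = 2 * pi^(D/2) * r^(D-1) / Gamma(D/2).
def hypersphere_surface_area(D, r):
    return 2.0 * math.pi ** (D / 2) * r ** (D - 1) / math.gamma(D / 2)

# Faraday-style flux counting: the same total flux spread over a larger
# surface dilutes the field strength in proportion to that surface area,
# so doubling the radius weakens the field by a factor of 2^(D-1).
for D in (3, 9):
    dilution = hypersphere_surface_area(D, 2.0) / hypersphere_surface_area(D, 1.0)
    print(f"D = {D}: doubling the radius dilutes the field by a factor of {dilution:.0f}")

# Prints a factor of 4 for D = 3 (the inverse square law) and 256 = 2^8 for D = 9.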
It turns out, again, as people began finding as early as the mid 1980s, that this becomes an unbelievably complicated problem in topology, that there are estimated to be about 100,000 topologically distinct ways to curl up, or fold up, these six extra dimensions. So at every point in XYZ that we could walk along, there would be curled up, maybe in some different pattern, six extra dimensions of space, with a characteristic scale of order the Planck scale, 10 to the minus 35th of a meter. And yet the actual curling up-- the shape of it, really, the topology-- could be different from one location to the next. So people start talking about these so-called Calabi-Yau manifolds, a topic of great interest to pure mathematicians, and some of the string theorists wondered whether that kind of topology accounting would be useful for making sense of these shrunken-down, compactified dimensions. My next book club suggestion is a lovely book by a colleague, Lisa Randall, who teaches at Harvard. She used to teach at MIT, but Harvard convinced her to move all the way across campus. It's a really lovely book called Warped Passages, in which Lisa walks through some of the reasoning behind these extra dimensions of space and the different ways to try to make them go away from our everyday experience. She explains all that, I think, really very nicely. So the first main conceptual approach to these extra dimensions is to curl them up, even though it turns out that's pretty complicated to do. It got even worse. I would call it worse. It got more complicated in the early 2000s, and we're getting kind of close to where we are today. Other string theorists began to realize you don't only have to worry about the actual kind of dimensions of space, which can be topologically complicated in those so-called Calabi-Yau manifolds, but you actually have to worry about self-consistent configurations of, say, these strings within and among those dimensions. So you don't only have topologically distinct configurations of geometry, you actually have some physical degrees of freedom dancing around on them. And as you can imagine, you get quite a different solution if your string is wrapped around a sphere, versus if it's wrapped around a torus, versus if it's wrapped around a three-torus or whatever else. So when they began counting up the actual number of low energy string theory states, worrying not only about the geometry of the dimensions of spacetime, but actually about how these physical degrees of freedom-- these string-like states-- could be distributed within and among them, they came up not with 10 to the 5, 100,000-- they came up with 10 to the 500, which is more than a rounding error. So they began to be concerned that there seemed to be 10 to the 500 distinct low energy string states-- 10 to the 500. I won't ask for a show of hands, but it's pretty hard to come up with numbers like 10 to the 500. I've tried. Let me give you some of my mundane examples. If internet accounts are to be believed, then Jeff Bezos has 10 to the 5 dollars for every buck that I can lay claim to, and that means leveraging everything-- sell the house, the kids never go to college, forget the car. Every single cent I could rub together, Bezos has more than $100,000 for every dollar I could scrape together. So that's either impressive or depressing. It's mostly depressing. But 10 to the 5 is nowhere near 10 to the 500. OK, so now let's go cosmo-- let's take the whole universe. Let's put the universe to work for us.
If we try to measure the age of our universe in seconds, we get a measly 10 to the 17. If we're a little fancier and say, let's measure the age of the universe with the shortest unit of time that has been measured so far in quantum optics, that's about a femtosecond, so 10 to the minus 15th of a second. That only gets us to something like 10 to the 32. We're still scraping. That's nowhere near 10 to the 500. Now let's compare, say, the mass of the entire Milky Way galaxy to the mass of a single electron. That's really big. It's a big galaxy. Electrons are tiny. That ratio is 10 to the 71. You get the idea, right? 10 to the 71 is basically equal to zero when we're comparing it to a number like 10 to the 500. So what became either interesting or horrifying to me, I think-- horrifying-- in the early 2000s was that a number of these string theorists realized that there were unfathomably large numbers of what looked to be self-consistent, low energy string states within these superstring theory models. Now why should we care? Because, at least within this superstring theory framework, every single quantity with which we might try to characterize our universe-- the masses and charges of every kind of particle, every electron, quark, gluon, everything-- would depend, in principle, on the particular one out of those 10 to the 500 string states in which our universe happens to land. So even for elementary particles, this is a huge mess. Why is the charge of the electron that value versus something else? Even for bulk or astrophysical properties with which we characterize our universe, things like the expansion rate-- that's the letter H, the Hubble expansion parameter-- the rate at which galaxies recede from each other, or things like the value of so-called dark energy, or the cosmological constant, even the bulk properties with which we try to characterize our universe at large, each of those numbers, the actual quantitative value, would, in principle, depend on which one out of these 10 to the 500 distinct string vacuum states our universe happens to be in. So how do we make any predictions, or how do we make any sense at all, empirically or quantitatively, out of the universe we live in, either from data like the Large Hadron Collider or from all of our astrophysics and cosmology, if suddenly we have no way, or at least no practical way, it seems, to relate this kind of seemingly fundamental description of nature at these very tiny string scales to the things we can actually probe, measure, and wonder about? That became this new twist in the string theory saga in the early 2000s. So here, once again, is Alan Guth. That idea about the so-called string landscape-- these 10 to the 500 equally self-consistent, low energy states of string theory-- that set of ideas kind of collided with some cosmological ideas which had first been developed quite independently. So we talked, in the most recent class session, about cosmic inflation, a set of ideas that I dearly love. It turns out early on in thinking about inflation, people, including Alan and Paul Steinhardt, and actually a number of folks we mentioned briefly last time-- Andrei Linde-- began to recognize something curious about these inflationary models. Basically, each inflationary patch-- each inflationary phase, let's say-- has some kind of natural lifetime. It's almost like a radioactive decay, that the configuration of those Higgs-like fields that temporarily drive this very rapid, accelerating expansion of space and time, they have a kind of natural decay, or natural lifetime.
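(As a quick aside before continuing with the inflation story: the order-of-magnitude comparisons just mentioned are easy to reproduce with a few lines of arithmetic. The input numbers below are rough, round values chosen purely for illustration-- nothing here comes from the lecture itself-- but they land close to 10 to the 17, 10 to the 32, and 10 to the 71, all of which are vanishingly small next to 10 to the 500.)

# Rough, illustrative inputs only; all values are approximate.
year_in_seconds = 3.15e7
age_of_universe_s = 13.8e9 * year_in_seconds           # age of the universe in seconds
age_of_universe_fs = age_of_universe_s / 1.0e-15        # the same age counted in femtoseconds

solar_mass_kg = 2.0e30
milky_way_stellar_mass_kg = 6.0e10 * solar_mass_kg      # rough stellar mass of the Milky Way
electron_mass_kg = 9.1e-31
mass_ratio = milky_way_stellar_mass_kg / electron_mass_kg

print(f"age of universe:      {age_of_universe_s:.1e} seconds")
print(f"age of universe:      {age_of_universe_fs:.1e} femtoseconds")
print(f"Milky Way / electron: {mass_ratio:.1e}")
# Roughly 4e17, 4e32, and 1e71 -- nowhere near 10^500.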
It's not infinite. In fact, we know it was a very short-lived period of inflation. And in fact, we can estimate for a given model what a half life would be for how long that phase is expected to last. It's probabilistic, it's a quantum mechanical process, but it's very similar to calculating a radioactive half life. And what became clear is that for many, many, many of these models-- models that otherwise seemed to match predictions on our sky beautifully, models that seemed to have many nice features-- the decay rate, the inverse of that half life, so to speak, is actually smaller than the expansion rate. That is, in regions that have not yet decayed out of that phase, space should be stretching exponentially quickly, more quickly than the rate at which neighboring patches of space will fall out of that inflating phase. This became called eternal inflation. And the idea is that-- at least it seems self-consistent to suppose, based on what we otherwise know-- if inflation begins anywhere in the universe, it never stops globally. It will stop at that location. It's not inflating for us right now. It'll stop at that location, but there'll be some other regions of spacetime that are still growing exponentially quickly. And that's because there's this competition between the rate for any given location in space to fall out of inflation-- that's the kind of capital gamma, the kind of lifetime of the inflating phase-- and the rate at which the volume of some neighboring region continues to grow. So even though it's perfectly self-consistent for our universe that we see around us-- our observable universe-- not to be in that inflating phase right now, or to have fallen out of the primordial one, for all we know it looks more likely than not, at least in this scheme, that some regions so far away from us, we haven't received even a single light beam yet, could still be in an inflating phase. And that could happen forever, because when that neighboring point falls out of inflation, some other volume of space will grow quicker still. So that led to this idea that Andrei Linde in particular really championed, and Alex Vilenkin at Tufts-- many scholars who were interested in cosmology-- began wondering about a so-called multiverse. Could it be that our observable universe-- everything we've seen, everything we could measure, the CMB and all the rest-- is actually just one self-contained region of space and time within an arbitrarily larger region, in the midst of which other patches are still inflating exponentially and others have [INAUDIBLE] out as well? Could you have an infinite multiverse? If inflation never ends, if it really could happen towards T equals infinity, then in principle, you have an infinite volume of space. We occupy, we can observe around us, a very tiny, self-contained bubble within an infinite bathtub. We're like a soap bubble within a bathtub that actually is infinitely large in volume. So that was the idea of eternal inflation, suggesting-- not proving, but suggesting-- there could exist a multiverse, a huge collection of pocket universes well beyond our own, from which we can never gather information. And that idea then collided with, or was merged with, this series of ideas from string theory in the early 2000s, in particular the so-called string landscape.
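To make that competition of rates a bit more concrete, here is a deliberately crude toy calculation. The numbers for H (the expansion rate) and Gamma (the rate for falling out of inflation) are made up purely for illustration, and real eternal-inflation bookkeeping is far subtler, but the basic point survives: the volume that is still inflating keeps growing whenever 3H exceeds Gamma.

import math

def still_inflating_volume(t, H, Gamma, V0=1.0):
    # Exponential growth of space (e^{3 H t}) times the exponentially
    # shrinking fraction of regions that have not yet decayed (e^{-Gamma t}).
    return V0 * math.exp((3.0 * H - Gamma) * t)

H, Gamma = 1.0, 0.5   # decay slower than expansion, so 3H > Gamma
for t in (0.0, 5.0, 10.0):
    print(f"t = {t:>4}: still-inflating volume ~ {still_inflating_volume(t, H, Gamma):.2e}")
# The still-inflating volume grows without bound -- the sense in which
# inflation, once begun, never ends everywhere at once.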
If there are 10 to the 500 distinct, self-consistent, low energy string states, and you have an infinite number of chances for them to be realized, because you have an infinite multiverse within which you can have these self-consistent pocket universes constantly inflating and then falling out of inflation, while neighboring regions inflate, could it be that every single string state is actually realized an infinite number of times? So 10 to the 500 is big. It's infinitely smaller than infinity, right? You can see where-- I hope it's clear that I'm not endorsing these ideas. I find these ideas pretty unusual, is the nicest way I can put it. But I can at least appreciate why so many of my colleagues have been led to wonder about them. You can see the thought train that leads to this as the next question or tentative conclusion. You can also see how it's getting harder and harder to get the reinforcement, or the input, or the kind of dialogue with empirical data, unlike, say, the early days of quark physics. I hope I can at least convey to you, as a collegial neighboring skeptic, why people ask these kinds of questions, this kind of merger of ideas from string theory and from things like inflation. OK. So if every one of the parameters that we measure empirically-- the masses and charges of elementary particles, larger properties like the Hubble expansion rate-- if they were slightly different, people began to wonder, would the universe, as we know it, would our observable universe have shown anything like the pattern of evolution that we have actually empirically measured? Could there have been galaxies that actually formed stably if the expansion rate were slightly larger? The idea was, if the expansion rate were actually a little bit larger in early times, would matter never have overcome the expansion, so that galaxies never coalesced to become gravitationally self-bound? If the expansion rate were slightly larger than it happened to have been in our observable universe, maybe there would never have been galaxies. If there are never galaxies, could there be things like stable solar systems or planets? You can play this game all the way down. Could there be life as we know it, if any of these seemingly fundamental parameters had been slightly different from the values that we happen to measure? Now it turns out, historians can remind us, that's a very old argument. If that sounds familiar, especially to our historians of science in the group, you're right. That idea-- that if any parameters of our environment had been slightly different, then we wouldn't be here-- that goes back, in the European tradition, at least to the late 17th century. I'm sure we could find clever anticipations even earlier. But some of the most famous articulations of that come from this very charming book by Bernard de Fontenelle, a French natural philosopher, translated as Conversations on the Plurality of Worlds. It's still in print. You can buy a cheap paperback. Very soon after that, Isaac Newton was writing letters privately to a theologian friend of his, Richard Bentley-- the letters were published soon after Newton's death-- where he was musing on this as well. To these folks, this kind of fine-tuning argument was absolute, incontrovertible proof that God exists. And to them that was a fairly recognizable Christian God of the Bible, in their reckoning. The idea was that God must have made the world just for us.
If the Earth were slightly further from the sun, it would be too cold for us to have flourished. If the Earth were slightly closer to the sun than it is, it would be too hot. These kinds of fine-tuning arguments have a very, very long history in astronomy and astrophysics. And for a lot of the early advocates, this was taken as literally proof of a biblical story of Genesis, that God made the universe literally for us. So is that how these later string theorists formulated it? Not really-- not at all. So 2/3 of that fine-tuning argument sounds just like what Isaac Newton or de Fontenelle had been saying 300 years earlier. But some of these later, more recent advocates give it an interesting twist-- they appeal not to a religious or a kind of supernatural force, they appeal back to us, in something called the anthropic principle, which again you might have read about. The idea, as I've mentioned before, as even Fontenelle and Newton had recognized, is that the natural constants seem to have to be within very specific and actually narrow, fine-tuned ranges for anything like life as we know it to have evolved and survived. So if those parameters were not within those finely-tuned ranges, then we wouldn't be here to ask these questions. So it comes back to what's the precondition for cosmologists, or any humans at all, to be asking questions about our universe? And so again, here are some other very nice, popular books that came out around 15 years ago or so. One by Alex Vilenkin, who teaches at Tufts. He's very nearby, a real pioneer in this work. Another book by Leonard Susskind, who teaches at Stanford. And they're both discussing these more recent developments when the string landscape meets the multiverse. And you can see each of their answers is very clearly not appealing to a kind of religious explanation. In fact, they say the basic structure of the argument is: if there are 10 to the 500 distinct string states, each of which has an infinite number of chances-- because of this eternal inflation argument-- each of which is realized in an actual physical volume of space, an infinite number of times, then it just takes pure, random chance. It's almost kind of Darwinian, that we would evolve where we are to measure the constants we do because we had no choice. That is to say, where there was any difference in those constants, which might really be happening arbitrarily far away, people like us wouldn't have evolved there to ask those questions. So the explanation-- why do we measure this value for the expansion rate or that value for the mass of the electron-- is not, they argue, because we have to know why one special value out of those 10 to the 500 gets picked out, but simply to say they're all out there. Each of the 10 to the 500 is out there beyond our immediate vicinity. We should only ever expect to measure the constants that we do, because of the chain of causes that had to happen for life, at least as we currently understand it, to have been able to come around to ask those questions. So it's not that you pick out one string state physically. You say, we'd only ever be able to measure a certain very small subset. So some critics-- I'm going to say critics within physics-- critics, Nobel laureates, within high energy theory, for example, some of them really, really don't like this idea. I personally am still rather ambivalent, to put it mildly.
Critics have called it, in print, dangerous, disappointing, a virus-- which was scary even before coronavirus, now it really makes me shudder-- an abdication of what physicists should be doing. This inspires strong reactions. Lenny Susskind, in this book here, knows that perfectly well. He's no stranger to making bold statements. So Susskind actually borrowed, for the epigraph of his book, the famous, and probably apocryphal, statement from Pierre Simon Laplace in answering Napoleon. Napoleon was so impressed by Laplace's 18th-century Newtonian cosmos, which could account for the motion of planets and comets and everything with such great precision. Napoleon supposedly had asked Laplace, where is the room for God in your theory? And Laplace, like Susskind, says, your Highness, I have no need of that hypothesis. He doesn't need, as Newton seemed to need, an appeal to a kind of religious or biblical supernatural force. He says, you just have to have 10 to the 500 distinct states and an infinite number of chances. And we would measure, around our universe, only those that would be consistent with us being there to measure them. Again, I'm not endorsing that view. I find that interesting, although I personally am rather ambivalent. But hopefully, you can see how the kind of chain of reasoning has unfolded to where this is now talked about with some great attention, and inspires great passions among physicists and cosmologists. So let me pause there again. Any questions on the string landscape, on eternal inflation, on the anthropic principle, any of these juicy ideas? Very good, Steven. Thank you. That's a great question. The short answer is, in principle, yes. If we had an arbitrarily powerful microscope, if we could zoom in to length scales of order the Planck length, 10 to the minus 35th of a meter-- which we can't-- it could be that each of those was separately realized at each point-- call them X equals 1, X equals 2, X equals 3-- of the macroscopic space that we otherwise can move around and measure. So it's not that they must have been, but actually that they could have been, that these are each self-consistent, viable solutions to this complicated set of superstring theory equations. And, in principle, each of them could independently be realized. Or it could be that there's some kind of coherent structure, where they all look the same in one region, but then in some macroscopically distant region, some other set of values might have taken place. When the idea gets combined with the string landscape, it's more the latter, that there are actually microscopic regions where there would have been one of those states versus another. But then something like the Higgs field or an inflation-causing field settles into one of those vacuum states versus another, and then the volume of that space-- the three dimensional volume of that space-- grows exponentially. So when you start combining the 10 to the 500 with ideas about inflation, then it becomes more like we should only be able to measure one of those versions macroscopically, but all the others could still be realized elsewhere. So to the string theorist alone, the idea was each of those could be a self-consistent solution at any arbitrarily small location in the X, Y, Z space we use.
But if you start thinking about combining that with inflation, the idea tends to be that all those are near each other, so to speak, at early times, but then whatever's going to start driving a macroscopic inflation for the dimensions that we live in will actually stretch the space containing that one string state, rather than the neighbor. So I guess in an eternal inflation multiverse model, I think the idea would more be like we would expect to have a coherence, that if we could zoom in with our microscope we'd measure the same string state within our own macroscopic bubble, even though, in principle, there could be bubbles out there where it's different. So Alex asks-- oh, go ahead. Steve, go ahead. Do you have a follow up? I think, in principle, they could, but if you add on the dynamics, that you're actually expanding-- really stretching one region of space to the exclusion of another-- then you're stretching the space that was filled with one of those states. I think that's the idea. So Alex asks, of those 10 to the 500 states before the infinities, how many are actually livable? Right. So that's a good question, Alex. And part of the question is, honestly, I don't think anyone really knows. Because we could take what we think we know about the origin of life-- what I know about the origin of life is approximately zero, but people who look at it more carefully-- or even something that may be closer to astrophysics, like what would it take for stable galaxies to form and not get ripped apart by a too-fast-expanding early universe? That's one that people looked at in more quantitative detail. So if you change one parameter, in this case, the early expansion rate, you either do or don't get stable galaxies to form. But we have a lot of parameters we could tweak. What if we tweaked the electromagnetic force, and the unit strength of gravity, and the expansion rate, and-- right? So who's to say, in this multi-dimensional system of constants, even the ones that we know about, that one couldn't find compensating solutions? Could we get galaxies to form if we did have the faster early expansion rate, but if we tweaked three other parameters of our choice? And that's, I think, a well-posed critique of this wiggle-one-parameter, see-whether-things-do-or-don't-work approach. And to be honest, I think a lot of the work so far has still been, understandably-- to make it computationally tractable-- understandably, it's still mostly been wiggle one constant and see what changes. If the value of the electric charge was slightly different, could you have stable hydrogen atoms, yes or no. OK, well, if I tweak that and five other things, maybe I can again. So the short answer, Alex, to your very well-posed question is, I don't know and, frankly, I don't think anyone knows. The way people have tried to pose that is in this somewhat simplified or, let's say, tractable approach. But I agree. If we're honest about it, I think it's a wide open question. Kay asks, is there any connection between the anthropic principle and the [INAUDIBLE] idea that quantum values don't exist until we choose to measure? There could be connections. It's a great question, Kay. One of my coauthors-- so now it's getting close to home, I've been tainted by all these crazy people-- no-- a very dear friend has actually published on a quantum multiverse.
What if the many worlds interpretation from quantum theory-- which I think is kind of like what you're referring to here, Kay-- what if the many worlds interpretation of quantum theory were actually realized in physically distinct but disjointed regions of space, like in the multiverse? So one can at least try to pursue this question. It's not required. So typically, these are actually separate kinds of many worlds that people talk about. And so typically, they're kept separate. But certainly, conceptually, one can try to build these self-consistent ideas and maybe even see if that leads to new predictions. So far I haven't seen any. But one can try to see if these sets of ideas could even fit together in principle, let alone do they help explain one another. So Kay, that's a great question. So people do work very squarely on that. Fisher asks, since these extra dimensions are on the scale of the Planck length, I imagine there's no experiment we can perform to determine if they have any physical meaning? Good. So I should say, the limits, empirically, are that these extra dimensions can't be larger than about millimeter scale. And a millimeter is a lot bigger than the Planck length. There's no compelling reason why they should be millimeter scale and not larger, right? So that would still take some explaining. But the high precision tests of gravity-- that's where these limits mostly come from. Testing gravity in a classical, general relativistic framework, people have set pretty compelling experimental bounds, saying there's no measurable departure down from like millimeters out to tens of billions of light years. That is just unbelievable. So could it be that these extra dimensions were as large as the size of a proton, let alone the size of a millimeter? Maybe. So one way that people try to look for them, if these extra dimensions were small on a human scale, but not so small, is with something that's called missing momentum. It's different from the missing mass problem. So at very high energy particle accelerators, like the Large Hadron Collider, is there basically momentum that goes missing, in an otherwise very careful momentum conservation balance? When you smash, for example, two protons together and collect all the junk that comes flying out, every time they've done it so far, the momentum balances perfectly. We should think that's really great, that's great. If there had been some of these particles, in some sense, taking some of their journey, taking a kind of detour, like a shortcut through some of these extra dimensions of space, then the momentum balance that we measure in our three dimensions should, in principle, be off, and maybe by a measurable amount if those shortcuts were over long enough distances, like microns, not the Planck length. So there is a program to use particle collision data to put limits on that version of a shrunken-down extra dimension. I should say, I mentioned Lisa Randall's book, Warped Passages. In that book, Lisa talks about one of her own very, very, very ingenious interventions: what if these extra dimensions actually never got small? What if they are still macroscopically large? What she found with a colleague back in the late 1990s-- and I did some work on these models, but really Lisa was the driving force-- was that one could find self-consistent gravitational configurations in which we would live on a 3-plus-1-dimensional slice. And in fact, quarks would stay stuck here. Even gravity would appear to be stuck on this kind of sliver self-consistently.
It's almost like bound states for gravity. It's like an analogy to bound states. So the graviton might be exchanged, but might be exchanged only within a subspace of all the dimensions that are out there. And you could find, again, simplified toy models in which gravity would behave that way, and Lisa Randall was really the pioneer for that, starting in, I think, '97, '98. So there are other ideas around to try to live with, at least conceptually, large extra dimensions, without having to curl them up. Now the challenge there was, could one find a model that has the symmetries that we actually observe in our universe and not more? So those models work really very, very nicely if you have very, very highly symmetric spaces for the ones that we would live on, that look either exactly like Minkowski space or something very similar. We live in a slightly less symmetric spacetime than Minkowski space. And it turns out, there was at least no mathematically demonstrated way to embed even that simplified toy model with one fewer symmetry group in that structure. That doesn't mean it couldn't be done. So that's an open question, but there are many ways to try to live with-- make peace with-- many extra dimensions, but they all require a pretty big leap. And none of them so far has been compelling, forced on anyone, based on new experimental observations or anything else. So it's still in the realm of the hypothetical. Good. Other questions? I think part of it is that, really, at least to its critics-- and frankly, I'm sympathetic with this critical view-- it seems to give up on the aim of trying to explain the universe that we live in, based on physical forces or laws that we could explain. It basically says that the universe that we live in is the outcome of, basically, random chance. It makes it more like the way we account for change in Darwinian natural selection, but now with infinities instead of just large numbers. It could just be random twists and turns of fate, and maybe that is the right answer. But that kind of grates against-- maybe it's a philosophical or aesthetic preference, it's not like it's a logical necessity-- but it grates against a kind of form of explanation that has, I think, otherwise been successful as a goal. It's been aspirational for things from Newton's day through Lisa Randall's day, for that matter. Efforts to try to account for a specific set of, let's say, fundamental forces, acting on a specific set of elementary constituents, whether they're quarks and gluons or superstrings or name your favorite, and being able to account for bulk properties that we could compare with observations. I think the thing that sticks in the craw, so to speak, for critics of the anthropic principle is that at the end of the day, people have to say, just random chance, and there's no kind of deeper explanation for why we measure the things we do. And they not only could be different, they actually might be different, and maybe even are different, in infinitely many different ways, out beyond areas that are kind of outside our own causal horizon. So it's changing, so to speak, the explanatory goals. It's not that it's logically, internally inconsistent. It's a different kind of aspiration or aesthetic choice or philosophical aim for what one thinks the nature of explanation should be, in an area of science like physics. And they might be right. Let me press on for this last part much more quickly. So I mentioned that string theory has some great features, at least features that got people excited.
We could make sense of those bursts of enthusiasm on the publication pattern. But actually, I borrowed the term package deal from Lee Smolin, one of whose books I mentioned earlier. It's like when you want to go buy a used car off the lot. You don't get to choose exactly which features you want. And so, again, Lee Smolin uses this analogy. It's like, what if the car you really love-- this one on the used car lot-- has just the right kind of transmission you're looking for, but not the stereo system you want, or vice versa. It's like a package deal. And so string theory has many features, like it includes a massless spin-two graviton, it seems like it could avoid some of these infinities from quantum electrodynamics, and yet it can only be formulated self-consistently with these as yet undetected symmetries, the so-called supersymmetry, doubling every known particle in the universe, and also many extra dimensions. And so critics like Lee Smolin have said, are the things we don't want starting to outweigh the things that we do want? Is this a good deal, in this package deal? And so again, for your reading list, these are now kind of classic books. Brian Greene's book came out first in 1999. Brian has been, for a long time, one of the most, I'd say, eloquent proponents of the superstring theory approach. And this book was a finalist for the Pulitzer Prize. It's a beautifully written book, very accessible. And then again, a very clear book by one of these now outspoken critics, Lee Smolin. This one came out in around 2005 or 2006, called The Trouble with Physics. And I think these two together give you a pretty nice range of, again, what the stakes are in these so-called string wars. What are the hopes that got people like Brian and so many people excited, and what's the kind of cold water? What are the cautions that some critics like Lee Smolin put on the table as well? Those are two you might enjoy. Let's go back to counting things. As I tried to indicate, and as we've seen many times in our course together, these rivalries or debates are not happening in a vacuum. They're not happening in a string vacuum, not even happening in a macroscopic vacuum. So let's go back and ask about what else has been going on that might help nudge or push and pull the state of discussion within, let's say, high energy physics, especially but not only in the United States in recent decades? So here is another thing I like to count. The blue-- with this axis here-- is all the PhDs per year in the US in any area of high energy theory-- high energy physics, including [? experiments, ?] all high energy physics, particle physics. And the red is, once again, dissertations on string theory. It looks a lot like the curve of publications, not surprisingly. So again, you see a very different pattern, where you have a growing number of people in particle physics broadly, and then going into a decline-- a pretty clear trend line there-- starting in the early to mid 1990s. Meanwhile, a very steep climb, a kind of almost runaway growth, in the attention to string theory, and they're kind of out of phase. Overall particle physics starts to decline, while string theory starts another one of these large periods of rise. Well, that's not happening alone. One of the things that happens right around that pivot point is something that I never stop talking about. It was the cancellation of an enormous project called the Superconducting Super Collider, or SSC. This was a project approved in the US in 1985.
It was under construction in a tiny town called Waxahachie, Texas, not too far from Dallas. It was actually going to be three times larger than the Large Hadron Collider, which at that point had not yet been built. So it would have been a-- let's see-- a 52-mile circumference or 54-mile circumference ring to accelerate protons and smash them together with correspondingly three times larger interaction energies. The argument was that people could have hopefully found things like the Higgs boson, maybe evidence of supersymmetry, in the 80s and 90s. The problem was this was, as you might expect, unbelievably expensive. They actually wound up excavating almost the entire tunnel many, many, many hundreds of feet underground. They had not finished installing the superconducting magnets or all the rest. The cost was growing very quickly. It was approved with a budget of a couple billion dollars. By the time the project was canceled, the budget had ballooned to $15 billion. That's now-- on a US appropriations budget-- that's a lot of money. That kind of budget now conflicts with things like military expenditures, not basic science expenditures. So this project was heading for a collision. And in fact, after many fits and starts, the Congress canceled the SSC with one fateful vote in October of 1993. I remember the date very well, because I entered graduate school in September of 1993. So I don't take it personally. But the US Congress killed off one of the reasons I went to grad school one month after I entered graduate school. I don't think they had me in mind. So you can see here, look at US funding for high-energy physics in inflation-adjusted or constant dollars. Funding for the entire field-- not just for that one experiment, for the entire field of high-energy physics-- fell in half in one year. That was an even sharper fall than the otherwise sharp falls we looked at many times in this class in the early 1970s. In the early 70s, the funding for physics fell in half over the span of four years. This time, Congress was even more efficient and cut the budget for high-energy physics in half in literally one year. And over the rest of the 90s, after that, funding for physics across the board continued to lose ground, both because of budget cuts and because of inflation. Many, many books have looked at the troubled history of the SSC in particular. There was even a novel about this by the novelist Herman Wouk. He's the same novelist who wrote The Caine Mutiny-- a very accomplished novelist. He wrote a novel about physicists in Texas watching the SSC get canceled. I took it to heart. Why did Congress cancel funding? In short, the Cold War had ended. There are many moving parts, but the overwhelming cause seems to have been, as many of these scholars have agreed, that the reason to keep spending very large amounts of money on so-called basic research in areas like high-energy physics had always been the competition with the Soviets, and the idea that if outright warfare broke out, you'd have all these well-trained people with good equipment. And after the Soviet Union dissolved in the summer of 1991, those arguments no longer held the same kind of sway that they had for generations before. And one of the most visible symbols of the end of the Cold War for science funding in the US was this cancellation of the huge accelerator. Let's go back to that plot-- remember, this is all PhDs in the US in particle physics, and PhDs in the US strictly on string theory.
And here, because green means money, green is now that budget curve I just showed in the last plot. Look at the amazing correlations here, that you stop getting more people entering particle physics when funding falls in half, meaning you don't replenish the people who graduate. So people graduate here, but not as many people enter afterwards. So you start seeing the characteristic five-year-time-scale falloff that we can at least correlate with this very dramatic change in funding for the larger field. Meanwhile, string theory costs approximately pencils and espresso-- well, and health insurance-- all of which are important, none of which cost $15 billion for an atom smasher. So dissertations on string theory start growing exponentially just at the moment when other trends within the subfield are looking less and less fiscally viable. It's not only the US. This is a complicated plot that I like staring at. Let me just tell you briefly what it's showing. It's comparing decade-by-decade averages. I was trying to capture lots of moving parts, across many countries in the world, where the blue curve shows the change from the decade of the mid 80s to mid 90s to the next decade-- mid 90s to mid 2000s-- in the proportion of the world physics literature contributed by physicists in that country. That's a very close proxy for budgets. Other people looked at this more carefully. So you can really think of this as kind of budget trends. You can see the budget for physics research across the board fell in the United States after the end of the Cold War. That's consistent with what I showed you before. Likewise, for many of these former Cold War so-called superpowers-- the USSR and then the post-Soviet republics, same with Britain. Meanwhile, some areas of the world during this period were investing like crazy in the basic sciences after the end of the Cold War, countries like Brazil and China for some time. And you have a very clear anti-correlation: countries that stopped investing so much in physics overall saw a rising proportion of physicists there who began working on string theory, versus the inverse-- in countries that were investing across the board more and more aggressively, across the full range of physics, you see either a slower growing proportion or a falling proportion of physicists working on string theory. That doesn't mean there's no reason to work on string theory intellectually. It does remind us that these decisions are not happening in a vacuum-- that it matters what resources are available with which to even ask certain questions. Those change based not only on arguments about string compactifications or eternal inflation. Physics, then as now, is embedded in a pretty messy world, a world that, in this case, went through a pretty dramatic change with the end of the Cold War. So let me wrap up. Early in our own millennium, physics seems to be just as much in the flow, embedded in the world of people, culture, politics, geopolitics, and budgets and all the rest, as it has been throughout the whole period we've looked at this semester. We saw, for a good chunk of the term, there was a moment in the middle decades of the 20th century, largely coming out of the wartime projects, when physics, especially in the US, seemed to have an unlimited range of resources and a command of respect and so on, as well as heightened scrutiny with McCarthyism and all the rest. It was center stage, for good or bad. It wasn't all good.
But it was a central player in a way that was unlike what had either come before or indeed since. I got my own taste of this about the changing fortunes of the cultural capital, let's say, of high-energy physics myself. About 15 years ago, Alan and I had written a very brief review article on inflation. In fact, it was one of the readings for the previous class. It was published in Science. One week later, there was a point-by-point rebuttal posted on a creationist website that someone emailed us to pay attention to. They went through and showed all the reasons why this inflation account and, even the standard Big Bang, was baloney and, in fact, instead, much as the Christian Bible seems to suggest, at least to some readers, the universe is only 6,000 years old. It's not 14 billion years old. And they wanted to show why every one of our arguments was wrong. They conclude, quite correctly, with the following. "We had to show you, in their own words, what these MIT eggheads are saying," I said, well, guilty. That one they nailed. That was empirically astute. "Guth and Kaiser need to take up truck driving," they wrote. I'm listening. "That would get them out of their ivory towers at MIT and into the real world, where they would be forced to look at trees, mountains, weather, ecology." Plus, what do you see on the interstate but Taco Bell and Motel 6-- all these wonderful features of nature that signal the evidence of design, in this case they meant design by a biblical God. "Realities that proclaim design purpose and intention." So I've had a wonderful time talking with you this term. I'm sorry we had to do it remotely. I hope that was, nonetheless, a reasonable experience for you. Please don't hesitate to email me. But for now, I got to head out because I got to get back to my rig. So it's been great. Good luck with the end of the term. And please don't hesitate to reach out if you have any questions, any time. But I'll stop there. Any questions on that? No questions about MIT, I guess. I guess that part was just self-evident. OK. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_2_Faraday_Thomson_and_Maxwell_Lines_of_Force_in_the_Ether.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: OK. Welcome everyone. Welcome back to 8225, which is also STS 042, Physics in the 20th Century. Let me start by asking just quickly if there are any logistical questions, any kind of course structure or other questions like that we can try to clear up quickly. And then we'll jump into the first main lecture. In general, it's good to do the readings, at least to skim the more technical readings before the assigned class. So for today, that was a brief excerpt from Maxwell's treatise on electricity and magnetism, for example, and a chapter from Bruce Hunt's book on the Maxwellians. So in general, it'll be helpful if you keep up-- if students keep up with the readings because it's just going to snowball. Otherwise, some of the readings, as you probably have noted with the Maxwell in particular, are pretty-- they're not exactly a page turning novel, right? What's going on with that. So I'm going to be talking about what Maxwell thought he was doing, what some of Maxwell's contemporaries-- how they read that book in today's lecture. So today hopefully will exemplify the kind of use to make of readings like that, as we go through the first worked example with Maxwell today. Other readings, as you'll see, like the chapter from Bruce Hunt for this week, are by physicists, or historians, or sociologists, or other more recent scholars trying to help us make sense of that older work. And for those, I think it does make sense to read them a bit more closely. What's the actual argument the author is making? What are the kinds of evidence that the author brings to bear? Because what we'll work on together over the course of the semester is actually working on your own written essays, your own analyses, where you will-- we'll help you build up toward it. But you'll be working on articulating an argument, defending that argument with evidence and examples from various sources. The readings that we would call a secondary source, readings by historians, or more recent physicists, or philosophers, or sociologists, or journalists, or whomever, those are more like the kinds of writings we're going to try to do ourselves over the course of the term. And plus, I think, hopefully they're interesting. I chose them very carefully. So I do recommend you try to read those a bit differently, let's say, than something like skimming over Maxwell's treatise, which might look both pretty bland, and weird, and a little unusual, given how we do things today. So there are different ways to read the different kinds of sources. And I guess, hopefully, today we'll actually give a bit of an example of that, as we talk a little bit about that Maxwell reading throughout today. So that's the plan. And if there's no other questions, I think we'll just jump right in. So I'm going to go ahead and share screen. And hopefully, you can see that first slide. So today, we're going to start our journey in earnest. We're going to turn the clock back about 200 years and talk about some of the leading natural philosophers, some of the leading people who are trying to understand the physical world. And what do they think their job was? What were the settings in which they did it? What were the conceptual tools they brought to bear? 
And we'll always be asking ourselves, what looks kind of similar and what looks really, really unfamiliar from our vantage point today? So the discussion today has three main sections. We have a brief warm-up on physics then and now, just to try to prep us to put our minds back into the world of the early 19th century. Then we'll spend a bit of time talking about Michael Faraday, who was one of the most widely celebrated scientists or natural philosophers of his day. And we'll see some of the things that he introduced during his rather unusual career. And then in the last main section for today, we'll talk about some of the folks who are a bit younger than Faraday, who took up his mantle, who tried to continue his line of work, but, again, using some different kinds of techniques and tools, in this case William Thomson and James Clerk Maxwell. So that's our goal for today. So let's just pause and remember that we are entering a kind of imaginary time capsule and going back about 200 years to start. So the assumptions that we might bring in as physics students, as MIT students more generally, about how the world works, about what the job of a physicist is, and so on, those won't map on perfectly well to some of these earlier folks we'll start talking about. And I find that that's part of what's so fun for me about working in the history of science, is exactly those moments where things don't look so familiar, as well as the continuity we can trace over time. So one of the biggest questions we might ask about physics today is, what's the world made of? What do we think, as modern day scientists, are the basic building blocks out of which the world is made? If we were to conduct a survey of most working physicists at MIT or really anywhere around the world at this point, we'd get an answer along these lines. We would hear about something called the standard model of particle physics, which is amazing. And we'll actually talk a bit about the standard model later in the semester. It describes the world as made up of a very small number of fundamental entities, elementary particles like the quarks, and electrons, and tau mesons, and neutrinos, and even the long-elusive Higgs boson. There's a collection, not a very big collection, of stuff out of which we think literally everything else is made. We hear physicists today talk about the tools that have been developed to try to study the interactions of those elementary bits of matter, how different particles can interact with each other, can scatter off each other, can transmit the fundamental forces of nature by exchanging certain kinds of particles in a vacuum. Depending on which physicist we ask, we might get an earful, maybe more than we invited, saying that actually all those particles are themselves really only the efflorescence of more fundamental entities called strings that live in some abstract space. The point is, we're going to get an answer like this about, basically, particles exchanging particles in a vacuum. Now, if we were to build our time machine, which I sincerely hope we can complete soon, and we go back 100, let alone 200 years to, say, talk with leading natural researchers in Britain, and we told them this answer, that physicists today said the world is made of particles exchanging particles in a vacuum, they would have thought we were really not with it. I like to think they would have shouted bollocks or at least something maybe more colorful.
Every single part of this modern answer would have looked extraordinarily wrongheaded to the people that we're going to spend some time reading about and talking about at the start of this term. And why is that important? I mean, if we did this exercise with the ancient Mayans, or the medieval Turks, or the Renaissance French, we'd also expect some disagreement. What I like about this is that we're turning the clock back not so far, compared to the present day. And yet, we already see a huge divergence in the answers. As we'll see throughout today and actually the first several class sessions for the term, we use their equations. We still use Maxwell's equations, for example, and a lot of the ideas that Michael Faraday introduced. And yet, what they thought those equations meant, what they thought they were describing about how the world is made up are almost 180 degrees opposite from how we start our investigations today. So even though some parts have remained the same going over these 150 or 200 years to the past, how we make sense of those ideas or those techniques have gone through a really quite amazing change. So what do we think the world is made of? That's not a static set of answers. And we see pretty dramatic change, even over a relatively short span of history. There's another kind of shift that we want to get our heads into, attune ourselves to as well. And that concerns who pursues the study of physics and in what kinds of settings, what institutions. So during the first half of the 19th century, when we'll start this class, people who worked on what we would call physics-- electricity, magnetism, optics, other phenomena, heat-- they also worked on things that we would call not physics, like chemistry, or mathematics, or astronomy, physiology, even beyond that. For them, this was all one field of study. It was often grouped under the term natural philosophy rather than physics, or chemistry, or mathematics, per se. In fact, the term for a person who studied these things-- in English, the term we would call a scientist was actually only invented in 1834. It's remarkably new, as far as the English language goes. It was invented after some of the developments we'll actually start talking about today. So even having the job description scientist or the title is actually relatively new. And what these particular kinds of scientists that we would now call physicists-- what they thought they should do-- again, some pretty broad changes compared to what we're used to today. Not only do they think their job description was different than what we might imagine for ourselves, but where they did the work and what kinds of settings or institutions, that was also pretty different merely 200 years ago, let alone much further back in human history. So a lot of stuff we'll start by talking about today was not done in schools or universities. They were done in separate institutes or academies. And the key thing is these were not places where students took classes to get degrees. There was a separation between learning institutions-- universities existed throughout many parts of the world, including Western Europe and the United States. Universities existed. But that was not where people were doing what we would call scientific research. In fact, some of the most prominent natural philosophers or leading physicists were not only not professors at universities, some of them had never even attended university themselves. And that's true of the first figure we'll look at today, Michael Faraday. 
So there's a big shift toward a placement of this research that would look more familiar-- things more like what we'd expect to see ourselves. That shift begins in the middle of the 19th century, so more like 150 or 170 years ago, rather than 200 years ago or more. There's a shift to where research in fields like physics gets done more often than not by people who are professors in universities who also spend some of their time teaching students, like what we are more used to today. That has a history. It's actually relatively recent. And it emerges really in the middle part of the 19th century. So the overall theme of this very first warm-up for the class is to put ourselves on guard. We shouldn't assume that our current sense of what's expected, or normal, or familiar either about how we choose to study the natural world, what concepts we use, let alone how we think we should be pursuing that, and what kinds of institutions or job descriptions-- We shouldn't expect that what's familiar to us today is just how people did things in the past. And in fact, we'll see some pretty dramatic changes over a relatively short span of time. So let me pause there and ask if any questions just on that early warm-up. Anyone have any thoughts or questions about that? If not, then I'm going to press on. Again, feel free to jump in on the chat if things come up. Let's talk about this first figure I already mentioned, Michael Faraday. We're going to spend a little time talking about Faraday. I think he's fascinating. So he became one of the most successful and actually one of the most famous natural philosophers in his lifetime-- in fact, really of the entire 19th century. He had a pretty remarkable personal story. And I think we'll also see he helps make clear some of the larger social and institutional changes going on as well. He helps us understand his own larger time period. Personally, he had this really inspiring rags to riches story. He was born into a rather poor family. He never attended high school, let alone college or university. He was apprenticed to a bookbinder at age 13. That means he literally left his family, moved in with his main tutor, and was apprenticed in a kind of trade. So in that sense, his career path started out closer to what we might expect of the Renaissance or even the Middle Ages than to the modern period. It was a time still of guilds and of trades associations. And his role, his way to make a living, was to learn the kind of manual dexterity skills of being a bookbinder, starting early on. He met the very famous chemist or natural philosopher Humphry Davy when Davy gave a series of public lectures on natural philosophy at the Royal Institution in London. And we'll talk a bit about the Royal Institution. It was not a university. It was a place where research was done and public lectures were given. So Faraday was just mesmerized by Davy's lectures. He actually took very careful notes on these public lectures. Faraday then bound them into a beautiful, very fancy book. And here's how we know he was so smart. He then gave that fancy book to Davy as a gift. And I have a hint here of what you can do with that information. He basically bribed his way into a new kind of apprenticeship. Davy was very flattered and impressed by the kind of manual skills, by the dexterity that Faraday had shown. So Davy hired Faraday for basically a new kind of apprenticeship. Faraday had never attended high school or college. 
It wasn't that he was trained in mathematical physics, quite the contrary. But Davy figured Faraday could do some new kinds of manual dexterity skills in the laboratory as a laboratory assistant, a continuation of his bookbinder apprenticeship. So that's how Faraday enters what we would now call the world of natural philosophy. So years and years go by. They worked very closely for a long time. Davy eventually steps down. He retires. And Faraday, in fact, became his successor, now the director of this Royal Institution in London. So much like I mentioned a few moments ago, Faraday worked on really a kind mind-boggling array of topics, mind-boggling from our modern point of view, quite familiar to his own contemporaries. So he worked on what we would separately classify as chemistry, optics, electromagnetism, electrolysis, and more. To Faraday, these were all elements of a single, unified approach to nature. And he was particularly fascinated, even more so than some of his contemporaries, with this interconvertibility of forces. How could electrical effects make physiological effects happen, like using electric shocks to make a frog's leg jump? How could chemical effects be catalyzed by electric currents? So the way that one kind of phenomenon or force could have effects in some other domain. Now, Faraday was partly inspired in this interconvertibility view by his own religious background. Some really very interesting work by a number of historians and biographers of Faraday who have tried to understand why Faraday took this notion even further, even more fervently than many of his own contemporaries. And these other historians make a pretty compelling case. Faraday belonged to a rather small religious minority in England. He was a Protestant, but he was not an Anglican. He was not a member of the official church of England. He belonged to this small sect called the Sandemanians, Protestant but not Anglican. And the Sandemanians, even more than the standard church of England at the time, emphasized an underlying unity to all of nature-- we're all part of one natural world-- and that people and things are somehow connected to each other. And that was part of what the Sandemanians themselves believed. And it had a theological import for them. And there's a really interesting parallel, at the very least, in some of the ways that Faraday then explored the natural world as well. He became a beloved, very successful public lecturer. He continued the tradition that Humphry Davy had done at the Royal Institution, giving these very, very well-attended so-called Christmas lectures. These still continue from the Royal Institution to this day. Faraday would really fill the lecture hall and give these dazzling, spectacular demonstrations of electrical effects, and chemical change, and all the rest. That's partly why he became so beloved and well-known in his lifetime. You can't quite see it in this engraving. It's hard to tell. This really was the definition of popular. The tickets were not too expensive. That's partly how very young and poor Faraday himself was able to attend as a child. It wasn't like attending the fanciest opera performance in high elite London. There'd be cheap seats. You can see how large the hall was. So the very nice, fancy seats, then as now, were expensive. And tickets would go to the affluent and well-to-do. But there really was what we today would call a kind of outreach. 
There was a real effort to reach across several social and economic classes-- not all the way, but broader than only the very elite. And they became a really beloved city institution within London. People would come in from out of town to attend them. It became an annual tradition. And Faraday became especially beloved as a presenter in that tradition. So they really were reaching not only people who had any interest in natural philosophy. It was really a way of showing a kind of sophistication and a way for members of the so-called aspiring classes, meaning those who weren't so well-to-do, to be able to participate in things that they saw the real elites doing. So it really had an interesting social cross-section to them. Historians have written a lot of interesting stuff about these public lectures. They really were talked about broadly, covered in the broadsheet newspapers and all that. In this time period, the main mechanisms were largely books. So even scientists, whom we would call physicists, were mostly writing monographs and sometimes writing short articles for fellow specialists. There were scientific journals, including some of the earliest that have been founded in England, going back to the 1660s, similar ones in other parts of Europe. But the real transfer mechanisms were often through monographs. And it's partly why Maxwell's treatise, that we'll talk about later today, became so widely known and so influential. There were then these lectures. There were also so-called itinerant lecturers, people who made their whole living just giving popular lectures on topics like electricity town to town-- so not just in London, but to the outskirts and likewise throughout provincial France and other places. So there were popular lectures. In that sense, you can think of it almost like PBS television today, like Nova specials. But for fellow specialists, there were just the start of professional meetings. The British Association for the Advancement of Science were having annual meetings of the sort that would become so familiar for fellow specialists. Sometimes, those would be covered in the national newspapers. It was a hodgepodge. Some things would look familiar to us. Some things might seem a little more unusual. But Faraday wrote books and books and some articles, for example. And certainly, some of this would get into the curriculum for universities. And we'll talk more about that actually especially later today and in the next class session. But the way that kind of fellow researchers interact with fellow researchers, it was largely through books, sometimes through journals, and through these different kinds of lectures. Great, great questions. I'm going to move on. These are great questions. I'll try to answer more that I don't get to as we go further. But I want to get to one of the topics that was really, really high on Faraday's list of most important because it's really going to set our agenda for several classes to come. And that's what was called the aether, a presumptive hypothetical medium that pretty much everyone that we're going to talk about, Faraday included, just assumed. They knew. They had an extremely strong confidence that this medium was filling all of space, filled every nook and cranny of the universe. And it was this medium that filled everything that enabled various processes to affect one another. That was the interconvertibility of forces. Why did electrical effects lead to magnetic effects? 
Because they're both ultimately effects of this aether that connects everything. So what's the aether? Just during Faraday's own childhood, another amateur British naturalist named Thomas Young, building on some work by French scholars from around the same time, right around the year 1800, had advanced a wave theory of light. This was opposite actually to Isaac Newton, who had a particle or corpuscular theory of light. So Thomas Young, like some of the French scholars as well, suggested that a whole bunch of optical phenomena, like interference, like reflection, and refraction could really only be made sense of if light was a series of waves. And I'm sure you've seen this. There's a very simple illustration. If two waves are perfectly in sync with each other-- we'd say in phase-- you can get constructive interference. The peaks will become larger than either wave alone. The troughs become deeper. Of course, if those two waves are exactly out of sync with each other, if they're perfectly out of phase, you can get destructive interference. And the resultant wave will vanish altogether-- and anything in between. So Young and others were convinced that these interference phenomena were natural indications that light is a wave phenomenon. But then the next question is, well, waves of what? Waves in what? Ocean waves are clearly waves of water. Sound waves or compressions are waves in the atmosphere, in the air. What are light waves waves of? So they said, well, any wave we know about is a wave in some medium. There must be a medium that carries these light waves. It was called the luminiferous aether, which is a very fancy word. You can see how to parse it here. If you remember either Latin or Harry Potter, I find them equally helpful. Think about the Lumos spell in Harry Potter. The lumini- part just means light, referring to light. -ferous means carrying. Think of like a Ferris wheel. So the formal name of luminiferous aether really just means the light-carrying or the light-bearing aether. This was the medium in which light traveled, in which everything, all of the physical phenomena, were immersed, which is why these folks would never have said the stuff of the world is elementary particles traveling in a vacuum. There is no vacuum. Everything at root is filled with aether. That's where they were starting from. So why would you need a medium for light? Again, even before, around the same time as Thomas Young in England, a number of French scholars had found that the speed of light-- the speed at which light waves travel depends on the medium in which they're traveling. In fact, they're slower in water than in air. That's opposite, by the way, to sound waves. Sound waves always travel more quickly in a dense medium. And that led to things like Snell's law, the whole set of knowledge about refraction and how light behaves when it crosses from one type of medium to another. You can characterize these media in terms of what became known as an index of refraction. And so everything about the behavior of these waves seemed to depend on the medium through which they were traveling. So there must be some kind of baseline or reference medium, the aether, with respect to which any of these deviations would then be measured. So water is one type of medium. But even if you take all the water away and have an evacuated chamber, you still have light waves traveling in the remaining aether. That was how they organized these studies. All right, so much for light.
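Before leaving light: to make the in-phase and out-of-phase picture concrete, here is a minimal numerical sketch of two-wave superposition. It is purely illustrative; the unit amplitudes and the wavelength are arbitrary choices for this example, not anything taken from Young's own work.

```python
import numpy as np

# Two waves of equal amplitude; phase_shift = 0 gives constructive
# interference, phase_shift = pi gives destructive interference.
x = np.linspace(0.0, 4.0 * np.pi, 1000)

def superpose(phase_shift):
    wave_a = np.sin(x)
    wave_b = np.sin(x + phase_shift)
    return wave_a + wave_b

in_phase = superpose(0.0)        # peaks add: total amplitude roughly 2
out_of_phase = superpose(np.pi)  # peaks cancel: total amplitude roughly 0

print("max amplitude, in phase:     ", round(in_phase.max(), 3))
print("max amplitude, out of phase: ", round(np.abs(out_of_phase).max(), 3))
```

The in-phase sum doubles the peaks and the out-of-phase sum cancels to numerical zero, which is exactly the constructive and destructive interference described above.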
What about other things like electricity and magnetism? So again, during Faraday's early career-- by this point, he was already ensconced at the Royal Institution, still working closely with people like Humphry Davy-- there was a headline-grabbing event not found in Britain, but actually nearby in Denmark. So when another natural philosopher, Hans Christian Oersted was giving a public lecture very much like at the Royal Institution, same kind of format-- these were popular throughout Europe. Oersted was giving a public lecture. And he accidentally discovered, in the midst of his talk, something that he really didn't expect. He had an apparatus that looked a lot like this, a kind of conducting coil. So this is a conducting wire. And he had nearby a compass, so a magnetic needle. And he found that when current was flowing through the wire, the needle would be deflected. He did not expect that at all. None of his audience expected that. No one else expected that. Somehow the electric current, the flow of electricity, could affect-- could induce magnetic effects. It could change the behavior of that magnetic needle of the compass. This became headline news. If there had been 19th century Twitter, this would have broken Twitter, #theneedlemoved. I'm sure this was literally what everyone was super excited about, including people like Michael Faraday. This was, for Faraday, exactly this kind of interconversion of forces that he was so interested in, partly from his Sandemanian faith, partly because he was a natural philosopher of his time. So not only does he quickly replicate Oersted's finding-- he separately finds that he can get electric current to move a compass needle-- he then finds the inverse. As shown here, he takes a bar magnet and starts moving it in a pattern near a conducting coil from which he can measure the current. He has an ammeter attached. And he finds the inverse can happen as well, that a moving magnet, a changing magnetic effect can induce an electric current. So he's investigating induction, both electric currents inducing magnetic effects, magnetic fields or magnetic effects inducing electric effects. And he wants to keep track of this stuff. So he starts sketching these helpful doodles that he calls lines of force. As far as Faraday was concerned, these were lines of force to depict the underlying state of the aether. Remember, everything, as far as he's concerned, is what's going on with the underlying aether. So it must be, he begins to reason with these pictorial aids, that the aether could be put under states of strain or tension. If I have an electric charge that I place within the aether, it's going to have an effect and put a strain on the aether immediately around it. So he draws these lines of force, emanating outward from positive electric charge or collapsing in toward a negative electric charge. And then he begins using his middle school level mathematics. He'd had a little bit of geometry before leaving formal schooling. And he puts that to work. He supposes, what if the number of lines that come outward from a positive charge is fixed and is proportional to the amount of that electric charge? So a charge of, say, seven units would have seven of these lines radiating outward, affecting the surrounding aether. A charge of 10 units would have 10 lines radiating outward and so on. Then you can imagine what would happen if you surrounded that positive charge with a kind of imaginary sphere at some distant radius. 
Then you can ask, how many lines per unit area will cross through that imagined sphere? The density of field lines will fall off. The lines are dispersing as you get further and further away from the electric charge. So the density of field lines, how many of those lines will intersect a unit area of that sphere, will fall off like 1 over r squared. You take the fixed number of lines, divide by the surface area that you've surrounded that charge by. The area goes like 4 pi r squared. So the density should fall off like 1 over r squared. That was really exciting to Faraday because he had learned, again, from French scholars like Coulomb, not too long before really, just in Faraday's own childhood, that there was this Newton-like force of attraction or repulsion among electric charges. We now call it Coulomb's law. The magnitude of the force between two electric charges falls off like 1 over the square of the distance, exactly as Faraday had reasoned toward by thinking about states of tension in the aether and how that strain would be dissipated at more and more distance. So Faraday went on. Remember, he was so excited not only about one topic like electricity, but also about magnetism. What if you plop a bar magnet down in the aether? Then he could do that with these things like iron filings and trace out the kind of lines of influence around the magnets-- something you've probably done even as a child. And he realized that he could make sense of this also in terms of lines of force, that these iron filings were helping him map out the magnetic effects, the strain that the bar magnet, with its north and south magnetic poles, could do to the surrounding aether. What's really interesting here is that Faraday starts to say that sometimes forces might not be transmitted only along the straight line shortest path between A and B. That was the rule for all Newton-like forces. The magnetic lines of force would bend. They'd curve. They would have these looping paths. And so maybe the magnetic force did not always go on shortest distance paths. And more generally, Faraday starts saying, let's keep track of all these states of tension in the aether. He introduces this notion of the field, something that physicists to this day use every single day of our careers. We now call it field theory. It largely originates with Faraday, as he tries to make sense of, with convenient mnemonic devices, these states of strain in the aether. So the electric and magnetic fields, as far as Faraday was concerned, were simply ways of characterizing the local state of the aether as represented by these lines of force. What were the stresses and strains at that spot in the aether? How do they change if I go to this spot? How do they change at that one spot over time? This is what he says is the notion of a field. It's just shorthand for the state of the underlying aether. But is that real? You might still wonder, is the aether just a helpful thing to think about? Or did Faraday and his colleagues think it was really part of the world? And it was unambiguously, for them, part of the world, not just convenient to think about, but really the stuff from which the world was made. And we'll talk about this in some more detail over coming class sessions. Faraday thought he found the golden proof in 1845. And now he's a much more senior researcher. How could you demonstrate the aether was real? Well, let's study the interaction between magnetic fields and the behavior of light, in particular polarized light.
This is now known as the Faraday effect or the Faraday rotation. It's one of these things that's literally still in our textbooks. We use it all the time. It comes from Faraday's efforts to prove to himself and his colleagues that we are all immersed in this elastic, bendable aether. So Faraday and his peers-- the idea was the following, that if you put the aether under local strain by putting a local magnetic field in some region of space, that magnetic field would have a twisting or curling effect, a kind of torsion on the underlying aether. And then this light wave is traveling through this deformed local bit of the aether. So its behavior will change. And in particular, what's called the plane of polarization, the direction in which its wave is waving, will rotate. There'll be some angle, shown here as the angle beta. So if the field starts out, the wave starts out waving in one direction, it will literally be rotated by some angle. The amount of rotation depends on the strength of the field and the distance through which this aether is under tension. This is not just a good idea for 1845. It's how we make sense of the world today, minus the aether part. But it comes from trying to figure out, is the aether real. Just last year, I had fun working with a really terrific UROP student. Some of you might know Max Daschner. He and my friend Joe Formaggio in physics, we used this exact same phenomenon, that an external magnetic field will rotate the plane of polarization of light, not to talk about the aether, but as a way to jam quantum encryption indefinitely. So in this hyper-modern physics of the moment, we're still going back to Faraday rotation. And yet, Faraday thought this was a way of characterizing the aether. I just love that. We're going to see many examples of that throughout the class. Let me wrap up the Faraday part. I'll pause again for some more questions. So Faraday, as we've seen, worked on a variety of phenomena, what we would call separately chemistry, and optics, electrolysis, even physiology, let alone electromagnetism. He was especially interested in the interconvertibility of forces, partly-- a compelling case has been made-- because this resonated especially closely with his religious minority views, his Sandemanian views, and the sense that there was this kind of underlying unity of nature. Faraday was never trained in advanced mathematics. He had really just the rudiments of this pictorial geometry, like we saw with the lines of force. He was an apprenticed tinkerer. He had tremendous manual dexterity skills and also great cleverness. He became this leading experimentalist based not on formal training at a university, but on being apprenticed as a bookbinder, and then essentially apprenticed as Humphry Davy's assistant at the Royal Institution, not at a university. He introduces these things which we still use to this day, like lines of force, and then fields. For Faraday, these were ways of characterizing the one big prize of natural studies, meaning the aether, the medium within which all these things unfolded, and the reason as far as Faraday was concerned why there could be this interconvertibility and unity. So in place of Newtonian action at a distance, that somehow one object could instantaneously affect or exert a force on another one, Faraday starts explaining all physical phenomena-- he has the ambition to explain everything-- in terms of these local fields: there's a local disturbance here at position x equals 1, and it could convey effects to its nearest neighbors, point by point.
So you have to study how these fields change over time and spread out through space. Everything is now meant to be local fields, not action at a distance. Let me pause there again and ask any questions. I see at least one more item in the chat. Any questions on the Faraday material? Very good. So it was common to attend the public science lectures. And in fact, it was seen as a way of being elegant. Again, this is really fascinating work on the social class emulation going on, especially heightened, as you might know, in Britain in this time period. So to be considered a fashionable lady in the terminology of the day, one had to show or at least feign interest in a range of topics, one of which was these elegant demonstrations of the glories of the world. For many people, this, by the way, had a kind of overt religious significance, a kind of natural theology. Not for everyone, but for many in this period, the idea was that learning about how electric shocks make frogs' legs twitch or how a magnetic bar magnet can change-- can induce electric current, these are ways of revealing the glory of God's creation, as many people argued at the time. So It was seen as actually a function of moral or spiritual uplift. And therefore, again, according to the assumptions that-- we might say prejudices at the time-- therefore, all the more important for young women to be exposed to this, was the thinking of the time, with their tender, moral souls that needed special care, again to speak in their idiom, not my own. So there actually was a great encouragement for women to attend these public lectures, not, as you might expect, to move into the laboratory, not to do the work. That was an almost uniquely male enclave in this period, certainly in the Royal Institution. But to be exposed, to be part of the broader, polite discussions, it was one way of showing that you were-- you had an appropriate upbringing. And again, there's lots of cool stuff I'd be glad to talk more about that or send some references for more readings. The short answer was, yes, to attend the Christmas Lectures at Royal Institute, that was an awesome thing to bring your daughter to. But at the time-- and that's something we'll see change rather slowly, but change over time throughout the semester-- At that time, it was not something that was seen as a kind of acceptable or appropriate day job or a career path. So that was a very, very clear gender disparity, but a great openness to have women participate in that limited way in the public lectures. So very good question. We'll come back to that theme quite a lot. Great questions. Oh, that's so good. So we'll actually come to some of that in this very next unit right now. So partly, I'll just punt. But the short answer is there was a growing recognition in just this time period among many influential government leaders-- not only in Britain, in many parts of Europe, in fact, many parts of the world-- China and Japan, certainly North America, a growing recognition this during this exact time period that many nation states would benefit from investing in what we would now call science and technology or science and engineering. So there certainly were hot button issues, then as now. Natural evolution, Darwin's theory of evolution, which comes just around this time first published in 1859, that becomes yet another flashpoint for tension and controversy. 
But for a lot of this work in what we would now call the physical sciences, as we'll see, there was actually a kind of investment, a new kind of priority in a lot of this work not across the board, not without its own hiccups. We'll come to that. But it was actually a turning point in a new kind of investment, partly because many of these places were undergoing their own nation-based industrial development. And we'll see that actually even more explicitly in the next class, some of it today with Maxwell and in the next lesson. It was a great question. It's a theme we'll trace throughout the semester. But even in Faraday's own lifetime, that was beginning to change into a more-- a higher priority in investment in exactly these topics. Great question. Any others? Should we press on? These are great. So keep them coming. These are great historical questions to help us think about what was different back then, what might have been similar. So we're going to keep that kind of back and forth in mind as we enter now just the last part for today's class. I'm going to talk about William Thompson and James Clerk Maxwell. So as we saw at least briefly, Faraday had very little formal training in mathematics. And he used very little in his own studies, his very voluminous studies. And that's not surprising, given what we know about his own early childhood and education. There's a really remarkable shift, one that I still find unbelievably fascinating. So we're going to spend quite a bit of time talking about it today and next class as well-- this amazing shift that begins really late in Faraday's own lifetime, in the middle decades of the 19th century. It doesn't only happen in Britain. We're going to look at it because it's especially dramatic or blown out of proportion, we might say, in Britain at places like Cambridge University. It's happening also in places like France and other parts of the world, even beyond Europe. But Cambridge in Britain gives us the capstone picture of that in its almost pure essence. So there's a real sea change going on between Faraday's early life and even the later years of his own career in the way that people would try to approach natural phenomena like electricity and magnetism. Two exemplars, two examples of that shift are names that might be familiar to many of you-- and we'll spend a little time today talking about some of their work-- William Thompson, who was later known as Lord Kelvin. That's the same Kelvin from the Kelvin temperature scale, absolute temperature familiar to you, and many other things. So Thompson on one hand and James Clerk Maxwell on the other. Again, I'm sure Maxwell's a familiar name, things like Maxwell's equations and all the rest. So these two folks were about two generations younger than Faraday. They were coming of age when Faraday was already a well-known natural philosopher. They pick up on Faraday's work. They were avid readers of Faraday's books, often monographs, not just his articles. And they begin to translate it into a very powerful mathematical formalism of the sort that Faraday had had no inkling of whatsoever. It turns out both Thompson and Maxwell were born in Scotland. They were not from London or the immediate English surroundings. They were both Scottish-born. They both moved to Cambridge for their formal university training in the middle decades of the 19th century. So both Thompson and Maxwell became deeply involved coming exactly to the question that was raised just in that last question period. 
They became deeply involved with what we might call engineering aspects or applications of natural philosophy, all around the physics of the aether. They had both had very long, illustrious careers, not only when they were university students. They were pursuing these very fancy, sophisticated mathematical, formal mathematical approaches to electricity, magnetism, and the physics of the aether not only for the sake of pure mathematics. As we'll see, they were deeply involved in very practical ways with things that governments wanted to pay for, like telegraphy and other practical electro-technical devices. So among the most important in their own careers was indeed telegraphy. So as you know, Britain, then as now, was an island. It's still an island. Back then, it had lots and lots of colonies. That's what's changed. It was a period of still rapid acceleration of the British colonial adventure. So distant outposts or so-called possessions, territorial possessions, really all around the world. Again, many of you might recall the old quip, that the sun never sets on the British empire. In that time period, there were colonial holdings or territories that spanned really the entire globe. So at any given time of day, the sun was shining on some British outpost. So what do you need if you're going to try to administer a world-spanning empire? You need ways to have efficient communication. And that was an enormous impetus for things like telegraphy. Even with some of the former colonies, like the United States or Canada, some kind of formal colonial holdings, there was still an effort in this same period, the middle decades of the 19th century, to have efficient, long distance communication for things like commerce and diplomacy. So there were all kinds of reasons why electromechanical investigations were given a very high priority in Britain in this time period. So we get a taste. Here's our first quotation from that little bit of the Maxwell that I assigned. Maxwell published this voluminous treatise, a two-volume treatise in 1873. The two volumes together run to about 900 pages. It makes for fun beach reading. But in the very opening of the preface, we get this really interesting observation. He tells us partly why anyone should read this book. And he says here, the important applications of electromagnetism to telegraphy have reacted on pure science by giving a commercial value-- and indeed, a governmental, diplomatic value-- to accurate electrical measurements and by affording to electricians, as say experts in the field, the use of apparatus on a scale which greatly transcends that of any ordinary laboratory. They're getting huge investment to conduct very large investigations with very elaborate apparatus. The consequences of this demand, this government and commercial private interest demand for electrical knowledge and of these experimental opportunities for acquiring that knowledge have already been very great, both in stimulating the energies of advanced electricians and in diffusing among practical men a degree of accurate knowledge which is likely to conduce to the general scientific progress of the whole engineering profession. PS, buy my 900-page book. It's literally worth something. That's what he's announcing in the opening pages of his treatise. So let's go a little bit into some of how William Thompson and James Clerk Maxwell start treating Faraday's intellectual legacy. So Thompson began reading Faraday's books, as well as some articles, as an undergraduate. 
Thompson was still basically 18 and 19 years old at Cambridge when he starts devouring Faraday's work or continues to do so. He began to realize that he could put Faraday's work-- he could translate it, on the one hand, and also Newton's work on mechanics, within the same mathematical structure. This was the stuff that, only in the middle decades of the 19th century, university students at Cambridge, like William Thompson, began to really be immersed in. We'll talk much more about that in the next lecture. So he realized that he could make this bridge. You could treat all these disparate phenomena within electricity and separately all these disparate phenomena within Newton's mechanics-- blocks sliding down planes, the attraction of gravity-- and introduce one common mathematical construct, the potential, a scalar potential, which is often just labeled by the Greek letter phi. If this is brand new to you, don't worry. If it's very familiar to you, great. The idea is that a scalar function, a scalar field like this potential, it has values at every point in space. Those values can change over time. But it doesn't have direction. It only has magnitudes. You can think of it as representing the temperature in the room at every location in space, and then tracking how those temperature readings change over time at each location in space. That's a scalar function. Temperature is scalar. It has magnitude, but not direction. That's hopefully review. Other things, like these fields that Faraday had introduced for electricity and magnetism, those have direction as well as magnitude. Those are vector quantities. Well, as Thompson was learning to just devour in his undergraduate studies, you could create a vector quantity by taking rates of change of a scalar quantity, what we now call the gradient. So the gradient, often with this upside down triangle or nabla symbol, the gradient of a scalar function gives a vector function. And in particular, one could represent these vector field-like quantities of either electricity or gravitation by first writing down a simpler mathematical object, the scalar function, scalar potential. So he first reviews what had already been worked out by a bunch of 18th century savants throughout France, so French scholars throughout the 1700s downstream from Newton himself. They had really translated Newton's own work into this language of directional quantities, like fields, like forces in the gravitational field, and then introduced this simplifying mathematical structure of a scalar potential. And then you could study not just how the field is related to changes in that scalar function, the gradient. You could see how the field would diverge, the divergence of that field related to where the stuff is, the density of sources of gravitation, like the mass density. This was basically a translation or retranslation of Newton's work from the 1680s, translated into this more modern language throughout the 1700s by the French scholars of directional vector quantities being related to derivatives, to rates of change of simpler scalar quantities. That was literally textbook stuff. That's what Thompson is learning as an undergraduate. It's now Thompson, not those French folks in this case, who says, can we make a one-for-one map? He's absorbing this cool Faraday stuff. He says, what if we represent an electric field as a state of strain in the aether, exactly as Faraday had begun groping toward pictorially?
Those electrical forces can be forces related to an electrical field, the state of disturbance of the aether. Sorry, how did that happen? Sorry, gang. OK. You can then relate the electric field again to rates of change of a simpler quantity, a scalar potential, exactly analogous to the gravitational case. You can then ask how the electric field is diverging throughout space and time, like those lines of force getting more and more spread out from each other in Faraday's simple pictures. And that's related to where the stuff is, to the sources of this electrical disturbance-- the charge density, how many electrically charged objects are distributed throughout space per unit volume, say. So he makes a one-to-one map. And he says there's really a precise mathematical analogy between electrical phenomena and mechanics, like Newton's laws. And the mapping is done by this little bit more formal, more fancy mathematics. So far, that's not so shocking. Thompson was among the first to take this textbook side and start doing new things with the cool Faraday E&M stuff. Within one year of when Faraday introduced what we now call Faraday rotation-- remember that thing about the external magnetic field rotating the plane of polarization for light-- Thompson does the same thing. He says, oh good, Faraday has some new cool stuff. Let me put that through my very fancy universe of mathematics. So now, he's still an undergraduate. Thompson is now making brand new stuff-- not just rewriting the known regularities among electrical phenomena, but in fact formalizing the brand new stuff, like Faraday rotation. He introduces a new kind of mathematical object, the vector potential. That was not known even to the French savants. We still call it the vector potential A. And he introduces a new operator, a new way of manipulating that quantity, which we call the curl. Thompson invents curl as an undergraduate. Just let that sink in for a second. So the new semester is early. You all can invent all kinds of new math before, let's say, November. Thompson is literally doing this for fun on the weekends because he can't get enough math. So he introduces a vector potential and this curl because, as Faraday had intuited, the effect of a magnetic field on the surrounding aether was a turning or twisting strain, unlike the straight line strain of an electric field. And so Thompson wanted to formalize what that curl, what that twisting torsion would be like as a different kind of strain in the aether. And he formalizes this new operator, which we now know, and love, and use every single day. It comes out of Thompson trying to treat, mathematically and formally, what he was still learning very actively from people like Faraday, pretty amazing. So far, following Faraday's lead, Thompson begins to interpret this mathematical result mechanically. Remember, we saw that one to one bridge between the Newton mechanics stuff and the electricity stuff. He says, if the same mathematics applies, maybe it's the same physical explanation undergirding all of it. He had learned from people like Michael Faraday and others that the aether seemed to be this elastic, physical medium. It was literally a kind of stuff spread out everywhere. And that medium could be put under local states of tension or strain with twists, with directional properties described by these vectors. So these new mathematical techniques like curl could quantify this physical, mechanical medium's response to pressures like electric and magnetic fields. He really took that to heart.
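As a rough modern illustration of the bookkeeping just described, here is a minimal sketch, assuming a unit point charge with all physical constants set to 1: start from the scalar potential, take (minus) its gradient to get the field, and check that the field strength falls off like 1 over r squared, the same falloff Faraday had inferred from his line-counting picture. This is not Thompson's own calculation, just a numpy rendering of the analogy.

```python
import numpy as np

# Scalar potential of a unit point charge, with constants set to 1: phi = 1/r.
r = np.linspace(1.0, 10.0, 901)
phi = 1.0 / r

# The field is (minus) the gradient of the potential; in the radial
# direction E_r = -d(phi)/dr, which analytically equals 1/r**2.
E_r = -np.gradient(phi, r, edge_order=2)

# Compare the numerical result against the 1/r**2 falloff from the
# line-counting argument (a fixed number of lines spread over 4*pi*r**2 of area).
print(np.allclose(E_r, 1.0 / r**2, rtol=1e-3))  # True, up to grid error
```

The vector potential and the curl that Thompson added on top of this are the three-dimensional continuation of the same idea: directional field quantities obtained as derivatives of simpler potential-like objects.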
In a famous set of public lectures, much like these London ones, he actually came to Baltimore in the United States and gave very, very well attended popular lectures in Baltimore. Thompson basically tried to give his audience a feel for what these kind of experts were doing. This is a direct quotation. So again, people in their fancy opera gloves, like out for a night on the town, hear the great Lord Kelvin by this point, the great, ennobled Lord Kelvin describe about the latest advances. And he tells his fancy audience, stick your hand in a bowl of jelly, OK, and see how it wiggles and vibrates as you move your hand around. That's what it was to do physics because the jelly is just like this elastic aether. It's everywhere. And the job of super smart, ambitious people like William Thompson was to quantify the wiggles and vibrations of that aether, much as you would if you disturbed this pot of jelly. I love that. This becomes known as a mechanical worldview. If the mathematics was the same, make this one-to-one mathematical map, then maybe you could model electromagnetism literally on mechanical models. You have an elastic stuff under states of strain with directional properties. And if the models reproduce all the relevant phenomena, like Coulomb's law and all the rest, all the electromagnetic phenomena, then maybe these aren't only analogies or models. Maybe the aether really was merely a mechanical system just like that jar of jam. So it becomes a mechanical worldview. All of the things of the world are to be understood on mechanical vibrations or responses of this space-filling aether. The last part, then I'll pause for more questions, almost done-- we're going to turn to Thompson's slightly younger colleague, James Clerk Maxwell. I mentioned they had a similar upbringing a couple of years apart-- both Scottish, both come to Cambridge for their undergraduate studies. Maxwell is a couple of years younger. I don't think they actually overlapped in Cambridge. They came to know each other later. Maxwell, like Thompson, becomes steeped in this new Cambridge mathematical training. We'll talk a lot more about the Cambridge training in the next class. And Maxwell develops further analogies between these electromagnetic aether-based systems and mechanical systems. Here's a famous example. In this, by the way, you get some more description of this in that chapter by Bruce Hunt in the reading for today. So to model the effects of a magnetic field on the aether, Maxwell imagines this elaborate system. Basically, it's like a contraption of pulleys, and conveyor belts, and gears. We don't have to worry about the details so much. The ambition is what I want to convey, that there's a physical model of an actual system, a mechanical system of gears, levers, and straps that he thinks is what's really going on in the aether when we do things like put a little patch of aether under local strain from a magnetic field. So between these vortices, as depicted here, are these idle wheels. They stay in place, but can change the rate at which they turn. It really is like these conveyor belts. If an external force changed the rate at which one of these vortices was spinning-- you could change it by imposing one of these external fields like a magnetic field-- you change the rate at which this unit cell is spinning. That changes the rate at which the immediately nearest neighbors are going to be behaving. It's all this Faraday field idea of local neighbor to local neighbor, not action at a distance. 
So this unit cell starts to change its rate of rotation. That changes the speed at which these intervening, these so-called idle wheels spin. That then changes the rate at which their nearest neighbors spin because they're literally connected. So it's all local effects. So all effects are local. And the way that a magnetic disturbance affects something over there is by these local effects propagating through the medium in this gears and straps conveyor belt sort of way. And again, he gives us hints about this even in the brief excerpt from Maxwell's treatise that I assigned for today. So he talks about how the action of one electrical system on another is effected not by direct action at a distance, not the way Newton had said, but rather by means of a distribution of stress in this medium, this elastic medium, or the aether extending continuously throughout space. Likewise, he says a few pages later in the excerpt, the distribution of stress in this medium, this aether, is precisely that to which Faraday was led in his investigation of induction, of the fact that a moving magnetic-- moving magnet can induce electrical effects, like an electric current. At every point of the medium, there's a state of stress. Every part of the aether is under some kind of stress, such that there's tension along the lines of force, pressure at right angles. He's really building a mechanical model, almost sometimes a kind of hydrostatic one and other ones more like the gears of a factory. This became how the Maxwellians-- not just Maxwell, but even his nearest followers, and students, and proteges-- that's what they thought it was to study the natural world, to build more and more articulated mechanical models of the aether with which they could model and even predict more and more features of the natural world, like electromagnetic effects. So one of his acolytes, Oliver Lodge, who himself went on to a very, very distinguished career in physics, he wrote a book a few years later than Maxwell's treatise called The Modern Views of Electricity. And it was all about this Maxwellian approach within this mechanical worldview to account for states of strain in the aether using this much more articulated mapping, the fancy math that people like Thompson, Maxwell, and soon the whole Cambridge group would articulate. Well, this didn't impress everyone. I love this book review. This is the nastiest book review for a long, long time. Here's a review by the French mathematical physicist Pierre Duhem, reviewing Oliver Lodge's book, this Maxwellian book. Duhem hoped he would like it. He also was very interested in electromagnetism. But he just sniffs in this wonderful sneering, French way. It's worth quoting. Duhem says, here is a book intended to expound the modern theories of electricity. And there are nothing but strings which move around pulleys, which roll around drums, which go through pearl beads, which carry weights, and tubes which pump water while others swell and contract. Toothed wheels are geared to one another and engage hooks. This is the real kicker. We thought we were entering the tranquil and neatly ordered abode of reason. But we find ourselves in a factory. This was not high praise. He was like, I thought I was going to learn something edifying for my soul. Instead, this is literally low class stuff. This is the most obnoxious takedown book review that I could think of. So this Maxwellian British approach doesn't please everyone.
And yet, as we'll see in the last little bit, they're starting to make some pretty remarkable strides nonetheless. Last little bit, then we'll pause for questions, the last part for Maxwell. So while working out this mechanical model, how would local disturbances propagate through this elastic medium of the aether, Maxwell starts putting in some quantitative values for the way to characterize the response of the aether. These are really like spring constants. How stretchy is the aether? Or how much stretchiness or tension do you get per unit applied external field? When we apply an external electric field to this part of the aether, how much does the tension change or likewise the magnetic fields? So really, like spring constants, analogous. And that's what he and his colleagues are measuring in their local laboratories and studying. Then he goes back to these mechanical models to ask how quickly would a mechanical disturbance begin to propagate within this mechanical aether. And he finds, really to his surprise-- he didn't see that coming-- these disturbances in the aether, the moment by moment local effects would move at a speed he already knew very well, the speed of light. Here's a direct quotation from his very famous paper from 1865. And he says, the velocity of transverse undulations in our hypothetical medium-- I just love that phrase, transverse undulations in our hypothetical medium, meaning the speed at which these waves of aether are traveling. That speed agrees so exactly with the velocity of light that's calculated from optical experiments-- and actually, most of the astronomical tests-- that we can scarcely avoid the inference that light consists of the transverse undulations of the same medium, the aether, which is the cause of electric and magnetic phenomena. Translated into how we might say today, light waves were literally nothing but traveling disturbances of electric and magnetic fields within this light-bearing, all pervasive aether. He gets that-- he gets this unification of electricity, magnetism, and optics by thinking very, very hard and very quantitatively about the mechanical response of this all pervasive aether. So let me wrap up, and I'll pause. I see a bunch of comments coming up in the chat. So I want to make sure we have time to chat. So let me just wrap up. And then we'll get to discussion. Leading natural philosophers of the 19th century were really quite convinced-- and they had very good reasons-- that the world is made up of just different kinds of stuff than how our contemporaries would answer that question. That opening question, what's the world made of? The folks back then, world-leading, rightly celebrated scholars-- Faraday, Thompson, Maxwell, and for that matter people like Pierre Duhem and his contemporaries as well-- they were convinced that the world was not made of elementary particles that skitter around in a vacuum. But rather, there was no vacuum. And particles were a kind of afterthought-- we'll come back to that in the next class-- that the world is filled with an elastic, physical medium, the aether. And everything that matters is a response of that aether to various kinds of mechanical disturbances. As Lord Kelvin says, stick your hand in a bowl of jelly. See how it wiggles. That's what the world is made of.
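A quick aside in modern notation: the numerical coincidence Maxwell is pointing at in that 1865 passage is that the wave speed built out of the measured electric and magnetic constants equals the measured speed of light. Here is a minimal sketch using today's SI values, which are an assumption of this example; Maxwell himself worked with ratios of electrostatic to electromagnetic units rather than these constants.

```python
import math

# Approximate modern SI values for the vacuum constants.
mu_0 = 4e-7 * math.pi          # vacuum permeability, in H/m (classic defined value)
epsilon_0 = 8.8541878128e-12   # vacuum permittivity, in F/m

# The wave speed that drops out of the electromagnetic theory.
c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"{c:.6e} m/s")  # roughly 2.998e8 m/s, the measured speed of light
```

The same arithmetic, done with the electrical measurements of Maxwell's day, is what licensed his inference that light is nothing but a traveling electromagnetic disturbance.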
Up until the middle decades of the 19th century, many leading, highly accomplished, and very influential natural philosophers, like Michael Faraday, could enter research careers based on this apprenticeship model closer to the Middle Ages or the Renaissance than to today. They often worked in institutes, which had no formal teaching function. They were not, for the most part, based at universities. That's what begins to change in the middle decades of the 19th century. So Faraday, Thompson, and Maxwell, in their own different ways, pursued these analogies between this light-bearing or luminiferous aether and mechanical systems-- Faraday with his rather rudimentary geometrical arguments and his very clever pictorial aids, Thompson and Maxwell with this increasingly sophisticated mathematics. We'll look at that some more next time. They all became convinced of what became called a mechanical worldview. The electromagnetic effects were literally nothing but mechanical effects, mechanical effects on this all pervasive aether. Moreover, this is the birth of what we now call field physics. Local causes yield local effects. There's no action at a distance, unlike Maxwell's description-- excuse me, Newton's description of gravity. A can affect B. But it'll take some time for the effects to propagate. And certain kinds of propagating effects, like these electric and magnetic transverse undulations, they'll show up as nothing other than what we call light. Let me stop sharing the screen there, open it up for questions. There's a huge set in the chat. I think I won't get to them all today. Great, great question. Thank you. So you're right. So there was a spectrum, even among what we now call the Maxwellians. They didn't all have exactly the same ideas, but they shared a lot. And so indeed, some of them wondered, is this a useful analogy? And if so, should we nonetheless keep pursuing it and see what we get out of it? Others were convinced that it's more than that. It's not merely an analogy. It's a helpful analogy precisely because of some deeper similarity. And individuals' views changed over time. And the whole group would have a, let's say, evolving center of gravity. They were more like each other in the Cambridge scene than like, say, the French, or the Germans, or indeed communities further afield, as we'll look at. So I don't want to make it sound like they all exactly had the same notion. They were pretty similar, compared to-- actually, we'll see a really interesting contrast next class-- I think interesting-- with some leading scholars within the German states. So there was maybe the strongest contrast. But you're right that some of these, even within Cambridge or the close Maxwellian colleagues, they were all delighted to pursue the mathematical analogies. And they were getting trained in this very, very elaborate mathematical machinery, as we'll talk more about next time. They shared that. And they were impressed by what the analogies would bring them, either to explain something or to prompt new questions to then try to go investigate in the laboratory. It was a productive set of analogies. That was already what they agreed on. Some then said, it's not just analogy. The world is filled with jelly, effectively. It's a great question. It looks like Julia has very kindly compiled a bunch of questions for me. Let me answer some that Julia has privately chatted to me. And if I miss some others, we'll have time for some more.
So someone asks, by way of Julia, if university and research institutions were separated at the time, who used to teach at the universities? Oh, very good. People would teach at the universities who had gone to university. So it was self-replicating. Now, that's still true today. But we've achieved a kind of monopoly, whereas back then it wasn't such a monopoly. That is to say, people who would teach at universities had indeed been trained like William Thompson, or James Clerk Maxwell, or their own students. And in fact, we'll talk quite a bit about this in the next class and something that I find really interesting. And that was still separated from people who could enter really elaborately, amazingly prestigious careers, like Michael Faraday or even Thomas Young before him, or Hans Christian Oersted, or many other names we encountered today, without going through that kind of formal university setting. So the apprenticeship model-- get your hands dirty on effectively an elaborate internship and just like never leave-- that was still a viable path in to conduct really world-class, cutting-edge research. Then you'd often write your monograph, your whole book about it, maybe deliver some fancy lectures. Maybe some of the scholars at the university would be interested, and maybe write a textbook, and maybe lecture on it in their classes. But they were separated. The separation begins to be reduced, starting in the roughly 1850s-- 1840s, 1850s. And they become much, much more closely associated-- not automatically, not without some switches back and forth. But the trend really sets in, starting in the middle of the 19th century. And again, we'll look at that a bit more directly next time. Very good question. So another question is, why do we go from the convention used in Maxwell of denoting a potential as psi to now using a phi? Oh, very good. Are they different? No, they're not different. So I guess I skipped ahead to the modern-- I showed my prejudices in my slides here, just using the Greek letter phi for potential. The short answer is I don't know why that particular letter changed. What I do know-- and we'll talk about, again, next time-- is that, as you got a taste of it again in the reader today, Maxwell was not using any of this fancy pants vector stuff, right? Maxwell was writing everything out in explicit Cartesian coordinates. That's probably why-- I joke-- why his book was 900 pages. If he just used vectors, I think it would have been a 20-page pamphlet. We'd be done. And so I talk a lot about that, actually, in the coming class. So the particular question, how did psi become phi, it's interesting. And I actually don't know. But the larger question about Maxwell's notation-- and in particular, how it could become so valuable for people like us when he writes these incredibly clunky, pain in the butt kinds of things, partly why I gave you those excerpts from his reader, from his treatise-- it doesn't look just like our favorite textbooks today, even when we're using, quote unquote, "his equations." So it's a great question. The conventions actually show a lot of interesting stuff, even the particular question about psi and phi. I admit I don't know. Regarding the wheels and pulleys as analogies to physical reality, how is continuous rotational symmetry of the physics explained? Oh, that's interesting. I think the idea-- oh, that's good. Yeah, that's good. So I think they gestured toward it by those molecular vortices. That's the two-dimensional slice.
And they certainly were aware of things like spheres. So a lot of it, they talked about ball bearings. So you actually have spherical symmetry. And so they constructed even more elaborate models of pulleys and gears. So they were certainly very attuned, especially thinking about these fancy mathematicians, like Thompson and Maxwell. They were very attuned to what we would now call symmetry arguments in real space. Could I rotate the system around x, y, or z? And so a lot of their models were actually, frankly, spherical, like billiard balls or ball bearings, as opposed to only circles. So we'll come to that as well. These are all good questions. Other questions here from the chat? Or not from the chat, anyone else have any questions that they want to pose for the group? Feel free to just raise your hand at this point too, now that we can see each other. I like the comment about putting your hand in jelly, now your keyboard is sticky. My apologies for your keyboard. I'm just trying to catch up on the chat here. Ah, there's a really interesting comment here from Sabrina. I like this. So Sabrina writes that, I don't know what a GUT is, or grand unified theory. But it seems like we're moving back towards everything being the same thing again, with a smile emoji. So one of the things that-- maybe it comes back to [? Iabo's ?] question. So for some of these folks, the aether was a word for something that you didn't really know what it was-- a subject, a topic they figured had to be everywhere because they could infer its effects. If I wiggle something here, something happens there. There must be some medium that transmitted the influence from A to B. How could a moving bar magnet possibly affect an electric current? Because they're both immersed in the shared medium, this all-pervasive medium. And so they gave it a name. They gave it a name long before they had any articulated descriptions or analogies of it. They called it the light-bearing aether, luminiferous aether, decades before they had any more specific ideas about it. And so in that sense, I think it is kind of like we're in a similar situation today-- and we'll talk a bit about this near the very end of this semester-- with our quest to understand what's the world made of. And we have some other-- we have very finely articulated models, like the standard model, and this kind of neutrino, and that kind of electron, and up quarks and down quarks, and gluons, and Higgs boson, all this very specific stuff. And that accounts, as you may know, for not more than 5% of the kind of mass balance of the universe. We've explained 5% of the universe with unbelievable precision, out to 12 decimal places. Thank you very much. And we know Nothing with a capital N, Nothing about the other 95% of the energy balance. So we give it names. We call it dark matter and dark energy, again as we'll come to near the end, the very, very end of the term. And I think, in some sense, I have some sympathy with either my present colleagues and myself and/or with the people of Faraday's and Maxwell's age. They didn't know what the aether was, but they knew it had to be there. It was conveying all these effects. They could measure the effects of disturbing the aether, as far as they were concerned. So they gave it a name, a kind of placeholder name, partly to help organize their studies. Oh, I'm studying the luminiferous aether as well. That helps get scholars on the same page.
They would give their best estimates of what they thought the thing was like and how it would behave. But they didn't know, right? They had better or less well compelling responses. And I think we're in a very similar situation today-- not by design, not by our hopes-- because we have been stymied now for decades to get any more carefully articulated knowledge about what specifically is dark matter that seems to be filling every galaxy much, much more so than ordinary matter. And what particularly is this other separate kind of stuff that we now call dark energy? So I think, again, I don't want to say we've learned nothing since 1820. But we can see some maybe humbling, humility-inducing parallels between our best efforts today and the efforts of very, very earnest, talented folks from long ago. That's a great-- again, we'll keep that in mind as we march forward throughout the term. Julian asks, why were they so reluctant to accept action at a distance? And how did we come to accept it? Oh, good. Let me take the first part first. Action at a distance, that's a great topic. Newton famously says, when it comes to the phenomena of gravity, I feign no hypotheses. Hypotheses non fingo was the Latin. He says, I think that quantitatively the effect of one mass exerting a force on the other goes as the product of the masses divided by the square of their distance is the law of universal gravitation. What's the underlying reason for that? He says, for that, I feign no hypotheses. He actually had a lot of hypotheses. He just knew not to write them down. A lot of scholars have argued that he was actually convinced, at least by analogy, that gravity was a kind of alchemical effect, even at a time when alchemy was in poor repute. So he said outwardly and publicly, I don't know what it is. But he seems to have at least been inspired in part by these very-- what we would now call occult forces, where something can somehow have an influence on something else instantly across time and space. So Newton was comfortable with the notion of action at a distance, that somehow the Earth could instantly tug on the moon and vice versa without having to wait for some influence to travel between here and there. That, even in its day, was criticized by a number of scholars, even in Newton's own day because they thought it did sound occult. It sounds like witchcraft. How could this thing possibly affect something over there if everything else I see-- I throw a pond-- excuse me, a rock into the pond. I see ripples spread out. The distant edges of the pond aren't affected by my rock until the ripples get there. There were all kinds of common sense observations that suggested to people like Leibniz-- I mean, like Newton's very smart contemporaries-- that the physical world seemed to convey influences by requiring some time for influence to travel from A to B. So Newton's action at a distance already at least had a curious reputation among the savants who cared. And then you get to someone like Faraday, and Oersted, and Thomas Young. And now, they figure they have a reason to finally give up or to finally reject that action at a distance because they have separate reasons based on this wave theory of light saying, oh, there's a seat. There's a physical explanation available to us for how all these physical phenomena are transmitted from A to B across space. They're all sitting in this elastic medium, this aether. And that can have a local disturbance that propagates. 
So there was, so to speak, an added reason to pile on and say, not only did people wonder about Newton's action at a distance in Newton's own day. Now, roughly 100, 110 years later, we have new inputs. We have new evidence from our investigations that suggests things Newton hadn't known about or hadn't taken into account at all-- all the more reason to try to find a local-causes-yield-local-effects kind of response. So that starts getting more and more compelling to this next generation. Now, Julian asks, because he knows what's coming, why do we now accept action at a distance? We do and don't. We have a love/hate relationship with it. I assume, Julian, that you're referring to quantum entanglement or so-called quantum nonlocality. If you're not referring to that, I'm going to hijack this. And I'm going to talk about it because I just love it. So we're going to talk about that actually in a few classes, a little bit later this term. So how have we made our awkward peace with a certain kind of action at a distance, not easily, not right away, and thanks to a lot of hard and contested thinking? That's a preview for lecture 12, I think. So we're going to talk about quantum nonlocality because it's one of my favorite topics in the universe. So we'll get to that, too. But for a long, long time, between let's say 1800 and the 1960s or '70s, people were quite content to say that action at a distance is always, always bad. We never want it. We never want it in our physics. And they had a range of reasons why they thought that was going to be an acceptable approach. Aiden asks, how accessible were the lectures and later universities' research? I mentioned that Faraday did not have much money when he was younger. But was he the exception to the rule? Or did class discrimination prevent-- oh yeah, very good. So in Britain, in British history, I mean, the rough shorthand is we're always talking about class discrimination, so including in Faraday's time. Faraday was absolutely the exception who proved the rule. It was, as I say, this rags to riches story. It wasn't that there was no social mobility. And you can read any Charles Dickens novel to see some characters who have a breakthrough and rise above their station in the language of the time. But in natural philosophy, Faraday stands out as a pretty unusual case. And the universities as well, there were scholarship students at places like Cambridge, then as now. So it wasn't only very, very rich children of very rich families who could go to university. But it was tilted very strongly that way. And in fact, it's only in Britain in the middle decades of the 20th century, like 100 years after the days of Thompson and Maxwell, when there's a very large university reform throughout Britain to make colleges and universities much more accessible for many, many more people from many more backgrounds. That large scale national university reform comes really, in earnest, like 100 years later. So there were students on fellowship. In fact, Isaac Newton was a scholarship student when he went to Cambridge in his own lifetime in the 1600s. Often, they would have to be basically like a butler or a maid for their roommates. Not an ideal situation, right? This was not like work study. It was like, you have to go clean up for your rich roommate, not what I recommend today. So there were ways to be a student at some of these elite British universities from more humble backgrounds, not great. This was well outside the reach of someone like Faraday.
So again, the ladder didn't extend arbitrarily low. But there was a range of social classes in universities in this time, not a full range. And that begins to change pretty slowly all over the world, including in Britain, including, frankly, in the United States. And of course, we still have challenges even here. So that's a great question. Faraday was, indeed, the exception who proves the rule within natural philosophy. Yeah, great question. Any other questions there? If not, I realize we're pretty much at the hour. Those are great questions. Thank you so much. I hope it's not too clunky. I'll try to get better at flowing in the questions and discussion throughout these Zoom lectures. Anyway, thanks, everyone. Great, great questions. I'll post the video as soon as it's done processing. It'll go to the Canvas site. And we'll pick it up there for next week. Everyone stay well. See you soon. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_6_Reception_of_Special_Relativity.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: So today, we are talking about the reception of Einstein's work from 1905 on the electrodynamics of moving bodies. So we'll talk about the reception of what we have come to call the Special Theory of Relativity. So as a quick reminder, we're going to-- sorry, excuse me. The class today has three main parts, like they usually do. We'll talk about some of the early reactions to Einstein himself and to this body of work. And then we'll look at two specific examples of those reactions in a bit more detail. Get more of a sense of in what ways people were engaging with Einstein's paper from 1905, what did they think it was really about, what did they get excited about, what did they ignore that otherwise seemed to have been really important to Einstein himself. So we're going to talk about not just did people read the paper and make use of it, but in what specific ways? How were they creatively making this work relevant from where they sat intellectually and institutionally? And we'll see that sometimes that diverged-- sometimes in dramatic ways-- from what Einstein himself thought was most relevant or most interesting or most important about his own work. And that's a theme again we've seen already a few times so far in this class. Different people engaging with Maxwell's equations in different ways. It's a theme that we'll see really throughout the entire semester. And here's a class I like-- this is pretty fun material because I think we get to really sit with some examples in some detail and really watch people wrestle with what was fairly new material, Einstein's paper from 1905. So just a reminder. What did we cover in our previous discussion just on Monday? Einstein had many different threads that he brought together in this paper from 1905 entitled, "On the Electrodynamics of Moving Bodies." He was a recent college graduate. He was working off-hours on his PhD thesis. He had a full-time day job working at the patent office in Bern, Switzerland at the lowest level, entry-level job of patent officer third class. And in the evenings, he would hang out with his friends like Maurice Solovine and Conrad Habicht and another friend, Michelangelo Besso, in a small circle. And they would read physics and philosophy together, and they called themselves the Olympia Academy even though it was really just three recent graduates drinking beer in the pub talking about things like the work of Ernst Mach. It's not a bad way to spend your time after college. So one of the people they were really interested in was Ernst Mach, and again, we have this great reading that was on the Canvas site, I think designed for this past Monday's class, by our own Amanda Gefter, one of our TAs, who helps us really even more understand what was Mach all about, what did Einstein take from Mach's work? And so I encourage you to go back and reread Amanda's piece as well. So Ernst Mach is this Viennese polymath who introduced this philosophy of science in the later years of the 19th century, really during Einstein's childhood, that came to be called positivism. And as far as Ernst Mach was concerned, the best way to make progress scientifically was to focus on objects of positive experience-- that's where the positivism comes from. Things we could really concretely observe or measure.
And not focus on things that could never, even in principle, be subject to these kinds of empirical inputs. So we had this extensive, often very biting, very significant critique of Newtonian physics, which-- as far as Mach was concerned-- was riddled with these things that could never ever become objects of positive experience. And that really was very, very inspiring for young Einstein and his circle of friends. So one of the lessons that Einstein took from that, among several, in these years leading up to around 1905, was that it would make sense to focus on kinematics, the motion of objects through space and time, things we could, at least in principle, really measure and observe, rather than starting our inquiry with dynamics, the study of forces. That was really turning on its head what had become quite a standard way to approach topics like electrodynamics. We saw that when Hendrik Lorentz considered electrodynamics, he would start with the forces exerted by the ether on objects and so on. Einstein said, let's start with kinematics first. It was that kind of thinking that led him to postulate these opening postulates in his 1905 paper, which hopefully is now quite familiar to you in these opening few pages. This is the subject for your first paper assignment. And to paraphrase these two postulates that Einstein elevates in those opening paragraphs, the first is an extension or generalization of an idea that was actually centuries old by this point. He was really building on Galileo's work. And Einstein says, not only should the laws of mechanics be valid for any observer who is moving at a constant speed, as long as they're not speeding up or slowing down-- they're not accelerating, then all the laws of mechanics, of ballistics, of a ball you might toss and play catch, all those laws are valid whether you're on a boat moving at a constant speed down the river or standing still on the shore. Einstein says, that's true-- that should be true for all of physics. For electricity, for magnetism, for optics, for thermodynamics. So part 1 is elevating or generalizing what was already a fairly familiar notion to many, many physicists. And part two, at least at first, he says, it looks apparently irreconcilable with the first. The second postulate, according to Einstein, is that the speed of light, c, is a constant independent of the motion of the source. And we now know-- he came to that in his own private thinking, or in talking with a small circle of friends-- by a series of these thought experiments like, what would it be like if you could catch up to a light wave like this surfer riding at the same speed as the ocean wave? You would see this frozen field configuration which, Einstein was convinced, could never really happen. So how does he make sure no one gets caught in that absurdity, make sure no one could ever catch up to a light wave? Well, if light's always going to travel at a fixed speed no matter how fast you're chasing after it-- if I'm on a supersonic jet plane, I'm still going to see the light wave travel past me at this fixed speed of light-- then no matter what fancy engines I soup up on my means of transportation, I will never, ever be like this surfer, I will never catch up to and move at the same speed as that light wave. And that was, Einstein considered, really central as a postulate. He doesn't prove it, he doesn't even really motivate it in this paper very strongly. He elevates it to an assumption, then sees what will follow.
And then it's from those starting postulates where, again, just in the opening few pages of this article, where he comes to what he calls the relativity of simultaneity. That events that we might consider to have happened at the same time, to have been simultaneous, will not appear to have been simultaneous to an observer who is otherwise fully legitimate, meaning all the laws of physics should hold perfectly valid for her, she just happens to be moving at some constant speed with respect to us. We will disagree on what counts as simultaneous. And from that, Einstein was convinced, would lead these other phenomena like length contraction and time dilation, not because of forces exerted by an ether, but because we disagree on when to implement our measurements, because we disagree on what counts as simultaneous events. And that comes for him-- from starting with kinematics, not dynamics. And again, that's the form of reasoning that Einstein puts into those opening pages of his paper from 1905. OK. So what were the early reactions to that body of work? Remember, it was published in, really, the leading, or certainly one of the leading journals in Western Europe for professional physicists. It was a perfect venue to try to get some attention. And yet, we now know historically the first reaction was no reaction at all. The baseline summary for today's class is no one paid attention or very few people paid attention. We now know Einstein's name. We might own Einstein's swag-- T-shirts and coffee mugs and bumper stickers, but that simply was not the case in 1905 or 1906 or 1907. It really was quite a delay. So at the time Einstein, as we saw, was this really little-known patent clerk. He was not employed as a research scientist. He was not teaching in a university. That had been his ambition. He just was not able to get that kind of job at first. He was publishing, before then, perfectly adequate, not very special articles on other topics in the main journal. He was building up a kind of portfolio of perfectly competent, not very earth-shattering work. So no one was hanging on, waiting at their mailbox to get the latest issue of the journal to see what Einstein had done lately. Very few people noticed anything at all. So this next point is one of my favorites. Not only did Einstein still not get the kind of job he had most wanted, an academic position, he didn't even get a raise or promotion at his current job. He remained a patent clerk third class even after publishing all of these articles even in the space of just one year within the leading physics journal. Not just the one we've looked at so far, even others we'll look at soon. He didn't even become patent clerk second class. This was really, really a complete shattering silence in response to his work. So one of his close friends, an exact contemporary of his, Max von Laue, whom he happened to meet, who was following a more traditional physics research and academic career-- von Laue was the same age as Einstein, but was making advances of the sort Einstein had hoped to make but was not yet making. Von Laue did his PhD quickly, he got a very prestigious postdoctoral position, he was soon teaching at a central research institute. Von Laue had befriended Einstein, they would talk, and because of his direct connection, Max von Laue and a very small number of others began trying to get a little more attention to Einstein's work on what we would now call special relativity. 
So for example, from his better-established academic position, von Laue wrote a review article that people did pay attention to. He was seen as a rising person to follow. He wrote an article fully four years later in 1909 which helped to bring Einstein's work to others' attention, Einstein's work and that work of others as well. So a few years later, some more people begin to pay attention not because they were following Einstein, per se, but a little bit of a kind of percolation starts. So reaction one is really no reaction at all. The second reaction didn't make Einstein much happier. So when people did notice the work, the minority, the exceptions to that rule who paid any attention at all, they tended to read Einstein's paper, the one that we're all immersed in now, as a clever rederivation of what was, by then, old news. They saw this as a really kind of smart way to get back to the results that leading physicists, like Hendrik Lorentz and Henri Poincare in Paris and others, had been finding for a decade since the 1890s. Remember, we even-- we saw this in our own class discussions a few sessions ago. Hendrik Lorentz had already published on length contraction. He was directly responding to the Michelson-Morley experiment in Lorentz's case. He hadn't just argued that there should be a contraction. He had derived quantitatively an expression involving this factor gamma that we've already seen many times, 1 over the square root of 1 minus v over c squared. That was the exact same quantitative result to which Einstein had arrived by very different forms of reasoning. Lorentz was a very senior, very established, very productive mathematical physicist. People really did want to see what was Lorentz's latest work. They really would hang on his latest articles. So the small number, the exceptions who noticed Einstein's work in 1905 said, oh, I've seen that before. It's a clever way of getting back to what were by then 10-year-old results. And so if anyone gave Einstein credit in the early years, it was as like second fiddle. They would refer sometimes to Lorentz's theory, Lorentz's electrodynamics, or a few of them would refer to the Lorentz-Einstein theory. No one referred to the Einstein-Lorentz theory. So the second reaction didn't make Einstein much happier. Either people ignored it or they somehow, as far as he was concerned, misunderstood this as a trivial reworking of more established results. So even when people gave Einstein's work any attention within this rubric of the framework that Lorentz had been establishing throughout the 1890s, few people seemed to notice that the conceptual underpinning was really quite distinct. Some of the equations were identical, the same factor of gamma, the same coordinate transformations and so on. And yet, they seem to be based, as we would now see with hindsight, on very, very different starting assumptions. Lorentz's work was all about the ether. The ether wasn't only central, it was exerting forces, these dynamics according to Lorentz's view. It was literally squeezing the atoms and molecules in the arm of Michelson's device. It was all about the elasticity and the forces exerted by a physical ether. And as we've seen in Einstein's case, in the opening few paragraphs, he's dismissed the ether as superfluous. His work is not about the ether. He says the ether was a century-long distraction, which is a very different starting point even though they both arrive at some very similar equations.
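For reference, that gamma factor written out explicitly-- a minimal sketch, with the sample speeds chosen purely for illustration:

```python
import math

def gamma(v, c=1.0):
    """The Lorentz factor quoted in the lecture: 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Illustrative speeds, in units where c = 1
for v in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {v:4.2f} c  ->  gamma = {gamma(v):.4f}")
# For everyday speeds gamma is indistinguishable from 1; it grows without bound as v approaches c.
```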
So even when people gave it attention, they misread it or they read it in ways different from how Einstein probably had hoped. And again, this lasts not just until 1906 or '07. Even after 1909, review articles by people like Max von Laue. Here's a great example that I take from Andy Warwick's work. We had some more of Andy's work to read for today, and I'll talk about that later in today's class. But he has this great example in his article. That even as late as 1913-- now we're talking nearly a decade later. So fully eight years later. A leading British physicist had written about what he called "The abstruse conceptions" of Einstein's 1905 paper, which were, quote, "most foreign to our habits of thought," and "as yet scarcely anyone in this country," by which he means Britain, "professed to understand or at least to appreciate." So even the people who knew there was this person named Einstein who worked on the electrodynamics of moving bodies, this was not kind of commanding attention in the way that we might expect because we know how the story ends. We know what Einstein would go on to be recognized for. It was a pretty slow transformation. And we see this reflected in Einstein's own career as well. So as I mentioned, one of my favorite factoids for the entire class today, Einstein didn't-- not only didn't leave the patent office, he didn't even get promoted within the patent office in the light of this quite extraordinary work, his first break, the first opportunity for the kind of career we know that he really wanted-- we know from his personal letters that have survived and are available, he had been hoping for an academic position from his early, early days. His first big break along that new career path comes actually in 1909, around the time that von Laue was writing that review article and others, where he was offered an assistant professor position in Zurich, which was actually very exciting. That's where he had done his own school-- his own university training. Then two years later, he's given a promotion, an offer to move now to Prague. He spends about one year there, then he's hired back at a different institution, back in Zurich. And then finally, in the spring of 1914, in April or May, he's invited to join the very, very prestigious Prussian Academy of Sciences in Berlin. So roughly a decade later, he's promoted to the kind of position he'd been really angling for all along, and it comes not one year, not two years, not three years later, but nearly a decade later. And after a lot more work has come out, not just on the basis of this 1905 work. So has a slowly accelerating academic career starting from this very unusual starting point. So I'll pause there. Any questions on that early discussion of the reception of Einstein's work? This is a good moment to remind you, try not to annoy all your professors in college. You'll get better letters of recommendation that will help you on the job market. That was one of many things Einstein had done so poorly because he was kind of a dummy. Einstein, as I like to say, was no Einstein when it came to his own schooling. So that's life lesson number one for 8225. Write that down. Any other questions about Einstein's-- the early reception of either his own career or this body of work? If not, I'll gladly press on. OK. If questions come up, put them in the chat, we'll have more time to talk about them. But I'll go back to the slides. We'll talk about now this next part. 
We can dig in a bit more to-- an example of a bit more significant engagement, a stronger reaction and response to Einstein's work. And that came actually somewhat surprisingly from one of Einstein's former mathematics teachers, Herman Minkowski. So Minkowski was a very elite established professional mathematician. He was a professor at the ETH-- that's the university that Einstein had worked so hard to get into, the Eidgenossische Technische Hochschule, which I never tire of trying to pronounce, we can just call it ETH. That was the Swiss Federal Polytechnic Institute in Zurich, kind of like MIT. Minkowski was a mathematics professor, not a physicist. And through a kind of roundabout way, Einstein's work was brought back to Minkowski's attention a few years later post-publication. So it turns out, Minkowski was certainly not looking for Einstein's work, and we have, again, some wonderful, juicy, colorful letters that survive of Minkowski basically talking behind Einstein's back and vice versa. So we know that when Einstein was a university student, he'd worked so hard to get into ETH. What did he do? He began cutting classes, especially classes of faculty for whom he had little respect, and that happened to include Herman Minkowski. So even though he was registered and took classes for credit from Minkowski in the Math Department, he rarely showed up, and Minkowski noticed that. So Einstein used to borrow notes from several friends, from his, at the time, girlfriend, who became his first wife, Mileva Maric. She was also a physics student at the ETH and doing actually quite well in her mathematics classes. She would very dutifully attend, take very careful notes, share them with her good-for-nothing boyfriend Albert Einstein. He would cram before the exam and kind of do OK. Another one of Einstein's close buddies from undergraduate days, Marcel Grossmann, did the same thing. Grossmann went on to a career as a mathematician himself. So Einstein would basically cut classes from people like Minkowski and then catch up and just skate through. He would just like barely pass by cramming the night before. Don't do that on paper 1, please. So a friend of Einstein's encouraged the professor, Minkowski, to read this 1905 paper a few years after it'd come out. And Minkowski wrote back, I really wouldn't have thought Einstein capable of that. He wrote-- Minkowski wrote to another colleague, Einstein's paper came as a tremendous surprise, because in his student days, Einstein had been a lazy dog. He never bothered about mathematics at all. So much for Minkowski having a high opinion of young Albert Einstein. So once Minkowski was convinced to even look at this again a few years post-publication, he quickly became convinced that Einstein was still a lazy dog or at least was still making things unnecessarily complicated. So Minkowski thought Einstein had a few interesting ideas in this paper from 1905, but had really messed things up as far as Minkowski was concerned. That he had really missed the main point of his own work. Minkowski was convinced that Einstein's understanding of Einstein's work was not the best understanding at all. So in fact, Minkowski set about reformulating this work in his own way in around 1907, it was actually published posthumously in 1908. Minkowski died rather young, so he gave this famous lecture and it was published soon after he passed away. Who was this person, Herman Minkowski? He wasn't only a mathematician at the ETH in Zurich, he was, in particular, a geometer.
His specialty within mathematics was pure geometry. In fact, he was like an evangelist for geometry. He had written a book, first published in 1896-- the cover here is from a later edition-- called the Geometry of Numbers. He actually wanted to remake abstract fields of pure mathematics like number theory, make those part of geometry as well. So he wasn't only an expert in geometry, he thought geometry would unlock the secrets for all of math and that's all he cared about. That was the key to everything. So he really was a geometer all the way through. So when he then later turned to Einstein's work about two years after it had been published, he did so as a geometer. Not as a physicist, not as a philosopher interested in Mach or positivism. To Minkowski, Einstein's work held lessons that were best understood through using the tools of geometry. So one of the first things Minkowski did is something we all take for granted today. Many of you have probably seen this before. We now will call these things space-time diagrams, or often we'll call them Minkowski diagrams. These were one of the things that Herman Minkowski introduced once he grudgingly began to pay attention to this paper of Albert Einstein's. So what are we going to do with these space-time diagrams? To make things simple, we'll consider motion in only one direction of space, let's say the x-axis. So quite typically, the convention very quickly became, we'll measure locations in space along the horizontal axis or the x-axis. And we'll measure changes in time along the vertical axis. So now we have a two-dimensional plot to measure-- with which to make sense of motion in space and time. These are space-time diagrams. One of the next things that became very, very common very quickly with these kinds of graphs was to use very convenient coordinates, coordinates in which light waves would travel one unit of space, one tick along the x-axis, for every one tick, for every one unit of time. So for example, if we're going to measure time in seconds, a very common unit with which to measure time, then we better not measure spatial distances in either meters or feet or kilometers or even parsecs. We're going to measure them in light-seconds, which is the distance light will travel in one second. So we're going to measure the distances in units such that light will travel one of those spatial units in one unit of time. For example, seconds and light-seconds. So what we're really doing, in effect, is scaling the speed of light to be 1. And therefore, when we go to plot the motion of light rays or light waves on these space-time diagrams, they'll follow these very simple 45-degree diagonals. Their slope is fixed to be always inclined at 45 degrees because we've chosen our coordinates such that they will always traverse one unit of space in one unit of time. Now why will they always travel at that slope? Because if we take Einstein's second postulate seriously, light will always be traveling at that single universal speed, the speed of light. Therefore, they'll have a single fixed slope. They could travel off to the left, and therefore, then we'd have a worldline like this inclined at 45 degrees pointing off to the left. They can travel to the right, but they can't change the slope. That's the impact of Einstein's second postulate. OK. So far, so good. We can do things like we can plot more than just the path of light. Here's this-- this is real data. This is how I spend my time these days in quarantine. I don't go anywhere.
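A small sketch of that unit convention, assuming time is measured in seconds and distance in light-seconds, so that light advances exactly one unit of space per unit of time:

```python
# Units: time in seconds, distance in light-seconds, so the speed of light is 1.
c = 1
times = [0, 1, 2, 3]

# Worldline of a light pulse leaving the origin toward +x: x = c * t
light_worldline = [(c * t, t) for t in times]
print(light_worldline)   # [(0, 0), (1, 1), (2, 2), (3, 3)] -- one unit over per unit up: 45 degrees

# Worldline of someone sitting still at x = 0: only time ticks by
sitting_still = [(0, t) for t in times]
print(sitting_still)     # [(0, 0), (0, 1), (0, 2), (0, 3)] -- straight up the time axis
```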
So my position, the plot of my motion through space and time is really easy now. I just sit in one location in space, like this chair, and I don't move. So all that happens is time ticks inexorably by, I just get older and older and older, but I'm not moving around, or barely. So we can plot what's called my worldline. That's the vocabulary that gets used now with these space-time diagrams, dating really back to the era of Herman Minkowski. So for any object, we can chart its motion through space and time in one of these plots. If it's not changing its location, but only moving through time, it will have a straight line heading straight up the page. It will move-- if it's light, it's going to move at this fixed inclination or slope. And of course, if my family forces me to get off the chair and go out for a jog, then I'll have a more complicated worldline. I'll be moving, I'll be changing my location over time, not nearly as fast as light, not nearly as fast as shown here. This is like supersonic jetpack. But I will sometimes be moving to the right at some speed. I might change directions later and move to the left. But whatever I do, I'll be moving at a speed slower than light, that's for sure. And therefore, the slope of my worldline is limited to never be as sharp as inclined as a 45-degree line. So I might have a nontrivial worldline, but it's a way, nonetheless, to chart my motion through space and time. Many of you have probably seen that before. This comes from the geometer Herman Minkowski trying to make sense of Einstein's work. OK. Now let's go back to one of the scenarios that Einstein introduced in those opening pages, the part that you have for your assignment of his 1905 paper. Let's imagine that I'm standing still-- so I'm back to my very trivial straight-up-the page worldline, I'm back in my chair. And I'm standing exactly equal distance from two colleagues who are at locations A and B. So I'm an equal distance along the x-axis from each of my colleagues. I've asked them to please shine their lanterns at me at the same time. If they've really done that, then I'll receive the light waves from each of them at the same time. The light wave traveling from B toward me and the light wave traveling from A toward me will arrive at my location at the same time-- sorry. And if so, then I know that A and B emitted their light waves simultaneously because the speed of light is fixed. So the light wave from, say, the lantern held by person B couldn't have either sped up or slowed down and had no choice but to travel along this worldline at exactly 45 degrees. Likewise for the light wave from my friend at position A. So if I know I'm equal distances along the x-axis from A and B And I receive both light waves at the same time here, then they must have been emitted simultaneously there. So now I can establish lines of simultaneity. Every point along this x-axis would have the same value of time in my coordinate system. The value of t on my t axis is the same. So the way I've set things up here, event A corresponds to the time t equals 0, event B corresponds to time equals 0 at time equals 0 sitting here on my chair. Likewise, every other moment line of simultaneity is parallel to the x-axis, they're all parallel to the line t equals 0. So I can imagine every event in my set of coordinates that had the assignment t equals 1 instead of t equals 0. All of these were simultaneous with each other, not simultaneous with events A and B, or every point along the line t equals 2 and so on. 
This is, I'm sure, very, very familiar. This is how we use graphs all the time. Minkowski's trying to just give us a more straightforward geometrical interpretation for what Einstein belabors in those opening pages about what do we mean by simultaneous events in our frame of reference. These are events that occur along a line of simultaneity. Well, now we can use our set of coordinates, our x and t, and map the motion of some other object that's moving with respect to us. For example, Einstein's favorite object, a train. So now we can chart the worldlines of the back of the train, the middle of the train, and the front of the train as the entire assembly moves past us while we stand still on the train platform. So this is our mapping of the moving object's motion from within our set of coordinates with respect-- the coordinates mark our frame of rest, and we're going to watch this moving train move through our coordinates. Now remember, we have a partner who rides in the middle of the train, again, very much like what Einstein was describing. So we have a colleague who's sitting in the middle of the train. She knows-- she's exactly an equal distance from the front and the back of the train. She's asked friends on the train to shine their lanterns toward her and she'll be able to determine whether or not her colleagues A and B shine their lanterns toward her simultaneously. Well, light can only travel along 45 degrees on these diagrams for me or for her. Even on the moving train, the light wave that's emitted from point B doesn't speed up, it can only go along a 45-degree angle. That's the real force of Einstein's second postulate. Likewise, the light wave that's emitted from point A can only travel at the speed of light. So in my coordinates, as much as in my colleague's coordinates on the train, those light waves must travel along 45-degree lines, at the constant speed of light. So our colleague who's moving on this train sitting in the middle, she receives the light beams from point A and point B at the same time. So now she knows those emission events must have been simultaneous. How could they not have been? She's an equal distance, the light waves couldn't have either sped up or slowed down. So therefore, all the points along this new line, we can call the x prime axis, all those events occur along a line of simultaneity for her. In the moving train, she knows exactly how to establish which events are simultaneous. She can take advantage of light waves, they travel at a universal speed. It just turns out that which events are simultaneous for her no longer match the set of events that are simultaneous for us. She has a different set of lines of simultaneity. And Minkowski was saying, in effect, you idiot, Einstein, it's just a coordinate rotation. You're just establishing a new set of coordinates like any student of geometry should be able to do. He said it only slightly more nicely in his published article. Meanwhile, the t prime axis is nothing other than the worldline of the zero point of space for the moving observer. That is to say, it's the worldline of the back of the train. The origin point of her coordinates is, say, the back of the train, x prime equals 0. She can measure any distance she wants with reference to the back of the train. She's moving with the train. She can always lay out meter sticks and measure how far the front of the train is from the back. So the origin, the spatial origin of her coordinates is the back of the train, how does that move through space and time?
That just becomes her t prime axis. Much as our location of the point x equals 0 as it just sits still and moves through time, that maps out for us the t axis. The next thing Minkowski showed very easily for him as a geometer is that the angle between the x and the x prime axis, some angle theta, is exactly the same angle as that between the t and the t prime axis. In fact, that angle is directly related to this ratio v over c. So the faster the relative motion between us and the train, the more steeply these lines are inclined, the larger the angle. As the relative speed v rises with respect to that constant universal speed, the speed of light, this angle becomes larger, the x prime axis becomes tilted even more far away from x, the t prime axis gets tilted even more inward away from t. But all we're doing is establishing a new set of axes with respect to which we can mark the motion of objects through space and time according to the geometer Herman Minkowski. So now we come to some of these strange-sounding ideas that Einstein, again, had belabored as far as Minkowski was concerned. Einstein belabored with all this lengthy discussion of Ernst Mach-inspired measurement procedures. Minkowski says, it's just more geometry. Let's take the example of length contraction. We all agree, the procedure for measuring lengths is to measure the location of the front of the object and the back of the object at the same time and then take the difference. OK. So we measure the length of the train to be this length I've called L prime. We have our friend at the back of the train at position A, we have our friend at the front of the train at position B prime. We know those are simultaneous because they lie along a line of simultaneity for us, or a little bit more concretely, both the event A and the event B prime share the coordinate t equals 0. They have the same time coordinate in our coordinate system, in our reference frame. They lie along a line of simultaneity. So we've therefore marked the position along the x-axis of the front of the train and the back of the train at the same time in our reference frame. Now we can just take the difference, subtract the location of this point B prime along the x-axis, subtract it from the origin. OK. That's our length of the train. Front and back, same time, length L prime. Meanwhile, for our friend who's riding along the train, she knows exactly which events are simultaneous because she can have her friends test this with the exchange of light signals. And she knows very, very clearly that events A and B are simultaneous. For her reference frame, A and B lie along a shared line of simultaneity, just not our line of simultaneity. So her friend at the front of the train can mark that location, her friend at the back of the train can mark that location. We know they're doing it at the same time in the moving reference frame. Take the difference, it's the difference L, which, as you can see now, just trivially is longer than L prime. So what Einstein called-- and for that matter, Lorentz called length contraction, our measurement of the moving object is short compared to the measurement conducted by someone moving with that object. L prime is demonstrably shorter than L. And Minkowski says, in effect, of course it is. We're just projecting our measurements onto different sets of axes. We are geometers, we reckon objects with certain sets of coordinates, and we just happen to have different coordinates with which we're making sense of the world. 
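Here's a minimal numerical sketch of that same bookkeeping, using the standard Lorentz transformation formulas in the form consistent with the gamma factor quoted earlier (the lecture builds up to these same rules geometrically a bit further on); the train speed and on-board length are purely illustrative.

```python
import math

c = 1.0
v = 0.6 * c                               # illustrative train speed relative to the platform
g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # gamma = 1.25 for v = 0.6 c

L_onboard = 100.0                         # the rider's own measurement of the train (the lecture's L)

# In the rider's frame the back of the train sits at x' = 0 and the front at x' = L_onboard.
# Map an event (t', x') back to platform coordinates (inverse Lorentz transformation):
def to_platform(t_prime, x_prime):
    t = g * (t_prime + v * x_prime / c**2)
    x = g * (x_prime + v * t_prime)
    return t, x

# Where is the front of the train at platform time t = 0?  Solve t = 0 for t', then read off x.
t_prime_front = -v * L_onboard / c**2
_, x_front = to_platform(t_prime_front, L_onboard)

L_platform = x_front - 0.0                # the back of the train passes x = 0 at t = 0
print(L_platform, L_onboard / g)          # both print 80.0 (up to rounding): the platform's
                                          # measurement (the lecture's L prime) is shorter

# The tilt of the primed axes on the diagram is set by v/c:
print(math.degrees(math.atan(v / c)))     # about 31 degrees for v = 0.6 c
```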
He doesn't just derive the qualitative phenomenon of length contraction, he gets exactly the answer that both Einstein and Lorentz had gotten, even though they had very different forms of argument. He even gets the exact form of this gamma, because as you may remember, the angle here between x and x prime is directly related to v over c, as is the factor gamma, a few lines of algebra, not very hard, to see the relationship between this line segment and that line segment. What's important, again, to underscore is not the algebra-- I'm confident we can all do that. Is to say that for Minkowski, this is not about the ether, it's not about Machian observations that are subject to positive experience, this is just geometry as we transform between different coordinate systems. And you can almost hear him saying, "No duh!" to Einstein who cut all his classes. Minkowski goes on in this article, first worked out in 1907. Using these space-time diagrams, he then shows that the full Lorentz transformation was nothing but geometry. Remember, to Minkowski, everything was geometry. This was just a rotation in space-time. Now to warm up for that, again, we'll do something probably a little bit familiar to start. We'll talk about just an ordinary x-y plane. Two directions of space that are perpendicular to each other, the Cartesian plane, or, if you like the Euclidean plane. We can lay out our coordinates. x will be our horizontal axis, y is at right angles heading up the page. These are two directions of space. We can identify some points in the plane. We'll call that point P. We can label the location of that point in our coordinate system. We'll project what that means. Really, is we're projecting the location of point P onto our perpendicular axes. So we label the x-coordinate of point P as its projection along the P-- excuse me, along the x-axis, it has some value x1. We identify the location of point P, the projection along the y-axis, that's its y-coordinate y1. Fine. Now we all know-- even Einstein knew-- we're allowed to rotate our coordinate systems in the x-y plane. What if a friend of ours had used an inclined set of coordinates, x prime and y prime, that are rotated by some fixed angle away from our original coordinates? The point P hasn't moved. All we've done is change the labels we assigned to the coordinates. In our new coordinate system, in the rotated coordinates, the same point P, which itself hasn't moved, is assigned a different value of its x prime coordinate projecting now to the new axis and a different value of its y prime coordinate projecting to the new y-axis. These are related, quite straightforwardly, by a rotation matrix, by-- if we want to be a little fancy pants, we could say it's a one-parameter group of the rotation in two dimensions. But it's just a set of sines and cosines to take into account how the x and x prime axes are related to each other by this angle theta and the y and y prime axes. So it's actually very straightforward to relate the coordinate labels in one set of coordinates to those in the other. Nothing magic has happened. We can see that what I had called x prime was related to x in this very straightforward fashion, taking into account the different coordinate systems and their angle between them, and likewise, y prime in terms of x and y. So Minkowski says, that's all that's happening with space-time diagrams as well. Now let's imagine some event in space-time. 
So now we're going back to these Minkowski diagrams, the spatial direction runs along the horizontal axis, the time direction runs vertically, we have some event that happens in space and time. That is to say, we assign to it a time that happened at 12:00 noon at my house, a spatial location, some point. Now the observer on the train is going to label that exact same event, but give it different coordinate values because her coordinate system is different. The lines of simultaneity for her frame are different, the worldline of x equals 0 for her is different. So she has the x prime, t prime coordinate. She's still perfectly capable of labeling the event P and assigning to it values in her coordinates x1 prime and t1 prime. And again, remember, the angle of inclination here is directly related to the relative speed between these two sets of coordinates, these two reference frames. Moreover, what Lorenz had derived because of the-- what he assumed to be the elastic forces of an ether acting on the molecules, what Einstein had derived quite differently by thinking about the relativity of simultaneity, Minkowski says is just the set of rules for making rotations in this only slightly more complicated geometrical space. It's still basically like making rotations in the x-y plane. We have, again, to be fancy pants, a one-parameter family of rotations. The one parameter that changes now is the relative speed v, or if you like, the ratio of v over c, but of course, c is a universal constant. The only thing that really varies here is the speed little v. We have some other set of rules. These take the place of our sines and cosines of the relative angle. So once again, we can now trivially, geometrically relate the labels that the moving-- the person on the moving train had assigned to the same event P, relate those to the labels that we assigned standing still on the station platform. He gets precisely the Lorentz transformation, not because of the ether, not because of Einstein's arguments about how we perform measurements, but because he's just a geometer making rotations in a particular kind of two-dimensional space. So to Minkowski, the Lorentz transformation itself was nothing but geometric rotations in space-time. Again, so not surprising since we know Minkowski is a geometer above all. OK. Now this gives us something that's actually new. He's no longer just rederiving stuff in a new way. Now he gets stuff that neither Lorentz nor Einstein had actually recognized or realized. This next part comes really from Minkowski taking geometry very seriously. As he knew as a geometer, even in ordinary x-y plane-type geometry, when we can perform rotations, the coordinate labels of a given point will change, but some quantities remain unchanged even under rotations. For example, the distance. So what's the distance between the origin of my coordinates and the point P? We'll call that distance d. That doesn't change even if I change the labeling of my coordinates. So when I rotate by some angle theta, I change the labeling the description of x and y versus x prime and y prime. I didn't pick up the point P to move it further from the origin. The distance between, say, the origin and point P is invariant under those rotations. And again, it'll take you about 28 milliseconds to figure that out for the geometrical case just using sine squared theta plus cosine squared theta equals 1. 
Use the usual trigonometric identity between these trigonometric functions, and then it's very straightforward to see that the distance is invariant even though the coordinate labels have changed under rotation. That's what now becomes really new, what Minkowski pushes forward as the first real concrete advantage, as far as he's concerned, of this geometrical approach. There's a generalization of distance that Minkowski introduces. It comes to be called the space-time interval, often abbreviated by the letter S. It's like a version of distance for space-time instead of only a spatial problem, only space-space. So what is this space-time interval that remains unchanged even though the coordinate labels might be switched under these Lorentz transformations of the coordinates? So again, we come back to this way to relate our x and t coordinates to our x prime and t prime, that's what Minkowski has rederived from his rotation. And now he shows, using the definition of gamma, that just for the exact same reason that the Euclidean distance, little d, in the previous example remains unchanged, just because of the definitions of sine and cosine of a given angle, given the definition of gamma, that way of relating the kind of rotation, the degree to which x prime is inclined with respect to x, we get an unavoidable outcome. That whether we had chosen to label the event P in our x and t coordinates or x prime and t prime coordinates, the space-time interval, little s, has remained unchanged. Something remains invariant even though our different descriptions of spatial distances and temporal durations have changed. So to Minkowski, this shows that really, geometry-- in particular, the geometry of space-time-- that, to Minkowski, becomes the only thing that matters. Not the ether, not this Mach-like show me how I can taste it or measure it or make it subject of positive experience, it's this geometrical feature of space-time. And in particular, not space and time separately. It's actually Minkowski's work, not Einstein's at first, that really introduces this notion of a union of space and time that very quickly becomes called by a single name, space-time. So that's what Minkowski is convinced is the only really important insight or outcome of Einstein's work: it is geometrical. So as he famously concludes in this article-- it was published soon after he passed away, laying out all this work on space-time diagrams, the invariant space-time interval and so on. He says, "Henceforth space by itself and time by itself are doomed to fade away into mere shadows." These are really just projections on a kind of idiosyncratic choice of coordinate axes. These are mere shadows. "And only a union of the two--" a single union of space-time-- "will preserve independence." So let me pause there again and ask for any questions. Any questions on the Minkowski stuff and on where he was coming from? So Silu asks, what were Einstein's reactions to Minkowski? Good, very good. So the first reaction of Einstein to Minkowski was also thoroughly predictable. He hated it. He never liked Minkowski as a person, he cut his classes. He was like, oh look, this guy still understands nothing important. The feeling was totally mutual. So Minkowski thought Einstein had made a mess of things and only grudgingly paid attention. Einstein read this paper and said, oh, this guy understands nothing, he's not reading Mach, he missed the real conceptual innovation. He's just playing with axes. Now the difference was, Minkowski died in 1908 and Einstein didn't.
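Before going on, here is a quick numerical check of the invariance just described, using the standard textbook form of the Lorentz transformation. The variable names and the numbers are illustrative only, not taken from the lecture:

```python
import math

def lorentz_boost(t, x, v, c=1.0):
    """Return (t', x') for a boost with speed v along x (standard textbook form)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

def interval_squared(t, x, c=1.0):
    """Space-time interval s^2 = c^2 t^2 - x^2 (one space dimension)."""
    return (c * t) ** 2 - x ** 2

# Label one event in the platform frame, then in the train frame (v = 0.6 c):
t1, x1 = 2.0, 1.5
t1p, x1p = lorentz_boost(t1, x1, v=0.6)

# The coordinate labels differ, but the interval agrees up to rounding error.
print(interval_squared(t1, x1), interval_squared(t1p, x1p))
```

The two printed values agree (up to floating-point rounding), which is the algebraic point about the interval: the labels change, the combination does not.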
So Einstein later, over-- and in fact, we'll look at this in our upcoming class session. Over the next several years-- again, about four years after Minkowski's work was published, did Einstein slowly, grudgingly, with other kind of inputs and nudges, begin to appreciate the geometrical view that Minkowski was putting forward. Einstein's original reaction was, this guy still doesn't get it. I would cut his classes again if I were still there. I mean, really, it was just this mutual, like, blah. And it's really about four years later, as we'll talk a bit about in the next class, that Einstein grudgingly comes along to say, geometry is really cool, space-time as a unified notion actually seems important. And then he says, oh, I wish I had cited all that math after all, and he goes back to one of his friends, Marcel Grossmann, and says, oh, not only can I borrow your notes, can you teach me all the stuff I should have learned when I was in college? Lesson number 3 is don't act like Einstein. And Gary asks, if Minkowski had lived, who would have been deemed the greater genius? Oh, that's an interesting question. I don't know. That's a time-varying question. In 1909, still Minkowski. In 1915, I'm not sure. In 1945, I don't know. It's interesting to see. And I imagine the mathematicians then, as now, would herald Minkowski's work for a lot longer. We'd see, I'm sure, some different opinions continue to vary. Jesus writes, it's scary to realize that nobody cared too much about these groundbreaking results for so long. Makes me wonder what kind of groundbreaking work may have actually fallen through the cracks and never gotten a full exploration at all. Yes, that's a fair point. That's something we can keep in mind throughout this whole semester. Not only-- there's all kinds of gatekeeping about who determines at what time what work is worth paying attention to. And institutions play a role, individual rivalries play a role. Soon, we'll come to examples where worldly events play a role, well beyond the control of any given individual or institution. Sexism plays a role, racism plays a role. I mean, there's all kinds of things that factor into who determines this is of value and worth paying attention to at any given time. And that's a general historical lesson we grapple with all the time, not just in the history of science, but much more generally. And we'll have a lot of examples of that. Really, actually, more examples even pretty soon in our own class, alone examples we might think of from other instances. Here's an example where, to his credit, Einstein later, grudgingly, came around to say, oh, that stuff is important, Minkowski does deserve credit. I now see it was really valuable. But it took Einstein himself quite a few years to get there. Doesn't always happen, or there can be a much longer delay. Any other questions on Minkowski's reaction to or remaking of Einstein's work? So Minkowski actually thought this was really a big deal. Minkowski thought his own work was a big deal, which is not shocking. It's also not unique in the history of science. And in particular, he thought it wasn't actually only about coordinate transformations and rotations of axes. He thought that was the most obvious thing to do, and anyone who doesn't do that is a dummy. He had very clear ideas about the proper tools of research. But even in that first article that gets published soon after he passed away, he develops in the later sections a larger worldview. 
Literally, a view of the world that's based on this union space-time notion. It gets very wrapped up with certain trends in idealist German philosophy. He's very inspired by the work of Immanuel Kant, for example. Einstein was, too, but in different ways. And so as far as Minkowski is concerned, he thought he'd found a new deep level of reality. The reality wasn't the ether to Minkowski. The reality was there was this single thing, space-time, and there are certain things on which all people will agree, these invariances, even as we disagree separately on measurements of length or measurements of time. And so that, for him, reintroduced a kind of absolute, an absolutism. Not absolute rest like in Newton, not absolute space, because space and time, he says, are now just mere shadows. But there is an absolute past and an absolute future. There's an absolute away, the things that we would now call, if you've had some more coursework in this area, you'd recognize these as things like spacelike-separated events and so on. He starts mapping sets of events that could not possibly have exchanged even a single light signal yet. Those are absolutely separated from each other. And he gives this very absolutist language to them. So he's actually mapping what he considers like the ultimate absolute structure of space-time. And as far as he's concerned, that's the real lesson that the geometry leads us to. So he actually thinks it's not a theory of relativity at all. Relativity is like an accident and boring and trivial and the stupid thing that no one should pay attention to. Relativity-- oh, we differ, the person on the train says this wasn't the same. He says, that's just accidents. Who cares? What's actually important are these underlying invariances or absolutes. We all agree on certain combinations of things, and that's the real lesson. So for him, this becomes a re-injection of the absolute. Not because we're all sitting in a physical ether. He couldn't care less, he's a geometer. Because there is a unique structure to space-time that the tools of geometry are uniquely well-adapted to help us understand. So does that make sense? Does that address your question? OK, cool. Thank you. So let's see. Fisher asks the question in the chat. We now think of space-time as something that is deformable. Hey, that's cheating. That's-- we're going to get there next week. You don't know that yet. It's 1905 or 1908. But certainly, we'll talk about that quite a lot starting in this coming-- the next class session, where what Einstein himself gets to is to begin thinking about space-time as one thing that also can respond, can be deformable. Originally introduced as a mathematical artifact, was there a point where we moved from thinking about it as a mathematical concept when general relativity was published? Yes, very good. OK, so Fisher, that's a great question. Thank you for that. And we're going to come-- we'll get to sit with that actually quite directly in our next class, so I'm going to punt on that for now. The short answer is it's not-- it doesn't come only when Einstein publishes general relativity. That comes very late, 1915. But it's an evolution in Einstein's own thinking with some colleagues, like Marcel Grossmann and a small circle, over the intervening decade. So starting around 1911, '12, '13, he has a lot of mistakes, a lot of blind alleys. He starts wondering about that more in the terms you describe. Starting well downstream both from his own work in 1905, downstream by four years even from Minkowski's work.
It's a real intellectual journey. So that's a preview, we'll get there next time. Great. Gary asks, how did folks react to Minkowski's work? Again, I think maybe-- sorry, maybe I mentioned that or you asked earlier, who was the-- who made the greater contribution? Yeah, Minkowski was on people's radar screen. He was a big fancy senior mathematician, very widely published, very influential at a central-- a center of learning in Western Europe. So his work was paid attention to mostly by other mathematicians and by some mathematical physicists like Hendrik Lorentz. Not every single physicist, and certainly not every experimental physicist, but it was not buried and forgotten for years and years. It was appreciated in certain specialist communities right away, and then as we'll talk about later, even other people like Einstein belatedly come to admire it as well. So it becomes very, very well-known, partly, I think, because Einstein goes back to it. And Einstein himself becomes, then-- eventually someone that everyone pays attention to, many people. So it has a bumpy few years. But it was not launched into obscurity because Minkowski himself was actually already quite well-established and well-known. Abdul Aziz asks, what was Lorentz's stance with regard to this? Good. Did he jump on the space-time wagon? Great. So the short answer is-- we'll see an example of the kind of reaction actually in the next part of today's class. Not Lorentz's, per se, but an example. Lorentz, really, for the rest of his career was still pretty convinced that the ether exists. And he had a great appreciation for some of these very clever mathematical techniques. He certainly liked it that Einstein's, and now Minkowski's work, led back to equations that he himself had derived from his own set of arguments. But he really was convinced that there's something physical about this elastic medium in which we're all immersed, for a long, long time. So he thought these were more cool tools, more mathematical tools with which to keep asking questions he already considered important. Those weren't, as we've seen, the same set of questions that Einstein thought were important or that Minkowski thought were important. And in that sense, it's a lot like the example we'll look at now from Cambridge. Yeah, very good. So again, we'll look at this much more directly in the upcoming class session next week. The short answer is, it wasn't really Minkowski, per se, because he just had no respect for Minkowski, he kept cutting his classes. It was actually an independent series of thought experiments and conceptual puzzles and calculations that Einstein got immersed in, and he kept getting stuck, and that led him back to realizing geometry might be really important, by which point, Minkowski had died, he went to his former classmate, his buddy, Marcel Grossmann, and had to do a crash course on the stuff that he kind of should have learned when he was an undergraduate. So it was-- he came to an appreciation of the geometrical approach, of Minkowski's work in particular, thinking about a single object called space-time. He gets there. But not because he reads Minkowski's paper and says, oh, my great professor, I'm totally convinced. He's like, that's dumb, it's wrong, it's boring, he's still irrelevant, I would cut his classes again. That's the reaction for like four years, three or four years.
And it's a separate set of lines of thought that-- and interactions with other people, including friends who were better-versed in mathematics, that bring him his own thinking back-- or bring his thinking towards a more thoroughly geometric approach. And then he never leaves. Then he becomes a card-carrying geometer himself once he gets a crash course from real geometers and gets a newfound appreciation because his own questions have led him there. So he comes to greatly admire Minkowski's work, and indeed, to try to build upon it, as we'll see next time, but not because he thought, oh, this is cool work by a smart person. It was, again, a kind of roundabout thing. And so-- and we'll look at that in a bit more detail in the coming class. Any other questions on this stuff? If not, I'll go to this last part of the lecture on some of the pretty wacky work that the Cambridge gang starts doing with this as well. Yeah, thank you, Fisher. So as far as Minkowski was concerned, Einstein's paper, especially the first few pages, which you are all dutifully reviewing and maybe even rejecting, or not, or asking at least for revisions, the first few pages, at least, are really, to someone like Minkowski, a jumbled word salad. This is like, of course I know how to calculate distance traveled if the person's traveling at constant speed. That's like middle school. This is not new. Like, why are you belaboring this, paragraph after paragraph? Why are you using this Mach-inspired language without even citing Ernst Mach about how objects should be subject to empirical measurement? Why are you telling me how to have my friend shine a lantern in my face? It's like, get to the point. Show me equations, show me what you're doing, show me the measurements you performed in your laboratory, show me the missing factor of 2 pi in Maxwell's equation. Do something. Calculate something, you lazy dog! I think what Minkowski was reacting to was this kind of-- not just a kind of everything's in a specific coordinate basis, it's not very unified, we would learn fancy techniques later. It's not just that he doesn't-- that Einstein doesn't scale out the speed of light to make things more dimensionless. These are all things that we would admire or come to take-- to consider a natural move to make because of the benefit of work by people like Minkowski. I think to Minkowski, this was just a lot of philosophical blah, blah, blah that wasn't advancing any new mathematics. It wasn't analyzing concrete experiments in any detail. It wasn't telling us anything new about things that other researchers had done. It was like, hey, I read some stuff and I think it's confusing, why don't people talk about it? I think that was the kind of response from many readers, which is why many people ignored it. Those who bothered reading past page 5 said, oh, I've seen that equation before, that's just Lorentz's stuff. Oh, it's the Lorentz-Einstein theory. I think it's that kind of response. And for Minkowski, I think he would have seen insufficient precision and clarity. And now that's a value judgment, and that depends on our own personality, on our own training, our own toolkit. So for Minkowski, if you're not talking about what happens when you perform rotations, there should be something invariant. Like, that should be lesson zero in geometry. The distance between my house and the town square hasn't changed even if I rotate the street map. Like, "Who doesn't think of that?" I think would have been Minkowski's response.
Once you think in a geometrical way, according to Minkowski, there should be unavoidable follow-up questions, and this lazy dog is too busy talking about, he doesn't like the way textbooks describe Maxwell, like I got no time for this. I think that would have been a pretty reasonable approximation to his first reaction. And then he's like, oh, actually, OK, the speed of light is constant and I can perform these very clever sets of rotations, and of course, therefore, there's an invariance, and of course, there's an absolute away. So he has his own-- he's on his own train. He's on a different reference frame, so to speak. And certain questions are the obvious next ones to ask because he has a certain set of tools and starting point. And that's true of-- I think of all of us, and we'll see a contrasting example in this next part of today's class, we'll see that over and over again throughout the term. Not only about relativity, but quantum theory, about cosmology, about particle physics. And here's a kind of early example where we can sink our teeth into it and say, these people weren't just misreading it, they were doing really cool productive stuff, some of which we still take advantage of today, and yet, what they thought they were doing is neither what we think we do, nor what the original authors thought they were doing. So we have this plasticity of meaning and interpretation even when our equations agree perfectly, let alone when we come up with different equations. I find that wonderful. I love that. And in fact, they're all writing down the same factor gamma, that's fantastic, because then we can really map what is the same and what's not the same conceptually, let alone in the exchange of light signals. So keep that kind of question in mind. We'll come back to that theme, really, over and over this semester, it's a great question. Let me go on to the last part and look at our favorite Cambridge Wranglers, or at least my favorite Cambridge Wranglers. So, Minkowski wasn't the only person who slowly, sometimes kind of grudgingly, began to read Einstein's work on the electrodynamics of moving bodies. Another group that really did dig in in pretty substantive ways was a subset of these Cambridge Wranglers, the folks that now we've talked about a few times at Cambridge University. For them, much like for Minkowski, there were elements of real value in Einstein's work, they just weren't the elements that Minkowski considered valuable or that Einstein considered valuable. So let's take a little look at the kinds of things that they did with Einstein's paper. As a warm-up to make sense of this-- and by the way, for this part, of course, I'm drawing very heavily on the article in the reading for today from Andy Warwick. And I'll talk through some of the specifics-- some of the details of that article might be pretty confusing, so we'll talk through what were they doing at least a little bit more. The historical point I want to clarify is really just like Minkowski in the sense that very smart professional researchers were making sense of Einstein's work differently than Einstein did, and that didn't make them wrong, it made them-- it indicates that they were finding elements of value and of interest in Einstein's work different from Einstein's own and sometimes different from our own. That's the historical lesson. And if you get a little bit lost with image charges and conformal transformations, that's OK. In this class especially, that's OK. 
But I think it's still fun to see what were they really doing, so that's what we're going to talk about a little bit here. So many of you, again, might have seen, even maybe in high school classes on electricity and magnetism, there are all these really clever tricks that we still take advantage of to try to make pretty complicated problems simpler and yet can still get exact solutions. One of those that we learned pretty early on is called the method of image charges in electrostatics. So let's say some meanie, one of my mean old colleagues in the Department of Physics, assigned to you the problem to solve for the state of the electric field between a stationary point charge with a positive charge, some charge plus q, and some infinite in extent, some infinitely long, perfectly flat conducting plate that is grounded. That's a pain in the neck to find the exact value of the electric field. It's a vector quantity, and you have to worry about how it's changing through space. So one thing that you may remember from studying Maxwell's equations is that for this kind of scenario, the field lines of the electric field all must intersect the plate at right angles. That's to remain consistent with the fact that this is a grounded conducting plate. If they didn't all intersect at right angles, you would actually build up a voltage difference, it would not satisfy the boundary conditions of the problem. It's a pain in the neck. Meanies. So there's a pretty clever trick. We can solve a much, much, much simpler mathematical setup to get the exact solution for the original problem. Forget about the plate. Throw the plate away for a few steps. And instead, insert a second point charge of equal but opposite sign. Equal-- equal magnitude, but opposite sign. So now in this new scenario, we pretend there's nothing physical here, but an equal distance away, on the far side of where the plate had been in the original problem, we put down an imaginary image charge that has a negative charge, but of the same magnitude as the original positive charge. Now we can easily solve for-- much more easily solve for the value of the electrostatic potential. It's just two bodies. It just goes like a 1 over r between them. We can take the gradient, see how that changes through space. We can do everything we would ask to do to solve for the exact behavior of the electric field by replacing one problem with a simpler but equivalent one. That's the method of image charges. You can learn about that on Wikipedia or in any textbook. It's a standard technique that goes back really centuries. It's pretty powerful. Well, that's baby stuff. We're talking about Cambridge Wranglers here, or for that matter, MIT students. We don't have to do one single point charge behind what would have been an infinite unbending conducting plate. That is boring. Let's do some crazy stuff. And that's what the Wranglers got very good at doing with their personal tutors for the Tripos exam. So a very similar related technique became known as the method of inversion. And that's, again, really just mapping a difficult problem into a simpler one. And again, this illustration is taken from Andy Warwick's article, which then is elaborated upon in this book of his that I really can't recommend strongly enough. I love this book, Masters of Theory. But the article itself really gets this part across. Let's first do a little geometry, a little mapping exercise, which is what the Wranglers would have learned to do very early with their coaches.
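For the record, the trick just described can be stated compactly. This is the standard textbook form, with symbols chosen here only for illustration: a charge q sits a height d above a grounded plane at z = 0, and the fictitious image charge -q sits at z = -d. Then

```latex
V(x,y,z) = \frac{q}{4\pi\epsilon_0}
  \left[
    \frac{1}{\sqrt{x^2 + y^2 + (z-d)^2}}
    - \frac{1}{\sqrt{x^2 + y^2 + (z+d)^2}}
  \right],
```

which vanishes everywhere on the plane z = 0, so the grounded-conductor boundary condition is satisfied automatically, and E = -grad V in the region above the plane is the exact field of the original, harder problem.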
Take any single point A. Here's the point A for our case. Any point that happens to be inside a circle of radius k. So the point A is contained within that circle. We can then do a unique mapping to an inverse point, kind of like where we placed our image charge in the previous example. There is a one-to-one map to a point, corresponding uniquely to that point A, that's now outside the circle. And so the way we determine where that point will be is by requiring that the distance from the center of the circle to our original point, that distance OA, times the distance between the center of the circle and the new point, the inverse point-- the product of those distances must equal the square of this radius. That's enough to do this mapping. It's actually called a conformal mapping. Now what if that point A is actually moving along this curve, this orange curve, between points P and Q? We can imagine, there is a whole set of points, each of which lies entirely within this original circle. For each of these points along the orange curve between P and Q, we can do the same trick. We can map them to each of their corresponding inverse points. For every single point along this orange curve, we require this relationship to hold. And we map them to an inverse arc, a collection of inverse points that are entirely outside the circle. So if we imagine swooping from point P to Q within that circle, the mapped motion, the uniquely corresponding motion, would swoop from point P prime to Q prime. Now this transformation is called a conformal transformation. We use this all the time-- I use these all the time, even to this day in my own physics research. They're really, really helpful in topics like general relativity and cosmology. The Wranglers are doing this for electrostatics to get ready for their exam. So this transformation preserves angles, the angles between any of these arcs, any of these line segments, this angle marked w here. Those angles are preserved, the lengths are not. You can see, the line segment PQ is clearly a shorter overall length-- the distance between points P and Q is shorter than the distance between the corresponding inverse points P prime and Q prime. So very generally, conformal transformations preserve angles, but not lengths. If that sounds confusing to you, you're in good company. The point is, that's the kind of stuff these Wranglers would have learned with their coaches pretty early. There are properties of these geometrical transformations, the upshot of which is to make complicated problems easier. So now let's get to an actual Tripos-type problem. Now let's say, because you're at Cambridge in 1880 or 1890, they don't ask you to solve for the field between a static positive charge and an infinite conducting plate. Come on. We are Wranglers. Instead, we want to solve for the electric field lines, or let's say the equipotential surfaces where the electrostatic potential is equal-- so, equipotential surfaces. Due to a single charge at location O that's near a grounded conducting sphere centered at B. So here are the actual physical parts of our problem. Some charge here sitting at rest, electrostatics, some electric charge at point O that's near some grounded conducting sphere-- that's a harder version than our infinite plane. Well, let's use this method of inversion to treat what turns out to be a much simpler problem. So we have to map out the simpler version first.
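Here is a minimal sketch of the inversion rule just described, assuming the circle is centered at the origin; the function name and the numbers are illustrative only:

```python
def invert_point(point, k):
    """Map a point to its inverse in a circle of radius k centered at the origin O.

    Defining relation: |OA| * |OA'| = k**2, with A' on the ray from O through A.
    """
    x, y = point
    r_squared = x * x + y * y
    scale = k * k / r_squared   # then |OA'| = scale * |OA| satisfies |OA| * |OA'| = k**2
    return (scale * x, scale * y)

# A point inside the circle of radius 2 maps to a point outside it:
inside = (0.5, 0.5)
outside = invert_point(inside, k=2.0)
print(outside)                         # (4.0, 4.0)
print(invert_point(outside, k=2.0))    # inverting twice returns the original point
```

Applied to electrostatics, this same kind of inversion yields the familiar textbook image for the grounded sphere in the problem just posed: a charge q held at distance d from the center of a grounded sphere of radius a behaves, outside the sphere, as if accompanied by an image charge q' = -q a/d placed at distance a^2/d from the center. (That is the standard modern statement of the result, not a quotation from Warwick's article or from the lecture.)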
Instead of considering either point O or this conducting sphere B, we introduce the equivalent of a kind of image charge. We consider a charged conducting sphere centered at point A. We put into this-- like our image charge-- into our new, geometrically simplified system some positive charge and a sphere at A. Then we draw the field lines and the equipotential surfaces for that. Now that's very easy. It's just a sphere. They're just going to be concentric circles around it for the equipotential surfaces. The field is just radial. It goes back to Faraday's lines of force. It couldn't be easier for these Wranglers. Now we use that method of inversion to map it back to the original problem. We're going to enclose this imaginary object A in a sphere of radius k. And we will map every point along this orange curve to the corresponding image points outside of our circle. And now we've mapped the equipotential surfaces for the original problem based on this much simplified problem. If that's confusing to you, first of all, good. You've led a healthy life. This is not on the exam. This is just meant to be an example to convey the kind of reasoning that these Wranglers would have been immersed in, of the sort we've seen a few times. This is like standard Tripos stuff. And remember, unlike us here at MIT, the undergraduates at Cambridge in the 1880s would have done all this on a timed exam that determined their graduation rank and would be published in the national newspaper, so we won't ask that of you. So now let's come back to what Andy is writing about in this article. This method of inversion and these geometrical techniques, like conformal transformations, that was like daily stuff for these Cambridge Wranglers by 1900. These techniques apply to electrostatics. Imagine, you have a fixed charge sitting still at point O, a fixed grounded sphere, nothing changes over time. And that is to say, a little more quantitatively, you're always solving for situations in which the electric potential is not varying. It's a problem in electrostatics, not varying over time even though it varies over space. So what these young recent Wranglers wanted to do, Ebenezer Cunningham and Harry Bateman, was to actually use Einstein's work to tackle a problem they cared about, which is to generalize these inversion problems and conformal transformations to situations where objects might be moving around-- to the electrodynamics of moving bodies, not to electrostatics. You can see why Einstein-- the title of Einstein's paper would have caught their attention. They wanted to understand how to generalize these Wrangler-like techniques to the question of the electrodynamics of moving bodies. And they thought Einstein's paper had some pretty clever tools in there to do that. So they could identify, using Einstein's work, all the transformations, which we would now call the Lorentz transformations, lambda-- Einstein had separately rederived those in a later part of the paper that I didn't assign for our class, but it's in Einstein's paper, too. These are the transformations that would leave the original wave equation invariant. So then they could use these inversion techniques, these mapping techniques, the characteristic Wrangler techniques, to map into new dynamical solutions that could vary in time and not just space, keeping in mind that now the time coordinate, as well as the spatial coordinates, might need to be shifted thanks to this Lorentz transformation.
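For readers who want the equation behind that last remark: the "original wave equation" is the one obeyed by electromagnetic waves in empty space, and the Lorentz transformations are exactly the coordinate changes that leave its form unchanged. In standard modern notation (not the notation Cunningham and Bateman themselves used):

```latex
\frac{1}{c^2}\frac{\partial^2 \phi}{\partial t^2}
  - \frac{\partial^2 \phi}{\partial x^2}
  - \frac{\partial^2 \phi}{\partial y^2}
  - \frac{\partial^2 \phi}{\partial z^2} = 0 ,
```

and the claim is that the same equation, with the same form, holds in the primed coordinates (t', x', y', z') obtained from the originals by a Lorentz transformation. Finding the largest family of transformations with that property is the generalized, dynamical version of the Wranglers' inversion game.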
So they wanted to use Einstein's paper to find the most general class of transformations lambda that would leave this harder problem invariant. To do for electrodynamics what they had already learned with their coaches to do for electrostatics. That's what relativity was all about for these folks. They were reading Einstein's work very carefully. They were doing extremely productive, hard, original work. They just were doing things that Einstein didn't notice or care about or think were important, or that Minkowski did. So they weren't trying to understand the ether, per se. Again, they weren't trying to understand Machian positivism. They weren't even really that concerned about Minkowski space-time. For them, it's about wrangling these sorts of manipulations, mapping complicated problems into simpler ones, and Einstein's paper then gave them additional tools with which to tackle their problem. So they weren't ignoring Einstein's work, but they also weren't card-carrying Einsteinians. So that's what I wanted to go through just to make sure that the real historical point of Andy's paper would be clear. It gets pretty complicated, the details of these inversions and all that. I'd be glad to chat more about that. But the main point-- the main lesson for us is really that they're also doing real work with Einstein's paper, just different work than Einstein. So let me wrap up for today. Researchers didn't just read Einstein's paper and become convinced, they didn't become like converted Einsteinians for a long, long time. Most people ignored the paper altogether. Those few who paid attention often thought it was just a minor elaboration of previous work. The few who really did pay attention to it much more squarely did so from within their own context. We can even jokingly say, within their own frames of reference, much like Einstein's favorite example of observers on the platform or on the train. They do different stuff with it. Different parts of the paper are relevant to them and of value. They're not misunderstanding Einstein's work, they're like utterly understanding it. They're doing real stuff with it, just not what Einstein had first intended. So we saw Minkowski reinterpreted it in terms of a certain kind of geometrical vision. The Cambridge gang, Cunningham, Bateman, and some of their immediate circle, they do other kinds of geometrical things with it because they have other concerns, more related to the properties and-- transformation properties of differential equations. The point is, none of these readers seem to care much about what was most important to Einstein. As far as Einstein was concerned, none of them were getting the point of his article even though these were among the people who paid any attention at all. Einstein had argued in those opening paragraphs that the ether was merely superfluous. Who cared? They didn't-- that wasn't the part that landed with them. Moreover, Einstein had insisted that we start with kinematics instead of dynamics. Forget-- that didn't wash, that didn't register. Instead, as we've seen before, and we'll see again, even the exact same equations could inspire quite different interpretations or different meanings. So I'll stop there. Any other questions on that last part about the Cambridge gang? And again, if the particulars of conformal transformations or inversion went by too quickly or Andy's paper was confusing, please don't worry, I'd be glad to chat more if it's of interest for you, but the historical point is just to say they were doing real work.
It's still in our textbooks. It was valuable work. It was just different work than Einstein's or Minkowski's or Lorentz's and so on. That's really the point I think we can hold on to in this class. Any other questions on that? Anyone want to volunteer to do more inversion problems for conducting spheres and-- I hate that stuff. That stuff, yeah. Julia says no. I'm with Julia on this. That stuff makes me-- that really makes my head hurt. And you want to do that for five hours before lunch like a good Wrangler? That's another question for you. Your math homework before lunchtime. OK. If there's no more questions for now, then I'll pause there. We'll pick it up early next week. And we'll-- and we'll get into many of the questions that were raised even for today. What does Einstein himself do with all this work in his march toward what we would eventually call the General Theory of Relativity? So we'll come to that. That will occupy us, then, for our next class. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_9_Rethinking_Matter.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: So for today, we're going to continue this story, or this unit, of what we began thinking about together last time, a body of work, a set of ideas that came to be grouped together under this title old quantum theory. And as I mentioned last time, that only makes sense once there's a new quantum theory to contrast it with. So this is a term not only made up many, many years after the fact, this was introduced even by many leading physicists right around the years that we're describing. So by the later years of the 1920s, it already had become common to make this kind of marker to distinguish between the scattershot developments that had been unfolding between roughly 1900, and 1924 or '25, or so, and to bracket those together under this umbrella term, old quantum theory. Because as we'll see, starting actually in the next class, by the mid 1920s, there had been some new bodies of work put forward that look like this could really become the new quantum mechanics and not only a kind of hodgepodge of suggestive ideas within what then became known as old quantum theory. So as last time, I just want to remind you these dates that I include here are convenient and limited. So as we've seen throughout the entire term so far-- we'll keep seeing examples of this well beyond even our quantum theory unit-- the dates that we assign to these things have to be taken with some grains of salt. We have to squint at them a bit. They're helpful by giving us a rough sense of the flow and where within this evolving body of work do these particular developments fit in. But remember that it's not like someone conducted a single experiment or published a single paper with a date. And then afterwards, everyone agreed and moved on. And again, we'll find that to be true for these examples today. We've certainly see it true in previous discussions. So with those caveats, today, we're going to look at this more purplish rectangle to contrast it with this reddish pinkish one. So last time, we looked at some of these moments of physicists sometimes very begrudgingly rethinking the nature of light, not always going into it with that as their plan, but coming out having rethought how physicists should describe light. And we saw instances of more and more attention to quantization, discreteness. Maybe light even had a kind of specific particulate or particle-like behavior. And today, we're going to look at a kind of inverse or the other half of those discussions, as physicists began rethinking the nature of matter. Again, not always going into that with that in mind, but that's how these developments really began to add up. So today, we'll look at this second half, the mirror half of the old quantum theory, rethinking matter. So by the closing years of the 19th century, really in the last say two or most three decades of the 19th century, there really seemed to be a coalescence, a pretty amazing achievement among physicists and chemists in many, many parts of the world. It looked like physicists and chemists, natural scientists more broadly, had figured out a really deep regularity to the stuff of which the world is made. Here's an illustration of one of the very first published versions of Dmitri Mendeleev's periodic table. It doesn't look quite the way we would expect it today. It's actually oriented in the opposite orientation than what we're used to. 
But this was a remarkable effort at ordering, ordering the stuff from which all other things seemed to be made. And on the strength of work like Mendeleev's and many others, by the late 19th century, it seemed clear that matter consisted of chemical elements that would behave certain ways in chemical reactions. And a given chemical element-- hydrogen, beryllium, iron, uranium-- consisted of physical atoms. So there was a two-part way of trying to make sense of the world. And that seemed really great until suddenly more complications began to really crowd in. So not too long after this really amazing, synthetic achievement to bring together lots and lots of disparate phenomena in things like the periodic table and this association between chemical elements and physical atoms, pretty soon, there were some new challenges to that whole scheme, starting really in the mid 1890s and moving sometimes very quickly. This caught a lot of people by surprise. Some of this was accelerated by new types of instruments. There were whole new kinds of ways of trying to probe the behavior of matter and radiation-- cloud chambers, fluorescent screens that would glow and they interacted with certain matter or energy, photographic techniques. Remember, photography of any kind was still pretty new in the middle decades of the 19th century. So trying to make reliable, repeatable use of photographic plates, that was still pretty new, even by the 1880s and '90s. So through this constellation of new techniques, some accidental incidents in various laboratories, unexpected, researchers in many parts of the world, in the 1890s in particular, began to identify several different kinds of radiations or emanations, some kind of energy that could be flowing out of what had once seemed like stable, boring matter. And so there are a couple of these points along the way that became worldwide sensations in real time. Not only do we look back and say, oh, that was exciting. But even at the time, this became literally headline news in many, many parts of the world-- I mean, throughout multiple continents. And so we can again attach names and dates to them, but understanding these were extended activities. One of those had to do with what became known as radioactivity and though there were many people involved in this, very quickly among the most prominent, the most famous researchers on this area included Pierre and Marie Curie, shown here. Marie Curie, originally Marie Sklodowska, a young student from Poland, she'd made her way to study in Paris. She met Pierre, who was quite a bit older. They got married. They teamed up. They became a remarkable research team, investigating some of these radiations or radioactive behaviors of a range of elements-- in fact, identifying whole new forms of matter and radiation interaction. Right around that time, roughly a year later, there was a distinct set of developments coming, in this case, out of Cambridge University, associated with JJ Thompson, who is shown here. He was experimenting with different kinds of stuff, and different instruments, and different techniques. He began identifying what became known as cathode rays. We would now call those electrons, a different sort of energetic beam that could come out of one form of matter and cause effects on other forms of matter. So all these activities, which were often quite surprising when first identified, they suggested that atoms could fall apart and that they have internal structure. 
And this was a big deal because, as you may know, the word atom itself was derived from an ancient Greek word that means indivisible or unbreakable. And so all this effort from Lavoisier through Mendeleev to understand the regularity of chemical elements, associate them with physical atoms, that was really under some new strain if atoms themselves were actually not atomic if they really could be broken apart, if they have some internal structure. And so by 1900, with this remarkable series of developments throughout the 1890s, researchers had begun to identify different kinds of stuff that could fly out of matter, different kinds of radiations. And they had different properties. So for example, some of these emanations, or radiations, were easily deflected by an external magnetic field. You could actually watch, see the bending path. Others seemed not to be deflected at all by magnets. Some of these could fog, could begin to expose a photographic plate, leave a kind of smudge, even if the plates were protected, wrapped up in dark paper in some desk drawer, not in the direct line of fire. So they had different properties. So what researchers began to do was just try to classify them by different kind of neutral sounding names. They just began at the start of the Greek alphabet and labeled one kind alpha rays, another kind beta rays, a third type gamma. And they figured they'd just keep going, as many letters as needed. They didn't know what caused these emanations or radiations. But they could at least begin to classify or distinguish among them because the radiations themselves really did seem to have some different properties. And again, this was remarkably interesting news at the time. The Curies, in particular, became really media sensations, especially Marie Curie. As you may know or can imagine, this was really, really unusual to have a world class leading researcher in the sciences who happened to be a woman. And there was tremendous speculation about was she just like other women, was she a caring mother or not, really from today's point of view horribly, horribly gender stereotyped, often very unfair attributes attributed to her in the wide press. When her when Pierre passed away, she had a relationship with another researcher. It was a French scandal. And people wanted to know these scurrilous details of her private life. On the other hand, she was constantly in the news. And people were really curious. And she was, of course, the first person to win two Nobel prizes-- one in chemistry and one in physics. And I just find this remarkable. This wasn't only true in turn of the century Paris. But as recently as last year, there was a major film some of you might have seen starring a major film star Rosamund Pike playing, yet again, yet another film adaptation of the life of Marie Curie. So her personal story continues to fascinate as well. And that was true really from the beginning. In real time, she was already a kind of media sensation, as were these findings, things like these mysterious ghost-like rays that often people's eyes couldn't register, but could have effects across the room. It was a really remarkable conjunction. So one of the younger folks who got really excited about this was Ernest Rutherford. He was a bit younger than the Curies. He was the next generation coming up. He grew up in New Zealand. But he was a very promising young student. And he received a fellowship to come to England, to Cambridge, to finish his studies there. 
He actually studied under JJ Thomson, the person who, in 1897, was experimenting with these cathode rays that I mentioned briefly before. He joined Thompson's group at the Cavendish really right at that time. Rutherford was there just in the midst of this tremendous excitement over some of these new rays, like Thompson's cathode rays. Rutherford became fascinated by this topic of radioactivity. It was all the news at the Cavendish, in Paris, and beyond. And he began identifying new sources that could emit by now familiar rays, meaning the alpha rays had been identified people now knew they had different properties than the beta rays and so on. But Rutherford found new sources that could give off that kind of radiation. And he also began trying to systematize the study of radioactivity more generally. It's to Rutherford whom we owe the concept of a half-life. The Curies were very much involved with this. Rutherford helped to formalize it-- the half-life being a term that's probably quite familiar to us today, the time during which the radioactivity of a sample was measured by Geiger counters or other things would fall by half. So that's the characteristic time during which you would measure these kinds of emanations coming out. And Rutherford in particular became fascinated by what we now call decay chains. So he could start, much as the Curies would do, with a certain material, and then watch that material decay, for example, by emitting alpha rays. And then that original substance would transmute transform to a new kind of substance that would later be called radon. If he began with radium, one of the elements first scrutinized by the Curies, what would happen to a sample of radium after a bunch of alpha particles would come out? And that was the kind of thing that Rutherford wanted to study and systematize. So the decay products from alpha decay of radium would be identified eventually as radon. The chain we keep going. Radon would emit alpha particles, become radium a. But now, it's called polonium, actually in honor of Marie Curie's homeland of Poland, and so on. So Rutherford got very quickly into this game, into this study of radioactivity. One of the things he did then, when he was only a little bit further along, was to team up with another colleague, Hans Geiger, originally from Germany, but at this point working in the UK. We now refer to Geiger counters. They were actually developed in partnership between Rutherford and Geiger, in this first decade of the 20th century. Here's one of their earliest own illustrations of how this would work. This is, of course, a more modern version, a kind of Wikipedia version. And the idea was that this was a device with which to measure these emanations, especially electrically charged radiations coming out of these radioactive materials. Very clever and actually very inexpensive and robust. That's partly why these things are still in such widespread use to this day. Again, in the modern version, you have two conducting coils or beams-- the anode and the cathode. And you have a voltage difference between them. Inside the chamber, you have inert gas. It could be a kind of noble gas. If ionizing radiations, one of these emanations from some radioactive source, enters the chamber, it can actually ionize those otherwise neutral and fairly inert atoms. It can, as we now know, rip off an electron or something like that. So now, you have a charged ion running around in this chamber. It will be attracted to either the anode or cathode. 
You'll complete a circuit. You'll have a readout. So you can convert certain kinds of physical interactions within the chamber into measurable electric outputs from this otherwise very simple closed device. So by 1908, this had now become a new kind of tool in many, many laboratories because it was both cheap and reliable. And the idea behind it was pretty easy to convey. Using these kinds of tools, Rutherford, and Geiger, and their immediate circle in and around the Cavendish were able to determine that alpha particles actually carried twice the electric charge of Thomson's still pretty new beta particles, but of the opposite sign. Basically, they would go to the anode, not the cathode, and vice versa. So you could begin measuring more and more properties of these different kinds of radiations or emanations. So very soon after there was a reliable way to characterize these new rays, these alpha particles in particular, Rutherford got a new job. He was now a young professor at Manchester University in northern England. He built up a big, big flourishing research group. And the idea now was not only to study alpha particles as the subject of research, as he'd been doing since his student days, but to flip it around and use alpha particles as a tool to learn new stuff about other things. So the alpha particles were domesticated, I think, really remarkably quickly from being the topic you'd scrutinize to being just another tool on your lab bench to learn about other things. And Rutherford was very proactive, as others were. But Rutherford was really very creative in that transition. So beginning as early as 1909, his group began directing alpha particles. They would have a radon source of the kind he'd been studying for years. It would be what's called now an alpha emitter. So the natural form of radioactive decay of a radon source is to shoot out beams of alpha particles. So that was in this block here. And they could kind of collimate the beam. They could encase the radon source in lead shielding with a very narrow opening to have the alpha particles shoot out in one direction. They would shoot them towards a target of a very, very thin metallic foil, often gold foil, very, very thin, and then surround almost the entire foil with this new fluorescent screen. So when the alpha particles would ricochet off that foil, they would light up this fluorescent screen. This was incredibly-- that's very easy to describe. Even a nice Wikipedia-level cartoon makes it look so straightforward. This was incredibly painstaking work. The researchers had to sit in a darkened room for like a minimum of half an hour just to let their eyes adjust. Imagine sitting basically in an entirely blackened room-- your mind starts playing tricks on you. You start seeing flashes where there might not be any-- just so that their eyes would be sensitive to these very, very modest little flickers from the fluorescent screen. And you basically had to sit there alone in the dark and count not just how many flickers but, as I'll say in a moment, where along this circle, at what angle, did you see most of these little tiny short flashes of light appear. So what they began to find is that most of these alpha particles would pass right through the foil, meaning most of the little flashes of light would line up directly behind the target. There would be a straight line. And they began to reason, Rutherford and his team, that these alpha particles were probably considerably smaller than the target atoms within that foil.
If that was a gold foil, they would assume these gold atoms are really big on the scale of alpha particles and spread out on the scale of alpha particles. So most of the time, these little emanations, these alpha particles, could zoom right through that thin metal foil and never get deflected or scattered at all. So in that case, you'd expect most of the very faint short flashes on the fluorescent screen to line up directly behind the source. There's no deflection. Sometimes, there'd be a glancing scatter, a small angle scatter, a few of them over here. And that is, indeed, what they found most of the time. But every now and then, in a way that by doing this over and over and over again over months in the darkened room, painstakingly-- they could quantify it was roughly one out of every 100,000 scattering events. An alpha particle would actually scatter by very large angles. It became known as backscatter. It would scatter almost 180 degrees, coming back almost towards its own source. So most of the time, there was either no deflection or very minimal deflection. But some small number of times, there would be large angle scatter instead. They began making essentially these histograms of the number of scattering events as a function of the scattering angle. And as you can see, small angles is where almost all the events would cluster. And very rarely, about one out of every 100,000 approximately, you'd find a very large angle scatter event. That was not at all what they expected to find. Rutherford very famously recalled-- I love this quotation. Referring to his experiments, he said, "It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you had fired a 15-inch artillery shell--" think of it like from a cannon-- "fired that huge shell at a piece of tissue paper and it came back and hit you." That's what it seemed like to have these little tiny alpha particles ricochet all the way back for very large angle scatter. So they were collecting that data starting around 1909 or so over months and months, ultimately over about two years to really build up lots and lots of data and statistics. During which, Rutherford then worked hard to make sense of the pattern quantitatively, to work out why it might be that you have not just rare scattering events at large angle, most scattering events at small angle, but really to retrace this particular shape of that curve, that particular kind of exponential decay of the scattering rates with angle. And he finally convinced himself with a pretty short derivation that this kind of scattering pattern would only make sense if most of the mass of those target atoms, the little gold atoms in the thin gold foil, were concentrated in a very dense nucleus in the center of those atoms, that most of the mass of the atom was actually in the very, very center. And otherwise, most of the rest of the volume that the atom might take up was empty. Atoms were mostly empty space. In fact, about one part in 100,000 was the ratio between the size we might attribute to an atom and the size we would attribute to this massive inner core of the nucleus. The nucleus, much like the alpha particles, he goes on, must have a positive charge. They were repelling each other, he concluded, whereas these lightweight electrons, which at this point were what Thompson's cathode rays or beta particles have been identified with, those must have the opposite charge. 
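The "pretty short derivation" is what is now taught as Rutherford scattering. In its standard modern textbook form (a summary for reference, not a quotation from Rutherford's own 1911 paper), the rate of scattering from a tiny, massive, charged nucleus falls off steeply with angle:

```latex
\frac{d\sigma}{d\Omega}
  = \left(\frac{Z_1 Z_2 e^2}{4E}\right)^{2} \frac{1}{\sin^{4}(\theta/2)}
  \qquad \text{(Gaussian units),}
```

where E is the kinetic energy of the incoming alpha particle and Z_1 e, Z_2 e are the two charges. Small-angle events dominate overwhelmingly, while the rare large-angle ricochets are exactly what a dense, point-like, charged core would produce, which is the pattern those painstaking histograms showed.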
And again, you could get a sense of the size of the charge from things like Geiger counters. So the model that Rutherford begins to piece together, building on these multiple years of data collection, is a kind of solar system picture of the atom. There's a very, very tiny massive core, the nucleus, that's kind of like the sun. And lighter weight objects are going to somehow move around it. And that's what makes up an atom. So he introduces this solar system model by around 1911. But of course, that led to several new questions on its own. So one question was these electrons, these JJ Thomson beta particles, would be constantly accelerated as they moved around the nucleus. If you thought about them like the Earth in the solar system, the Earth is constantly being accelerated as it moves around the sun. Its direction of travel is changing. Its velocity, in fact, is changing. Its velocity vector is changing. It's suffering constant acceleration. Wouldn't that be true of these negatively charged electric particles, the electrons, as they move around in any old pattern around the positively charged nucleus? And then the question comes up, according to Maxwell's remarkably successful work on electromagnetism, anytime electrically charged objects accelerate, they should radiate. They should give off light or electromagnetic radiation. So question 1 from this new Rutherford model was, where's all that light? If every atom filling every bit of matter in the universe has its inner parts constantly accelerated, electric charges accelerated, why don't we see everything glowing all the time? And perhaps an even trickier question was, if those electrons are actually emitting radiation, if they're giving up some of their own energy in the form of light, it's going to carry that energy away, then why are any atoms ever stable? Wouldn't those electrons be giving up their energy as they radiate? Wouldn't basically their orbits decay, decay, decay? Wouldn't they just crash into the nucleus? Why don't atoms fall apart all the time and, in fact, very quickly? So these two questions became at least as challenging as the successes of Rutherford's nuclear model. Let me pause there and see if there are any questions about that. Alex asks in the chat, were all the detection events detected by eye? Yes, in the very first ones, they were. This is, again, astonishing. There were none of the fancier things we would use today with, say, CCD cameras and so on, CCD detectors. Obviously, those didn't exist yet. There were early photographic methods. They didn't have the resolution that people would need. And they didn't have the timeliness. If you just set up the photographic plate and didn't have a way to rapidly take those plates out, and develop them, and put new ones in, then you would lose the kind of information they were looking to find. And so they really just had to have researchers sitting in the dark counting for hours, locked in a dark closet, both because the human eye, once you sensitized it, was, in its day, much more sensitive than these photographic techniques at the time, both in terms of spatial resolution, but also for temporal response. Your retina will go back to neutral much more quickly than these large photographic plates would. So they would really do basically naked eye observations. And I just don't know how they could do that for so long. Silu asks, did they choose to use gold foil for a specific reason? Why not any other metal? That's a good question.
I think partly it was very pliable. So they could make very thin gold leaf. Then as now, bakers can make gold leaf on cakes. I mean, you can make very, very malleable, very, very thin layers of gold leaf. And there were already chemical notions, chemical measurements, to suggest that the gold atoms should be relatively massive. They wanted big scattering targets, basically. Even by the rough estimate of, say, 1 mole of gold atoms, how much would that weigh, they knew that had a place on Mendeleev's periodic table that looked pretty massive compared to other lighter targets. So those are the kinds of reasons. I think they did experiment with multiple types of thin foils. But they were all choosing them from around that part of the periodic table and that were both relatively inexpensive. They didn't need tons and tons of gold bullion. They just needed a little trace amounts of gold that was workable, had the right kind of macroscopic properties, and that also had a placement within Mendeleev's table, which already was ordered, roughly speaking, by atomic weight. So I think those were the kind of criteria they used, though indeed they did experiment with multiple types of targets. Those are great questions. Any other questions? I mean, we all have the benefit of many things in 2020, like high school, including high school chemistry classes. So the fact that atoms are made up of nuclei that have a lot of mass and positive charge, the scales might not yet have fallen from your eyes because you come in knowing that. But that was really, really pretty surprising, very surprising in 1911. The prevailing models until then had suggested that atoms really were made up of different kinds of charges. It was clear, by this point, that Thompson's cathode rays carried a negative electric charge. It was clear that alpha particles carried a positive electric charge. It was clear that atoms were electrically neutral. So people figured there must be equal numbers of these constituent parts, so the total electric charge balances out. But the prevailing model actually at the time, before Rutherford's experiments, was known as the plum pudding model, which is very British. The idea was that atoms might be just like this kind of mush, like a plum pudding, which I think is actually pretty gross, of this undifferentiated goo with positive and negative charges just distributed at random so they average out within some volume, that they didn't have a kind of structure, let alone that the positive charge was associated with such an imbalance of the mass. I mean, all those things were really quite unexpected, which is, I think, why someone like Rutherford says, it's like firing a cannon shell at tissue paper and having it smash back. If you fire a cannon shell at porridge, you don't expect it to come back in your face. So the fact that these atoms had very, very hard, massive scattering sites, these very, very dense nuclei in the middle, that was genuinely unexpected. Even though now we're like, oh yeah, I grew up knowing that. So sometimes, the novelty is hard to put our heads back into that moment. Any questions about what Rutherford was up to or the broader excitement about radioactivity and all that stuff? If not, I think I'll press ahead. As always, of course, please feel free to chime in, in the chat. We'll have another question break pretty soon. Oh yeah, Gary asked, why did Rutherford think the alpha particle is like a cannonball. Very good. Thank you, Gary. 
So by this point people knew, thanks to some of his own work, that the alpha particles were unbelievably heavy compared to the beta particles. These things were on the order of 10,000 times more massive. In their artillery minds, it would be like a BB gun versus a cannon blast. It really was, in that sense, a huge difference in just the heft. People would conduct what soon became called charge to mass ratio experiments. Some of you might know this. You might have even done some of these in junior lab or earlier. This was building on things that JJ Thompson had been doing in the 1890s and that Rutherford learned as a student. Once it became clear that both beta particles and alpha particles carried some electric charge, then people were like, great, let me go mess with them. One of the best ways to mess with electric charges is to use magnets, use a strong external magnetic field. And then you can measure the ratio of basically how strong an external magnet you need to get a certain kind of arc in the trajectory of those particles. And that will help you learn the ratio of their charge to their mass. Then separately, with things like Geiger counters, you can start measuring the ratio of just their charges. You combine all that. You say, oh, the mass of those alpha particles was, again, on the order of 10,000 times greater or more than the mass of the beta particles. So they were using each of these to fire on stuff. These quickly both became projectiles to shoot at new targets. But one of them seemed like it really was just this monster of the radioactive realm. And to have that kind of cannonball shell, artillery shell bounce back if you thought you were firing it at otherwise undifferentiated porridge, I mean, I think that's what Rutherford's surprise was coming from. That's a great question. Any other questions on that? OK. Let me march on now. We're going to hear about some work by one of Rutherford's postdocs who was named Niels Bohr. So Bohr was a young Danish physicist. He had just finished his PhD in theoretical physics in his native Copenhagen. He grew up in an actually relatively well-to-do family. His father was a very prestigious professor of physiology at the University of Copenhagen. He grew up in an academic household. He remembered from his childhood having lots and lots of distinguished scholars come visit for dinner all the time, that kind of thing. And Niels himself was a very ambitious young physicist. His younger brother went on to-- Harald went on to a distinguished career in pure mathematics, a very elite, academic family in Copenhagen. The place he most wanted to go after finishing his PhD was to join Rutherford's group. This was right at the time that Rutherford was doing all these amazing experiments on things like the scattering that led to the nuclear model. So Bohr was accepted as a young postdoctoral scholar with Rutherford's group. He moved to Manchester and spent about two years there, typical postdoc time, right at the time that these scattering results were first being shared. So Bohr became really fascinated by this new notion, partly for the reasons I was describing. It really did seem so unexpected that atoms would have such a hard, dense, compact scattering center deep in their middle and other [INAUDIBLE] mostly empty space. So that was really cool for Bohr. But he was puzzled, as many of Rutherford's contemporaries were, about how to make sense of the stability of matter. 
If these electric charges are somehow whipping around that central mass like planets around the sun, how come they're not radiating all the time? How come they're not losing all their energy? Why do atoms not just collapse on themselves in very short times? So Bohr, maybe because he came from this very elite academic family, he set himself a very modest goal for his postdoc. He said, I'll explain the stability of every single kind of atom forever. It was remarkable. He wanted to basically solve for the stability of every single entry on the periodic table. And you got a taste of this in the reading for today by historian John Heilbron. One of the things that I think we often forget-- I didn't know when I first learned about the Bohr model. And I think that's my favorite part about the piece by Heilbron-- is that Bohr's first attempt was actually to focus on multi-electron atoms and even molecules. He didn't first start with hydrogen, the way we now usually associate with his work. He actually figured, maybe there is some kind of what we now call classical equilibrium, a kind of compensation mechanism that would allow these electrons to be outside of a nucleus, but not constantly accelerated, constantly losing their energy due to radiation. So he wanted to find some combination of electrostatic repulsion between the electrons repelling each other and the magnetic effect that he thought could maybe counterbalance and make a kind of equilibrium. And maybe that would explain both the structure of atoms and why atoms don't immediately fall apart. That's quite an ambitious postdoc project. And it didn't work. And pretty soon, it became clear he just could not get this kind of equilibrium compensation thing to work. The multi-electron systems were very complicated mathematically. Any time you have many things acting on many other things, as you know to this day, the mathematics quickly becomes pretty, pretty intense. So he finally agreed to try a simpler case, which is always a good tip in physics. Don't try to tackle the hardest, most complicated case first. Try to tackle a simpler case first. So in a desperation, Bohr switches gears and starts to think hard about the simplest of all the atoms, the very first placeholder on Mendeleev's periodic table, the hydrogen atom, which even by then was known to have one electron and therefore was assumed to have one compensating proton in that nucleus. So that way, you don't have to worry about multi-electron repulsion within a single atom. And again, for simplicity, he considered a circular orbit. It could have been a more complicated motion. But he said, let me just get started by assuming really like a simplified version of the solar system, that the proton, the positive nucleus, sits in the middle. And the electron orbits in a perfect circle in a simpler version even than the planet Earth around the sun. In that case, now he knew how to start. And again, this is how I think any of us would start. He knew there was an attractive force between the negatively charged electron and the positively charged proton. There'd be an electrostatic attraction, the strength of which would go like the square of their charges divided by the square of their distance between them. There'd also be a kind of mechanical motion of that electron constantly moving around, undergoing centrifugal acceleration. And you could just take the a Newtonian expression for that. You have the planet Earth moving around the sun or a ball moving on a string or in a circle. 
There'll be an outward-directed effective force, the centrifugal force. And he just said, there must be a stable orbit when that inward directed force of electromagnetic attraction is perfectly balanced by that outward tendency of the centrifugal force. Balance the forces, and now, he can solve for this speed, the speed squared, associated with one of those stable orbits. This is all Newtonian and Maxwellian or even pre-Maxwell, just Coulomb. So as far as Bohr was concerned at this stage, both the speed, the magnitude of the velocity, v, and the average radius, the distance between the electron and the nucleus, these were assumed to be variables that could take any one of a continuous range of values. It could be at 1 unit, or 1.003 units, or 1.035 units, et cetera. Then Bohr says, well, if we just stick with that, we have all the challenges of Rutherford's model. Why are these atoms stable? Why aren't they radiating? And so on. So now, Bohr takes inspiration partly from Max Planck. Remember, this is now 1911, '12, '13. This was a decade into to Planck's work on blackbody radiation. But we know from Bohr's writings and letters at the time, he was even more directly inspired by Einstein, who was just a couple of years older than Bohr himself. And by this point, Bohr was thinking about Einstein's re-explanations of things like blackbody radiation and the photoelectric effect. And Einstein, even more than Bohr, had been emphasizing a kind of discreteness, that we have to break the classical rules, at some point, and say certain quantities can only take quantized units, like Einstein's explanation for Planck's work saying that light quanta could only hold fixed amounts of energy-- h times nu or 2h times nu, but not any non-integer value. So Bohr says, let me try that trick again. Basically, by now, it's a strategy to try. So Bohr just gives, again, a heuristic hypothesis. He doesn't prove it. He doesn't derive it much, like Einstein was doing with his heuristic suggestion of light quantum. Bohr says, what if a quantity that ordinarily we would treat as having any old value at all, the angular momentum of that electron's motion in the atom, which classically for a perfectly circular orbit has a very simple expression-- the electrons mass times the magnitude of its velocity times its radius-- and what if we just impose now a discreteness, much as Einstein had done in other contexts? What if we say the angular momentum of that moving electron can only take integer values of a scale set by Planck's constant? And again, as many of you likely have seen already, there's an abbreviation that very quickly becomes common. This is called h-bar. So h, lowercase h, had been introduced by Planck in his 1900 paper, became soon known as Planck's constant, had a very particular numerical value in, say, erg/seconds. And then a very convenient shorthand is to divide that numerical value by 2 pi, which comes up a lot if you're thinking about things like circular orbits. So Bohr says, what if the total angular momentum of that moving electron could only take unit values at a scale set by Planck's constant? He doesn't say why. He says what if. It's a heuristic kind of hypothetical suggestion. So now, Bohr has two expressions for v. He has the classical balance of forces expression. He now has a new quantum condition expression. That's his term. He can solve each for v and set them equal to each other. And then he can solve the resulting single equation for r. 
So now, he has basically two equations with two unknowns. He wants to know both v and r. He has two equations-- one here, one here. Equate them, and then he can solve for the allowable radii, distances from the nucleus at which an electron should be found if its angular momentum is subject to this new quantum condition. And he finds that the radii-- the radius can't take any old value. It no longer falls within a continuum of possible values. It now snaps into place with n being any positive integer. So it could be 1 squared, 1 times some basic shortest length. It could be 2 squared or 4 times that length, 9, and so on. But it can't be 2.05 or 8.96, right? So the radius at which one should expect to find electrons, if we impose this new discreteness, this new quantum condition, becomes set by some new integer, n, which becomes known as the principal quantum number. It's just any old positive integer. And there's now, again, a scale in which Planck's constant plays a central role. The combination of these constants of nature, h-bar squared divided by the mass of the electron times the square of its charge, that has dimensions of length. That's the unit radius. An electron should be found at 1 times that, 4 times that, 9 times that, et cetera. The radius itself is about half an angstrom, if you've heard of the units angstroms. 1 angstrom is 10 to the minus 10 meters. And so this is about half of that. So roughly 0.5 angstroms is what this new unit of length works out to be. So now, Bohr has this picture emerging not just of a nuclear atom, like from Rutherford, where most of the mass and all the positive charge are concentrated in this dense central nucleus. But now, there's a more finely articulated structure for where one should find the electrons outside of the nucleus, as the electrons circle around, at least in a simple atom like hydrogen, which only has one electron. As I was saying a moment ago, the electron can be found at 1 squared times its basic length, a0, at 2 squared times that length, 3 squared, and so on. And so you have this series of discrete orbits. So now, Bohr wants to know, well, if we have this solar system model where the electron can only be at certain distances, certain radii from the nucleus, how would the energy of such a system-- how could we characterize the energy of such a system? So just as he'd done before, he goes back and starts with the perfectly normal textbook expressions from both Newton and Maxwell. You have a charge, a charged object in motion around some other charged object. The moving thing has some kinetic energy. That's always 1/2 mv squared. There'll be some, in this case, potential energy. It's an attractive energy because the charge is-- the electron and the proton have opposite charge. That's why they attract each other. So the potential energy has an overall minus sign. Now, already from his previous classical balance of forces argument, he already had an expression for v squared. So now, we can just sub into this expression for the energy. He also, from the previous exercise, had an expression for r. Now, the energy depends only on r. Well, he knows he has an expression for r. This part's newer, once he uses that discrete quantum condition by quantizing the angular momentum. And so finally, he can plug in his new basic unit of length. It soon gets called, in his honor, the Bohr radius. And now, Bohr has an expression for the energy of an electron in any of these quantized orbits or quantized forms of motion. 
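As a rough check on the half-angstrom figure quoted here, one can plug standard values of the constants into Bohr's expression; a minimal sketch in SI units, where the lecture's Gaussian-units form h-bar squared over (m times e squared) picks up a factor of 4 pi epsilon_0:

import math

hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m

# Bohr radius: a0 = 4*pi*eps0 * hbar^2 / (m_e * e^2)
a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)
print(f"a0 = {a0:.3e} m  (about {a0 * 1e10:.2f} angstrom)")

# The allowed radii snap to r_n = n^2 * a0
for n in (1, 2, 3):
    print(f"r_{n} = {n**2 * a0:.3e} m")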
It depends only on various constants-- the mass of the electron, its charge, and Planck's constant, and that integer n, which marks which of these orbits or states of motion the electron is in. So the electron now has very particular kinds of motion it can undertake. There are quantized amounts of energy associated with any of those orbits. So now, that means one could take the difference in energies between any of those discrete allowable orbits. What's the difference in energy between, say, an electron over here with n equals 3-- so 3 squared is 9-- versus the energy of an electron here for n equals 2 or for any two discrete states of motion labeled by some integer n? The energy difference between them then just has this pretty simple expression. It'll be the difference of the inverse squares because the energy for either state alone goes like 1 over n squared. And now, Bohr starts to get excited. In fact, this is why anyone began taking him seriously at all. Much like with Einstein, he hadn't proven that the angular momentum should be quantized and discrete. He hadn't even really given very good arguments for it. But he basically says, give me a moment. Let me show you what we get if we adopt that assumption. And why was Bohr excited? Because by taking on that simple looking but otherwise not well-motivated assumption, this special quantum condition, he now found that the energy differences between these discrete states of motion of an electron in hydrogen should reproduce the already well measured, well characterized behavior of the light that comes out of hydrogen atoms when we excite them. If you add some energy external energy to a gas of hydrogen atoms, they will radiate at very specific colors, the emission spectrum. The spectral lines that will come out of excited hydrogen have a very specific pattern to the frequencies. And this is actually real data. You can actually see these with the naked eye. Three of the lines that come out are in the visible range for ordinary human eyesight. And they're very discrete emission lines, the so-called spectral lines. These have been measured in the 1880s-- in fact, it turns out, the very year that Bohr was born. These were measured by a German schoolteacher, a German language-speaking schoolteacher named Johann Balmer. These were known as the Balmer series. After that, there were other series that had been measured that were in either the infrared or the ultraviolet. But the three characteristic lines to which our eyes are usually sensitive is a simple series of three lines. And these had a numerical regularity to them that had otherwise been pretty much unexamined and unexplained. But it became clear you could identify each of these parts of the Balmer spectrum as being a transition between one integer squared and another. And there was some overall constant called R named for a different researcher, Rydberg, that set, again, the quantitative scale for the actual frequencies of that shade of purple, that shade of blue, and this shade of red. And so the Balmer series would come about if you simply said there's some overall empirical constant, R, the Rydberg constant. And the light that we see would correspond to a transition between, say, n equals 3 to n equals 2. That would be the red one, relatively long wavelength, lower energy. There could be a transition from n equals 4 to n equals 2. That would light up to our eyes like this nice turquoisey blue. Or from n equals 5 to n equals 2, that would give us the deeper purple. 
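A short numerical sketch of the pattern being described, using the standard modern value of the hydrogen binding energy (13.6 eV) rather than anything quoted in the lecture: the Balmer lines are the transitions down to n = 2, and their wavelengths follow from the difference of inverse squares.

h  = 6.62607015e-34    # J*s
c  = 2.99792458e8      # m/s
eV = 1.602176634e-19   # J per electron-volt
E1 = 13.6              # eV, hydrogen ground-state binding energy (standard value)

def balmer_wavelength_nm(n_upper, n_lower=2):
    # photon energy for the n_upper -> n_lower jump, then lambda = h*c / E, in nanometres
    dE = E1 * (1.0 / n_lower**2 - 1.0 / n_upper**2) * eV
    return h * c / dE * 1e9

for n in (3, 4, 5):
    print(f"{n} -> 2 : {balmer_wavelength_nm(n):.0f} nm")
# roughly 656 nm (red), 486 nm (blue-green), and 434 nm (violet) -- the three visible Balmer lines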
So now, Bohr could say, well, there's an energy difference between these discrete states of motion. When an electron moved, made a transition from an excited state-- we had to add external energy to these hydrogen atoms, and maybe, Bohr says, that bumps them up to an excited state of motion. n gets larger. At some point, the atom will relax back down to a lower state of energy. And maybe the light that gets emitted, this emission spectral line, is simply the difference in energy between the excited state of the electron's motion and the lower energy state. So the energy that comes out in the form of light, the energy difference is exactly the difference between these otherwise stable, quantum stable states of the electron. So not only did Bohr find the same form for the difference between the energies of those two states, he can now calculate this empirical constant, the Rydberg constant, from first principles. And when he plugged in the values for the mass of the electron, the charge, and Planck's constant, he got a remarkable match to this empirical value for the Rydberg constant. If you have trouble keeping that in mind, I recommend this historian's way to remember the Balmer series. Many, many years later, well after Bohr's death, a collection of his most important works was republished in a three-volume set. And the editors very, very amusingly colored the books to match the Balmer series. So volume 1 is the red. Volume 2 is the turquoisey blue. And volume 3 is the purple. OK. So at first, Bohr was actually reluctant to publish those results. Partly, remember, he'd only treated hydrogen. And he set out at the start of his postdoc trying to explain every single atom on the periodic table, every single element, which is pretty ridiculous. So he had fallen approximately 100 steps short of his goal, 90 some-odd steps short. Happily, his postdoc advisor, Ernest Rutherford, said, explaining hydrogen is pretty good. And you should publish because you have this remarkable, unexpected agreement, an accounting now for things like the Balmer series of these spectral lines. So Rutherford convinced the young Bohr to publish. And Bohr actually published his trilogy of related articles in 1913, On the Constitution of Atoms and Molecules. So what really became central was this new kind of hypothesis of this quantum condition. For reasons that still weren't at all clear, they were not proven, they weren't even really derived or motivated, but if one takes a heuristic suggestion that maybe the angular momentum associated with the motion of electrons could only take on quantized or discrete values in units of h-bar, then that would immediately restrict the locations, the distances between the electrons and the positive nucleus. Now, you have a discrete series of states of motion. Only for those, for some reason, would the atom remain stable. There's some new ingredient that had been missing in Rutherford's own picture that now might account for the stability of matter. When an electron is in one of these special quantized units, it won't radiate. It will only radiate when it jumps between these levels. And that will account for the emission spectrum. Now, of course, some questions still remained. Why don't we see this kind of discreteness in ordinary experience? So one of the next things that Bohr occupied himself with-- and it actually went over the next four or five years. It was a non-trivial exercise-- became known as the correspondence principle.
Again, this was something that Bohr really championed, and worked out, and then became enormously influential in these kind of later years of what we call the old quantum theory. These quantum ideas seemed to impose some new break from experience. Something that we usually think of as continuous can only be discrete. In our ordinary experience, we don't see that kind of lockstep discreteness. And so Bohr wanted to make sense of the two different regimes and see if there was a way one could slide over into the other. And so that's what became known as the correspondence principle, which was to say, in the limit of very large quantum numbers, for very highly excited atoms in this case, where n is much, much larger than 1-- still an integer, but much larger than 1-- these quantum systems should begin to reproduce the more expected continuous or classical behavior. And again, if these next few steps don't make immediate sense, you're in good company. Don't worry. The slides are on the Canvas site. You can go through these steps, the algebra on your own. I just want to make sure that we understand his reasoning here. So I want to go through his main argument for the correspondence principle. According to Maxwell's, again, highly successful work on electromagnetism, the frequency of light, or of any electromagnetic wave that's emitted-- so we'll call that nu sub radiation. That's the frequency of the electromagnetic waves that come out-- that should be equal to the mechanical motion, the frequency of the mechanical motion. If you think about a dipole antenna, if you shake an electric charge up and down on a conducting wire, the electromagnetic waves that come from accelerating an electric charge should be directly related to the motion of the charge itself. So the frequency of radiation should be equal to or maybe a higher multiple. But it should be basically proportional to, if not directly equal to, the mechanical motion of the charge whose acceleration is emitting that radiation. That comes directly out of Maxwell's work. And we use that all the time in things like antenna theory. It still works to this day. So Bohr said, OK, well, let's figure out these two different quantities and see how they relate to each other. He begins by thinking about the mechanical motion, the frequency of that electron constantly whirling around the nucleus. So the frequency would be 1 over the period-- how long does it take the electron to complete an orbit, just like the Earth taking a year to go around the sun. So the mechanical frequency is 1 over the orbital period. And that, for a perfectly circular orbit to keep things simple, would be the orbital velocity, the magnitude of the velocity, so the speed divided by the distance it has to travel. It has to travel one circumference, 2 pi r. It does so at a speed, v. That's how we determine the inverse period, or the frequency. Well, once again, from the balance of forces, he has an expression for v, from balancing the classical forces with the quantized angular momentum. So he can substitute in for v. He has a new expression for the mechanical frequency. This is now of the electron in motion around the atom. And now, he wants to turn to that other thing he wants to compare it to, the frequency of the radiation that comes out. Now, in his model, the radiation frequency is not given in that earlier derivation by anything having to do with the mechanical motion around the nucleus of the electron.
It actually comes from the energy differences when an electron in an excited state will emit some of that energy and drop down to a lower energy state, rather than having anything to do with this mechanical motion. That's why it looks like it's in conflict with Maxwell's electrodynamics. So we saw Bohr had derived this expression for the energy, for the frequency of the light that comes out as radiation. But again, he can start plugging in things he knows. He can use the definition of this radius, the Bohr radius. And now, he can consider not the regime of n equals 1, or 2, or 3 of small principal quantum numbers for states of motion that are near the lowest energy state. But he wants to think about large quantum numbers, away from the most quantumness of the behavior of this hydrogen atom. Is there a regime? Is there a different regime in which the behavior predicted by this quantum theoretic description will carry over to the behavior of the expected classical system? So let's consider a very large principal quantum number n, a very highly excited state. And now, let's consider the radiation that would come out between two neighboring, but very large high n states. So consider both n1 and n2 to be much larger than 1 and near each other. Let's say the difference between n2 and n1 is small compared to either of them alone. Then we can expand this 1 over n1 squared minus 1 over n2 squared-- just do a Taylor expansion-- to first order in this tiny difference. We get an expression that looks like this. Now, we can begin comparing the frequency of the light that comes out, the electromagnetic radiation, compare that to, indeed, the mechanical motion, motion of the electron itself in one of these very high n, large n orbits. And now, we just take the ratio of these expressions, once again using other things that we've already worked hard to learn. Now, you find that in that regime, not near the lowest energy state, where we expect the quantumness, the discreteness to be most on display, but at some very highly excited state, the ratio should, in fact, go back to 1. Maxwell said that ratio should be identically 1. In the limit of large principal quantum number, that ratio, once again, goes back to 1, even though the way we account for it is totally different. We don't say, according to Bohr's model, that the radiation comes out with frequency nu because that's the frequency at which the electron is physically spinning. But it turns out the difference between neighboring highly excited states obeys basically the Maxwellian relation after all. This becomes known as the correspondence principle, that the behavior should smoothly morph onto the classical Newtonian Maxwellian behavior in an appropriate limit. It might break from that expectation dramatically in a different regime-- in this case, small principal quantum numbers, when n is 1, 2, 3, or 4. But there should be some smooth, almost like an asymptotic matching because, then as now, Maxwell's equations work really well for understanding things like the radiation coming off of a dipole antenna. So Bohr didn't want to lose all the extraordinary successes of Maxwell or Newton, but wanted to have an account for why things might not always only look like that when you get really down to the scale of atoms. So for large quantum numbers, quantum systems really do start to behave like classical ones. In more modern language, we'd say the emission spectrum looks basically continuous, even though it looks highly discrete far away from that regime. OK.
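Since the algebra is only sketched verbally here, a small numerical check may help; this is my own sketch using the standard Bohr-model formulas, not the lecture's slides. It compares the frequency of the photon emitted in an n to n-1 jump with the orbital frequency of the n-th orbit, and the ratio approaches 1 as n grows:

import math

hbar = 1.054571817e-34
h    = 2 * math.pi * hbar
m_e  = 9.1093837015e-31
e    = 1.602176634e-19
eps0 = 8.8541878128e-12

a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)   # Bohr radius
E1 = hbar**2 / (2 * m_e * a0**2)                   # ground-state binding energy, in joules

def orbital_frequency(n):
    # mechanical frequency v_n / (2*pi*r_n), with v_n = hbar/(m*n*a0) and r_n = n^2*a0
    return hbar / (2 * math.pi * m_e * n**3 * a0**2)

def radiated_frequency(n):
    # frequency of the light emitted in the n -> n-1 transition, from the energy difference
    dE = E1 * (1.0 / (n - 1)**2 - 1.0 / n**2)
    return dE / h

for n in (2, 5, 20, 100, 1000):
    print(f"n = {n:5d} : ratio = {radiated_frequency(n) / orbital_frequency(n):.4f}")
# far from 1 for small n, tending to 1 for large n -- the correspondence principle in action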
I'm going to wrap up the Bohr part. We'll pause for discussion in just in a moment. So Bohr further developed Rutherford's solar system or nuclear model of atomic structure. He remarkably was able to account for empirical results, like the emission spectrum from hydrogen. But much like Rutherford's work, this starts raising several new questions, even as it starts to answer some. For one thing, again, despite Bohr's remarkable ambition, this whole scheme only seemed to work for single electron atoms or ions-- so a hydrogen atom, which is electrically neutral, or an ionized atom of helium, which one of its two electrons has stripped away, you only have one electron in orbit, or doubly ionized lithium, that idea. Moreover, again as I've tried to emphasize, Bohr really didn't have any answer for why we might impose this brand new seemingly ad hoc quantum condition that the angular momentum should only appear in integer units of h-bar. Another question, when people began to try to visualize these warring electrons in these discrete states of motion, what happens to electrons between stable orbits? Are they literally falling from some large, excited state towards another one? What determined when an electron would make one of these sudden discrete jumps? How did an electron know how much energy to radiate? It has to give up a delta E exactly equal to the energy to be radiated away. And yet, if the electron can only land at a particular value of its lower energy state, how does it know, so to speak, in advance where it's going to land and, therefore, how much energy to radiate? How come it never messes up and radiates a different amount of energy for which it could no longer land at a stable orbit? And then another question was-- it became clear well before Bohr's time that, in addition to a very particular pattern of the colors, the frequencies of these emitted spectral lines, they also came with different brightnesses, different intensities. Not every line that you could measure of these emission lines was equally bright. And Bohr's model didn't seem to have any way of accounting for that other kind of equally salient set of measurements about the spectral lines. So some remarkable and surprising positive steps from this Bohr model. But again, opening up a whole new set of questions as well. So I'll pause there and ask if there's any questions on that. Oh, very good. So there's a question, as you can see in the chat, we have this reading, which I like a lot-- and we'll talk a little bit more about this reading in the next class as well, the reading by Megan Shields Formato who, by the way, used to TA this class. So it's a nice homegrown body of work. Megan is a great colleague. And when she was still a grad student, she worked with this question. And Megan has this really, I think, very lovely, very eye-opening essay about what kinds of work got credited then or now. So we always talk about Bohr did this, Bohr did that. And Bohr did a lot of things. But he was almost never doing them on his own. And we'll see a lot of examples of that, again, actually in the next lecture. I placed it in the wrong slot. And I think that reading works even better for Wednesday's class. But Bohr was not only always, always talking with younger postdocs and younger students of his own, people who would come to be like Werner Heisenberg, and Wolfgang Pauli, and the whole circle. 
He was always talking with his very, very patient and talented wife, Margrethe, who did not have a formal training in physics, but could certainly follow a logical argument. And she seems to have been a very steadying force to help discipline Bohr's own unfolding syllogisms-- if A then B kind of thing. She was his constant sounding board and a kind of like a-- not debate coach, but just his-- he would try things out. And they would talk about things very actively. And then he would write them up because she had helped him crystallize or clarify the order of his arguments and say this is a red Herring, this doesn't make sense. So Margrethe was doing, as we now know, a ton of unseen, behind the scenes work, actual, conceptual work for which we would then usually just credit Niels Bohr himself. It would be written up usually only with Bohr's name, not Bohr and Margrethe, not Niels and Margrethe Bohr. And I think Megan's piece really draws it out very, very evocatively with some pretty cool use of primary sources. I think she's exactly right. So we'll see that again and again, especially with Bohr himself next week. So the question here is, did Einstein have a similar relationship with his own partner? His first wife, Mileva, I've mentioned a few times, had also been trained-- in fact, they met as university students. Mileva was studying physics and mathematics at the ETH. And they were classmates. And here, it's really complicated. I won't spend too long on this because if I don't watch myself, I'll talk about it for the next eight hours. And you all have things to do with your lives. But I would be glad to direct you to lots of readings. The short answer is they began talking together all the time, very much like we now know that Niels and Margretha Bohr were doing. Einstein and Mileva Maric were doing that absolutely all through college. They have their charming, almost sappy sweet love letters between them when they would be apart. I can't wait to talk about my problem set with you, my dear cherub. They were very sweet and geeky in a way that might appeal to this group in particular. They were study buddies, as well as developing a real deep relationship. And as I mentioned before, Einstein would cut his classes. Mileva was a more diligent student. She would often take notes and let him study them. And so there's plenty of evidence of them also actively talking through questions about, for example, electrodynamics from like 1896 through 1901, 1902. Then their path begins to be different, at least from all we can reconstruct from the historical documentation, different than that between, say, Niels and Margretha Bohr. What happens in the Einstein marriage case is that they have a young child. In fact, they have a first child out of wedlock, a little girl named Lieserl, who then was given up for adoption. And no one knows what came of her. So they first have a first child they don't raise on their own. Then they very quickly get married. Then they have another child and eventually two. And then Mileva starts doing what was very typical, gender-typed family responsibilities expected of the time and with very little variation at the time. Even though they both considered themselves radical bohemians, who cares what people think, I don't have to live by society's rules, pretty soon, they began living by pretty typical Central European rules or norms. And that meant Mileva did not pursue her scientific career. In fact, I think she didn't ever complete her degree. 
She dropped out, I'm pretty sure. I can look that up. But she either barely got through with her degree after some interruptions or perhaps had to interrupt her studies. I can't remember. I'll look it up. She becomes basically a full-time homemaker, is what we would now say, raising their children while Einstein then had the full-time job at the patent office. And then they seemed not to have talked with anything like the same intensity about the scientific work, starting a couple of years before 1905. So there have been all these kinds of efforts to tie relativity, in particular, to a joint conversation. Maybe Mileva was a key thought partner. And I would love for that story to be true. I mean, it would cap off this college romance. It would just be thrilling. And I've been convinced by the work by historians who've looked very squarely at this, including a lot of women's history, gender and science scholars, who were very attuned to partnerships that don't often get a lot of credit, people whose work we'll read this term. And I'm convinced by their argument that that seemed not to have been the nature of the Einstein marriage relationship, starting in like 1902, 1903, let alone '05. They quickly became really embittered. And so by the later 19-teens, they were basically in open warfare. And they couldn't wait to get divorced. And so Einstein couldn't afford to divorce her until he won the Nobel Prize. He was so sure he'd win the Nobel Prize that he promised his Prize winnings to be the divorce settlement before he won the Prize, like four years before he won the Prize. Then the Prize came through. They got divorced. Einstein, by that point, was already seeing his second cousin, Elsa, who soon became his wife. So the story seems actually pretty different than this very lifelong intellectual partnership between Niels and Margretha, which, nonetheless, never really got credited the way that today, I think, we would find much more appropriate. That was a long answer to a really good question. I'd be glad to say more. There's really a lot, especially in the Einstein marriage relationship. But I'm glad that you raised a question about Meghan's piece about Neil's marriage as well. That's a great point. Any other questions on that? Let me jump in to the last part for today. The last part is shorter. We have plenty of time to do this last part. We saw, with Bohr's work, there were all these dangling questions about the Bohr model. And so we're going to see this last moment that really helps to at least better motivate some of these open questions from the Bohr model comes fully 10 or 11 years later. It's introduced by literally a French Prince-- we don't often get to talk about royalty in this class-- a French aristocrat named Louis de Broglie. He was actually the younger brother of Maurice de Broglie, who also, by this point, was a very accomplished experimental physicist. They had a laboratory in the basement of their enormous castle. So Louis had first studied history, like a good aristocrat, and all the rich humanities, and then stubbornly said that, like his older brother, he'd like to study this grubby physics stuff instead. So he finished his PhD in 1924 in theoretical physics, I think at the Sorbonne or some very elite Paris School. And here, he began to wonder, is there a way to account for Bohr's powerful yet as yet unexplained hypothesis about the quantum condition, that the angular momentum of an electron should take integer units set by h-bar? 
So de Broglie tries to make an explicit symmetry. He really has a this happened here, can I make the same thing happen here kind of framework. He's very explicit about that. He also returns to Einstein's work on light quanta, which, by this point, people are now taking seriously. This is now fully two years after Compton's results with X-ray scattering. The notion of returning to Einstein's work in 1924 was no longer controversial the way it would have been in 1905, or 1910, and so on. So for de Broglie, that looks like a pretty sensible place to start. And as we saw in the previous class, there's a Maxwellian expression for the momentum of a light wave related to its energy and its speed. We can then take on Einstein's light quantum hypothesis. The energy of that light quantum can only take certain units set, again, by Planck's constant, h. And so then as we saw, Compton himself used this relation. The momentum for an individual particle of light, a light quantum, should be inversely proportional to its wavelength with a proportionality set by Planck's constant. That, by now, had become pretty familiar, even to people who were grudging in that. What de Broglie says is simply assert-- he doesn't derive. He doesn't prove. He simply asserts whether the same thing happens for particles of matter. This is all about the behavior of light. So de Broglie says, what if we relate the momentum of a little electron, for example, moving around the nucleus in an atom? So there would be some expression for the momentum of that particle coming just, say, from Newtonian mechanics. What if we also relate that to an inverse wavelength, just as Einstein had by this point convinced many physicists to try for light? In that case, you can rearrange this expression and say, what if there's some waviness associated even with chunks of matter, particulate solid matter? de Broglie says, what if there's an inherent waviness of a scale set inversely by the momentum, just like you'd expect here, and proportional to Planck's constant? Why would anyone ever take this crazy French Prince's word for it? We never see this in real life, in ordinary life. This is now after things like Bohr's correspondence principle. de Broglie knows to wonder about the scales at which one might ever see such a strange sounding phenomenon. So if you imagine a fastball-- he didn't use this example, we can, for those more familiar with American baseball, or cricket, or fill in your favorite human-sized ball throwing game. If you imagine a fastball pitched in a baseball game and calculate 1 over its momentum scaled by h, the waviness associated with that, if we took de Broglie at his word, would be unbelievably small, 10 to the minus 32nd of a meter. That's smaller than the smallest known elementary particles. So of course, we don't see this somehow inherent waviness on everyday scales. But de Broglie goes on and says, for an electron, if we plug in a typical momentum that we estimate for its motion, say, in a hydrogen atom, the waviness, the actual wavelength somehow associated with that electron, is on the same length scale as its own radius, as its own orbital path, that Bohr radius, a0. For an electron, this new, hypothetical quantum waviness is on the same scale as its own mechanical motion. So maybe, de Broglie says, we shouldn't ignore it. We can absolutely ignore this somehow hypothetical quantum waviness when playing cricket, or baseball, or any kind of sport. 
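A back-of-the-envelope version of the comparison being made here; the baseball mass and speed are illustrative assumptions of mine, and the electron speed is the standard estimate for the lowest Bohr orbit, not numbers taken from the lecture:

h   = 6.62607015e-34   # J*s
m_e = 9.1093837015e-31 # kg

# de Broglie wavelength: lambda = h / p
p_baseball = 0.145 * 40.0          # ~0.145 kg pitched at ~40 m/s (assumed values)
print(f"baseball : lambda ~ {h / p_baseball:.1e} m  (absurdly small)")

p_electron = m_e * 2.2e6           # electron at ~2.2e6 m/s, roughly the lowest Bohr-orbit speed
print(f"electron : lambda ~ {h / p_electron:.1e} m  (a few angstroms, i.e. atomic scale)")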
But maybe it's actually really critical when we're talking about the behavior of atoms and parts of atoms. So if there really were some inherent wavelike structure, wavelike feature associated with the motion of an electron in an atom, could that begin to address this biggest as yet unanswered question about the Bohr model, why certain configurations or certain discrete states of motion for an electron would lead to stability of matter rather than unstable behavior? And he says, what if there is a kind of constructive interference? And what if the only stable states for an electron are those for which there's an integer number of wavelengths that can wrap around the orbit? So the wave, in some sense, joins back on itself and constructively interferes to make a stable standing wave, basically a standing wave. Whereas if the radius of that orbit were ever so slightly different, you would no longer be able to wrap an integer number of wavelengths around the radius. And you'd start getting the wave out of phase with itself. So the crests would actually start meeting up with troughs. And the whole wave would basically fall apart. So de Broglie is saying, if we grant this hypothetical as yet unseen waviness to things like electrons, could we start accounting for that stability of matter? What would be the criterion then? You have to have an integer number of these wavelengths fit around the circumference, fit around 2 pi r of that circular orbit. Well, now, he has an idea for what that wavelength should be. He's now working directly in analogy to Einstein's work with light quanta. So plug in what he thinks lambda would be. And then presto chango, he's now derived, or at least found, an expression that exactly matches the form for what Bohr had put in by hand in a totally unexplained and ad hoc manner. So now, this might account for why this quantity Bohr had begun with, the angular momentum of an electron moving in a circular orbit, could only take on integer values of h-bar. That arises from this physical criterion of stability, of constructive interference to get the waves to line up into basically a standing wave pattern. That's pretty astonishing. Einstein reads the thesis, and is immediately impressed, and helps to draw attention to it. It's still, again, merely hypothetical. It's clever, but it's not like it's been proven. And so quite remarkably-- and the pace of this I find just astonishing-- just within three years of that clever suggestion in a French aristocrat's PhD thesis, there was some pretty compelling new empirical evidence to start taking that seriously, in this case coming from what would soon become Bell Labs. It wasn't quite Bell Labs yet, but from an industrial research laboratory here in the United States, work led by Clinton Davisson and Lester Germer as early as 1927. So they were actually firing electron beams in a souped-up version of JJ Thomson's experiment. They had basically a cathode ray from which they could beam electrons at a crystal of nickel. By other techniques, they knew characteristics of the crystal structure of that nickel-- what was the spacing, for example, between layers of atoms. And then they could measure the amount of scattered electrons as a function of angle. So as they moved their detector to different angles, they could measure how large an intensity they would measure of the rescattered electron beam.
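The "presto chango" step a moment ago can be written out in a couple of lines. Here is a minimal symbolic sketch (using sympy; this is my own restatement, not de Broglie's notation): require an integer number of wavelengths around the orbit, n*lambda = 2*pi*r, substitute lambda = h/(m*v), and the angular momentum m*v*r comes out as n*h/(2*pi), which is exactly Bohr's quantum condition.

import sympy as sp

n, h, m, v, r = sp.symbols('n h m v r', positive=True)

lam = h / (m * v)                                  # de Broglie wavelength of the electron
standing_wave = sp.Eq(n * lam, 2 * sp.pi * r)      # whole number of wavelengths around the orbit

v_allowed = sp.solve(standing_wave, v)[0]          # speed consistent with the standing wave
angular_momentum = sp.simplify(m * v_allowed * r)
print(angular_momentum)                            # prints h*n/(2*pi), i.e. m*v*r = n*hbar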
And they found exactly this wavelike interference pattern, that there would be a criterion for these two scattered waves from scattering off distinct layers within that crystal to line up with an integer number of wavelengths between them that changes with the scattering angle. They just did a kind of classical wave interference analysis, not having known things like the lattice spacing for that crystal. And they found exactly this kind of waviness interference pattern on exactly the scale to expect given de Broglie's surprising hypothesis. So let me wrap up. Between roughly 1900 and 1924, '25-- we'll see even a little bit more of this in the coming lecture-- in this period that comes to be called old quantum theory, physicists really went through a pretty dramatic series of new ideas and assumptions about both light and matter, almost interwoven among each other. So in contrast with these great triumphs of 19th century physics-- the wave theory of light that we've looked at in the beginning of the class-- several physicists for several lines of reasoning begin to explore a kind of discreteness or particle-like attribute of light. We saw several steps along that way during last class. Meanwhile, in parallel, there's an inverse motion led ultimately by people like de Broglie to salvage this other new form of discreteness, Bohr's quantum condition, by suggesting that just as wavelike light could actually be thought of as particle-like, what if particle-like matter had some wavelike attributes? And so by the early 1920s, as each of these partial successes begins to draw attention, many, many physicists were convinced that something had to change. There was some quantumness that simply was not consistent with either Newton's physics or Maxwell's physics. In fact, they began to use the term classical now to speak about what was not this new quantum stuff. They used that new label. And the pattern seems familiar. Start with the textbook classical expressions for force, or energy, or distance, or speed, and then somehow just plug in, staple in some as yet unexplained new discreteness condition-- either the energy, or the angular momentum, or some wavelength. So you have this ad hoc stapling onto otherwise perfectly familiar classical expressions. That began to really upset a bunch of people. There's a marvelous book I used to assign in this class. I dropped it this year. But I encourage you, you might enjoy it. It's a novel by historian Russell McCormmach shown here called Night Thoughts of a Classical Physicist. What McCormmach does is try to really, I think, very movingly, very compellingly recreate what it felt like to be a physicist living through this period of conceptual disruption. What it really felt like is the solidity of the world began to shake. And it didn't just shake. It looked ugly. It looked piecemeal and ad hoc. So McCormmach, who had poured over just literally tens of thousands of letters, and laboratory notes, and publications from a whole generation of these mostly German-based physicists during this period of change, he distills that into a 160-page novel of a composite character named Victor Jakob, to watch Jakob react to this kind of like-- that kind of makes sense. But it's disjointed from over there. I can follow you here. But this looks really weird. It just starts to feel broken. It starts to feel disorganized and ad hoc. 
And so when people began calling this old quantum theory, it was both to say this happened before something new, but also to try to bracket it saying, it's old and kind of broken. It just felt conceptually dissatisfying to a number of people. And people were pretty well convinced that couldn't be the last way these things will stand. So I know we're just about at time. I see there's a question from Iabo. How did de Broglie waves impact the wave particle debate? Thank you, Iabo. So basically, it heightened this even further. I mean, the short answer is it made it even harder to ignore this strange, unsteady, and unstable combination of concepts which otherwise had seemed clear, but separate. And that's the kind of thing that this novel I just mentioned, Night Thoughts of a Classical Physicist, is actually really beautifully helping us appreciate, that it looked dizzying. People would have like a physiological reaction. What's solid? I know what a wave is. And by this point, think about the Wranglers, or Lorenz, or anyone else, there was a huge, highly successful quantitative series of tools with which to characterize wave phenomena-- interference, refraction, diffraction, wave equations, and all the rest. And then people would say, I know what particles are. And they could use Newton's laws to study ballistics and so on to extraordinary accuracy. And now, it looked like they had to somehow mush these together in a way that didn't-- without any rules, in a way that didn't seem to have any roadmap. And so the de Broglie hypothesis, especially after 1927 with his Davisson-Germer work-- and they received the Nobel Prize instantly. This, again, was seen as immediately important work when they did those experiments. That was just heightening this clash and made the stakes, the conceptual stakes seem even higher. It's a great question. And we'll see how that plays out, actually, in the next several classes. It doesn't get resolved. What happens is physicists resolve themselves to keep dealing with this strange connectedness as opposed to saying, now, it's one or the other, as many of you likely know from other classes. Any other questions on that? I don't want to keep you too long. I'm sure you have a full afternoon. Anyways, good to see you all. We'll pick up the story again on Wednesday. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_7_A_Political_History_of_Gravity.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: Our last main lecture, at least for quite a while in this class, on our relativity subunit. And today, I want to talk about a project that I've been working on for a long time, probably since-- not since before you were born, most of you, but since many of you were probably, like, literally toddlers. I've been working on this project for a long time. It's slow. It's crazy slow. But it's also been really fun for me, and hopefully some of it will be of interest for you as well. So this is a book that I'm just slowly, slowly kind of working on, little pieces here and there. And I think it kind of fits into this part of our class, so I thought we'd talk about this as a kind of wrap-up for this early relativity unit. And then on Wednesday, we'll launch into yet another crazy new intellectual adventure. We'll start thinking about quantum theory. So today, we'll have our heads squarely in relativity. So I like to think about this material in the following way. I like to start out this way. If many of us, maybe even before this class had started, had been asked to picture in our minds Albert Einstein, I bet a lot of us would have a picture kind of like this one-- would come to mind. So late, late in his career. He's an old, old man, kind of rumpled in his sweatshirt, near the end of his career. And more importantly in this picture, he's alone. He's just kind of sitting quietly with his thoughts. All he seems to need are note paper and quiet. He once had famously told a journalist that his ideal occupation would have been a lighthouse keeper. He really, really wanted to be left alone, at least that was the story. That's what it was meant to emphasize. And yet as many, many scholars have been finding for decades now, this kind of image, this kind of carefully crafted image of Einstein as all alone, as separate or apart from the world, from the world of human affairs, that image doesn't really hold up very well. Once we start doing a little digging, we don't have to work very hard to find a very, very different image of Albert Einstein the person-- not one aloof or apart from the world, but actually as a kind of thoroughly political animal, someone who is deeply enmeshed in political moments, political causes throughout his entire life, very bravely as a young person, right through the end of his life as a very, very famous celebrity. So you don't have to take my word for it. This is of my favorite examples. The US Federal Bureau of Investigations, the FBI, considered Albert Einstein such a dangerous political radical-- that was their concern-- that they kept him under surveillance, much of it totally illegal surveillance, for 20 years. They thought he was a thoroughly political creature and, as far some were concerned, a kind of dangerous one-- so radical. So here's one example from his FBI file. It was classified for decades even after his death. It was released in response to a Freedom of Information Act some years ago. So now anyone can get his entire file at the click of a button. Just Google it, and you can find the whole PDF. So we have primary sources, like the FBI'S own reckoning of this very political figure Albert Einstein. We have lots and lots of really interesting materials by other scholars more recently as well. 
Einstein's own writings on politics, his essays in newspapers, his speeches before large gatherings, and analysis of the kind of changing political views that he was espousing and debating about. There's a very interesting book you might enjoy by Fred Jerome called The Einstein File. It's actually really based on this FBI file and other materials. So we have lots of examples from early in Einstein's career all the way to the end that he was, in many ways, thoroughly engaged in the political world, very much unlike this kind of stereotypical image, with which I started, of Einstein alone and apart. So that really has been driving my questions for this book project for a really long time, which is how to think about what is arguably Einstein's most significant scientific legacy, the general theory of relativity, which we'll talk a lot about today. That scientific work has also usually been cast, much like Einstein himself, as this thing apart-- as somehow beautiful, and pristine, and separate from the kind of messy dramas or the human history over the course of the 20th century. One of my favorite examples of that comes not only in verbal descriptions, but even in pictorial form. This cartoon I'm showing you in the middle of the screen here is a cartoon called the "Temple of Relativity." It was drawn by the very accomplished physicist and cosmologist George Gamow. Some of you might recognize Gamow's name. He helped invent what we now call the Big Bang picture of the cosmos. He was a very accomplished nuclear physicist and cosmologist. He was also an award-winning popular science writer, and he did his own illustrations. He loved making these whimsical cartoons. So in one of his popular books from the early 1950s, he drew the "Temple of Relativity". It's meant to bear more than a passing resemblance to the Taj Mahal. It is supposed to be some pristine temple, sort of apart from the messiness. You can see here-- we'll talk about this soon today-- carved into this seemingly marble dome are what we now call the field equations from Einstein's general theory of relativity. We'll talk about those. Here is something called the geodesic equation. These are actually equations taken directly from Einstein's relativity, but meant to be this kind of pristine temple apart from the messiness of the world. It wasn't only Gamow. This was reinforced over and over again. Here's another one of my favorite examples in the preface to a relatively influential, very nice textbook by the mathematical physicist John Synge back in 1960, so even after Einstein's death. And he writes in the preface of this book all about general relativity. "Of all physicists, the general relativist has the least social commitment. Let the relativist rejoice in the ivory tower, where he has peace to seek understanding of Einstein's theory, as long as the busy world is satisfied to do its jobs without him." Which seems to me to sound an awful lot like that image of Einstein the person, kind of alone and apart from the hustle and bustle of everyday life. And so what's been driving this research project for me for, well now, going on 20 years-- the better part of 20 years-- is this: is this any more accurate for our understanding of general relativity than that earlier portrait of Einstein was, compared to the now enormous documentary evidence we have to the contrary? Einstein himself was actually not an anti-social, anti-political person. He was thoroughly enmeshed in the flux of everyday life. 
And as I've been working on this book project, some parts of it with very dear friends and colleagues, it asks that same question about general relativity itself. Is that also part of the kind of flux of the human history and not somehow separated or apart? So today, we're going to talk for the first part of the class about Einstein's own route to what became known as the general theory of relativity. And for that, of course, you have two different primers, one of them totally optional, on the Canvas site. One of them, which I assigned as part of the regular reading, is hopefully a helpful orientation, very little mathematics. And strictly as an option for those who might be interested and have a little more time, there's a little more quantitative primer I wrote up some years ago for some UROP students in my physics research group-- totally optional, but for those who are interested. So we'll sit for a while. What was Einstein thinking about? What became this theory that we now know as general relativity? Then we'll look at two moments in the kind of expanding history of that work, one of them pretty close to Einstein's own day, where Einstein himself was still a central partner in helping other people think about this new work-- teach them, engage with them, debate with them, and so on. And then we'll jump ahead in the timeline a bit. We'll look at some efforts soon after the Second World War, actually closer to home, some of it handled right here at MIT and associated laboratories. So for that we'll jump ahead in our timeline, and then we'll kind of turn the clock back a bit for Wednesday when we jump into our quantum theory unit. So that's our plan for today, is we're going to sit with general relativity really across a pretty long stretch of time. Now, the first thing to recognize about the general theory of relativity is that it took Einstein a long, long time to get there. He worked for the better part of a decade, nearly 10 years nonstop. It began with the publication that you now are very familiar with, the publication on the electrodynamics of moving bodies, what soon became known as the Special Theory of Relativity. That was published in 1905, perhaps over the objections of referees like yourselves for your paper one assignment. That came out in '05. He wrapped up what we would now recognize as the General Theory of Relativity fully 10 years later, late in the autumn of 1915. This is an amazing story, where we have unbelievable documentation of many, many, many of these steps along that 10-year intellectual journey. We have, for example, Einstein's own personal notebooks. Here's just a few pages. There are a bunch of these. These are from a critical moment around 1912-13 along this path between '05 and 1915. These have been kind of lovingly pored over by an international team of physicists, and historians, and philosophers of science, transcribed and then translated oftentimes into English. His notes, of course, were in German. I love these because they show him making mistakes all the time-- crossing things out, starting all over again. He made enormous numbers of mathematical mistakes, which gives me at least a little bit of hope. And we can now see almost day to day for certain critical periods how his thinking was changing? 
In addition to his notebooks, we also have this unbelievable resource called the Collected Papers of Albert Einstein, which had been edited and put together again by an international team of physicists and mathematicians and historians and philosophers-- a very large team. They've been working at this for decades. And they put out a new volume roughly every year and a half to two years. They've now gotten all the way into the late 1920s, starting from his birth in 1879. He lived until 1955. They have 30 years still to go. I wish them luck. It's an amazing resource, and they also do regular translations into English as well. It's an amazing, amazing resource. Many, many, many of these volumes now are available entirely for free online, too, which is even more extraordinary. So anyone can now just read through Einstein's mail, even in English, let alone in German. It's extraordinary. So with these resources, we can document, to unprecedented precision, this 10-year intellectual work. I'm not going to take you through that in real time. This class won't take 10 years. But I want to give some highlights along the way. So we know from that body of work that Einstein began working toward what would become the General Theory of Relativity as early as 1907-- not right in 1905, but pretty soon afterwards. He began thinking of it because a different, very prestigious German physicist-- in fact, one who would go on to win the Nobel Prize, named Johannes Stark-- invited Einstein to write a review article. Stark was not an expert on relativity. He was immersed in the behavior of atoms in external electric fields. Some of you might still recognize Stark's name. We still talk about the Stark Effect, or if we're speaking German, the Stark Effect-- same guy. So things like the impact on the radiation that comes out of certain excited atoms, if the atoms are in some electric field, that was what Stark was immersed in. He was an editor of a different journal, not the Annalen der Physik, but a kind of annual reviews journal. And he invited young Einstein, saying this stuff is pretty interesting. People aren't paying attention yet, which was correct by 1907. As we saw last time, very few people had turned to pay much attention to Einstein's work, so Stark very generously invited young Einstein to write a review of his own work to try to get a little more attention in this other journal. And while working on that review article, Einstein began trying to generalize his work on special relativity. As you may remember, for special relativity, he had restricted attention to observers who moved at a constant speed. They didn't have to be sitting exactly at rest. It wasn't clear what exactly rest would mean anymore if motion was relative. But he would restrict attention to people who were moving at constant speeds with respect to each other, neither speeding up or slowing down-- not accelerating. That's what was meant by this term inertial observers, or inertial frames of reference. So now, the next obvious question to ask would be can you generalize those results to arbitrary states of motion, not restricted to only constant speeds? What if an observer were speeding up or slowing down? Can you accommodate this framework of relativity to accelerations? And that's what he began thinking about in the context of expansive review article for Johannes Stark's journal. And as Einstein recalled some years later, he was sitting at his desk at the patent office. He still had his day job. 
He hadn't even been promoted or whatever. And he looked out the window, and saw across the street a window washer up several stories high, who had an accident and slipped and fell off the scaffolding. Now, luckily, Einstein said, it had a happy ending. The window washer was caught by a kind of canvas canopy. He did not crash to the ground. He was unharmed. But what Einstein focused mostly on was the experience of that window washer as he was falling between when he left the scaffolding and when he was captured by that canvas awning, the canopy. During that intervening period, as he was accelerating in free fall toward the ground, his tools were falling at the exact same rate as he was-- his bucket, his squeegee, anything, all his tools that he had with him up there on the high scaffolding. So while he was accelerating, everything with him was falling at the exact same rate, and that suggested to Einstein that that was different from what would have happened if he had stayed at rest and just dropped his can. It seemed that the acceleration had somehow counterbalanced or canceled out the effects of gravity. If the window washer was standing still and dropping stuff, they would fall further and further away from him. Instead, everything is falling exactly with him, as if they hadn't been dropped at all, as if they were feeling no gravity whatsoever. Relative to him, everything fell at the same rate. It looked very much like acceleration was somehow canceling the ordinary impact of gravity. He starts thinking about this. He calls it, in fact, "the happiest thought of my life" as early as 1907. He then does what Einstein often did in this part of his career. He begins thinking through very simple sounding scenarios, just like the opening passage in his 1905 paper on the electrodynamics of moving bodies. Now he's not thinking about a moving bar magnet and a conducting coil. He's thinking about people like that window washer in various states of accelerated motion. So first he considers someone at rest in an elevator, and the elevator has no windows. So this person is sealed, basically, in this small windowless box, and she's sealed in there with two very important scientific tools-- a tennis ball and a stopwatch. No lasers yet, because they weren't invented. So the person inside the steel box, at rest on the ground floor, no motion, conducts a very extensive experiment. She holds out the tennis ball, lets go, and then times how long it takes until the ball hits the floor. OK, then unbeknownst to her, her advisor, who's not a great advisor, grad students, raises the entire elevator up to the top floor, and then cuts the cable. And now, knowing that she has time for one last experiment in her unfortunately short career, the grad student inside the elevator does this one more time. She drops the tennis ball and times how long it's going to take till it hits the floor. It never hits the floor, at least not while she's watching. While she and the entire elevator are in free fall, much like for that window washer, everything falls together at the same rate, which means the tennis ball doesn't really leave its initial position relative to, say, her hand. The ball does not race toward the floor as it had over here when she let go when the elevator was at rest. While the entire laboratory frame-- while the entire elevator is falling, in fact accelerating in free fall-- then, much like for the window washer, everything inside that laboratory environment falls together at the same rate. 
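As a hedged aside (not something spelled out on the slide), the elevator version of this cancellation can be written in one line of elementary Newtonian kinematics, assuming a uniform gravitational field of strength g:

```latex
% In the ground frame, the released ball and the freely falling elevator
% both accelerate downward at the same rate:
\[ a_{\text{ball}} = g, \qquad a_{\text{elevator}} = g . \]
% Relative to the elevator (and to the grad student inside it), the ball's
% acceleration is therefore
\[ a_{\text{rel}} = a_{\text{ball}} - a_{\text{elevator}} = 0 , \]
% so the ball hovers beside her hand and never reaches the floor, just as
% it would in a rocket ship at rest far from any large mass.
```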
So you don't have a relative motion of the ball leaving her hand and racing toward the floor, OK? Then Einstein takes this to the opposite extreme. Now, imagine that this grad student in the elevator has an identical twin. She's sealed up in a spaceship with no windows, out very far away from any large planets or stars or any masses. So she's out in basically empty space. She turns on the engine on her rocket ship, so now her rocket begins to accelerate upward at a faster and faster speed while the engine is firing. So now her entire windowless frame is accelerating at a constant rate upward. She does the same experiment her sister had done. She releases a tennis ball, and times how long it takes to hit the floor. And, in fact, once she lets go it will hit the floor. We would say from outside, if she lets go, the ball hangs still. There's no external forces on the ball, but meanwhile, the floor of the laboratory raises up-- accelerates upwards toward it. But as far as she knows, all she sees is the following. She releases the ball, and some little while later, some time later on her clock, the ball smacks into the floor. Now she turns off the engine, turns on retrorockets, brings her ship to rest. So now her ship is at rest away from any large masses, away from planets or stars. It's not moving, let alone not accelerating. It's just at rest in outer space. She does the same experiment-- releases the ball, and now the ball just hangs there. It doesn't change its relative position with respect to her hand or the floor. It doesn't crash into the floor. So now we have these four scenarios. Sometimes when the observer releases the ball, a little while later it smashes to the floor. Other times when the observer releases the ball, it hangs there and doesn't race toward the floor. Now, Einstein pulls the same trick he pulled in 1905. He says, there's an asymmetry in the explanation. There aren't actually four distinct phenomena, only two. The ball either does hit the floor or it doesn't. Again, it's not like the coil is really in motion-- excuse me, the magnet really in motion and the coil at rest, there's a relative motion, moving magnet induced current. He's pulling the same kind of argument here at the start of his thinking about how to generalize his Special Theory of Relativity-- only two phenomena does the ball hit the floor or not. However, until this time, physicists had given totally separate, non-overlapping physical explanations for what Einstein argues was really the same thing. They'd given too many explanations for too few phenomena. So let's take the cases where the ball does crash into the floor. When we're describing the situation where the elevator is at rest on the ground floor here on Earth, we appeal to things like gravity. The massive planet Earth exerts some force on that tennis ball. It exerts a gravitational attraction that pulls the ball downward, and so therefore the ball, under the influence of gravity, smacks into the floor. That's case one. For the case of the accelerating rocket ship in outer space, we never use the word gravity at all. We never even talk about forces really. The ball is subject to no forces. Once she releases it, nothing is touching the ball. Nothing is exerting a force. But the floor of the laboratory is accelerating upward because of its firing engine. We certainly never need to use the word gravity there. 
And so as far as Einstein's concerned, if these are small laboratories with no way of gathering information from the outside world-- no windows, no radios, no telegraph stations, and so on-- then how could anyone ever distinguish which scenario they were in? How would one twin know that she was really, really in an elevator at rest in a gravitational field on Earth and her identical twin really, really know that she's in an accelerating spaceship very far from the Earth or any other planet? All they could do was perform local experiments in their local laboratories. At least that was the argument Einstein had made. You can still hear the echo of a Ernst Mach positivism. What could become objects of positive experience for them? Conduct tests like drop a ball and time how long until it hits the floor. So these thought experiments are what help Einstein begin to flesh out that intuition or that insight that he seemed to have gotten watching the window washer fall from his window. The gravity and acceleration become interchangeable, that the effects we might attribute to one could just as readily be attributed to the other, depending on situations that might be beyond our observation. He calls this the Equivalence Principle-- the deep underlying equivalence, as far as he's concerned, between the effects of gravity and acceleration. That's what he calls the happiest thought of his life. Now, that sits with him for a little while. He goes on to his other work we'll look at quite-- he was busy with many things in the same time period, Quantum Theory, as we'll look at in the coming classes. But these ideas can kind of sit with him. And a few years later, he develops his follow-on thought experiment-- still thinking very simply, not quantitatively really at all, very, very simple kind of pictures and thought experiments. Imagine that spaceship, now in outer space, really does have some windows. Let's say it has a window near the top hatch and near the floor. So it has windows near the top and near the bottom. Maybe it's all windows, very strong, reinforced Plexiglas or whatever. And now, let's say that a light beam pierces the ship, enters the ship from near the top-- one of these top windows-- while the ship's engine is firing, so while the ship itself is accelerating upward. The light beam comes in through a top window, and while the light beam traverses that ship, in the short while that the light beam travels across the ship, the ship itself is accelerating upward. So by the time that light beam crosses the spaceship, it exits near the floor. It enters near the ceiling, exits near the floor, because the ship itself has accelerated during the time the light wave has been traversing the ship. Now, he says, what would that look like to observer inside the ship? Well, she would see the light beam enter by the top hatch and exit by the floor. She would see the path of that light beam bend. What the observer inside the ship would see is oh, the light entered there above my head and left below my feet. Its path has been bent. Now, if she really believes the equivalence principle, she could say the same phenomenon that we attribute to acceleration must occur in situations when we talk about gravity. If gravity and acceleration are interchangeable-- that's the upshot of this Equivalence Principle-- then this bending of the path of light that we might have attributed to the ship's acceleration, that exact same phenomenon should be happening in a gravitational field as well when the ship is at rest. 
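To put even rough numbers on that bending argument, here is a hedged back-of-the-envelope sketch; the ship width L and acceleration a are illustrative assumptions of mine, not quantities quoted in the lecture:

```latex
% Time for the light beam to cross a ship of width L:
\[ t \approx \frac{L}{c} . \]
% While the beam crosses, the accelerating floor rises by
\[ \Delta y \approx \tfrac{1}{2} a t^{2} = \frac{a L^{2}}{2 c^{2}} , \]
% so to the observer inside, the beam's path tilts by a small angle
\[ \theta \approx \frac{\Delta y}{L} = \frac{a L}{2 c^{2}} . \]
% The Equivalence Principle then says the same tilt should appear for a
% ship at rest in a gravitational field, with a replaced by g.
```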
So Einstein convinces himself-- so far, no one else-- by 1910 or 1911 or so that gravity must bend the path of light. If we can attribute those effects due to acceleration in a frame that otherwise has an absence of gravity, then what if we had no particular motion? The ship is at rest, but in a gravitational field-- Equivalence Principle says the same thing should apply. Now, that begins a new train of thought. Remember, for Einstein, he had convinced himself by 1905 that light is special. Remember, that was the second postulate of his 1905 paper on the electrodynamics of moving bodies. Nothing travels faster than light, and we all agree on its speed, at least according to Einstein's second postulate. So people can use light to chart shortest distances between two points in space. Nothing could possibly have gotten from point A to point B faster than a light beam, if everyone agrees on its speed and nothing can go faster than light. So light becomes a kind of mapping tool. We can use light beams, at least in our imagination as a thought experiment, to map out the shortest possible distances between different locations or positions in space. So now, if a light beam could be-- if its path could be curved by gravity, due to this argument from the Equivalence Principle, that sounds to Einstein like saying that space through which the light is traveling itself could be curved by gravity, that the path that the light is mapping out is telling us something about the underlying space and not just the accidents of the path of the ship or the presence of this or that planet. This mapping tool, on which we should all agree, should enable us to map the shortest possible distances throughout space. And if the path becomes bent by this kind of argument then maybe space is bent, or even post-Minkowski space time. This is the first time where Einstein begins to give grudging admiration or respect to the work from Hermann Minkowski that we looked at together last time. This is where, remember, Minkowski, the geometer, had said the real meaning of Einstein's Special Relativity was that space and time themselves meant very little. There was a union called spacetime that mattered, and we should map that using all the tools of geometry. Einstein, years later now, four or five six years later-- four years later-- comes back to that kind of geometrical view as well. We should think about spacetime and not consider the geometrical properties of that object space and time fused together. And if spacetime could be bent-- that means the rules of geometry that we apply to quantify relationships in that spacetime-- they might not be the geometry that he learned as a young student. They might not be Euclidean geometry. In fact, they would, in general, be non-Euclidean. They would be the geometry that professional geometers had developed in the middle decades of the 19th century, just before Einstein was born, to generalize what we used to just call geometry. Now we have to specify, oh, we mean Euclidean geometry, as opposed to these self-consistent but different variations. And, by the way, here I want to pause as well. Again, some of you might be familiar with this. There was a really pretty cool 10 or 11-minute YouTube video that my son shared with me. He's in high school and getting really into computer graphics and computer gaming, unfortunately. That's in a making games, not just playing them. But he found some really cool YouTube videos on visualizing non-Euclidean geometry, and I thought one was pretty cool. 
So I put that in the Canvas site as well-- totally optional, but if you've got 10 minutes to spare, I think you might enjoy it-- much better dynamic animations than this. But the idea was that even in Einstein's day, this stuff was well known to the professionals, to the mathematicians who were geometers like Minkowski. If we practice geometry, say, on the surface of a sphere, what comes to be called a positively-curved spatial surface or spatial manifold, then a lot of the regular rules we're used to from ordinary or Euclidean geometry, they no longer apply. So for example, we can construct parallel lines, two lines that are each perpendicular to the equator, say, of this sphere. So they both cross the equator by making the same angle-- sorry, the same right angle. So they're perpendicular to the same line. That should suggest they're both parallel to each other by Euclid's usual rules. And yet, they don't remain parallel forever. Those lines that are parallel to each other here converge at the North and South Pole of the sphere. So parallel lines don't stay parallel. In fact, they can intersect. Likewise, we can construct triangles, the shortest possible paths to make a closed, three-sided surface or three-sided shape. We can do that with, say, these two lines that each cross the equator and then converge at the North Pole. Well, now we've made a triangle on this positively-curved surface. But the sum of the angles inside that triangle, by necessity, have to add up to more than 180 degrees. We have 90 degrees here-- one right angle-- another 90 degrees here-- that's two right angles. We've already hit 180, and there's some angle-- it could be 1 degree, it could be 100 degrees, but some angle larger than 0 up here. So the sum of the interior angles of our triangle exceeds 180, and that should never happen in Euclidean geometry. So there are all these ways in which the geometry of spacetime that Einstein kind of grudgingly begins to think he has to worry about is going to depart from the ordinary rules of Euclidean geometry. And now he begins to sweat, because he cut almost all of his math courses as a university student. Remember, Minkowski had such a low opinion of him and vice versa. When he was a college student, Einstein used to borrow the notes for math classes from mostly two people-- his girlfriend, who became his first wife, Mileva Maric, and from another friend of his, Marcel Grossman. Grossman loved the mathematics. He didn't cut any math classes. And, in fact, he went on to a very illustrious career as a professor of mathematics. Grossman's own specialty, in fact, became these non-Euclidean geometries. He loved this stuff, like doing geometry on the surface of a sphere or on a hyperbolic saddle surface. So as luck would have it, as I mentioned briefly in our previous class, Einstein started getting new job offers starting in like 1909, 1910. By 1912, he'd been hired back to Zurich for a new academic post. In fact, he was at the same university where Grossman was now teaching mathematics. So he went to his friend Grossman, and said basically, can you help me out again, just as you bailed me out in college? So they began collaborating very actively together. Grossman took on Einstein, his peer. Einstein was a professor of physics. But Grossman began tutoring him in what was to mathematicians by this point pretty familiar geometry, the geometry of non-Euclidean spaces. 
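Before moving on, a hedged worked version of the triangle-on-a-sphere example from a moment ago; the formula is standard spherical geometry, and the particular triangle is my own illustration rather than the one on the slide:

```latex
% On a sphere of radius R, the angle sum of a triangle with angles
% alpha, beta, gamma exceeds 180 degrees by an amount proportional to
% the triangle's area A (the "spherical excess"):
\[ \alpha + \beta + \gamma = \pi + \frac{A}{R^{2}} . \]
% Example: two meridians leave the equator at right angles and meet at the
% North Pole making a 90-degree angle there. The angle sum is
\[ 90^{\circ} + 90^{\circ} + 90^{\circ} = 270^{\circ} , \]
% and the triangle covers one octant of the sphere, A = 4*pi*R^2 / 8,
% so the excess A/R^2 = pi/2 = 90 degrees, consistent with the formula.
```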
So together, around 1912 and '13, they began publishing a number of articles together, in particular Grossman teaching Einstein the rudiments of how to do geometry on curved surfaces. And then Einstein eventually moved to a new position in Berlin. They stopped actively collaborating. But Einstein had learned enough from Grossman during that very intense period of collaboration, around 1912, 13 or so, that he was able to then continue those new mathematical techniques on his own. And then by November of 1915, two more years still even after connecting with Grossman-- two or three years-- he arrived at what we now call the Field Equations of Einstein's General Theory of Relativity. He didn't get there with Grossman, but using the tools that he learned with Grossman, he got to this expression. This was what was carved into that make-believe marble temple in George Gamow's cartoon. I would like to say I'm not advocating body art, but if you're looking for a new tattoo, you can do a lot worse than this. It's pretty awesome. It's short enough to tweet. This is just an extraordinary expression. What it's doing is quantifying what became the new rules for Einstein's General Relativity. This side over here, the R's and the G mu nu and all that, that's really what he had learned from Grossman. That's the mathematics of characterizing the curvature or the warping of a curved spacetime, the degree to which it deviates from ordinary or Euclidean geometry. This is doing things like gradients, like rates of change, how much does a surface bend between point A and point B, and how do you quantify that? And again, that was a bit vague. I have a bit more on that in the optional primer on the Canvas site-- totally optional. This is the mathematics of how to quantify the bending and warping of space and time, mostly with the techniques that he learned from Grossman. The other side is characterizing where the stuff is. This is talking about the distribution of matter and energy throughout space. Where is this stuff? How densely is it packed here versus there? And so as the later general relativist John Wheeler famously liked to say, we can read this equation in either of two ways. We can say that warping space tells matter how to move, or matter tells space and time how to bend. It's this incredible union between the geometry of space and time and the distribution of stuff, of matter and energy. And that's what becomes the General Theory of Relativity. So by the end of this 10-year journey, by no means when he started-- by the end of that, Einstein had convinced himself, albeit, as we'll see, almost no one else-- he convinced himself that gravity was nothing but geometry, that there was no force of gravity at all. There was no need to talk about a big mass like the Earth tugging, exerting a force on, say, the tennis ball in our lab assistant's hand, or on the moon, or the sun exerting a force on the Earth. The entire language of gravitational force, like from Newton, was simply kind of beside the point. It was superfluous, much like Einstein thought the ether was. What's really happening, according to Einstein, is that objects are following the shortest possible paths they can, but they're doing it through regions of space and time that need not be flat. So things are moving on as short a path as they can through a geometry that might be pretty unusual-- through curved space and time. 
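For readers who want the expression itself: in standard modern notation, the equations being described here are usually written as below. Treat this as a textbook-form sketch rather than a transcription of the slide, since conventions for the constant on the right-hand side vary:

```latex
% Einstein field equations: curvature of spacetime (left side) is tied to
% the distribution of matter and energy (right side).
\[ R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} = \frac{8 \pi G}{c^{4}} \, T_{\mu\nu} \]
% Geodesic equation: a freely falling object follows the straightest
% possible path through that (possibly curved) geometry.
\[ \frac{d^{2} x^{\mu}}{d\tau^{2}} + \Gamma^{\mu}_{\;\alpha\beta} \, \frac{dx^{\alpha}}{d\tau} \frac{dx^{\beta}}{d\tau} = 0 \]
```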
So in this regular example you've probably seen before, you can think of warping spacetime as a stretched, taut trampoline-- a kind of rubber sheet. We can plop a large, heavy bowling ball in the middle. That could be like the sun, a very large mass. It will have a larger, quantitative effect on the geometry near it. It'll warp space and time considerably. And then smaller objects, like a ping pong ball, which could represent the Earth-- they're going to skitter around in that now-warped surface, and that warped spacetime. So the Earth is trying to move on as straight a line as it can. It's just getting constantly deflected from what would have been a straight path because the surface on which it's moving has been bent, has been warped or deformed by the presence of this large mass of the sun. So that's what Einstein convinces himself after this 10 years-- really eight years-- of pretty solid work. That's how we should account for gravitation. So let me pause there, stop sharing my screen, and ask for any questions. Any questions on any of that? No questions at all? This is totally intuitive, everyone already knew this? Come on. I mean, you're doing better than Einstein by not cutting your classes, so maybe it all adds up that this would be totally sensible for you. If not, I'm happy to march forward. I want to make sure we have that kind of basic sense of how Einstein, over this long period, has to develop both new tools, or actually has to learn what were by then standard tools, and apply it to what seemed like a very familiar situation. Aidan asks, were people believing Einstein, or were they still doubting? Very good. So let's talk about that in the next part. So the short answer is, most people hadn't even heard about this yet, for reasons that we'll talk about in the next main unit. And some of those who did hear about it were very, very skeptical, and maybe for understandable reasons. That's even stranger sounding, I think, than all that relativity stuff, right? There's no force of gravity? Every single success from Newton's physics is thrown out? The Universal Theory of Gravity means nothing-- that kind of thing. The public's perception of gravity at this point-- good question from Silu-- was really, for those who had at least a little formal schooling, even through high school, would have learned about things like the great triumph, as it was seen at the time-- the great triumph of Newton's universal gravitation. It wasn't just that Newton seemed like he got it all right. Newton's work had been subjected time and time again since Newton's own day right up to Einstein's day to increasingly precise quantitative tests, where it passed virtually every known test, from astronomy, from celestial mechanics. It helps astronomers predict where various objects should move in the sky by assuming there was a force of gravity between, say, the sun, and the Earth, or between the Earth and the moon. There were a few little anomalies that kept people busy, but it was otherwise this crowning achievement. People had used it in the late 1700s, early 1800s-- I can't remember now. I'd have to look it up-- to predict whole new planets, from the wobble in other objects. So the planet-- was it Uranus or Neptune-- was discovered that way. Now, I'm really forgetting. 
There was so much confidence in Newton's laws, including his Law of Gravitation, that people could actually find new stuff in the sky that had been literally unknown to the ancients, on the strength of using Newton's law of gravity to go out and find it and make sense of it. So it was a pretty big deal for Einstein to say, oh, there's no force of gravity. That's a lot like saying, oh, there's no ether. And so, not surprisingly, people needed a little more evidence, and more than just his say so, to jump on board. And as Horace knows from reading ahead, we're going to see a lot begins to change in 1919. We'll come to that real soon. Excellent questions. Any other changes here? Let's see. Jesus says, what does it mean when we talk about the energy of an orbit? Ah, that correlates to a difference in energy for a different geometry of spacetime. Yes, that's very good. So it gets actually a bit complicated. Einstein himself didn't even quite realize this at first. One of the first people to really tackle that was actually the extraordinary mathematical physicist Emmy Noether, who was in Gottingen, who began thinking about how can we characterize energy in a self-consistent way in the context of this new framework for curving spacetime? So how to handle energy, or energy densities in general relativity, was actually really tricky. Einstein himself was certainly not the first to get there. We now have tools, some of them dating back as early as to Emmy Noether from around this time, with which we can actually characterize the energy-- at least within one observer's frame, the energy of, say, a particular orbit. We can compare energy differences. We can certainly talk about the energy density, how much energy per unit volume of space. So the energetics get complicated as well. And Horace now tells us he wasn't only reading ahead. He actually grew up near where part of the next story is going to take place. So that's cool. So let's turn to the next part. Actually, good questions, if any more questions come up, of course, as usual, feel free to put it in the chat or wait till our next round. Dia says, how do we move the equivalence principle to the energy density tensor? OK, yeah. So that is a very good question. So that's really what he learned from Grossman. So Einstein's first efforts on this were long before he'd actually learned any of this fancy math from Grossman. He was trying to do things with scalar functions, potentials-- a lot like the kind of work that William Thompson and James Clerk Maxwell were doing for electrodynamics. And he was, therefore, trying to generalize the Laplace's Equation. If you remember from your classical mechanics, one can characterize Newton's Law of Gravity in terms of not only forces, but in terms of simpler mathematical quantities like scalar functions-- sort of a vector force, you can have a scalar relationship, so scalar potential phi. And then you can relate the changes in space, the gradient squared, or nabla, of that scalar function. That would then be sourced by the energy density or by the matter density, by where the mass is distributed per unit volume. So Einstein began thinking about gravity should still be somehow related to where the stuff is in a kind of density. But then we learn from the mathematician Grossman there are more complicated mathematical ways to characterize these non-Euclidean manifolds, like these mathematical objects called tensors. And you can learn more about those, if that's new to you, in that optional primer. 
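The scalar relation being gestured at here is, in standard notation, Poisson's equation for the Newtonian gravitational potential; a hedged textbook-form sketch, not a quotation from the lecture or the primer:

```latex
% Newtonian gravity in potential form: the Laplacian of the scalar
% potential phi is sourced by the local mass density rho,
\[ \nabla^{2} \phi = 4 \pi G \rho , \]
% and the gravitational acceleration is the (negative) gradient of phi:
\[ \vec{g} = - \nabla \phi . \]
% This single scalar equation is what Einstein first tried to generalize
% before Grossman introduced him to tensor methods.
```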
Then to keep a tensor relationship to make the equation self-consistent, Einstein had to relate a tensor quantity for the curvature of space and time to some tensor generalization, no longer a single-scalar field that would characterize where the stuff is. So Einstein began thinking about a very simple single scalar potential, and therefore a single scalar matter density, and he began learning from Grossman how to treat that in a more mathematical generalized way. Victoria says, if General Theory says objects essentially fall on the shortest paths, what point are they trying to get to? Ah, so they're still trying to minimize a way of characterizing the energy of their path. Einstein didn't know that yet. That wasn't true, and that wasn't on his mind in 1915. But the paths these objects will take still come from minimizing the kind of total energy associated with the path. They're just doing so through a spacetime that's no longer Euclidean. And that's the kind of work that people like Emmy Noether, and David Hilbert, and eventually Einstein himself began to work on in the months and years after November 1915. It's a great question. Let's take the last one from Sarah, then I'll move on. Sarah says, non-Euclidean geometry is a term describing anything that isn't Euclidean. That's correct. There are indeed several types of non-Euclidean geometries. It's totally right. And there are two very simple kinds, to characterize global spaces, that have a constant curvature. And those can be either positive with the same curvature everywhere, so some non-zero positive value, or some non-zero negative value everywhere. Or anything in between. It could be lumpy and could have a much more complicated surface. And in that video that I linked to, that YouTube video on the Canvas site, actually, the person who made that video does a really nice job going through some of that stuff in the kind of most symmetrical forms of non-Euclidean geometries that depart from ordinary Euclid. So you might enjoy that short video as well, or a little bit more in my optional primer. I do that chat more. These are excellent questions. Good. Let me press on. Let's go to the next part of the material. It picks up on some of these questions here, after all. Who took any of this seriously? How did people find out about it? Now, you might recall I mentioned Einstein got to this form that we now recognize, the Field Equations, relating the warping of space and time to the distribution of stuff to where the matter and energy are. He got there in late November 1915, In fact, it was November 25. We know the exact date. He gave a talk on it at the Prussian Academy of Sciences on Thursday the 25th of November, 1915. Well, as you may recall, that was not a quiet time in the history of, say, Germany or much of Europe. By that point, the First World War-- the War to End All Wars, so people first thought-- had been raging for more than a year. The First World War broke out in August of 1914 in the midst of Einstein's quite significant struggles toward the General Theory of Relativity. So we immediately are thrown back into the question of relativity and the very messy, very confusing, chaotic, and bloody state of human affairs by asking the simple question. Who else could work on relativity? We're thrown immediately back into this question of people in real times and places, in these cases in quite dramatic, often very dangerous times and places. So here's a map of Europe around the time of 1914, around the time of the war. 
I've shown in the big star here is roughly where Berlin is. That's where Einstein was by the time he was finishing this work. And we want to ask the question of how did he begin to share working knowledge or information about this exciting new set of ideas with colleagues who were not in Berlin, with colleagues elsewhere in Germany, with colleagues in other countries, both to the East and to the West? And we'll see to begin asking that question, we're immediately thrown into the realities of the First World War. One of the very first people anywhere on the planet, as we get kind of up to speed on this new material of General Relativity, was a Russian mathematician named Vsevolod Frederiks. It turns out Frederiks was originally from St. Petersburg in northern Russia, but he was very talented, and was doing a kind of post-doctoral appointment, a kind of research position, with the world-famous mathematician David Hilbert in Gottingen within Germany. So Frederiks, the Russian mathematician, was working with David Hilbert in Gottingen. And we know from Einstein's voluminous correspondence and his diaries and so on that Einstein had befriended Hilbert, Frederik's main sponsor, and would actually make long visits to Gottingen-- in fact, two weeks at a time. Those visits could continue even after war had broken out. After all, Einstein was a German civil servant. By this point, he was working. He was employed by the Prussian Academy of Sciences, so he was certainly allowed to travel within Germany, even after war had broken out. So we know he not only could, he did make multiple lengthy trips from Berlin to Gottingen to visit with friends and colleagues, like Hilbert, like Emmy Noether, whom I mentioned who was also at Gottingen, and other colleagues. He would often stay for two weeks. Sometimes he'd say as a personal house guest of Hilbert. He'd actually live in Hilbert's house. They were very-- had lots of opportunity for informal discussion. Frederiks met Einstein on one of these visits. He began to learn from Einstein directly and began to pursue the work, even after Einstein had went back to Berlin, with this circle in Gottingen that was studying material. After war broke out, after these visits had started-- now, Frederiks, this Russian mathematician, was a Russian in Germany after those two countries had declared war against each other. So he was detained almost immediately as a civilian prisoner of war. He was literally locked up. He was imprisoned, because he was a Russian citizen in Germany after the nations had declared war. He had a very powerful friend and sponsor in the person of David Hilbert. Hilbert was a very prominent kind of local figure. So we know that Frederiks didn't suffer nearly as badly as many others did in these sometimes quite horrible conditions in these prisoner of war camps. In fact, Frederiks could, on occasion, still have visits from people like Hilbert and other members of the Gottingen research group. He could continue thinking about relativity, even while locked up. After the war had ended, he was released. He was repatriated to his native St. Petersburg, which, by this point, had undergone the Russian Revolution-- was now known as Leningrad or Petrograd. It went through several name changes. And once he was there, he began teaching the first-ever classes anywhere within Russia on General Relativity. One of his first disciples was a very talented Russian mathematical physicist named Alexander Friedmann. They began teaching a course together on relativity. 
They taught people like Vladimir Fock, George Gamow. He's the person I mentioned earlier with the cartoon drawing of the Temple of Relativity. They taught Lev Landau, and then Landau taught everyone else in Russia basically-- in the Soviet Union. So he created the first school of relativity, but he couldn't do it right away. And he happened to learn the material and be able to transmit the material, all because the changing dynamics of the First World War. There was another effort to try to move working knowledge of relativity eastward from Berlin in the midst of war. And that involves Einstein's close colleague, Karl Schwarzschild, shown here. So Schwarzschild was actually an observational astronomer, as well as a mathematical physicist. At the time, that was actually not so uncommon. We'll see several people even just today who did both of those roles. Those roles would specialize and separate over the course of the 20th century. But Schwarzschild was publishing papers in mathematical physics and in observational astronomy throughout his career. He spent quite a large part of his career in Gottingen. He knew that crowd. By the later time that Einstein moved to Berlin, Schwarzschild was nearby in Potsdam running an observatory close to Berlin. He and Einstein got to know each other. So Schwarzschild got to learn directly from Einstein about this developing work on what would become the General Theory of Relativity. And then when war broke out in the summer of 1914, Schwarzschild, at age 40, volunteered for the infantry. He was such a patriotic German citizen that he decided he could best spend his time by literally fighting as a soldier, boots on the ground to fight. So he was sent to the Eastern front. He was sent to the Russian front. Now, he's a German soldier far from Berlin, but still in the German army. So he was still able to exchange mail with Einstein. Einstein was a German civil servant. Schwarzschild was a member of the German army. So even though Schwarzschild was basically in Russia right on the front, he and Einstein could still at least trade letters and postcards, many of which have survived. We have this amazing documentary material. Einstein mailed to Schwarzschild some of the updates as he was working more and more on relativity through the autumn of 1915, after Schwarzschild had already deployed. Einstein was convinced that no one-- not Einstein, not anyone-- would ever be able to find an exact solution to his new equations. As I explained in that optional primer, the equations are highly nonlinear. They don't have the same structure, say, of Maxwell's equations from electromagnetism. The gravitational terms can act on themselves. You have a highly nonlinear form of the mathematical equations. And in general, there's no rule that says nonlinear equations can always be solved exactly. Einstein was convinced he'd always have to find approximations of the sort he had already been working out. So imagine Einstein's surprise when he gets another communication back from the Russian front, from his colleague Karl Schwarzschild, who, on some downtime between fighting, was able to work out the first known exact solution to Einstein's own field equations. Schwarzschild found what we now call the Schwarzschild Solution. We named it in his honor-- the solution for the warping of space and time in the vicinity of a spherical mass like the sun. Let's say you have a large mass sitting still, at rest. 
How much will it deform the surrounding spacetime in this kind of spherically symmetric way around it? We still use that to talk about motion in the solar system. It led to later thoughts about things like black holes. We use this solution all the time. Schwarzschild derived the first exact solution to Einstein's field equations literally in the middle of war during a pause in fighting. Einstein was able to arrange for Schwarzschild's paper to be published, even though Schwarzschild was away. It came out in the proceedings of the Prussian Academy of Sciences. And just weeks after submitting the mail, after sending the update to Einstein, Schwarzschild succumbed to a rare skin disease on the front and died. Here's effort number two to spread working knowledge of relativity eastward. You have to either wait till the war ends and a prisoner of war can be repatriated, or you have this momentary kind of excitement on the Russian front that has ended the way so many lives were during the war. OK, one more effort to try to spread working knowledge eastward. It was yet another astronomer colleague, right in Berlin, who had befriended Einstein. His name was Erwin Freundlich. So Freundlich, much like Schwarzschild, was an expert astronomer. And he used to love hearing Einstein talk about this developing work on what would become the General Theory of Relativity. Einstein would often pepper Freundlich, who was much younger-- he was almost a kind of informal assistant. Einstein would ask Freundlich, could astronomers possibly measure this or that? What might be observationally feasible? And so they would have these kinds of informal discussions. Einstein, by 1912, 1913, had convinced himself from this argument, like I gave before, that gravitation-- the warping of spacetime-- should bend the path of light. And he began to wonder and began to talk with Freundlich, the astronomer, if this could ever be observable. And Einstein hatched a plan, saying maybe you could just barely measure this effect during a total solar eclipse. And here was the argument. He actually wrote letters to other astronomers. Here's a letter he wrote to the American astronomer George Ellery Hale in 1913 trying to encourage astronomers to really look for this effect. The idea was that the astronomers would take a photograph of some field of stars, some constellations, some evening when they had clear viewing and a point in the Earth's orbit when the sun was, so to speak, behind us, when the Earth was between the sun and the distant stars being photographed. So photograph a constellation at a moment of the Earth's orbit when the sun was, so to speak, behind us. Then wait roughly six months, photograph that exact same constellation, that same field of distant stars, but now when the sun was immediately between the Earth and that constellation. Now, of course, if the sun is directly between us and those stars, it'll be too bright. It'll be daytime. We won't be able to see them. So Einstein says, oh, wait for a total solar eclipse. Wait for a time when the moon moves exactly into position to block nearly all of that glare of the sun. During that very brief moment of totality, when it's dark again on Earth, take a photograph again with telescopes of that particular constellation, and compare the apparent positions with respect to still more distant stars-- further away in the field of view. 
And Einstein's prediction was that the apparent positions should shift, should splay outward when the light was bent by passing near the sun, compared to when the sun was nowhere near the starlight's path. This is what would soon come to be called gravitational lensing, that the large mass of the sun should act like a lens deforming the space and time around it, bending the path, focusing the path from the more distant stars. So as we trace back the path we assume that light must have taken, we'll attribute it to a further splayed-out position, compared to reference stars that are even further away that pass kind of nowhere near the sun. So you have a larger field to set your coordinates and ask, do you see any apparent displacement by a measurable amount? Einstein could now begin to calculate how much he thought it should splay out. Do you measure any displacement of the stars during a total solar eclipse? Freundlich said, that actually is possible. We could actually perform measurements with the requisite angular resolution. We could do that. They calculate when the next total solar eclipse would be and where the best viewing would be. Einstein now uses his new prestige in the Prussian Academy of Sciences to get Freundlich a grant to do it. Freundlich puts together a small team. They get equipment. They set out for where the best viewing of this new eclipse will be. It's going to be best seen in late August 1914 from the Crimea. They get there just in time, unfortunately not in time for the eclipse. They get there just in time for the outbreak of war between Germany and Russia, Russia having claimed the Crimea. So Freundlich was now in what was recently claimed to be Russian territory just when the Russians and the Germans had declared war against each other. He and his whole team were captured, locked up as prisoners of war, their equipment confiscated. They could not conduct the observation. So this third effort to move working knowledge of relativity eastward is once again foiled by the very human situation of the First World War. Let's take one or two examples trying to move working knowledge westward. Now let's go in the opposite direction from Berlin. Even after war had broken out, Einstein could and, in fact, did make several trips to visit colleagues outside of Germany. He was a German civil servant. He was literally a government employee of the Prussian Academy of Sciences. But he could travel not to countries that were at war with Germany. He couldn't go to France or Britain. But he could go to countries that were still neutral, like the Netherlands. And that was very good for Einstein, because he had a very, very dear friend shown here. It's a little fuzzy photograph. This colleague here, Paul Ehrenfest, was leading a research group in Leiden. They had been friends for a long time. And Einstein, as we know from his letters and diaries, made multiple long trips to visit with Ehrenfest's group in Leiden, even and after war broke out, again staying for a week or two at a time. It was during those trips after war had broken out when Einstein first met and befriended another one of these researchers, Willem de Sitter, who, like Karl Schwarzschild, was both a mathematical physicist and an observational astronomer. So he was able to coach de Sitter in person during these long visits about this developing work on general relativity. And, in fact, they could trade mail even afterwards, because de Sitter was in a neutral country the mail could get through. 
And so de Sitter learned literally directly from Einstein through these coaching sessions, many of them in person, about the ins and outs of this new work as it was being developed. One thing Einstein could not do was either visit or even trade a single postcard with colleagues in Britain. So the war had choked off all direct contact, even indirect contact, like the mail, between, say, Arthur Eddington, who was in Cambridge, England, and Einstein in Berlin. Eddington was a third one of these kinds of creatures we've now met. He was both a tremendously talented mathematical physicist and an active observational astronomer. Eddington had been Wrangler. He might even have been senior Wrangler, certainly a very high ranking Wrangler in his student days with the Mathematical Tripos. He was one of the few people anywhere on the planet who was especially well equipped to handle the new mathematics of non-Euclidean geometry and apply it to real questions of astrophysical interest, as any astronomer would want to. And yet he couldn't learn about the work from Einstein. He couldn't get access to the German journals from the blockade. He couldn't get letters from Einstein, or postcards, or vice versa. So what we now know happened was that he began getting letters from Willem de Sitter, that Dutch astronomer whom Einstein met and befriended in Leiden. The Netherlands was a neutral country. The mail could travel between the Netherlands and England. It could separately travel between the Netherlands and Germany. No mail could travel directly between England and Germany. So de Sitter began writing out English language primers on this not-yet-published work from Einstein, trying to get Eddington up to speed as a kind of correspondence course on this very new, otherwise little-known work on General Relativity. There's some really interesting work on Eddington, a lot of it by my friend Matt Stanley. Matt and I were in grad school together. He's now a professor at New York University. He's written two books on this. The most recent one came out in 2019, pretty recently. Matt is really an expert on Arthur Stanley Eddington and his work. So Eddington, it turns out I learned from Matt, was not only a mathematical physicist and astronomer. He was also a Quaker, a very devoted Quaker. And therefore, during the First World War, he became a conscientious objector, meaning he refused to fight for the British army, because he refused to fight on general principles. He was a pacifist conscientious objector in keeping with his Quaker faith. That was a very, very difficult thing to be in wartime Britain in the First World War. Again, I learned from Matthew's work many British historians have cataloged for a long time. The typical response for conscientious objectors in the First World War in Britain was to lock them up, to jail them, or send them to the front for some ambulance duty, whether they were trained as medical professionals or not. So either they were locked up for the duration of the war typically, or sent directly to the front to work-- not to fight, but to work a kind of first aid ambulance duty, whether or not they had any medical training-- very, very, very difficult to be a conscientious objector in Britain in the First World War. Eddington, it turns out, had again-- much like the mathematician of Vsevolod Frederiks, Eddington had some powerful kind of senior colleagues who were able to wrangle a different situation. Among them was the astronomer royal Frank Dyson based in London. 
Frank Dyson thought this was-- basically arranged a scheme to save Eddington from jail. Eddington was willing to go to jail. He had no objection-- but to save Eddington jail, and at least as important to save Cambridge University from embarrassment. It became more and more bad press to have this young, fit, able-bodied young person, Arthur Eddington, at Cambridge not going to go help out as so many other young men from Cambridge, both faculty and students, had done. So it was at least as much to save Eddington's skin as to save face for this very elite institution of Cambridge. They worked out a deal where Eddington's wartime service would be spent preparing a new scientific experiment. Eddington would prepare the next version of these eclipse expeditions, none of which had actually been really successful yet. The same exact thing that Erwin Freundlich had tried to do, but was foiled by the onset of war, Eddington would spend his service in honor of the British crown preparing for this eclipse expedition-- quite extraordinary. It's good to have powerful friends. So the next total eclipse, beyond the one that was spoiled in August 1914, was going to happen in May of 1919, as Horace knows very well. And, in fact, the path of totality-- the path that would be best seen for astronomers-- stretched in the opposite direction, not into Crimea, or Eastern Europe, or into Asia, but instead stretching from the tiny spit of islands off the coast of Western Africa not too far from Morocco, but in the Atlantic Ocean, stretching all the way toward Brazil. So a swath from basically just off the Western coast of Africa through parts of South America. And so Eddington and the astronomer royal Frank Dyson put up two small observing teams to go head out to each of those locations. One would set up on the island of Principe, one of the islands not too far from Morocco, and the other would head off for Brazil-- for Sobral, Brazil. The eclipse would happen in May of 1919. Of course, no one at the time knew when the war would end. They began planning this early in the war. As it happened, the war ended in November of 1918 on November 11. In fact, we still celebrate Armistice Day. In the United States, we often call it Veteran's Day. So the war ended just before the eclipse, but after many of the plans had been set in motion. The eclipse itself happened during continuing wartime privations. It was still enormously disruptive to travel anywhere, let alone through these international steamer trips. The two teams set out. They performed these measurements. They bring all the equipment and the photographic plates back to London and Cambridge. They boil down the numbers for six more months. And in November of 1919, almost one year to the day after the end of fighting in the First World War, Eddington and Dyson call a special rare joint meeting of the Royal Society of Britain and the Royal Astronomical Society. Eddington steps up to the podium and says, in effect, Einstein was right. This becomes unbelievable news all around the world. Now you have a British team confirming the work of a German scientist in the midst of a war to overthrow the King of English science, Isaac Newton. This couldn't be juicier international news one year after the end of fighting. Here's one example, one of my favorite examples, the headline in the New York Times coverage. "Lights All Askew in the Heavens. Men of Science More or Less Agog over Results of Eclipse Observations." I always say we don't use the word agog nearly enough anymore. 
"Einstein Theory Triumphs. Stars Not Where They Seemed or Where Calculated to Be." But nobody need worry. It's not that the sky is falling and so on. This is what catapults Albert Einstein to worldwide fame. This is why more and more people begin to at least consider the possibility that spacetime is governed by geometry and could be curved. This is when Einstein himself becomes not just a well-known figure among working scientists, but actually literally a worldwide media sensation. Not too long after this announcement, Einstein sets out for his first world tour. He sails literally around the world, all the way to the United States, to Japan, travels back, and ends up back in Berlin. He's greeted in New York City like a celebrity. He's paraded through the streets. On his follow-up visit, he meets movie stars like Charlie Chaplin. This is why we can buy t-shirts and coffee mugs with Einstein's face on it, because of Eddington's amazing announcement in November of 1919. OK. So very soon after that, in fact, just while this news was still reverberating all around the world, Einstein gave a follow-up interview for a reporter in the London Times. And he wrote in November of 1919, and he said, "Today I'm described in Germany as a 'German servant' and in England as a 'Swiss Jew,'" because the English didn't want to give any credit to the Germans, and the Germans wanted to claim all the credit for him. He goes on to say, "Should it ever be my fate to be represented as a bete noir"-- if it looks like my work is wrong, if the results don't hold up, "I should, on the contrary, become a 'Swiss Jew' for the Germans and a 'German savant' for the English," meaning they'll each claim the other side is responsible for this turkey. They both love me when the work is right. They both want to blame the other side if the work is wrong. Because the English and Germans still don't like each other. And he closes with a flourish. He says, this is yet another application of the Theory of Relativity. He was a pretty good media guy. Well, it turns out even this prediction turned out to be true much sooner than Einstein had expected. Just months after this interview, months into this worldwide hoopla about the results of the eclipse expedition, events in Germany began to turn very dark, very quickly. There arose what became known as the Deutsche Physik movement. It's usually translated not as German physics-- would be a literal translation-- but as Aryan physics. This was some of the earliest proto stirrings of what would grow into part of the Nazi party. As early as spring of 1920-- April of 1920-- just months after this November announcement in 1919, groups began holding rallies-- political rallies-- to denounce general relativity, not to denounce Einstein, the Jewish internationalist pacifist who himself had spoken out against the war. It was over determined that these folks wouldn't like Einstein the individual. They were denouncing the warping of spacetime. It was mostly organized by, as we now know, by political opportunists who were looking for any reason to gin up media attention. Einstein was in the news. He was an easy person, therefore, to have as a target to get your own efforts in the news. That hasn't changed to our Twitter field day today. These folks were also media savvy. But it wasn't these kind of political operatives who got the spotlight. 
It was, in fact, these two gentlemen shown on your screen here-- Johannes Stark-- that's the same Stark who had invited Einstein to write the review article in 1907. In the interim, he had received the Nobel Prize in physics for his work on radiation from excited atoms. He was joined by Philipp Lenard, who was also a Nobel laureate, a German experimental physicist. We'll learn more about Lenard's work. Lenard got his Nobel Prize in 1905 for having conducted experiments on the photoelectric effect. Einstein would later win his Nobel Prize for a theoretical explanation of Lenard's results. These folks should have been able to get along, right-- two Nobel laureates with close intellectual ties to a lot of Einstein's other work. And yet it was not to be. They became the kind of front figures, the faces, public faces, of this Aryan physics or Deutsche Physik movement. They staged these rallies in sports arenas and opera houses-- tens of thousands of people, as early as 1920, to denounce relativity, to denounce this very strange notion of the warping of spacetime. Lenard became an active author on this topic. Here's a later publication, published now in the midst of the Second World War, 20 years into this effort, called Grosse Naturforscher Great Men of Science, it's usually translated, Great Researchers of Nature-- A History of Science in Biographical Form-- [GERMAN] So what did he do in this book? Here's a sample page from Lenard's book, Great Men of Science. He argued in a kind of 2-point strategy. The first part, according to Lenard-- and this is a thick volume with many examples to try to prove his point. The first part is that Einstein's work is disgusting. It is repugnant to the Aryan sensibility. I'll give one example of that. What did he mean by that? That's part one. Part two, he stole it from us. Key results, argued Lenard, have been plagiarized from properly Aryan researchers. So it's disgusting, and it's ours. It's a very strange form of argument. Then again, he was a Nazi, so what do you want? OK. Step one, it's repugnant to an Aryan sensibility. There are examples like this that get written up in pamphlets and books all over the place. They would say things like the concept of force, which was introduced by Aryan scientists, by which they mean Newton and Galileo and Descartes-- not exactly Germans or Teutons-- they were introduced by Aryan scientists, obviously arises from the personal experience of human labor, of manual creation, of working the land, a very active rhetoric. This had been and is the essential content of the life of Aryan man, a kind of romantic nostalgia for an agrarian German past, when the pure race, in their terms, had worked the land. They knew the meaning of force in their muscles, because they pushed the plow. Only in their words, only an effete cosmopolitan Jew like Einstein, could ever dispense with the concept of force. Part one, the work is disgusting. Part two, he stole it from us. There indeed was a little-known German-speaking naturalist-- long before there was a country of Germany, but he was in one of the German territories-- named Johann Soldner. As early as 1803, long before this time, he published an article, which then Stark and Lenard, the Nobel laureates, republished in the Gestapo press in 1921. Soldner had done a very clever calculation. He had used purely Newtonian gravity to calculate the deviation of the path of starlight as it moves near the sun, not because he thought spacetime was curved. 
He thought there should be a balance of forces. There's a gravitational force-- a universal force of gravity-- that should have an impact on the momentum vector of that light wave. It's a thoroughly classical Newtonian calculation, very clever, and thoroughly forgotten. Now, it didn't seem to matter to the later Nazis that Soldner's result was exactly one half of Einstein's value. It was not what was tested by the eclipse experiments. It was not consistent with the results announced in London. That was a nicety they could dispense with. The point was, everyone in the world is excited about the bending of light by gravity. That was a German, and, to their minds, a properly Aryan result. To prove that all this work had come from Aryans, Lenard would include these portraits in his book. Here is his portrait of Isaac Newton to show that Newton had these proper-- literally proper racial facial features, like he didn't have a so-called Jewish nose. So he would use this portraiture-- physiognomy, the external features of the face-- to, quote, unquote, "prove," to demonstrate that all the best work had been done by people who were racially pure in their terms. And the work that Einstein was getting famous for had been stolen from others. Let me pause there and stop sharing screen. Any questions on any of that? Yeah, great, thank you. And thank you to Horace and others in the chat. Very good. So astronomers have been conducting eclipse expeditions with great precision-- I mean, really precise scientific expeditions-- to take very precise photographs of the fields of stars since the late 19th century. And that's important. I learned this, by the way, from a few colleagues. But it's also true. And if you're interested in one of the books whose covers I showed you-- you can find it on the slides-- the recent book by my colleague Dan Kennefick, which is on one of the slides, called No Shadow of a Doubt, also a fantastic book, also just came out in 2019. And so I learned from Dan and from another colleague, Alex Pang, that there was a tradition of eclipse expeditions, especially by British astronomers. In fact, there was a division of the Royal Astronomical Society called the Joint Expedition Group or something like that. And Frank Dyson had had a lot of experience with that. So they were used to lugging equipment around to often difficult, out-of-the-way places to conduct tests, often of the sun itself. It was already seen as a way to try to learn more about the sun, the outer corona of the sun, when you could see the outer part that was usually obscured by the much brighter glare. So there's lots of reasons to try to photograph the sun or the field of stars right around the sun during an eclipse. So people like Frank Dyson especially had a lot of experience of that. The effort to try to actually measure light bending in particular was really not on most people's radar screens, so to speak. Einstein began writing letters in 1912, 13 to astronomers all over the world, asking them to look for it. As I had that one little excerpt from his letter to the American astronomer George Ellery Hale, he was talking a lot to Erwin Freundlich. To my knowledge, the eclipse expeditions prior to the 1914 attempt, which was scrubbed by war, were not actually aimed at measuring light bending per se. They were in this existing tradition of measuring all kinds of things you could learn from, things like stellar astrophysics, by photographing the sun very carefully during an eclipse. 
So there was the equipment and the know-how-- different questions. I think that's important, and that tradition goes back quite a bit earlier. So in some sense, Frank Dyson could kind of build on existing knowledge, teams, equipment, and infrastructure, keep Eddington out of prison during the war, and think about doing the same kind of experiment now in a different way. And there's more to say. I'd be glad to send more references to other work, but that's what I've learned in general about these eclipse expeditions. And Horace also had asked, was it difficult to get there? It was indeed very difficult in general. Most people were still operating under kind of wartime deprivations. Food was still rationed. There was often no medicine available in many places. The supply chains had not sprung back to action. Global commerce was still very, very disrupted. Sea travel was very, very uneven. So even just getting the teams to either Principe or to Sobral was highly difficult. And again, you can learn a lot about that in both Matt Stanley's and Dan Kennefick's recent books. Amazing. Einstein's view on the war, Silu asks-- very good question. And again, there's a lot of that in Matt Stanley's most recent book, the one I mentioned called Einstein's War. The short answer is he was against it and very bravely vocal about his opposition, which was unusual at the time. So early on, after the outbreak of fighting, 93 leading German academics, including people like Max Planck, who we'll hear a lot about soon, signed a manifesto, a kind of proud declaration, saying that the reports of atrocities by German troops when they invaded other countries were mistaken, basically saying what we now call fake news, that it can't be true. These German soldiers were upstanding gentlemen was the claim. And that they had been provoked, and that no matter what happened, they had not been burning libraries or raping women, which it turns out, in fact, they really were doing. So there was this manifesto from 93 very highly prestigious German academics in favor of defending the German war effort, and saying, in fact, the Germans should conquer all of Europe, because, in their own minds, they had reached the apex of learning and culture. The rest of Europe should be glad to serve under German rule, because they were the best at everything. Just ask the Germans-- not a very strong argument to non-Germans. Well, this only further inflamed the anti-German sentiment in places like Britain and France, for maybe understandable reasons. Einstein, as a kind of mid-career academic in Berlin in the heart of the leadership of Germany, was one of three people-- not 93, only three-- to sign a counter-manifesto that was, in fact, so explosive no newspaper would even publish it until after the war. Because they figured-- they feared the newspaper would be bombed. His counter-manifesto said Germany is to blame, war is terrible. We should all find other ways to handle international disputes. He was already very bravely by 1914, 15, 16 an outspoken pacifist internationalist, not only in letters to his colleagues, but he tried at least to be outspoken. Sometimes it was seen as so radical the newspapers wouldn't even print it. And again, there's lots and lots of stuff on that. But that's the not-so-short version. Yeah, and Jesus says, it's crazy that science was so politicized by the Nazis. Yeah, and we're going to see even more examples of this throughout much of the next several weeks of the course. 
I agree with you. I mean, it is crazy. It's sad. It's also, unfortunately, a recurring pattern for better or worse. Humans have this ability to do horrible things to each other in a whole range of settings, and this brings out some examples that we hopefully can try to learn from historically. So I think it's a great point, and Tiffany asked me to elaborate on this as well. Let me just say a bit more about what they're referring to for the later measurement. So for the 1919 one, the one that was done in Sobral, Brazil, it turns out, maybe not surprising to Horace it turned out to be-- it was May-- an unusually hot day. So before the eclipse, the sunshine was even brighter than they expected. And so they actually messed up their calibration. It literally began to warp some of the lenses or mirrors. It was that hot on their equipment, that they saw things getting out of this very careful alignment. And they hurried to try to recalibrate, but they couldn't take all the time, because they knew totality was coming. So the team in Brazil was dealing with at least one instrument. The larger, more precise in principle of the two telescopes they brought was physically deforming before their eyes because of the hot Brazilian sun-- Horace may tell us if that's typical or not. And so it was falling out of alignment, and they couldn't change it. They took photographs, but they couldn't trust that, because they had no longer a proper calibration. They had a smaller telescope that was not nearly as warped. It was maybe in the shade, or I don't know how it worked. And they took a bunch of photographic plates with that one, too. And so it turns out one of the things that Frank Dyson wound up doing, and in his book by Dan Kennefick, he goes through this in great detail-- was Dyson, who had all this experience with these eclipse expeditions, basically kind of discounted the data from the visibly-warped, no longer properly calibrated larger telescope from the Sobral from the Brazil expedition. When they're boiling down their data doing this statistical analysis, he basically tossed a bunch of data, because he used his experience. He drew on his experience to say, I can't trust that, because it no longer had met-- the systematic error would be too large, we'd say today. That later caused a lot of controversy. People made it sound like people who wanted Einstein to win were massaging the data. If you included that data, it would look more like a tie. It would be kind of a wash between the Newtonian prediction and the Einsteinian one. I think Dan Kennefick's analysis is really quite watertight, saying this was both proper of the standards of its time, not done by the people who were biggest fans of Einstein's work. In fact, done by the one person who is most vocally skeptical about Einstein's results, meaning Frank Dyson, and seems not to have been a conspiracy at all, but, in fact, an understandable technique. The fact that we would do things differently today-- we would weight data differently to calculate systematic error-- doesn't mean what he was doing was a conspiracy. The 1912 one I think is saying that if they had tried to measure the deflection, which is an if-- if they were actually trying to do that kind of measurement, they were set up, I assume, to do solar physics, I'm guessing. We can look it up. But if they had tried to do this further star deflection experiment, and if they had good luck and measured the right answer, the answer now would be one we'd expect. 
Then it would not have matched Einstein's prediction at that time, because, as you rightly say, Einstein's own calculations were changing this period. In 1912 and 1913, he did not yet come to the form of his equations that we would now recognize. That comes in November 1915. And indeed, his earlier predictions were off by exactly that factor of 2 to make them look like the same answer that one could get from a purely Newtonian calculation. No one knew that at the time, and it was only 1921 that the Nazi Nobel laureates republished that much older, very different calculation. I'm going to pause there. I won't bother, of course, going through that last part of the talk. I'll try to talk about it later. It's on the slides. I'll be glad to chat about it during office hours. But that gives us a pretty good taste to wrap up our Relativity unit. And then on Wednesday, we'll jump in to the Quantum Theory unit. So I'll pause there. Please remember to work on paper one, which is due this Friday. Thanks, everyone. Stay well. |
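A quick numerical footnote on the factor of 2 that came up in that last answer, offered here as a hedged aside rather than anything stated in the lecture itself: the standard general-relativistic formula for light grazing the limb of the sun gives a deflection of 4GM/(c^2 R), while a purely Newtonian, Soldner-style calculation gives exactly half of that, 2GM/(c^2 R). The short Python sketch below simply evaluates both with rounded solar constants; the specific numbers are textbook values, not figures quoted by the speaker.

    import math

    # Rounded physical constants in SI units; illustrative values only.
    G = 6.674e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8         # speed of light, m/s
    M_sun = 1.989e30    # mass of the sun, kg
    R_sun = 6.957e8     # radius of the sun, m (a ray just grazing the limb)

    def deflection_gr(M, R):
        # Einstein's full general-relativistic prediction, in radians.
        return 4 * G * M / (c**2 * R)

    def deflection_newtonian(M, R):
        # Soldner-style Newtonian prediction: exactly half the GR value.
        return 2 * G * M / (c**2 * R)

    rad_to_arcsec = (180 / math.pi) * 3600
    print("GR prediction:        %.2f arcsec" % (deflection_gr(M_sun, R_sun) * rad_to_arcsec))
    print("Newtonian prediction: %.2f arcsec" % (deflection_newtonian(M_sun, R_sun) * rad_to_arcsec))
    # Prints roughly 1.75 arcsec versus 0.87 arcsec -- the factor of 2 that the
    # eclipse measurements were designed to distinguish.

Running it gives about 1.75 arcseconds for the full general-relativistic prediction and about 0.87 arcseconds for the Newtonian one, which is the gap the 1919 expeditions were trying to resolve.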
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_19_Counterculture_and_Physics.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: OK. So for today, it's kind of fortuitous given, again, all the sources of let's just say swirling uncertainty in the wider world. Maybe it's a lucky timing that the lecture for today is on maybe a little bit lighter material. So we can wrap up our unit on physics, and the state, and the bomb, but ending in a very different place than some of the pretty heavy material we were looking at only a few class sessions ago with Nazis, and nuclear weapons, and wartime, and huge Cold War drama after the war. So today, we're going to be talking about a longer tale, some of the unanticipated shifts in the field coming about a quarter century after the end of the Second World War, when this Cold War system that we looked at now for a few classes in a row was really going through some pretty significant changes. And that will set us up then to launch in, in earnest, on next week, for a week from today, the last main section for the course, where we start looking at post-war developments in high energy physics, astrophysics, and cosmology. So for today, we're going to wrap up this middle unit for the class on, as I say, a bit of a, I think, more fun or more lighthearted set of themes, perfect for the timing. So what we're going to be talking about is under the general topic of counterculture and physics. And for this talk, for this lecture I have some more colorful than average sets of slides. The material is drawn mostly from a book I wrote a few years ago called How the Hippies Saved Physics, and then worked on with some friends and colleagues expanding upon that in this book called Groovy Science. And one of the readings for today was by a different colleague of mine, Cyrus Mody from this more recent collection. And so today, as usual, we have three main topics, three main sections for the discussion. We're going to look at some of the shifting topics. What counted as relevant topics to study by whom, and where, and to what ends in physics? And then how was this newer work being supported? So we've seen a lot, especially in the previous class session, on some of these quintessential Cold War formats for supporting research in the physical sciences-- huge emphasis on military funding, on a huge expansion of the university system for training young physicists. Today, we're going to look at some of the ways that system came in for a readjustment during the 1970s-- late '60s, early '70s. And then lastly, for the third part, we're going to be looking at what were some of the ideas that wound up having a longer than expected legacy-- some of them reaching really literally right up to today-- that came from this moment of real uncertainty in the way that some people were approaching the study of physics and particularly questions within it. So that's our path for today. So I began working on this project that became that book that I mentioned by trying to understand how we've all come to live in a world in which really exciting, very significant new topics were bubbling up all over the place in a field that's now often called quantum information science or QIS. 
And that includes topics you probably have heard about-- maybe you've been taking courses in by now-- topics like quantum encryption, quantum computing, and then related areas, and basically putting some of the strange features of quantum theory that we have looked at together, even if briefly over this semester-- putting those to work in real world engineering scenarios and not only thinking about strange features of quantum theory in the abstract. And some of the most dramatic examples of that are already being rolled out in real world implementations, not only on chalkboards or planning. I think the furthest ahead among this collection of topics is quantum encryption, or quantum cryptography. Already now 16 years ago-- it's quite a long way-- there were already these real world beta tests being conducted in many parts of the world. So I have these headlines here, press releases. One of them, from Vienna in April 2004, was the first successful use of quantum encryption to conduct basically a wire transfer, a money transfer in the city of Vienna. And what they did is both the mayor of the city and one of the leading banks collaborated with local physicists, led by Anton Zeilinger and his group, to use this new form of encryption to protect the electronic message going between City Hall and the bank to wire some money from one to the other. And the signal was encrypted not in the way most of our signals even to this day are encrypted, by using fancy algorithms to manipulate very large numbers that are much easier to multiply than to factorize. So our ordinary computers take a long time to break these secret codes. Instead, this the quantum encryption, or quantum cryptography, exploits features of quantum theory-- of quantum theory itself, including, most principally, things like quantum entanglement to create signals that should be protected by the very laws of physics. These are signals that aren't protected by the practicalities of current day computing algorithms, but actually by very fundamental laws like quantum entanglement, like the uncertainty principle, and so on. And this was demonstrated in a real world test with fiber optic cables strung through the sewers, the underground sewers of Vienna. I always joke that's why you actually need graduate students, to get under the ground and sling this stuff through the muck. And it was a highly successful test. Right around the same time, a few months later, there was a similar demonstration of the same basic kind of technology in Geneva, in Switzerland in this case, to protect electronic voting. So there were regional or cantonal elections throughout Switzerland. And to transmit the results of those local elections, again, government officials worked closely with a different team of quantum physicists, Nicolas Gisin and his group, to demonstrate that the electronic signal could be transmitted successfully and encrypted with encryption protected by the very laws of physics. So very exciting. These are already now quite old examples. What I wanted to figure out, when I was launching into this project, is how do we get into a world like that, where bankers and politicians entrust their most sensitive information to quantum physicists, where quantum physicists could be harnessing some what appear to be basic laws of nature in these increasingly practical engineering contexts. 
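As a rough illustration of the difference being described here between number-theoretic encryption and quantum key distribution, the sketch below simulates the BB84 protocol in Python. To be clear about what is assumed: BB84 is just one standard textbook protocol, the lecture does not say which scheme the Vienna or Geneva demonstrations actually used, and a real system works with photons plus error correction and privacy amplification rather than a software random-number generator. The toy version still shows the key idea: an eavesdropper who intercepts and resends the quantum signal unavoidably disturbs it, and the disturbance shows up as an error rate the two parties can check.

    import random

    def bb84_round(n_bits=2000, eavesdrop=False, seed=0):
        # Toy BB84: Alice sends random bits in random bases ('+' or 'x'),
        # Bob measures in random bases, and they keep only the positions
        # where their bases happen to match (the "sifted" key).
        rng = random.Random(seed)
        alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
        alice_bases = [rng.choice("+x") for _ in range(n_bits)]
        bob_bases   = [rng.choice("+x") for _ in range(n_bits)]

        bob_bits = []
        for bit, send_basis, measure_basis in zip(alice_bits, alice_bases, bob_bases):
            if eavesdrop:
                # Intercept-resend attack: Eve measures in her own random
                # basis and resends whatever she saw, in her basis.
                eve_basis = rng.choice("+x")
                bit = bit if eve_basis == send_basis else rng.randint(0, 1)
                send_basis = eve_basis
            # Bob's outcome is definite only if his basis matches the basis
            # of the photon actually arriving; otherwise it is random.
            bob_bits.append(bit if measure_basis == send_basis else rng.randint(0, 1))

        sifted = [(a, b) for a, b, x, y in zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
        error_rate = sum(1 for a, b in sifted if a != b) / len(sifted)
        return len(sifted), error_rate

    for eve in (False, True):
        kept, qber = bb84_round(eavesdrop=eve)
        print("eavesdropper=%s: kept %d bits, error rate %.1f%%" % (eve, kept, 100 * qber))
    # Without Eve the sifted bits agree essentially perfectly; with an
    # intercept-resend attack roughly 25% of them disagree, so comparing a
    # public sample of the key reveals that someone was listening.

The design point is that the security check is a physical one, measurement disturbance, rather than an assumption about how hard it is to factor large numbers.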
And so as I dug in a bit more, it took me back to this plot that we looked at together in the previous class session, how that new field-- this now flourishing multi-billion-dollar field, which is being pursued in earnest by physicists and engineers in many, many parts of the world, both in universities, and in private sectors, and in government labs-- to understand how we got there we collectively got there. We come back to this plot with this really dramatic series of changes coming out of the Second World War, a second immediate Cold War response around the time of the launch of Sputnik by the Soviet Union, and then this really precipitous collapse. And we looked a little bit at this collapse, the downward turn in the previous class as well, this constellation of forces that really shook what had come to seem like a very straightforward Cold War response in the years after the Second World War. And we captured that in these really extraordinary job placement statistics so that by the early 1970s-- pardon me, early 1970s, there were 20 times more young physicists applying for jobs than jobs available. If you look at this entry from 1971, the gap had widened to just an extraordinary degree. And that counts not just academic positions, but positions advertised for PhD physicists in industrial labs and in government labs as well, just a complete collapse of the kind of job market or employment path for young physicists in this country, very similar trends in many other parts of the world. And so last time we saw this is like a speculative bubble. It's like a stock market crash. You have this remarkably rapid exponential rise that is self-evidently not sustainable, that can't keep rising forever. And there's an equally sudden, equally dramatic adjustment. Rather than noodling around some equilibrium, you have this unstable rise and fall. And it turns out that transition in the field, in the realities of trying to become a young physicist in the United States and in many other parts of the world-- that really is critical to help us understand what would become this now flourishing field of quantum information science. And so to dig in, to try to understand what was it like to try to become a young physicist during that moment of pretty dramatic shifts in the expectations for career paths or job opportunities in the field-- to make sense of those shifts, I wound up looking at this group that called themselves the Fundamental Fysiks Group. I always joke that, then as now, physicists don't know how to spell. They spelled physics with an F. They're actually just being very playful. It was a group founded in Berkeley, California in 1975. It had roughly 10 core members. And then they would hold weekly informal discussion sessions, some of which would swell to have 40 or 50 members for a given session. Other times, it would be more of the 10 to 12 or 15 core members. The people who founded the group, who really kept it going for several years throughout the '70s, had almost all of them had earned their PhDs in physics already. Some were still grad students at the time. But they were being trained in some of the most elite physics programs in the country. They'd earned PhDs from places like Columbia, and Stanford, and UCLA, and University of Illinois, Urbana-Champaign, and so on. They were exceptionally well-trained. 
They had entered the field during that post-Sputnik boom when fellowships were plentiful, when it looked like the sky was the limit, during that extraordinary ramp up in the field that we did look at a bit more directly last time, when the enrollments were really booming. They entered their graduate study in the years after Sputnik. And then their main miscalculation was completing their PhDs just when the bottom fell out, which, of course, no one really saw coming. So they were entering very elite programs, doing very well, publishing articles in mainstream journals. And then unfortunately for them, by the time they were going on the job market, there were hardly any jobs available. So they were caught right up in that boom and bust cycle. One of the founders is this person here, Elizabeth Rauscher. She was at the time still pursuing her PhD at Berkeley. This is a photograph of her in the control room for one of those huge particle accelerators. I think it was the Bevatron, one of the huge accelerators at the laboratory. So folks like Elizabeth, and like the others who joined her in forming this group, they were trained in that era of what we looked at last time, the so-called shut up and calculate approach, where the emphasis had fallen away very abruptly from the philosophical, or interpretive, or open-ended investigation of, say, the mysteries of quantum theory, the meaning of quantum theory. And instead, they were taught during this period of exponential growth in the enrollments and a kind of narrowing of what counted, pedagogically, as legitimate ways to deal with quantum theory. They were taught how to calculate, but not really to entertain these bigger questions of the sort that kept people up the generation or two before, the kinds of debates about Schrodinger's cat, about superposition, about entanglement or EPR. Those things that had really bothered people like Albert Einstein, and Niels Bohr, and the young Werner Heisenberg, for this generation trained after the war, those questions were deemed really just on the sideline and really not relevant. We saw that shift in the pedagogic materials. We talked a bit about that last time. So this group that wound up forming the Fundamental Fysiks group, they had entered the field because they were really enchanted by these big what does it all mean questions. And yet, in their formal training during that period of the exponential growth of enrollments, that was really not what their training was focusing them on. And so, many years later when I wound up talking with her about it, I asked Elizabeth what her motivation was for joining this-- for founding this informal discussion group. She said, it would be easy to learn about all this material, which was not being covered in the textbooks or problem sets by that time, if we got together for informal discussions and lectures. And that's what they set out to do. So Elizabeth was still a full-time grad student. She was entitled to reserve space, to just reserve a room on Friday afternoons at the Lawrence Berkeley Laboratory, where she was doing a bunch of her research. And so she basically signed up for a room that was available. And she and her friends began putting out flyers and saying, open to anyone, come, let's discuss the big questions in quantum theory, and modern physics more generally, Friday afternoons. And it began in 1975. And it ran virtually every week for about four years, a long stretch of time. We have records of the topics that they talked about informally virtually every week. 
We have pretty good paperwork from that time. And we can see then that the one topic that totally dominated their discussions was something we looked at together in this class a few sessions ago, quantum entanglement, as crystallized by that work by John Bell. So you may remember, Bell published this work that we now call Bell's theorem, related to Bell's inequality. We had some optional lecture notes on this. We had we looked a bit at this some class sessions ago. Bell's really famous paper was published in 1964. This group was getting together just a little over a decade after that. So they were really, really just mesmerized by Bell's inequality and what Einstein had called, as we saw, spooky action at a distance. As a cartoon reminder, entanglement concerns systems where there's some source that emits pairs of particles. Their properties are correlated because of the nature of how the pair of particles were produced. And then we can subject each particle separately to measurement. We can choose to measure various properties of one particle or the other. And then as Bell demonstrated they must-- according to quantum theory, the outcomes of those measurements are more strongly correlated than anything like an Einstein-like explanation would ever allow or could ever account for. So you could really say that entanglement truly is the case in which the whole system-- the system particles A plus B-- is literally more-- in fact, it's infinitely more than the sum of its parts. Knowing everything we could about this system A and B is completely insufficient to tell us anything definite about particle A on its own and vice versa. So there's this really strange quintessentially quantum mechanical behavior of these entangled systems. And the Fundamental Fysiks Group thought this was just extraordinary and the kind of thing they wanted to spend Friday afternoons talking about together because it was not being emphasized in their main coursework, even in grad school. So today, Bell's theorem, that 1964 article, is literally renowned. Renowned is a technical term to high-energy physicists. Some of you might know this. Physicists relentlessly count our citations. My wife is a psychologist. She has all kinds of theories of why we're so obsessed with this form of measurement. Anyway, for a long, long time, it's been routine to count up the number of citations in the scientific literature that a given article receives and use that, fairly or unfairly, appropriately or otherwise, as a kind of proxy for influence or importance. There's all kinds of caveats to be had there. People are a bit more sophisticated now about that citation counting than we used to be. Nonetheless, we have this machinery for counting up citations. So if you do that exercise, then today, that 1964 article by John Bell is among the most cited articles in all of physics in the history of the universe-- not just in 20th century physics, not just in quantum physics. Take any subfield and go back to the time of Isaac Newton or Aristotle if you'd like, and it's very, very hard to find a single article that has been cited more often in the scientific literature than Bell's 1964 article. It is literally renowned, which is the highest category, the label for the most cited papers when physicists do this kind of chopping up exercise of citation counts. So this is really, really today recognized as an extraordinarily well-known and influential article. And yet, it took a long time to get there. 
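To make "more strongly correlated than any Einstein-like explanation would allow" slightly more concrete, here is a minimal numerical sketch of the CHSH form of Bell's inequality, the 1969 refinement whose authors include John Clauser, whom we will meet again just below. The framing is an addition to the transcript, not something the lecture spells out: any local hidden-variable account obeys |S| <= 2, while quantum mechanics for two particles in the singlet state, with correlation E(a, b) = -cos(a - b), reaches 2*sqrt(2) at suitably chosen detector angles. The angles used here are just the standard textbook choice.

    import math

    def E(a, b):
        # Quantum-mechanical correlation of spin measurements along
        # directions a and b for the singlet state: E(a, b) = -cos(a - b).
        return -math.cos(a - b)

    # Standard CHSH settings (in radians): Alice uses a or a_p, Bob uses b or b_p.
    a, a_p = 0.0, math.pi / 2
    b, b_p = math.pi / 4, 3 * math.pi / 4

    S = E(a, b) - E(a, b_p) + E(a_p, b) + E(a_p, b_p)

    print("quantum CHSH value |S| = %.3f" % abs(S))  # about 2.828, i.e. 2*sqrt(2)
    print("local hidden-variable bound = 2.000")
    # |S| > 2 is exactly the excess correlation -- the 'spooky' part -- that no
    # local, Einstein-style account of pre-existing properties can reproduce.

The excess above 2 is what the Freedman-Clauser experiment mentioned in a moment was designed to look for.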
So if we look at the citation counts to Bell's 1964 paper over the first 15 or so years after it was published-- not the half century or more since then, but in the first period of more than a decade, it was really, really not lighting the field on fire at all. It was published in 1964, very late in 1964. There were no other citations to it in that year. There were no citations to it in all of 1965 in the worldwide literature. There was one citation to the article in 1966 that was actually a self-citation. It was a follow-up article by John Bell himself. We might take that one away. So it was really only in the mid-1970s, mid to late-1970s that we see anything like a kind of sustained attention, at least by this proxy measure, by citation counts, anything like any kind of sustained if still rather modest attention to a topic that today is considered literally renowned. If you do a little more careful looking-- don't just count things, but see who is writing the articles that were doing that citing, who was actually engaging with Bell's theorem and quantum entanglement during this period-- it turns out that almost 3/4 of all the articles that make up this blue area under the curve-- of all those that were written by physicists based in the US, which was already itself the largest fraction of these, 3/4 of that US-based body of work that was citing Bell's theorem came from members of this very strange little ragtag study group called the Fundamental Fysiks Group, the group that was founded by Elizabeth Rauscher at Berkeley in '75. So that helps us at least make sense of why citations start picking up around here. This is the period when that group itself was founded and reinforcing this topic as really interesting. Meanwhile, if you don't just look at the authorship but actually read the articles, which is kind of fun, you can see that actually the proportion rises to be almost 90%, about 86% of that US-based proportion here, if you include people who either wrote the articles or thanked members of that first group, the Fundamental Fysiks group, in the acknowledgments. This group was really what we would call the early adopters, these people who mostly got their PhDs, graduated from graduate school, couldn't get a typical job in the field because of these larger scale shifts later in the Cold War. They met each week to talk about quantum entanglement. And they were just mesmerized by Bell's inequality. And they were really helping to get at least some sustained attention to this topic at a time when it was still otherwise quite marginal. They weren't only talking about it. A core member of the group shown here, John Clauser in his lab at Berkeley-- at this point, he was a postdoc at the Berkeley Laboratory. And he and then graduate student Stuart Freedman conducted the world's first laboratory test of Bell's inequality. This was the first laboratory test to try to confirm or to check whether the measurements on these pairs of entangled particles really show this spooky correlation beyond what an Einstein-like theory could ever account for. This was published in Physical Review Letters in 1972. The next year, Clauser was told by multiple department heads where he'd applied for jobs that he not only would not be hired, but that several department heads wrote in to say they didn't think this was actually real physics. 
So he and Stuart Freedman had conducted the world's first laboratory test of Bell's inequality, found results squarely in keeping with the predictions from quantum theory, squarely in conflict with Einstein's own careful view about how the world should work. And he was told that this was literally not even proper physics. He actually never was offered an academic position, though that had been his ambition, certainly for that stage of his career. So this work was really, really still marginal. And yet, this little study group, the Fundamental Fysiks Group, included people like Clauser who were at the world's cutting edge for trying to explore the topic experimentally. So it turns out this study group had some broad-ranging interests. So the members, not all-- some members of the group some of the core founding members of the group were really curious about quantum theory and, it turns out, a wider range of topics. And for some members, the stew of curious topics bled together. So some of the members of the group actually hoped they could use quantum mechanics, especially quantum entanglement or so-called nonlocality, to make sense of other strange features of the world. And they were especially interested in something called parapsychology or psi phenomena. So it's very convenient. As we know, in quantum physics, we use the Greek letter psi to represent Schrodinger's wave function. And the field, or the pseudo-field of parapsychology, its own practitioners often referred to it as psi phenomena, the same letter. So what's parapsychology? You might know it by other terms. It includes things like mind reading. Let's say alleged mind reading, or ESP, extrasensory perception, telepathy, precognition, which is a fancy word for saying having visions of the future like clairvoyance, remote viewing, a particular kind of telepathy we'll talk about in a moment-- again, alleged telepathy-- psychokinesis-- could you move matter with the power of your mind alone? Pretty out there sounding stuff. So this is where the fact that this group was getting together in the San Francisco Bay area in the mid 1970s matters a great deal. These are hardly typical phenomena studied in academic settings today, though there are some exceptions. And yet, this group was kind of on the margins. And some members, not all, were especially eager to explore all kinds of questions that were marginal. Bell's inequality was very marginal compared to mainstream physics at the time, as were some of these questions about basically mind reading or ESP. They were especially excited when this gentleman shown here, Uri Geller, came to visit the San Francisco area, again, in around 1972, '73, just before the group really started meeting more formally. Geller was originally Israeli, from Tel Aviv. He had made an early career as a stage magician. He literally said, I am a magician. I can do sleight of hand tricks. And then somewhere along the way, he started proclaiming that he could actually perform not just clever sleight of hand, but actual ESP. He claimed he had paranormal abilities, not only exceptionally good kind of magic trick skills. He was whisked to the United States, subjected to hours and hours of seemingly controlled scientific study at this place called the Stanford Research Institute, or SRI. SRI was, until that time, a defense-oriented laboratory affiliated with Stanford University, very much like Draper Lab or Lincoln Lab at MIT. The lab was funded, through the late '60s, almost exclusively by military contracts. 
But as we saw, there was this collapse in the physics bubble right around 1970. And the Stanford Research Institute was spun off as its own independent institute separate from Stanford, in part over Vietnam War protests. And then they were under-capacity because, like most of physics, they were not getting the kind of grants that they had once done. So some laser physicists, very well-trained laser physicists Harold Puthoff and Russell Targ, wondered if they could spend some of their free time studying basically things like ESP. They wound up doing hours and hours, dozens of hours of filmed, seemingly laboratory-controlled tests on Geller himself at the laboratory. They were able to publish their findings in journals like Nature, a peer-reviewed journal among the most elite scientific journals on the planet. Likewise in the proceedings of the IEEE for the electrical engineers, they were finding what they considered scientifically sound, robust, statistically significant evidence that Geller seemed to have ESP-like abilities. Now, this wasn't uniformly accepted, as you might expect. There were some efforts at debate or rebuttal. Among the most entertaining were actually done by a real magician named James Randi. He went by The Amazing Randi. In fact, he just passed away in his 90s or near age 100 just a few weeks ago. So James Randi would then find ways to give more conventional explanations based on his own quite extraordinary sleight of hand skill. He said, well, Geller might have been able to do this or that. But I could reproduce the same effects. And I'll tell you I don't have ESP. Randi would say, I just am a skilled magician. So this went on and on and on, a circus of back and forth. The point is this was happening in the San Francisco area and getting a ton of attention in the newspapers, and magazines, and television. Geller went on to become enormously famous and well-known. And these were the earliest stages, literally right in the backyard of this Fundamental Fysiks Group. Some members of the group were then hired at Stanford Research Institute, at the SRI, to serve as in-house consultants, to be house theorists to make sense of these and related studies in ESP, and to try to puzzle through, could these strange phenomena, like quantum entanglement, maybe make sense of or account for what seemed to be magical effects otherwise? And so one core member of the Fundamental Fysiks Group named Jack Sarfatti began releasing these press releases downstream from Geller's original visit saying things like this top quote, "The ambiguity in the interpretation of quantum mechanics leaves ample room for the possibility of psychokinetic and telepathic effects." Basically, Bell's inequalities start sounding like telepathy already. That's why Einstein thought it was so, so strange. He called it spooky. "If the events could remain correlated across arbitrary distance," so they began to argue, "maybe there's some correlation between a quantum particle that lodges in Geller's mind, or brain, and some correlated particle that lodges somewhere else. Is that at least consistent with this kind of action at a distance?" Jack goes on to write a follow-up press release. "My personal professional judgment as a PhD physicist," which we now all have to reevaluate what that's good for-- "my professional judgment is that Uri Geller demonstrated genuine psychoenergetic ability." Now, cast your minds back to mid-1970s San Francisco Bay Area or imagine it if you can. 
And if you have a bunch of PhD physicists working with seemingly a very upstanding scientific research institute finding what looked to be robust evidence of mind reading and ESP, that's going to be pretty exciting news. So here's a picture of some of the core members of that Fundamental Fysiks Group. This one here in the saffron robe is Jack Sarfatti, who just gave those press releases I quoted from. Here is Saul-Paul Sirag. I would literally kill for that hair. Look at that. That is just fantastic. Nick Herbert and Fred Alan Wolf. So they were just clowning around. They were having a lot of fun mostly, because they were out of work. Nick Herbert had finished his PhD in physics at Stanford and at this point was on welfare. He was literally on public assistance because he couldn't get a job. That's how bad the system had contracted. So these young, highly trained physicists were having an awfully good time wondering about deep quantum mysteries and beyond. And so they were highlighted in San Francisco area magazines. There was a big photo spread on the group in a local magazine in '75, which highlighted these folks were going into trances, working at telepathy, and dipping into their subconscious and experiments towards psychic mobility, which is literally groovy man. I mean, this was like totally in-step with the times. And they became really just multimedia darlings. And it wasn't only San Francisco. There were similar stories picked up by the news wires, by the Associated Press, and United Press International. There was a cover article in Time Magazine. It was covered in other publications like Oui Magazine, which I always say was not a publication of the French embassy. It's actually a porno mag. Oui Magazine was Playboy's answer to Penthouse. In 1979, if you wanted to read a detailed article about quantum entanglement, you could find actually a very competent popular science treatment in Oui Magazine, because it was deemed so interesting, but not in many other venues. So the group starts getting lots and lots of attention because there was this strange coagulation of well-trained physicists not just dabbling with strange topics, but claiming they could actually explain it, that quantum mechanics might unlock the power of mind. OK. Let me pause there for some questions. And then we'll move on to the next part. Any questions on that? So Alex asks, how much LSD were these people using? That's actually, literally a fair question. One thing to keep in mind-- I write about this in the book a little bit-- LSD, until right around this time, was actually not considered illegal. It was reclassified first as use of it was a misdemeanor and later made, of course, a more serious felony charge, but only a bit later. In fact, LSD was considered a very interesting research substance on which there was a lot of research being done at the time at Stanford Medical School, at many university hospitals, and biology departments, and psychology departments. And so it does seem that not all members of the group, but some members claimed at least they were having a lot of fun with recreational drugs. Others said they never touched the stuff. It wasn't the whole group. But what's interesting is that the associations around such psychedelic drug use were quite different than what would become more common later on. And so Alex also asked about MKULTRA. That's right. So that was, at the time, still quite top secret. 
That was a program subsidized at least in part by the US Central Intelligence Agency, top secret at the time, to see if psychedelic drugs like LSD might have some role as a kind of truth serum. Could these have a role in national security or national defense? The results for which seem to be-- let's just say inconclusive. But it did lead to some pretty sensational experiments that were revealed only years later when the secrecy was broken. So these folks weren't directly involved in it. But that's another indication of the time. It's still a relevant topic because many people-- research psychologists, defense experts, and others-- thought that these psychedelics might unlock the true meaning and role of consciousness and so on. It, later on, was come to be seen as more dangerous or illicit with corresponding criminal charges against it. But one of the men I mentioned, Nick Herbert, remembers participating in a Stanford University study where he was a willing subject. He was asked to participate in a psychology experiment, took several doses of LSD under close supervision, and was asked to describe the experience. It was still seen as an on-campus thing through the early and mid-60s. Any other questions about the setup here-- the group, the times, anything like that? OK. Well, if questions come to you, we'll have another chance to discuss them. But let me press on a bit. So I wanted to let us know about what were some of the topics these folks were focusing on. Let's ask, how are they doing it? Remember, this was a time when the conventional career paths for many, many physicists in the United States had really been interrupted. So they weren't just getting positions as assistant professors at universities, though that was what most of them had been aiming for. So how do they support their work? How did they conduct research? How did they share their findings and so on if they weren't in a typical research environment that had at least come to seem typical for the generation before? So again, we start asking questions much like we asked about in the previous class. Who's paying for the work? Who's supporting it? What's the kind of funding structure? And what are the institutions in which the work will be pursued? Let's ask about money first. Let's talk about funding. So on the one hand, there was actually a kind of continuity with the Cold War stuff we looked at in the previous session. A fair amount of the work-- of the funding to support this work actually came from the US federal government in the part of both the CIA, that I mentioned briefly, and also something called the DIA, the Defense Intelligence Agency, which is, as its name implies, a kind of CIA, but within the Pentagon. So they're actually separate agencies, though they have similar kind of roles to play. Here's an example of one of their reports that was later declassified. It was originally a classified report. Now, you can download it on the internet. It was declassified in response to a Freedom of Information Act request. Originally finished in July of 1972, it has this kind of innocuous sounding title called "Controlled Offensive Behavior USSR." What it's really about was an investigation into whether the Soviet Union was how-- put it this way, how far ahead the Soviet Union was in basically weaponizing mind reading and mind control. Think about if you've ever heard of the movie The Manchurian Candidate or so-called brainwashing. 
The concern from the Pentagon's intelligence agency here, the Defense Intelligence Agency, was that the Soviets weren't just pursuing advanced efforts in mind reading and mind control, but were actually excelling at it. And therefore, the United States faced, so they estimated, a gap, much as there had purportedly been a so-called scientific manpower gap that we looked at last time, or a so-called missile gap, which, it also turns out, wasn't actually there. So this was a concern that the Soviets were way ahead in a kind of parapsychology program. And so that unleashed lots of funding, or at least some surprisingly generous funding, within US settings to try to catch up to the supposed Soviet advances in things like parapsychology. So between the CIA and the Defense Intelligence Agency, funding began for what came to be called ESPionage, which is using ESP to conduct espionage. Wouldn't it be great if it were possible to spy on the Soviet Union from the comfort of Langley, or CIA headquarters, or some Air Force base in California? So if minds could become entangled, which people like Uri Geller seemed to suggest could be happening, then could that be harnessed to protect the nation? And both the CIA and the Defense Intelligence Agency said, well, if so, we'd better find out. So they began funding research at places like the Stanford Research Institute, that spin-off defense lab I mentioned in the Bay area, and at defense labs actually around the country, a fair amount spent at Aberdeen Proving Ground in Maryland. And many more of these once highly classified replication studies were being conducted. And again, with the help of the Freedom of Information Act, one can begin to learn how much of this kind of dark money was channeled to these once highly classified defense efforts to basically try to figure out mind reading and maybe even mind control. Some of you might have seen the movie called The Men Who Stare at Goats. It was originally a bestselling paperback, and then made into a movie a number of years ago with George Clooney and others. That's actually based on the true story of some of these military efforts to figure out ESP. Other investigative journalists, and historians, and others have pieced together at least some of the spending, which was once highly classified. This was discretionary spending. It wasn't disclosed to Congress directly. But later Freedom of Information Act requests revealed that often millions, sometimes many millions, of dollars per year were being invested in these efforts. The program was supposedly canceled in 1995. Some people say maybe it started up again after 9/11. I've seen no evidence of that. Anyway, this was one way to perpetuate research in parapsychology: you convince the US federal government that the Soviets were doing it. And that unleashes lots of money because there was a Cold War rivalry. It was still a Cold War rivalry. I find that pretty astonishing. So several members of that Fundamental Fysiks Group, including Elizabeth Rauscher and George Weissmann-- together, they had founded the group-- Jack Sarfatti I showed you, Saul-Paul Sirag-- he was the one with the great hair-- they actually began conducting their own unclassified replication efforts as well, some of it partly underwritten by some of this defense money. So they would do the following. They actually published in the open literature, in the open parapsychology literature. So it wasn't classified. It also wasn't in mainstream journals. 
So they would do things like this, what became known as so-called remote viewing. They would try to find people who seemed to be particularly sensitive to these shared sensations, like Uri Geller seemed to be. And they found others who seemed to have comparable abilities, at least by their estimation. And they would basically have that person sit in a dark room with very few external stimuli-- no bright lights, no loud music-- have them sit there while someone else traveled to a randomly selected location in the San Francisco area. There would be a series of targets. But they wouldn't know which target. They would try to shield the information from the sensitive person back at home base. And that person who's outside would just go to some landmark location and stare at it very, very hard, like places on Stanford's campus, or Berkeley's campus, or Golden Gate Park, and so on. And then the person who's trying to receive these seemingly entangled signals would just start either sketching or verbally speaking what they seemed to-- the pictures that would form in their minds. And as Rauscher, Weissmann, Sarfatti, and Sirag eventually concluded, the matching between the descriptions and the targets was statistically no better than chance. It was about even. However, there were what they considered remarkable precognitive effects. So in fact, what the sensitive people seemed to be doing was sketching the next target in line. Or if you relaxed the criteria for matching, they found lots of matches afterwards. So naturally, we'll leave it up to you how robust that effect seemed to be. The point is that's the kind of thing they spent a fair amount of time trying to pursue. Turns out again, as we know from Freedom of Information Act declassifications, there were very significant efforts at this at multiple defense laboratories across the country, some of which claimed success, although independent audits usually came down to statistical chance. Anyway, that's one thing. That was one way this work was supported. Basically, go back to the Defense Department and say the Soviets are doing it. We should do it, too. The folks I was focusing on got even more creative, a little more entrepreneurial in how to support their work. And that was not only to appeal to the Defense Department, but also to some of these quintessential California self-made millionaires in what came to be called the California Human Potential Movement. It was very, very much all the rage, starting in the late '60s, throughout the 1970s and beyond. There were a number of these folks who became really nationwide or even international celebrities for, again, either claiming to have unlocked the secrets of human consciousness and therefore human potential or at least investing heavily in the efforts to do so. One of them was named-- took on the name Werner Erhard. He was actually born Jack Rosenberg and grew up in Philadelphia. But through a whole series of events, he wound up moving to the West Coast, adopting this assumed name. He called himself Werner, not "Verner." Although, he borrowed the name from Werner Heisenberg because, as a kid, he'd been very interested in modern physics and used to read some of Heisenberg's popular books. So he thought "Verner" or Werner would be kind of a serious sounding name. So he adopted this pseudonym, Werner Erhard. Erhard introduced something that came to be called Erhard Seminars Training or EST. 
And if you ask your parents, they will almost certainly have heard of this, at least if they grew up in the United States or even many parts of Europe. It became really an international sensation-- EST, E-S-T, for Erhard Seminars Training. This was in the mainstream news regularly, relentlessly over the course of the 1970s. The EST movement could eventually claim devotees among gold medal-winning Olympic athletes, among people pretty close to political leaders-- sons, daughters, cousins of presidents and senators-- that kind of thing, media stars. It became really, very, very pervasive. I think they claimed something like 800,000 paying alumni had gone through these intense-- hotel ballroom intense kind of sessions. So Erhard met up with-- he was, by this point, based in San Francisco. He met up by chance with members of that Fundamental Fysiks Group in a way that I talk about the book. And he basically was convinced that this quantum entanglement stuff really would be the key to unlock powers of consciousness. So he actually helped-- his lawyers help set up a not-for-profit corporation, a way that they could actually collect and spend money, much of it coming from Erhard's own generosity. They call themselves the Physics Consciousness Research Group that actually filed for incorporation in the state of California. And this was, again, a kind of structure to enable the group to raise funds from private donors and to spend funds. And Erhard was among the most generous early donors to get that started. Erhard went on to become quite controversial. E-S-T, or EST, also kind of eventually faded from view. There was a huge series of very high profile scandals and counter-scandals and claims and counterclaims, again, that tails off through the '80s and into the '90s. For some time, Erhard was outside the country. Then he actually wound up winning a bunch of cases that had been brought against him. So he is back in the country, and so on. Anyway, it became a big, big to-do. But over the '70s, Erhard was a kind of almost household name and one of the earliest supporters of this work. So now, you have the group not only getting some support from the CIA and the Defense Department, but also from some of these self-made California gurus of this counterculture scene, like Werner Erhard. It turns out before we say, oh Erhard, that stuff sounds a little flaky. At almost the exact same time, Erhard was also starting up another series in physics that ran for 13 years and was run by one of my own colleagues here at MIT, Roman Jackiw, and my physics PhD advisor, one of my advisors Sidney Coleman. And their meetings attracted people like Stephen Hawking, and John Wheeler, and a number of Nobel Laureates. Gerard 't Hooft's in this photograph somewhere. So at the same time that-- or basically around the same time that Erhard was funding this Physics Consciousness Research Group from the Fundamental Physics Group and digging into things like ESP and quantum entanglement, other physicists at leading institutions, who had pursued more typical careers in the field, academic careers, were also getting support from that same source and hosting what turned out to be a pretty successful series of informal conferences held often in Erhard's personal house-- a big, huge mansion that he bought in San Francisco. So the source of patronage was not a kind of one way or the other. 
I take that as a further sign of just how much that Cold War set of assumptions had really been disrupted by the early and mid-1970s, where even Harvard, MIT physicists, even Nobel Laureates, were at least sometimes in need of this untraditional patronage because the usual sources were really in short supply. OK. So it wasn't only Werner Erhard. The group, the Fundamental Fysiks Group, also got considerable support from another kind of California counterculture maven named Michael Murphy. Murphy, much like Werner Erhard, had been really enamored of quantum physics as a younger person. He actually studied physics briefly at Stanford before he switched his major. But he was really fascinated by this. And then soon after graduating, he founded this place called the Esalen Institute in Big Sur, California, down the coast about 150 miles south of San Francisco, nestled, as you can see, in this extraordinary location on the Pacific Coast. So Esalen became, over the '60s and '70s, ground zero for the hippie world, for the counterculture world. This is where people would go to talk about things like vegetarianism, which once seemed incredibly strange in the United States. It's where they would go to explore psychedelic drugs, both before and even a bit after they'd become illegal, where people would explore this newer set of ideas about human consciousness and human potential and, it turns out, where they would go to explore quantum entanglement. So as I mentioned, Murphy himself was fascinated by quantum physics. He invited this group, this Berkeley-based group, to come down and host a month-long workshop at Esalen, underwritten entirely by Murphy, for a bunch of their friends and colleagues. And then other people could pay a modest entrance fee to attend the workshop and have naked hot spring baths, massages, and all the rest. So they advertised this workshop by saying that perhaps a new kind of inspired physicist, experienced in the yogic modes of perception, must emerge to comprehend the further reaches of matter, space, and time. And when it came to yogic modes of perception, Esalen had the market cornered. That's literally what they became known for. Some of you might have heard of the book The Dancing Wu Li Masters. It actually won the National Book Award-- excuse me, the American Book Award. It's sold something like 20 million copies worldwide, many, many translations. It became a publishing sensation. The author of the book, Gary Zukav, was, at the time, the roommate of Jack Sarfatti, one of those members I mentioned before, the one who'd been in that saffron robe. So Jack invited his friend Gary to come along. And the whole book is basically trying to capture the nature of these discussions at this month-long quantum physics workshop at Esalen in January of 1976. In fact, the very first chapter is called Big Thoughts in Big Sur, named for Big Sur, California. So you have this kind of broader circulation of some of these ideas well beyond the Pacific Coast itself. And in fact, much like those Erhard-funded workshops, this wasn't only a one-off. There were 13 years' worth of annual, week-long sessions that were follow-ups to the month-long one, which became known as the Bell's Theorem Study Group, held every year at Esalen with members of the Fundamental Fysiks Group, like Henry Stapp shown here, coming down the coast for the week-long retreats every year. And in fact, it became the longest running-- longest running workshop in Esalen's history. 
So Esalen is known for hippie counterculture stuff-- psychedelics, nature consciousness, vegetarianism, Eastern mysticism and spiritualism. And yet, the longest running workshop, the longest continuously running workshop was actually on quantum entanglement. So you can see the group's getting funds and also something like a kind of institutional base from which to try to pursue their work. So how do you spread the word beyond just the hot spring baths or the conference room at the Berkeley Laboratory? That actually was not so straightforward. It turns out, at the time, the long-running editor of the Physical Review actually banned articles on the philosophy or interpretation of quantum mechanics. In fact, he even went so far as to draw up a special sheet of instructions for referees. And the instructions were, if this submission to the journal, to the mainstream research journal, looks like it's only on the so-called philosophy or interpretation of quantum theory, then don't bother reviewing it. Just check this box. And we'll automatically reject it. It was really a high barrier in the US journal scene. So high, in fact, that a number of follow-up articles by John Bell himself on Bell's theorem wound up appearing in these very strange out of the way places. So some of John Bell's own work was shunted out of the mainstream journals like the Physical Review and into these hand-typed mimeographed newsletters from, in this case, a little group in, I think, Geneva or somewhere in Switzerland and circulated as a kind of clubhouse newsletter. So it wasn't peer reviewed. It wasn't sent to libraries. It was basically you sent in a postcard saying, please send me the next month's installment. And so this was a kind of underground newsletter called Epistemological Letters. And literally, some of the work that would later come to be recognized as remarkably important for understanding things like quantum entanglement, this follow-up work, was shunted into these non-traditional circulation mechanisms. Even more untraditional was the following. It came to be known as The Unicorn-- A Unicorn Preprint Service. The Unicorn was a little joke based on the last name of this gentleman, Ira Einhorn. So as you might know, in German, Einhorn means one horn, a unicorn. Einhorn had also been enamored of physics as a younger person. He actually briefly did a double-- he briefly pursued a double major in physics and English as an undergraduate. He dropped physics and just was an English major in the end. And then he got really swept up with some of these pretty big dramatic changes in the US by the late '60s. He became an anti-war activist and even something of a kind of anti-war, iconic leader, showing up at a lot of the famous protests with people like Abbie Hoffman and Jerry Rubin. He actually helped to emcee the nation's first Earth Day rally in his native Philadelphia in 1970. At around that same time, he became an unusual kind of management consultant to the Pennsylvania branch of Bell Telephone. So Bell Telephone executives, based in Philadelphia where Einhorn was from, were worried that they were kind of falling behind with what the youth culture was all about, which was almost certainly correct. So a bunch of the executive types in Pennsylvania Bell wanted to get a sense for what the young people of the day, the hippie counterculture scene, were all about. So they hired Einhorn, who at this point had become really well-known in Philadelphia, to come and be their channel to what the youth of the day were into. 
Einhorn refused any traditional payment. But instead of being paid in cash for his consulting services, he made a deal with the executives at Pennsylvania Bell that he would send to them regularly-- sometimes once a week, sometimes every two weeks-- a big stack of materials-- newspaper clippings, unpublished physics preprints, just a mash of stuff. This is, remember, well before the internet. And so the executives at Bell would have their administrative assistants Xerox for free all these things, make 300 copies of these packets, and then pay the postage to mail it to 300 hand-picked recipients on Einhorn's list. And the recipients were really international, some of them even behind the so-called Iron Curtain in the Soviet Union or at least Soviet-aligned countries in Eastern Europe. It was also, even within the United States, a mixture of people that included renowned physicists like Freeman Dyson and John Wheeler, and Gerald Feinberg, who, at the time, was the physics department head at Columbia University. So these really very prestigious physicists in the US and abroad, as well as people like the handler of Uri Geller, Andrija Puharich, and other people who were well-known at the same time for being really into the ESP parapsychology scene. Here's Nick Herbert, remember, of the Fundamental Fysiks Group. So Einhorn would concatenate, would collect stuff that he thought these 300 people just had to have, things that he thought were especially interesting. And it was usually about quantum theory and the nature of consciousness. And so he was really instrumental in circulating unpublished or barely published versions of these latest physics investigations by the Fundamental Fysiks Group, including, for example, this preprint by Jack Sarfatti and Fred Alan Wolf that was sent to Ira and then from Ira sent, thanks to Bell Telephone, literally around the world. So they're doing an end run around the ordinary peer review journals, like the Physical Review. This worked really, really effectively until the very end of 1979. He was doing this for about a decade. What happened in 1979 is that while Einhorn was at Harvard as a short-term invited scholar, at I think the Kennedy School, to talk about leadership, the rotting remains of his, it turns out, murdered girlfriend were found in a trunk in his apartment. Foul play had been suspected when his girlfriend Holly Maddux disappeared. And yet, Einhorn, this peace-loving hippie favorite person of Philadelphia, was never seriously investigated for 18 months, until finally a retired FBI agent hired by Holly's family followed the clues that indeed led investigators to Holly. Einhorn was charged with murder. He was eventually convicted. It's a whole long story. Anyway, as you can imagine, as soon as he was charged, the Bell Telephone folks stopped their circulation service. So this Einhorn Unicorn Preprint Service worked remarkably effectively for nearly 10 years, until it stopped one day with no warning when the situation had changed so dramatically. Long before the arrest, long before Holly Maddux had disappeared, Einhorn had had a second career as a free-form literary agent. He was actually connecting some of these new authors, many from the Fundamental Fysiks Group, to big commercial publishing houses in New York City, which, again, were really eager to tap into this youth hippie counterculture market. 
And so a second way that the group got their word out, rather than publishing in the Physical Review, was actually in some genuinely commercially successful, very widely circulating popular books, books like The Dancing Wu Li Masters that I mentioned and, in fact, a number of related or knockoff books, some of which became really quite well known in their day, many of which were kind of shepherded into print thanks to Einhorn and his connections. So one of the books that came out of that was the topic of the other reading you had for today, my short little piece on Fritjof Capra's book, The Tao of Physics. Capra was, again, one of the founding members of this Fundamental Fysiks Group. He was originally from Vienna, as you read, but had come over to California as a postdoc, and then went back to London when his visa expired and so on. But he stayed in touch. He came back, once again, to California after that. He originally set out to write a textbook on modern physics. He thought this would help him get an academic teaching job, which was the kind of position he'd been longing for. He was sharing drafts of particular chapters with renowned physicists like Viki Weisskopf and John Wheeler and getting advice on how to sharpen his discussions and so on, and then finally was told that you're not going to get a job, and you're not going to get rich or support yourself on the royalties of a textbook. Why not combine these two interests he'd been otherwise enamored of, modern physics and Eastern spirituality? And so he rewrote the drafts into what became known as The Tao of Physics, which became an international bestseller. Now, it turns out this book was not only popular with the new age counterculture set. It got really quite warm and enthusiastic reviews in places like Physics Today. A reviewer, a Cornell astrophysicist, reviewed the book for Physics Today and said, the physics is basically right. After all, Capra had done a PhD, and two postdocs, and had really, really senior physicists reading over drafts. It was a reasonable, accessible discussion of things like the uncertainty principle. Moreover, the reviewer emphasized, it couches this physics in, quote, "the immediate feeling-oriented vision of the Mystics so attractive to many of our best students." Again, this was a time when the enrollments had crashed, when physics classrooms were emptying out faster than any other field. And this was seen as a vehicle to try to recapture the imagination of some of the disaffected young college students who were no longer flocking to physics classrooms. It was seen as reasonable in the physics and a great kind of PR tool if nothing else. Meanwhile, there broke out almost like a Cold War competition in another very prominent journal, the American Journal of Physics, which, as you probably know, is devoted-- has been devoted since its founding to improving the teaching of physics. So it's not really a research journal, per se. But it's research on how to improve the teaching of physics from high school level through undergraduate and beyond. And so you can trace through, in the pages of the American Journal of Physics, again this kind of playful gamesmanship over not whether physicists should use books like The Tao of Physics in their classrooms, but how best to. They all agreed you should because it was pretty clear on the physics and also seemed to have this special appeal to the youth culture of the day. 
And so you have these dueling lesson plans and whole semester-long courses being paraded through the American Journal of Physics because it looked like this book was too good to pass up. And it turns out other books by members of the Fundamental Fysiks Group played a similar role a few years later. They were published a little bit downstream from Capra's book. And they also were picked up as quasi-textbooks, as well as often bestselling paperbacks. Let me pause there. So that unit was really on this question about institutions, funding, how do you get the word out, how do you get a stable platform on which to pursue this work if the folks were not going to be pursuing university-type careers. Any questions on that? Einhorn was often credited, even while he was in jail, with having invented the idea of the internet before the internet. You can see this preprint service was like a kind of-- not quite Reddit, but some kind of like-minded group, where you go there and find an eclectic group of people from all over the world who might share some niche interests and can find each other and communicate. And that's the kind of connectivity model that Einhorn was trying to build with Xerox machines and postage stamps. It was technologically quite old school. But the notion of trying to get niche communities to be able to communicate-- that was maybe a little bit ahead of its time. Any questions on that? You're stunned into silence? That's OK. Is it worth reading some of these books? That's a good question. Let me put it this way. When I was a grad student at Harvard some many, many years ago, I was a teaching assistant for a physics course. And we still listed Nick Herbert's book, Quantum Reality, on the syllabus as worthwhile supplemental reading. And I still think it's a good book, Nick's book. I personally read The Tao of Physics as a high school student. It was still widely in circulation. And I just stumbled on it, like in a used bookstore. And my personal assessment is that I found the physics parts really well-described and really fun for me as a high school kid who was taking physics but not yet in college. And the descriptions of Eastern kind of spiritualism just didn't really do it for me. But that was just me. Those books, I think, were more careful with the physics than others like The Dancing Wu Li Masters, which I will say without hesitation is not very good on the physics. The author had no training in physics himself. And it was rushed into print and sold more copies than many of the others put together. So not all these books are worth reading. I mean, maybe I should make that clear. But I think that Nick's book reflects the earnest efforts to make sense of these strange quantum entanglement phenomena. And it's a popular book with no equations. But it's a well-received example of the genre. And The Tao of Physics, for me, runs hot and cold. But that's because it's doing two different missions at once, whereas Nick Herbert's book is maybe a more straightforward physics popularization. It's like, let me try to explain some cool physics ideas and just march forward. And I think other books that maybe do that better have come out since then. But that book came out in like 1985. It was pretty early in the genre. And I think it was a pretty admirable job in its time. And again, it's not just me. 
I remember there were physics textbooks on quantum mechanics, aimed at both undergraduates and even graduate students, from before topics like Bell's inequality had really percolated into the mainstream textbooks. There was a long delay until those topics entered the main textbooks in the '90s. So in the '80s, one could still find published textbooks recommending Nick Herbert's book, and some of them these other popularizations, as helpful supplemental reading. So you don't have to take my word for it. We all have our tastes. And they can rightly vary. But these were seen as serious enough to be helpful supplements. And I think that's a fair-- I think that's a fair assessment. They really were showing up on syllabi and in textbooks really across the US and Canada for a long time. I think there are some better books out now. But that's true of any popular science topic. I mean, a lot's changed since the '70s and '80s. But in their day, I think they were quite reasonable, careful descriptions. So Alex says, like what books? So I mean, maybe I'll talk more about that later. I have my own personal set of favorites. And a bunch of my friends have written them by now. But I think, as I say, for the earlier period when this was not yet such a common thing for other scientists to do, I think these were quite successful and respectable efforts. Let me press on. I've got one more section here for today's class. And then I'd be glad to come back for more chatting here. Let's look briefly at that third part for today, which is why am I telling you about all this crazy stuff. Why on Earth are we hearing about this Fundamental Fysiks Group today? And that's because, again, what I stumbled on without knowing about it ahead of time was some pretty surprising connections, one might even say entanglements, between this very strange Wu Li tie-dyed counterculture scene and some recent developments like quantum cryptography today, which is now really, really considered pretty serious stuff. So what were some of the ideas and legacies that this group helped really nurse and put forward? As I mentioned, they were totally obsessed or fascinated by quantum entanglement. And they really wanted to push on it. One of the things they wanted to do was see, could you use quantum entanglement to basically break relativity? Were they headed for a showdown? Which is a pretty significant question to ask. These are the two main pillars of modern physics. And it looks, on the face of it, like they're not compatible. That's what upset Einstein so much. That's why Einstein called it spooky action at a distance, because Einstein was convinced that anything like entanglement seemed to violate a kind of locality, the idea that local causes yield only local effects. And that question with the same kind of intellectual stakes really drove a lot of this Fundamental Fysiks Group's discussions as well. So one very clever idea came from Jack Sarfatti, that same guy in the saffron robe. He actually filed for a patent disclosure, the first step toward trying to obtain a patent-- and again, you can see he sent it to Ira Einhorn, which is partly why it wound up in the archival collections of many physicists back in the spring of '78. The idea was to use the exact same source of entangled photons that his fellow Fundamental Fysiks Group member, John Clauser, had literally been using in the laboratory, using the exact same source-- they were getting tours of John's lab at the time-- a certain atomic cascade in excited calcium atoms. 
You emit these entangled photons, send them both towards double slits. So now, we get to think about the double slit experiment, as well as collecting screens in the back, and have this variable efficiency slit detector. That's pretty clever actually. You'd have a kind of potentiometer on here, a little dial, where you could tune this to either always determine exactly which slit a given photon went through or never, with a continuous range in between. So you could adjust the accuracy with which this slit detector could determine through which slit a given photon passed. So you have a varying sharpness of the resulting interference pattern. This goes back to that wave particle duality we looked at in class some time ago. The idea was that the varying interference pattern here, as you tune up or down the sharpness of the which-slit information, should then, Jack argued, be instantaneously correlated with a varying sharpness of the interference pattern over here. Then he says, hook it up to a speaker. So now, you could basically be encoding like voice messages instantaneously across arbitrary distance. That can be pretty handy to be able to do. So you harness the variable sharpness of an interference pattern from double slit, and then harness the instantaneous connectedness from quantum entanglement. And now, you should be able to send varying interference patterns instantly across arbitrary distances. That was at least the claim, which was the basis for this patent disclosure. He goes on to say, this isn't just cool. We could really use it. Remember the discussions he's immersed in. He says, "The device could give instant communication between an intelligence agent, a spy, and his headquarters. In this case, we would use correlated psychoactive molecules, such as LSD, affecting neurotransmitter chemistry." Let me just back up and say, what he's proposing here is this will really work if you get both the CIA field agent and the CIA handler hopped up on LSD, send the field agent into, say, Novosibirsk in the Soviet Union. And then the field agent's brainwaves would be instantly connected to those of the handler. The patent was not granted. Part of the reason the patent wasn't granted was actually because it fell apart thanks to a thoroughly local discussion. So around the same discussion table in Berkeley was another core member of the group, Philippe Eberhard, who was originally a European scientist. But he spent his career as a staff scientist at the Berkeley Lab. And he also had a taste for these fundamental questions in quantum theory. And he introduced what has become the textbook reason why we all now know with some confidence that entanglement cannot be used to send signals faster than light. We know that because these people debated it around a conference table every Friday afternoon in the '70s. So Eberhard got wind of Jack's cool idea because they were talking about it every week. And Eberhard said, that shouldn't work. And it was actually Eberhard who cracked it, [INAUDIBLE]. And it goes like this: if you only have access to one side of a measuring device, one side of a Bell-type test, then all you have access to is the series of measurements that you perform-- that you measure, say, polarization in orientation 1 or 2. Remember, we get to choose that. And the outcomes-- was it spin up, in which case the green light flashes, or spin down, in which case the red light flashes. 
We know detector settings and measurement outcomes from one box only, if we only have access to local information. If you only have access to this side, then you don't know that the red and green flashes actually were correlated with what happened all the way over there. We only recognize the correlations inherent in Bell's inequality when we can bring information from both detectors together and compare the log books. When we can compare the sequence of detector settings and measurement outcomes from both sides, then we can go through the data and say, oh look, there were these remarkable correlations in the outcomes. If you only have access to one box, all you see is what looks to be a thoroughly random series of plus 1's and minus 1's, green flashes and red flashes. And I actually put that in the very end of the optional lecture notes from a few class sessions ago on Bell's inequality. So that's easy to summarize now. Now, we can all say, oh, of course, of course. No one knew that at the time. John Bell didn't realize that in 1964. In fact, Bell himself thought that entanglement was on a collision course with relativity. This was pretty subtle stuff. And we all know that as a community, in part because it was being debated in a very friendly, very kind of fun, ongoing way by members of the Fundamental Fysiks Group in Berkeley, California in the late '70s. This became the standard response that was packaged up in another one of these very lovely, widely circulating popular books by another person on Ira Einhorn's distribution list, Heinz Pagels. That's also a lovely book, by the way. I recommend that one. And Eberhard's own proof-- you see, he had to publish it in one of the Italian journals, not in the Physical Review because of that kind of ban. And he says in the conclusions, "Don't be discouraged. Just because this argument fails doesn't mean we should stop asking that question." So he says, "Don't discourage the work being performed by groups like the Fundamental Fysiks Group because these are really important questions that are not getting sufficient attention." So Jack's main patent effort did not go anywhere. But that only upped the ante for another core member, named Nick Herbert, who then introduced the following a few years later. There's again a kind of cat and mouse game unspooling here with the discussions in Berkeley. So Nick Herbert introduced something he called FLASH, which was a playful acronym that stood for first laser-amplified superluminal hookup. He also writes physics limericks. He's very playful. So the idea was the following. He considered, again, a system that would emit pairs of entangled photons, just like his friend John Clauser was really doing in a laboratory at Berkeley. And he put in a little extra machinery here. Let's go through this one step at a time. Both for classical Maxwellian waves, for regular light waves, and even for individual quantum photons, we can characterize the polarization in several different ways, several equivalent ways. If we choose to measure the so-called circular polarization using a certain combination of wave plates, then we can measure the photon as either being right-handed, right circular-polarized, or left-handed. It has to do with the orientation of the spinning electric and magnetic fields with respect to the photon's direction of travel. So we have a complete basis for polarization called circular right/left. 
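Before following Herbert's FLASH construction any further, it may help to put Eberhard's one-box point from the discussion above into symbols. This is only a minimal sketch in modern textbook notation, not Eberhard's own derivation, and it assumes for illustration an anti-correlated maximally entangled pair, since that matches the correlations described in this lecture (the precise state and phase conventions are an assumption):

$|\Psi\rangle = \tfrac{1}{\sqrt{2}}\left(|H\rangle_A |V\rangle_B - |V\rangle_A |H\rangle_B\right).$

Whatever measurement is performed on photon B, the statistics available on side A alone are governed by the reduced state obtained by tracing out B,

$\rho_A = \mathrm{Tr}_B\, |\Psi\rangle\langle\Psi| = \tfrac{1}{2} I,$

the maximally mixed state. So for any analyzer orientation on side A, each outcome (green flash or red flash, in the language above) occurs with probability $\mathrm{Tr}[\rho_A \Pi] = \tfrac{1}{2}$, no matter which basis the distant experimenter chose, or whether she measured at all. The correlations only show up once the two log books are brought together and compared. With that noted, back to the polarization bases that Herbert's scheme exploits.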
If we choose to measure polarization actually with a linear filter-- think about the usual kind of picket fence picture of a polarization filter-- then we have one of two opposite outcomes. It's either going to be horizontal or vertical with respect to some orientation in space. So we have two complete bases with which we can characterize light's polarization. In fact, we can use any linear combination-- elliptically polarized states. But it's enough to consider circular right/left and plane polarization horizontal/vertical. And in fact, at the quantum mechanical level, people long since worked out that you can write one state as a superposition of the other. They're two complete bases. We can always rewrite one in terms of the other. So at a quantum mechanical level, the quantum state of a right circularly-polarized photon could be written as an equal superposition, with a particular phase, of horizontal and vertical. Left is a different superposition and vice versa. So now, Nick says, well, from this source, prepare exactly the entangled states that John Clauser and Stuart Freedman really were preparing. Prepare these maximally entangled states where if we choose to measure photon A in the horizontal basis-- excuse me, in the linear, in the plane polarization basis, we have a 50/50 chance of finding photon A to be either horizontal or vertical. If we find photon A in horizontal, then photon B must be in vertical. If we find photon A in vertical, photon B must be in horizontal. These are maximally entangled. If one is basically the equivalent of spin up, the other must be spin down-- spin down, spin up. That exact same entanglement implies that if we happen to have measured photon A in a circular basis-- let's say we happen to find right-handed-- then photon B must be found to be left-handed when measured in the circular basis, and vice versa. That's actually just an exact rewriting because these two bases are so easily relatable. So now, Nick gets a little more creative. We subject photon A to some measurement. And again, as usual, the physicist over here gets to choose-- can choose at the last moment, well after they're emitted, which basis to choose for her measurement. She can choose to measure either in the HV basis or in the RL basis, by choosing whether or not to put in a waveplate. She makes that choice. Whatever choice she makes, she finds a particular outcome with 50/50 odds. Let's say she happens to measure plane polarization and happens to get a horizontal photon. Then photon B, at that moment, gets put into the quantum state V. It must be in the matching pair, the correlated state. So after its state is determined but long before it gets to its detector station, it enters this magical thing called a laser gain tube, which, at that point, was about 20 years old. There really were working lasers by then. A laser gain tube makes lots and lots of copies of the signal that enters it. So Nick says, well, we'll just send photon B into a laser gain tube, which emits many, many copies of the incoming photon. If the incoming photon was put into quantum state V, then now you have a bunch of copies of photon V coming out, photons in the state of vertical polarization. Send those to a beam splitter. So now, half of the output goes to station 1. Half the output goes to station 2. Measure those in the distinct bases. One station is measured in circular, right or left. The other station is measured in plane, horizontal/vertical. And now, you have a series of possibilities. 
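To make the basis bookkeeping in the last couple of paragraphs concrete, here is a minimal sketch of the rewriting that the FLASH argument leans on. The sign and phase conventions below are one common choice, assumed purely for illustration; the lecture does not pin them down. Define the circular polarization states in terms of the linear ones as

$|R\rangle = \tfrac{1}{\sqrt{2}}\left(|H\rangle + i|V\rangle\right), \qquad |L\rangle = \tfrac{1}{\sqrt{2}}\left(|H\rangle - i|V\rangle\right),$

so each circular state is an equal-weight superposition of horizontal and vertical, differing only in relative phase, and each linear state can likewise be rewritten in terms of right and left. The anti-correlated entangled pair described above then takes the same form in either basis, up to an overall phase:

$\tfrac{1}{\sqrt{2}}\left(|H\rangle_A|V\rangle_B - |V\rangle_A|H\rangle_B\right) = \tfrac{i}{\sqrt{2}}\left(|R\rangle_A|L\rangle_B - |L\rangle_A|R\rangle_B\right),$

which is why finding photon A horizontal forces photon B vertical, while finding photon A right-handed forces photon B left-handed, whichever basis the experimenter on side A happens to dial in.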
If you chose to measure photon A in the circular basis, you have a 50/50 chance of finding right or left. Let's say you find left. Then photon B must have been put into state right, right-handed. Then you magnify. You amplify. You make lots and lots of copies. So if the detector station is only measuring circular, you should find them all in the right detector, right-handed detector. And they should be equally split because of this property here, equally split in the other station between horizontal and vertical. If physicist A happened to measure photon A in the horizontal/vertical basis, they should get the opposite set of outcomes. All of photon B's copies that are measured in that same basis should go to vertical. If A was H, B had to be V. Meanwhile, you should have a 50/50 split at the other station. So now, you can signal instantaneously, or arbitrarily quickly, by choosing here-- it's like you can make Morse code by whether you choose at detector station A to measure in circular or plane polarization. That's pretty awesome, right? Nick wrote it up in a preprint that was circulated in part thanks to Ira Einhorn. This is what the cover looked like because he was really into Eastern mysticism at the time. This was released by the Notional Science Foundation. That's not a typo. He had no support. He was on welfare. This was his joke for like consciousness studies, as opposed to the National Science Foundation. And he worked out of a post office box. This was submitted to another out of the way journal that had only recently been founded called Foundations of Physics, not the Physical Review. I was able to track down many of the referee reports. The referees were mostly split and stumped. They said, this shouldn't be able to work. It seems to violate relativity. And yet, they couldn't find the loophole. The paper begins-- the preprint begins an even wider journey. So it's sent by the journal to be refereed by Giancarlo Ghirardi in Italy. He writes back right away saying, reject it and here's why. He wrote a one-paragraph referee report, which Giancarlo shared with me. In the meantime, unbeknownst to Giancarlo, the preprint was also circulating due to the remnants of that Einhorn network. And it gets to people like John Wheeler. And Wheeler passes it on to his students, including Wojciech Zurek, who then gave me copies of his notebook to look at. They're grappling with this challenge. And then independently of each other, both Wheeler's students-- Bill Wootters and Zurek, and then separately Dennis Dieks in the Netherlands, who'd gotten the paper again through the underground network-- all find the reason why this won't work. It turns out to be the same reason that Ghirardi had found a year and a half earlier, but had never published: Nick Herbert had missed something very subtle, as had most of the referees. Yes, we can write the right-handed circular photon as an equal parts superposition. But the laser doesn't actually make n copies of a single-particle state. That would violate the linear behavior, the linear nature of the Schrodinger equation. You can't go from one to many because Schrodinger's equation is a linear equation. What happens in the laser gain tube is that you make one multi-particle state, instead of many single particle states. That's really subtle. If that went too fast, you're in exceptionally good company. Wojciech Zurek told me that he stumped Richard Feynman with this, which I believe. This was really hard to puzzle through. This was not at all obvious. 
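For anyone who wants the resolution just described spelled out, here is a minimal sketch of the linearity argument, written in modern notation rather than anything Wootters, Zurek, or Dieks published verbatim. Suppose, for illustration, that some device (a laser gain tube, say) could clone polarization states: suppose there were a linear evolution $U$ and a fixed "blank" state $|b\rangle$ inside the amplifier with

$U\,|H\rangle|b\rangle = |H\rangle|H\rangle, \qquad U\,|V\rangle|b\rangle = |V\rangle|V\rangle.$

Linearity then fixes what $U$ must do to a superposition such as a right-circular photon:

$U\Big[\tfrac{1}{\sqrt{2}}\left(|H\rangle + i|V\rangle\right)\Big]|b\rangle = \tfrac{1}{\sqrt{2}}\left(|H\rangle|H\rangle + i|V\rangle|V\rangle\right),$

an entangled two-photon state. But a genuine clone of the right-circular photon would instead be the product state

$\tfrac{1}{\sqrt{2}}\left(|H\rangle + i|V\rangle\right)\otimes\tfrac{1}{\sqrt{2}}\left(|H\rangle + i|V\rangle\right) = \tfrac{1}{2}\left(|H\rangle|H\rangle + i|H\rangle|V\rangle + i|V\rangle|H\rangle - |V\rangle|V\rangle\right),$

and the two expressions disagree. So no linear, Schrodinger-like evolution can copy an unknown superposition, which is exactly the loophole-closer that kills the FLASH signaling scheme.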
Now, we can take it for granted. This became known as the no-cloning theorem, which is to say you can't make arbitrarily many copies of an unknown quantum state because that would violate linearity. This is as central as the uncertainty principle today. It's in the opening pages of most of our textbooks on quantum information science. We, as a community, know this because people were beating their heads against this really clever challenge from a member of the Fundamental Fysiks Group. That last bit, the fact that you can't make copies of an unknown quantum state, was the last missing piece needed for quantum encryption. That's why encryption systems can work. If any eavesdropper tried to grab onto one member of an entangled pair and make many copies of it to then do exactly what Nick Herbert had in mind to reverse engineer that secret signal, the attempt will destroy the entanglement, so that neither the eavesdropper nor the intended recipients will get the intended information. So encryption works because you can't make copies of a quantum state without basically breaking the encryption and destroying the original state. That's what leads, in very rapid order, to the very first protocol for quantum encryption, as I wrote about more in the book and could say more about. So let me wrap up. We're almost done. So the Nobel Laureate Roy Glauber gave credit to Nick Herbert for really pushing on this seemingly weak joint between quantum theory and relativity. He said, yes, it was wrong. But it was remarkably productive. We had to learn through that mistake as a community. And that was a worthwhile exercise. So he's not saying Nick Herbert was right. We all now know why Nick Herbert wasn't right. But he's saying, we learned that because of the process, because some people took the question seriously. Very similarly, Asher Peres, another leader in the field who just passed away a few years ago, actually turned out to have been one of the referees of the paper who admitted many years later he couldn't crack it and therefore urged the journal to publish it because he said, this must be wrong. But I can't figure out why. We need to crowdsource this, so we all learn, because this is pointing to something very, very deep. So I find that a productive mistake. So let me close here. This very unusual-sounding group, called the Fundamental Fysiks Group, actually left their mark in modern physics in ways that can be a little surprising or counterintuitive. Members like John Clauser conducted the world's first experimental test of Bell's inequality. It was through these really playful, but earnest efforts to understand things like faster than light communication devices that the entire community learned just exactly how to keep quantum theory compatible with relativity. It led to new understanding of how amplifiers work at the quantum mechanical level, which had never been clarified before. A huge area of quantum optics begins to get a boost from that. These are the first kind of publications that are bringing pretty grounded information about quantum entanglement even into classrooms, with whole lesson plans being built around them for undergraduate instruction in physics. So this work, whatever its excesses-- things that I personally don't endorse or believe in-- nonetheless served, in a very unusual time, to bring some of these really important topics, topics that we all now, I think, can agree are really important, like entanglement and relativity, back into US classrooms. 
So I'll stop there. Again, I apologize for running a bit long. If people have questions, I'd be glad to stay on longer. But of course, I know if people have to head off, please don't be shy. Any questions about that? Stay well, everyone. Remember, no class Wednesday. See you next week. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Optional_Discussion_The_Day_After_Trinity.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: Welcome, everyone. This optional informal discussion for 8.225 STS.042, as hopefully you all know, we're just going to take our time, just very informally chat about really anything that's on your mind around the most recent material, but in particular things that might have been curious, or puzzling, or disturbing, or whatever. Any of the above that might have been elicited in your mind upon watching the documentary film, The Day After Trinity. I actually just watched the YouTube version with some of my family over the weekend. I didn't realize how poor the video quality was. I apologize for that. If you were watching it on a small device, maybe you didn't notice. But if you were watching it on a super big crazy large TV, you would have seen it was very blotchy visually. But anyway, hopefully it was nonetheless clear enough to get the gist. And I have no agenda for today, I'm really just happy to go whatever questions, or comments, or thoughts are on people's minds. So anyone should feel free to jump in. I expect we'll have a small enough group, we don't have to worry about breakout rooms. We can just manage the discussion hopefully pretty informally with the group this way. So I'm curious. I mean just the floor is open. Were there things that surprised you from the film things, that surprised you the film didn't broach or cover, anything? The whole topic is open. Really interesting question. That's a juicy one. And I should also say, by the way, even more than usual, I invite the TAs to jump in as well. Please don't be shy. But let me give you my first crack at that. I think a lot of people who were convinced to work on the project, and the film gives a little hint of that, they really were not given very much in advance to go on. They had to make a fairly snap decision in the middle of very dislocated times. And often because of secrecy, they couldn't be given a kind of full briefing ahead of time. Some of them didn't have much idea of what was coming. And my sense of it, there was a lot of the folks who did volunteer then to join the project, really were deeply concerned about developments in Europe, especially around what looked like the kind of unchecked military progress of the Nazis. So I think there was a sense that they wanted to find some way to contribute to the defense effort, to the war effort. Now, that raises other questions, which are broached at least briefly in the film. If many people's principal motivation had to do with Nazi Germany, what happened between so-called V-E Day, the ultimate defeat of the Nazis in April of 1945, and the remainder of the project, let alone the use of the weapons against Japan? So we should come back to that. But nonetheless, I think people felt strongly they wanted to do something. Many people did, even very young people. And this was a decision formed without full information, because often they literally couldn't be briefed. We talked last time about these, the so-called indoctrination course, that Robert Serber gave once they actually arrived at Los Alamos. For some of them, he had to tell them, oh, we gathered you here to build a bomb. Like he first had to say that, because it might not have been obvious to everyone yet when they gathered. I just find that extraordinary. 
There's a larger question though, a larger thing that actually a good friend of mine, a historian Michael Gordin has written a really interesting short book about. The book is called Five Days in August, which comes to another aspect of your question, which is how did many people, scientists on the project, military officials, political leaders, journalists, and other policymakers-- how did many different kinds of people come to think about the bomb once they learned about it? And Michael's argument in brief, let's see if I do justice to it. Again, some of the TAs know this book very, very well. So they should correct me. But my recollection of my friend's book is more or less the following. That the bomb was treated by many, many of these folks as not particularly unusual or special prior to its use. And my favorite parts of Michael's book actually have to do with the mechanics of how the weapons were literally put together, and then armed for use against targets in Japan. So on the island of Tinian, there's some footage in the film of the staging grounds on Tinian. And my understanding is it's not that President Truman, the US President Truman, gave direct orders every time to say drop a bomb now on Hiroshima, drop a bomb now on Nagasaki. There was no such order given, not that specific. The decision was made to use the weapons when they were available up to the discretion of the field commanders, just like every other kind of weapons, every other kind of bombing. Again, as there was some footage in the film, and others, you might already have known, there had been campaigns of what was called incendiary bombing, using certain kinds of conventional explosives that were especially likely to cause ground fires against some cities in Europe, but many cities in Japan by the Allied air forces, US in particular. And those weren't separately ordered from on high. There was a chain of command in the field, so to speak, and up to their expert judgment-- were the conditions right, is there a storm coming, should they launch a series of bombing raids or not. That was not a presidential level decision, or even necessarily a general's level decision. That was kind of outsourced to the ranking officers in the field. And what astonishes me, what Michael found, is that in the earliest stages the nuclear weapons were treated like that. They were plugged into an existing system of local decision making about when to use them, under what circumstances. And that Michael suggests shows that they were not treated as a separate category early on. They were treated like another weapon. There had been many other kinds of weapons developed and deployed from scratch in the course of the war. And that these were treated at least logistically or kind of bureaucratically like yet another weapon to be deployed largely at the discretion of the local field commanders. And that to the extent there was a standing order at all, it was to use them when you got them. It wasn't like drop one, wait three days. There was no such coordination of that level. And in fact, there was no third bomb ready. The two were dropped. The bomb dropped against Nagasaki was, and you get a little bit of this in the film, was the schedule was pushed up by the local field commanders because there was a worry about severe weather, a typhoon coming that would have threatened certain kinds of aircraft runs. And in fact, Nagasaki wasn't even the original target city. 
There was a decision made by the pilot in mid-flight to change paths, I think largely because of things like changing conditions in the weather and so on. So these were not highly staged; these were not carefully thought out and vetted individual decisions. The decision was load them into the planes, and drop when you got them. And they had two; the makings for a third, not-yet-assembled weapon were literally on a boat, making their way to Tinian at the time that Japan surrendered. So the idea was drop them one after the other, much like the incendiary bombing raids had been, one after the other. And what Michael and others have argued, what made the atomic weapons seem special was the fact that the fighting stopped, which was not necessarily predicted. No one thought you drop one, and they'll stop. Or at least no one seems to have thought that. No one thought, if we time the other one very carefully, that'll convince them. There was no such calculation for the timing of the second bomb that was used in this case against Nagasaki. And they were prepared-- they were preparing to drop a third one, and they would have dropped a fourth if they had one. They just literally didn't have the parts. So Michael argues that it was actually in response to the fact that unlike other military campaigns, there was at least a coincidence in time this time around that Japan did surrender within days of the second bombing. That then, in kind of a post hoc rationalization, according to Michael, made people think that the atomic weapons were now somehow in a category of their own, or different, or special. That was a long answer. It's a very complicated question. I think a bunch of the scientists and engineers who worked, for example, at Los Alamos-- I think they thought this was different early on. And one of their concerns, which we hear about less in the film than in other reminiscences, interviews, and published memoirs, is that some of the army figures, military figures, who also witnessed, for example, the Trinity test, were not as shocked as the scientists were. I think I mentioned very, very briefly in my previous lecture that for the Trinity test in Alamogordo, New Mexico, one of the things that really, really seems to have left a visceral reaction in many of the folks who then were able to inspect what would soon be called ground zero, was that the heat and especially pressure of the blast had fused desert sand into glass. That at least, as some of these scientists later recalled, gave them a kind of visceral sense of the forces at play. And then some of them report being equally viscerally upset that the generals weren't impressed by that. So some of the scientists seem to think we are playing with the forces that power the sun. We're dealing with extraordinarily powerful physical phenomena. And so at least in their later recollections, they seem to have treated this as different, not just another TNT bomb souped up. And therefore, they were concerned that others didn't treat it with a similar kind of differentness. But I'm pretty convinced by Michael's argument more broadly. If you look at the political, high level political apparatus, and the military planning war department apparatus, this was treated, at least bureaucratically in terms of standing orders, much more like the next thing in the arsenal to drop, as opposed to some totally separate category in itself. 
I don't know if anyone else has any thoughts on that. But that's my not very brief take on that very juicy question. It's a good question. Right. Thank you, Steven. That's also a really hard question. It's a important question, but it's a hard one. And I talked a little bit about this in a previous class session. But it's worth revisiting. So the short answer is many of these folks were aware, as I say, the scientists and engineers, let's say, were definitely aware that there was going to be associated radiation and fallout. And that would not be good for people. They knew there was an inherent clear danger that was not like TNT or other conventional explosives. That really did strike many of them as being importantly different. They did not have anything like the sophisticated large-scale statistically powerful controlled medical experimentation type results or body of knowledge that would develop later. So we shouldn't expect them to have had the same knowledge about the human biological implications of fallout, as the community would later acquire, largely from the use of these weapons and from other kinds of tests. So they didn't know what would later be known. They knew something. They knew enough to treat it very relatively carefully. It is. Yes, thank you. They're aware that that's at least a likely possibility. On the other hand, they were pretty cavalier, even with their own personal safety. So for example, I think I mentioned briefly. One of my college professors who had just retired before I started college, 10,000 years ago approximately, he had served as a very young kid on the Manhattan Project. I mean he like a kid. He was basically like a first year grad student. So even our TAs are more advanced than this kid was, when he joined the project. And he would say, they would carry little radioactive sources in their jacket or vest pockets to calibrate the local machinery. So they would put alpha emitters next to their gut. I don't recommend you do that today. So there was a kind of cavalierness about radioactivity in general for a lot of these folks. Partly, oh there's a war on. We don't have time to be all special and careful. Partly they didn't know what would later be better known. And then partly for people who were working on the project who weren't themselves scientists or engineers, there was a really I think very significant lack of effort to better inform those people about appropriate safety procedures that based on the knowledge that even was in hand at the time. See what I mean? And that's just for radioactive emitters generally. Then we come to the question of fallout specifically from the use of these weapons in a large human population. And that, again, they knew enough to start studying long-term effects. They figured this would not be good for people. They didn't have detailed information about which cancers at what rates, what kinds of demographics, young children affected differently than older people, a role of preexisting conditions, the kinds of things that kind of biostatisticians would come to study much more carefully later. So again, that was a pretty long answer. But they were not entirely ignorant of the dangers of radioactivity, in general, or some of the likely implications of radioactive fallout from the use of these things in a city. And they didn't know what other people would learn in more detail later. That's a great point. I'll just add again quickly, thank you, Tiffany on that point too. Coming back maybe to Steven's question more directly. 
Meaning more directly from me than I [INAUDIBLE]. There was a long-term study of people who were in Hiroshima and Nagasaki at the time the bombs went off-- a long-term longitudinal study largely under US auspices. So Japan, as you may know, was occupied by the Allied forces-- mostly the US, but the Allies of World War II-- for the better part of a decade, starting very soon after surrender. So there were a lot of US-based long-term studies of the effects of the weapons on site, including a lot of these biomedical, epidemiological type studies. And yet it still remains deeply, deeply controversial how many deaths to attribute to the bombings, partly because of different definitions about who was exposed in what way, partly because of the usual statistical difficulties of saying would this many people have gotten lung cancer even if they hadn't been exposed to this kind of, basically, pathogen-- for lack of a better word-- this radioactive source, and also because a lot of the stuff remains classified, or remained classified for a long, long time anyway. So one of the themes we'll come to a bit in this class, and we've already got a little hint of it, is the role of classification in making it difficult to draw certain kinds of conclusions from otherwise very complicated scientific and technical projects. In this case, a lot of the stuff was subject to classification strictures, which didn't help with independent statisticians redoing the stats, and all the stuff that we would kind of take for granted in peer-reviewed scientific studies today. So literally today-- I'm sure, though I didn't look recently-- if anyone goes on Wikipedia and asks how many casualties from the bombing of Hiroshima, there will probably be one of these warning comment signs. People are flaming each other. The editors don't agree. Because to this day, 75 years later, asking what sounds like a countable, seemingly straightforward question-- how many people were affected, killed, injured, whatever-- is remarkably difficult even for very well-intentioned experts, let alone people who are operating further away from the original information. So it gets really tricky to assess literally how many people were killed. The immediate blast damage-- that actually was counted up pretty clearly. There's not much argument about who died within the first week. But all these kind of long-tail, more epidemiological type things remain actually quite tricky and can inspire some real earnest debate. Yeah, it's a great point, Lulu. And again, it's tricky. I find it tricky myself. And just biologically, I'm closer to those times than most of the rest of you, but not that much closer. I mean I grew up during the Cold War, and thinking about-- I didn't have to do duck-and-cover drills for, like, H-bomb raids, but my parents did. So I heard about it. Anyway, but nonetheless it's hard to get my mind back to a time of total war. I mean, thank goodness, right? But if you look at the casualty counts, the fatality counts from the incendiary bombings of Tokyo and Dresden, they very likely exceeded at least the short-term casualty counts from the bombings of Hiroshima and Nagasaki, or at least they were comparable, meaning there were cities where hundreds of thousands of people were killed by the dropping of weapons from aircraft multiple times in just the weeks, let alone months, before these-- to our eyes-- new kinds of weapons were used in August of '45.
So the scale of the human costs of total war-- where it's not people in uniform fighting on a battlefield, but anyone who's within some geographical territory is suddenly considered fair game as a target-- that wasn't invented with the use of nuclear weapons. And the difference that sometimes people would articulate soon afterwards was it took 1,000 aircraft to level Tokyo. And it took one to level Hiroshima. So some people would start saying these really are different. Let's be careful. Let's think about this carefully. Others would say 100,000 people dead. That's like last Thursday. It's just remarkable-- what's the comparison class? Let me put it that way. What are you comparing it with, and how do our own individual sensibilities shift compared to what many folks were immersed in at the time? And I don't say that to say there's no culpability. I'm just saying it's astonishing to me that it might have seemed unremarkable in its day, given 3 and 1/2, 4 years of this really devastating, heavy civilian loss, so-called total war. So I just find that-- I mean thankfully, I don't think we have that kind of experience in any of our lifetimes of that kind of civilian loss of life in wartime. There have been plenty of bad things happen since. Yeah, anyway, so just the scale of civilian human loss, where you were deemed a legitimate target if you happened to live in a city, let alone if you were drafted, let alone if you volunteered, or whatever. Yeah, and again, not to get ahead of ourselves, but for this coming Wednesday's class session, we'll talk about-- and you get a hint of this in the film as well-- the next step that happened to have been taken in this thread of discussion, which was the development of hydrogen weapons. Which are, roughly speaking, 1,000-plus times more destructive, by any parameter you choose to measure, than the weapons that had already been used. Right, right. On the question of Niels Bohr, it's a great point, Lucas. And I think it's a lot like something we talked at least briefly about in a class session maybe two classes ago, on Einstein's very famous letter to President Franklin Roosevelt as well. These for a long time were held up by scientists as examples showing that these Nobel Prize-winning architects of modern physics were also geniuses of the human condition and political wizards. I wish any of that were true. And it's not to take away from either Einstein or Bohr to say that those stories are wishful thinking. Bohr very earnestly tried to serve in exactly the capacity you mentioned, because you're quite right. And the fact is he was completely and kind of predictably ineffective at it; no one had any reason to listen to this mumbling Copenhagen guy who didn't seem to know anything about how governments work. So his entreaties, his efforts were-- he did make the efforts. Let's give him credit for that. He did see ahead. He was predictive, as many people were, of an arms race, of mutual suspicions and distrust growing, unless there was this narrow window of opportunity to try to avert an arms race. But I think there was very little chance of him having any actual success. And it turns out he had approximately zero success. So it's not to fault him for trying or for recognizing this as a likely scenario. But we tend to hear these stories from, like, biographers of Niels Bohr, who are enamored of the many extraordinary things that Bohr was able to do during his lifetime. But if we ask the same question of historians of US foreign relations, they'll say, Niels who?
Because it had literally zero impact, at least from where they're sitting on the very, very complicated relations between US and Britain, let alone US and the Soviets, or anything else. So guess I take these episodes as examples that some of these physicists were indeed trying to think one or two steps ahead, and they were not-- maybe they misjudged their own likely influence. So they get credit in my mind for thinking about it at all and trying, but not much of a tract record in hindsight. It's a great question. Thank you, really interesting. So the first impact was it was not very many from any given campus. So it wasn't like one department was emptied out. And in fact, there was at the time, a very, very strong concern on many, many, many university and college campuses in the US to keep as many physicists on site as possible, because they were involved in teaching what we would now consider basically classical physics, blocks, sliding down planes, and elementary circuits and radio, which was often done in physics departments, not a separate electrical engineering department in many, many places. So there was this huge concern about, literally they talked about rationing and stockpiling physics instructors to teach canonical college students, but also to teach huge numbers of navy and army special students who were drafted sometimes often straight out of high school, were not yet college students, and were sent to many, many campuses around the US for crash course study, very accelerated schedules, to learn basically rudimentary physics, like for radio communication or like sighting or sound ranging. It really was like classical physics. And in fact, physics departments were admonished not to waste their time teaching quantum mechanics or nuclear physics, because that's not going to be of any importance during the war. That makes me smirk. And they should put all their efforts teaching classical E&M and Newtonian mechanics. Whether it's for how do you measure air pressure, what's a barometer, how do you measure angles with a handheld of sighting device. So I find that really fascinating. So the term that's often used you might have heard, the Second World War has often been called the physicists war. And that term has usually been taken to mean, oh, we know what that means. It was all about radar and exotic weapons projects, like the Manhattan Project. And so I actually wrote about this recently. It turns out the phrase was introduced before the attack on Pearl Harbor, before there was Los Alamos laboratory, at a time when radar was still deeply classified. The usage of the term, if you do one of these Google Ngrams, all these fun things we could do now searching in English language corpus, the phrase physicist wars use, it spiked in 1943, at which time anything about the Manhattan Project was deeply, deeply classified. They were not printing editorials about the nuclear weapons in the op ed pages. So the term physicists war was used to respond to the fact that more and more young people, mostly men, mostly boys, had to be trained in crash course settings on what we would call classical physics for practical purposes on the battlefield, by which they meant barometric pressure, measure angles, use sighting on a gun, and elementary electronics, and circuits. So the physicists war, the meaning of the term actually changed quite dramatically after this dramatic revelation about these new previously secret weapons projects and the kind of impacts they had. 
So the physics classrooms were bulging faster than ever before. They weren't getting drained out. There were more people rushing to study physics, more people who were put into physics classes, whether they chose to or not. And more and more people teaching physics, not just for math and chemistry, but for music. Anyone who had any kind of quantitative skills, political science, you know any statistics? Good, now here's your crash course to relearn calculus, and you're going to teach Maxwell's equations next week. And again, anyone who was caught poaching, that was their word, stealing legitimate physics instructors from one campus were subject to public shaming. So there was an effort to keep more and more physics instructors in the classrooms, not to teach fancy, fancy nuclear fission, but rather these other topics. And that meant that when a couple slipped away, it was I think lost in the noise I have a feeling. Because there was so much tumult about huge kind of throughput. MIT'S campus switched to basically 12-month instruction and we had I think at the height of this three times more-- or not three times-- it was like a 3 to 2 ratio. It was like a 1 and 1/2 times more so-called special students, meaning full-time army and navy students assigned to MIT for six week terms than MIT students. So the MIT's own campus was taken over, was put into high gear. And that was very common in many, many other liberal arts, tiny liberal arts colleges, big universities, and everything in between, these very, very intensive short term crash course things. In fact, if I can find it in time, I'll put it in the chat. One of my favorite photos is actually of 10-250. I don't know if it's your beloved or your hated, a room many of you will probably know, one of our big, big lecture halls, 10-250 filled with young men in uniform taking one of these crash course classes in probably elementary circuits. And it's just a see of faces that all look the same, in all the same uniforms, just 400 people filling 10-250 all in their khakis. That's what campus at MIT was like. That was the predominant thing as opposed to, hey, where did all these people go? They must be doing something top secret. Gary, you're up. And then Alex. AUDIENCE: I just wanted to share a movie suggestion, because you mentioned Dresden. And it's a chilling movie, but it's about how things can get normalized. Bob McNamara, who was the Secretary of Defense during the Vietnam War, but was very involved in World War II with regard to the bombings in Tokyo, the Tokyo fire bombing. I got to know Bob very late in his life. And I thought I'm going to hate this guy. I grew up just hating the whole idea of who this Secretary of Defense and the Vietnam War might be. And he was reflective about it. And he made this movie when I knew him. But it's really worthwhile to watch, The Fog of War. DAVID KAISER: Yes. AUDIENCE: One other story about normalizing, my father was a corporal, never went to college. And he was in Tinian. DAVID KAISER: Yeah. AUDIENCE: And so the day after Nagasaki, the general said to him, Sammy, you want to go up and see what we did yesterday? So my dad flew over Nagasaki literally the day after. 30 or 40 years later, if I'd asked my dad about it, and he was a good man. I love him to this day. But he would say-- I'd say, what do you think? He said, Gary, we did what we needed to do. He was just a corporal. He was a radio mechanic. DAVID KAISER: Yeah. AUDIENCE: But it got normalized. I have a question for you David. 
Do you know why Jacob Beser was the only man on both? I don't know the answer, but I knew his son. I knew Jerry Beser, his son. But Jacob Beser was the radar man on both flights. And why did they do that? He's the only person that was on both flights. DAVID KAISER: That's fascinating. I didn't know that, Gary. So my guess-- I'm just guessing, I'm speculating. And Wikipedia might answer it quicker and maybe even more accurately than me. My guess is that it was still kind of a shoestring operation. Tinian, I mean they were throwing this thing together. They just barely had gotten the airstrip to dry. It was so last minute that my guess is it could as well have been short staffing as any kind of strategic reason, especially if it was a more specialized role, like the radar operator. It wouldn't surprise me if he was the person who had been trained and was available. I don't know if it was any more than that. DAVID KAISER: Yeah, so Gary, let's see, something else you said-- oh, yeah. Yeah, so let me just say briefly, because it's something that the film didn't really get into. A lot of the scholarship in the 40 years since even the film came out, let alone the 75 years since the events in question, has really gone around and around, frankly, or has been a very vigorous debate, I'll put it that way, about basically were the bombs needed-- that's kind of second-guessing after the fact. Or put a little more prospectively, what convinced people to give this standing order to use these weapons at all? And soon after the war, I think it was 1947, the person who had been Secretary of War during the Second World War, Henry Stimson, published a famous, famous article basically justifying the use of the nuclear weapons. And he's saying it saved a million lives. And then it became, was it a million fatalities or casualties? But either way he said, if the United States had to actually launch this planned in-person invasion of the mainland of Japan, if there was no way to secure a surrender and an end of the war other than a fully armed invasion of the mainland, which is what was indeed being planned by mostly US Allied forces, then the casualty count both among US soldiers and among Japanese civilians would have been astronomical. And he mentioned the figure 1 million. Well, where did he get the figure 1 million from? It seems to have been invented in 1947. It seems not to have come from any classified or since declassified military planning document ahead of time. So then the question becomes, would an invasion have been necessary? Were there other reasons to have thought, based on information available at the time, that Japan might or might not have been getting closer to a surrender even prior to the use of the bombs, let alone would an invasion have been needed? And if an invasion were needed, where would these casualty figures, these projected figures, have come from? So I'll just say, I don't know all the answers. But people who've pored over this very, very carefully say each of those questions is subject to, let's say, many compelling and quite different answers. So many US military officials, as later came out in declassified documents that were at the time quite secret, were already raising skepticism about whether a full invasion would be needed at all. And then other people say, oh, but that was before the really quite horrible fighting in Okinawa, which was in June.
So maybe there was a concern that the Japanese troops would actually not surrender, even if they were clearly overwhelmed numerically. So which military officials even thought the United States would need to mount an invasion-- that's already actually pretty complicated based on the documents that have since, much later, become available. Some people clearly wrote down Japan is on the brink of surrender anyway, prior to the use of the weapons, prior to any invasion. Other people said, oh, we thought they might surrender, but now we're not so sure because of the really quite just horrific fighting, the kind of dug-in fighting, especially at Okinawa, not too long before the use of nuclear weapons. Then there were all these questions like, were the weapons used to end the war or to secure the post-war? So was it signaling vis-a-vis rivalries with the Soviet Union, which was a popular thesis, or other things? Were they used for a kind of geopolitical strategy at least as much as, if not more than, for direct military strategy? And again, there are very compelling arguments based on lots and lots of documents, on not just those sides, but the whole kind of spectrum of that. It gets pretty murky pretty quickly. So there are waves of revisiting these questions of what military role the weapons played. And I'm not saying one or the other. I think it's complicated, right? But you can get these really interesting studies based on a lot more than what was publicly accessible in 1955 or even 1965, with much more passage of time and much more active use of the Freedom of Information Act and declassification and so on. It's pretty complicated. And so again, we go back to an earlier question. Did the bombs end the war? Were they somehow so special that incendiary bombings that leveled cities were not capable of ending the war, but these special weapons were? That was a popular interpretation very soon after the war, when these things were revealed in such a dramatic way. And then generations since have gone back to the question. And I think it's more complicated, more subtle, than some of the early pronouncements, either pro or con. And if people are interested, I'd be glad to share reading lists of lots of things that have dug into that since then. I think the answer is neither. And here's my understanding, Alex. But again, this is my-- I think it was raised as a serious question long before the test, and settled to everyone's satisfaction before the test. So I don't think it was only a joke. I think by the time of the test date, people weren't worried about that the way they once had, when they legitimately sat down and calculated. They didn't just say, gosh, I wonder. Let's try it. So I don't think it was a joke. But I also think it was no longer a live, debated scientific question by mid-July of '45. My understanding is Enrico Fermi was the one who articulated that question, but not like the morning of. I think there was enough time for enough people to sit with that and check each other's calculations, and not only physicists, actually people who know stuff, like chemists, and other people who had relevant expertise. I think there was a vetting of that question well before the test date. Yes. So the short answer is yes. But again, you phrased it very well. Was there any indication? The answer is yes, but not a definitive indication. That's why it's still-- you hear the phrase war is hell. Like it's hell for information as well as for all the more obvious reasons. So here's what I mean by that.
It has come to light since then that, first of all, some countries had cracked the Japanese encrypted cables, and was leaking that information to Washington, and London, and probably Moscow as well. So some elements of the Japanese government, which at this point was highly fractured and not really functioning very easily because Tokyo had been leveled, some parts of the Japanese government were sending out feelers to try to see if a neutral third party could begin peace negotiations or negotiations to end the fighting. It's not clear that they had the authority to speak for the emperor, who was refusing to step down, and that was part of this unconditional surrender and so on. So it's not clear if that represented what fraction of the then existing Japanese government that represented. It's also not clear how far that ever got. But there indeed was some evidence that some highly placed members of the Japanese government before the bombings of Hiroshima and Nagasaki were seeking to start basically negotiations. They were seeking a third party who might broker them. And again, that doesn't mean they were on the verge of surrender. What seems to be more clear than anything else from these other scholars, not my own work but I've learned from other people who've looked at this much more carefully, is that it was just complete disarray. I mean Washington DC was not subject to that kind of bombardment. And so there was something like a functioning chain of command, and something like a fractious but still functioning series of decision makers who could disagree, but nonetheless act on behalf of a functioning government. I think not much of that was functioning the same way in Japan by that point in the war, because of things like the enormous disruptions from the incendiary bombing. There were different factions, as one might expect. There were different factions in the US and in Britain. There were different factions within Japan. And it wasn't just military-civilian. There were different kind of complicated-- different loyalties and groups. So it's not clear who was speaking for whom, and if they even were capable of having the equivalent of like a full cabinet meeting, the way we might expect in a US context. So it's a great question, Alex. And unfortunately, the answer, again, is like oh gosh, it's complicated. So with the fullness of time, evidence has come out that there were some parties looking to do some things. What's not at all clear and maybe it wasn't clear even at the time, was in a sense, would that have carried the day. Who were they speaking on behalf of? Who had the authority at the time? Because I think that had gotten very, very again, let's just say non-trivial. So it's a great, great question. Sure. Yeah, no great question. A lot, lot, lot, lot, lot, lot has been written about the Oppenheimer affair, let's say, the Oppenheimer hearing. Over the years, again, people keep revisiting it as more and more it becomes declassified, more people are interviewed, and there's a lot. The 100th anniversary of Oppenheimer's birth was 2004. And over the 12 months of his centennial birth year, there were 12, like one per month, 12 full length biographies published just in that year, at least 12 that I counted. There could have been more. So I mean when I say there's like an Oppenheimer industry poring over this that's what I mean huge, huge amount of study of this. 
So let me just say this briefly, therefore, instead of going on for hours, and again, we'll talk a little bit about this also in Wednesday's class. Oppenheimer ended the war really being seen as the person who built the bomb, which is not historically accurate. The Manhattan Project had 125,000 people. And even the so-called inner circle was many, many people, not just one or two. But he really became the kind of face of it. And that became both for good and for ill. I mean he was treated as a hero by many people who thought that the bomb had ended the war, and therefore saved lives, which was a dominant interpretation soon after the war in the United States. He was seen as this kind of wizardly philosopher king, like the film says, who had made it happen despite all the odds. On the other hand, that also made some people think that he was uniquely dangerous. If there is a single mastermind or a single set of secrets with which these things can be made, then that invites extra scrutiny, right? And this is something we'll talk a bit more about in Wednesday's class. So there was a lot of scrutiny as the political assumptions and fortunes and groups in or out of power within the US and elsewhere shifted after the war. Oppenheimer had made a lot of enemies along the way, because he was very smart, and never, ever, ever shy about letting everyone know just how smart he was. He could be terrifically cutting, I mean just mercilessly mocking of people whom he considered intellectually inferior, sometimes of students, often of people who were outside of his immediate circle. So he was mean to physics students. He was horrible to people like the Secretary of the Air Force, which is not a smart thing, by the way. If you're ever in a position to mock the Secretary of the Air Force, let me advise you not to do it, whether the person deserves it or not. So Oppenheimer kept making enemies, basically. He was in the spotlight, and that gave him the opportunity to shoot his mouth off a lot. And much like Galileo, centuries earlier, he was often adopting positions that one might find perfectly reasonable and defensible, but defending them in an often very aggressive and kind of mocking way. Galileo and the Pope, Oppenheimer and the Secretary of the Air Force, right-- not actually so dissimilar in some ways. So Oppenheimer started collecting a slew of really quite emboldened political enemies. And he started giving advice on strategic steps for the nuclear arsenal that many of them didn't like either. And again, we'll talk a bit about this, and there was a hint of that in the film as well. So in less than a decade, he had made a long list of quite powerful people who were pretty ticked off at him. And so in some sense, his hearing was-- I don't know if it was overdetermined. But it wasn't actually such a shock-- the fact of the hearing, the fact that people wanted to get him. What was left out of the film, and this largely came to light in more recent scholarship since the early 2000s, is that Oppenheimer frankly was trying to hold on for a long time. As we now know from since declassified documents, he often sold a bunch of his own students out. He was very desperate to protect his younger brother, Frank, who does feature a lot in the film. Frank had been an explicit card-carrying member of the Communist Party in the '30s, not active, but he joined the party as many, many, many, many academics did in the US, in the '30s. That wasn't so unusual. But that made Frank an easy target.
And so it looks like not infrequently Robert Oppenheimer would try to cut these deals. He thought he could control the situation. Don't bother my brother. He's actually no harm. But you know what? Those guys, they're reds as well. And often those guys he'd point to were some of his own former PhD students behind their backs. So he was doing a lot of these things, as we now know, that paint a more complicated frankly human picture than the saintly martyr that often comes up even I think in this film actually. So his own political skills were not quite as sterling as he often thought. His way of coping with fast changing political winds were not what I think any of us would hope we would do within that situation. I don't know how we would do, but it didn't always look great, as we now know. And it comes off as almost a kind of grasping, at least in some of these scenarios. So Oppenheimer becomes, frankly, more interesting and also just much more human. There was a reason to write at least six of those 12 biographies in 2004. There's a lot of dimensions to this person, and a stand-in for larger fast moving currents certainly within the US, and I think beyond that as well. So I'd be glad to talk more about Oppenheimer. But let me shift to Teller. Teller was also a very early recruit, very active at wartime Los Alamos. And already a very accomplished theoretical physicist, that generation they all knew each other. It was a small community before then. He was an emigre. He was a Hungarian Jewish person who fled fascism. He'd been studying in Germany like many of them. So he had very good reason to come to the United States in the '30s and was hired here in that wave that we talked briefly about. And he was very accomplished, and in particular very accomplished in nuclear physics, ahead of the curve on that. He also, I think because of his experiences with these very short lived communist and socialist governments in Central Europe after the end of the First World War, a bunch of these emigres like Teller had experienced what they considered just really scary chaos in their reckoning right after the end of the First World War, when they were basically late teens or early 20s. A lot of them from self-proclaimed communists, or other socialists, short lived. There was a huge back and forth, the far left, the far right, the far left, far right, militias in the streets, hopefully nothing that any of us will experience soon or not so soon. So Teller was a devoted anti-communist before he landed on the US shore, from I think really not because he read Marx and was horrified. Because he'd seen these blood in the streets kind of fights in the 19-teens. And many of the folks who came with him from that era shared those sentiments. So he was very, we might say jingoistic. His adopted home really should develop the most significant weapons. We really should keep all these other kind of bad people at bay was a rough paraphrase. Now he spent the war years at Los Alamos trying to argue that a fission bomb was small potatoes. And why are we wasting our time with this. We should actually be going for a fusion bomb, of the sort we will talk briefly about on Wednesday. So he kept butting heads with leadership, because he thought they had the wrong priorities. This was at a time when there were still like micrograms of plutonium on the planet, no working device had been developed and tested. And he's like, that's such a trivial task. Give me all the resources. I want to build a hydrogen bomb. 
He didn't quite say it that way. He had very big, strong fights about resources and priorities. And so more or less to appease him, because he was a smart, contributing member, he was given a small little study group even during the war to work on fusion weapons. And I think even by his own reckoning, they got nowhere. But it was a study group to lay the seeds. In fact, even before there was a Los Alamos Laboratory, he was arguing from literally the very first discussion on Berkeley's campus with Oppenheimer and the rest that fission bombs were trivial. We should be going right now, from summer 1942, into fusion. I mean, that was his idée fixe. And after the war, he's more and more listened to by the folks who are getting more and more impatient with Oppenheimer. He seems like he has answers that many of the Air Force leadership after the war actually do want to hear. They want to hear about bigger bombs that would be under the control of the Air Force, that would be the reason the Air Force would be the most important service branch. And all of these things get tied up in kind of rivalries within the US infrastructure. So Teller's star rises, basically, or at least he has more people listening to him after the war, including-- he's very influential, not him alone, but he was a big mover in getting a whole second weapons laboratory established, the Livermore National Laboratory, which opened its doors in September of 1952 with the express purpose of working full tilt on fusion weapons, this kind of thing. So he's moving in that direction. He did testify, as you heard in the film, at Oppenheimer's security hearing in a way that, when the transcript was published, was seen as really devastating by many, many members of the physics community. He doesn't come out and explicitly call Oppenheimer a communist or directly a security risk, which other people were calling him, fairly or unfairly. But he says, Oppenheimer is a complicated person, which was not meant to be a compliment. And the famous line, which I'll get pretty close: Teller says under oath, I wish that these big important tasks were in hands I understood better, and therefore could trust more. He says, I basically don't trust Oppenheimer, because he is this kind of complicated, maybe scheming kind of person. And so Teller gives this basically damning, what's seen as damning, testimony. He outlived Oppenheimer by decades. And for decades, people wouldn't shake his hand when he'd come to give a physics colloquium. It really had a generation-long impact on how people thought of him. Nonetheless, in other circles many others couldn't get enough. And so he continued to have influence among certain US administrations well past the '80s, I mean really decades after the '40s and '50s. So much like it's easy to venerate people like Niels Bohr-- oh, world government, if only he could have carried the day. Like, well, I don't think it works that way. Teller is, I think, often vilified by the same people who want to see heroes like Einstein, Bohr, and Oppenheimer. All these people are complicated and the world is complicated. And I don't want to leave the impression that Teller was somehow like a masterful demon, which he's often been cast as. He certainly had very, very different ideas about the scientist's responsibility and about the geopolitical stature of the US, vis-a-vis changes in Europe, especially.
And he was just dogged, I mean he was just undeterrable with this focus on more and more bigger fancy weapons, and let's make them available more and more quickly. So that became the kind of symbol of this larger split within the community, between the Oppenheimer camp and the Teller camp. Ironically, Teller was being surveilled by the FBI just as much as Oppenheimer was. They didn't trust this strange weirdo European Jew. Who is this guy? He came from places where there are communists. I mean, the FBI was really in overdrive in this period, I say chuckling now. It wasn't so funny. And there are these amazing passages in his once classified, in Teller's once classified file, saying we don't trust this guy. We don't know who he's talking to. He's no more trustworthy than Oppenheimer. So it's a-- I don't want to say it's a shell game. But it wasn't like he was seen as unproblematic. He was seen as someone who said the things that some other powerful people wanted to hear more at that time. I think he was useful. I think he earnestly believed that. But he was also useful to frankly, other people who really did understand a bit more of the kind of hard power plays of Cold War politics. So I think Teller is also, there's much, much less attention paid to him in the secondary literature, even though he lived a much longer life, he was involved in many more unusual events. But I honestly think it's a lingering sense that he was somehow just the bad guy. He's been cast I think in a kind of one dimensional role that I think there's just been less attention paid to the nuance that we might now get to try to do for someone like Oppenheimer. Yeah, yeah. So we'll get a little hint of that. It won't be the Chicago site. But a little bit of that theme certainly comes up in the next film we'll watch, basically a week from now. And I'm going to figure out a way to make sure you can all see it through the Canvas site. The film's called Containment. I think I mentioned it briefly, by my friend and colleague, and actually my former advisor, Peter Galison, and some of his colleagues. So it's a much more recent view of some of the kind of nuclear legacies, in particular, environmental and radiological. And not just of the wartime, but of the kind of Cold War period ever since. It's in some sense still going on. So how does one handle waste byproducts of civilian nuclear power, let alone above-ground nuclear testing, when some of these very, very deeply dangerous poisonous isotopes have half lives in the tens or even hundreds of thousands of years? So the stuff isn't going away anytime soon, to put it mildly. And so that's the kind of thing that the new film, the more recent film, grapples with. And we'll watch that and we can have a similar session to talk about it next week. I'll make sure we all have the details in an email. But yeah, I mean again, it's one of these things where people knew better at the time. They didn't know as much as we know now. But the cavalierness, or even sometimes callousness, I think, with which some legitimately dangerous materials were handled in the '40s and '50s, it's still astonishing. And it's not good to say the only place it was worse was some parts of the Soviet Union. I mean if that's a competition, that's not great in terms of just the cavalier disregard for local populations that the only place more poisonous from nuclear efforts than Hanford is the corresponding site in, I forgot which one it is. Tiffany might remember. 
But it's one of these sites within the former Soviet Union, where they had also been doing enormously large-scale plutonium production and similar things like that. Right, in fact, my understanding is that he was exposed because he threw himself on top of it, to knock it apart. He basically threw himself on it, like a soldier would on a grenade or something like that, to save comrades. He literally threw his body on this now-joined critical mass. So my mentor that I mentioned, who had served as a young kid on the Manhattan Project-- for the rest of his life my teacher wore Slotin's belt buckle. They'd been buddies. They were like totally young kids together at Los Alamos. And so the accident happened soon after the end of the war, like weeks. They were still working full tilt. And Leonard Rieser was still there, my teacher. And so Slotin died within days of the exposure. It was a very, very intense exposure right in his gut, which is not a good place to absorb lots of these things. And so Leonard, he kept his belt buckle for the rest of his life. I only know the name because I heard about it, starting when I was about 18. But you're right, Alex. It's an amazing story-- talking about cavalier attitudes toward anything like just shop-level safety standards, let alone treating these things with some level of awe and respect because of the forces of nature. But just, like, don't prop stuff open with a screwdriver. Like, come on. Just basic machine-shop level, let alone safe handling of nuclear materials. So that's another great example. Good. Any other questions? These are great topics, great questions. No, it was. It was definitely after. It's a good question. Because I don't know the fine details. Some of the TAs might remember better from recent readings. The rough story goes like this though. Soon after the end of the war, and I guess I'll talk about this briefly on Wednesday, it was signed into law I think August of '46, so roughly a year after Japan surrendered, the US passed the Atomic Energy Act. And that was a sprawling piece of legislation that established the successor agency to the wartime Manhattan Project. Nominally civilian, it created what was called the Atomic Energy Commission, the AEC. Thank you, Tiffany. But it also made all things related to nuclear power born secret and subject basically only to military or official government control in the US. So it had an unintended consequence of hugely stifling private sector development of energy generation, of reactors. That was then amended-- a big amendment to the Atomic Energy Act in I think '53; again, others on the call might remember. I think it was 1953. Which began to make certain kinds of-- we might call it private-public partnerships, a kind of limited entrepreneurship-- possible in the power generation market. It still wasn't a free market. But it was no longer a felony to share certain kinds of documents of otherwise innocuous information, seemingly innocuous, with no direct weapons potential. So you start getting a more active-- what's the word? Sorry-- a kind of working quasi-market, private-public partnership being built up with major industrial contractors like General Electric, like Westinghouse, building commercial power reactors by the mid to late '50s. It doesn't take off the way other new technologies have done in history, because it still has a lingering period where it's not quite clear how to get this right.
So over the course of the '50s, there were power reactors actually coming online-- Shippingport, I think, was the first or among the most famous, somewhere in Pennsylvania, for example. So it didn't become ubiquitous right away, but they were coming into genuine civilian use in the outskirts of populated areas by the end of the 1950s. For the submarines and things, that was again I think a mid to late '50s development. Admiral Rickover-- Hyman Rickover-- was I think the main moving force behind the effort to, as it was often called, modernize the naval fleet, including making reactor-driven submarines that could stay underwater much longer, for example, let alone be armed with nuclear weapons. The power could come largely from nuclear reactors on board. For strategic reasons, it wouldn't have to surface as frequently, things like that. So again, I think those were being developed largely within the military. So it didn't have the same kind of public-private, same level of friction. But those were-- again, I don't know when the first one came into service, probably by the early '60s at the latest-- certainly under active development and testing by the '50s, late '50s. And as we'll see in Wednesday's class, this also begins to coincide with growing concerns about fallout, so about above-ground nuclear weapons testing. Concerns about the safety or otherwise of nuclear power generation were not the first topic of real kind of public consternation or debate. It really was about the fallout-- coming back to the questions that Steven was asking earlier. What happens when you blow a bunch of these things up in the atmosphere and stuff comes down? And so we will hear a bit about that in Wednesday's class. So the first public efforts, the first widespread anti-nuclear mobilizations, were really also starting by the late '50s. And they had these fits and starts of temporarily successful treaties that would stop, and then they'd go back. And so a lot of the kind of official treaty space, as well as more informal political organizing against certain aspects of nuclear technologies, that's also growing over the course of the '50s into the 1960s. They're growing up together. There were efforts, by the way, throughout the early and mid '50s to try to, so-called, modernize the weapons side, so to shrink them down. So both to make enormously humongous hydrogen weapons that were thousands, many thousands of times more powerful than the fission bombs of the Second World War, but also to miniaturize them, so that they would be more likely to be useful tactically, not strategically. Strategic was the lingo-- Lucas, you might know from talking with the folks in Security Studies-- strategic is the idea that it's roughly speaking symbolic. It would be impossible for any enemy to attack you, because they'd be assured of immediate annihilation, because you have city-leveling megaton bombs. So that was seen as strategic, to have an unsteady balance of power. Because no one would be dumb enough to trigger those enormous things for use. Then there are tactical weapons that people might actually plausibly use on a battlefield, without the high-level hand-wringing about, oh my goodness, it's mega-tonnage. So people were making the so-called Davy Crockett, basically shoulder-fired, bazooka-sized nuclear weapons, which were actually developed and deployed in the field, thankfully never used, but certainly tested.
So there was both a miniaturization to make these things more ubiquitous in certain kinds of tactical US military planning, not only US, but including US. And then the enormity, the monsters that we'll talk a bit about in next class that were really just made the Hiroshima Nagasaki weapons look small. It was both these kind of fronts of development on the more weapons technology side. Those were actively under development throughout the '50s. Yeah, absolutely right. And again, we think back to that. And yet that was not sufficient to end the program, much is the fact that poison gas first deployed in the First World War. Also, the winds would change, and you would not infrequently wind up gassing your own troops inadvertently. So the competing calculations about what's going to be a worthwhile device to pursue, unfortunately that was not unique either. Yeah. Yes, it's an idea that comes and goes. As many of you may know, the US for the time being, is still subject to a limited test ban treaty, which suggests that there can be no-- I'm getting the wrong treaty, actually more recent treaty. There was a more complete test ban treaty the US signed starting in 1992. It was one of the last things that President George H.W. Bush signed before the Clinton administration. So the US presently, by treaty statute, is not allowed to test new nuclear weapon designs. And that led to an enormous multi-billion project called stockpile stewardship. So there was testing allowed, non-nuclear tests allowed to be performed to maintain the aging stockpiles of existing weapons. Epoxies dry out and change. All the kind of conventional materials don't have a shelf life of 10,000 years. So stockpile stewardship is meant to enable laboratory style testing for materials to maintain the existing arsenal, not to develop and perfect new weapons. And so some of these areas get-- some critics at least say there's a little gray area. Some things that some people will say are being used for stockpile stewardship might actually have implications to help with new designs. It's again, it's an area where there's ongoing debate and push-back largely away from full public view, because so much of it still remains classified. Yeah, Amanda. I agree. That is a chilling part. And that as I said, those lists were made. In fact, they were made I think by the so-called interim committee, which included Oppenheimer and a couple other physicists, as well as the Secretary of War, and other military leaders. It was a group of maybe a dozen people. And so for example, there was a decision made early on I think by that group or at least the recommendation from that group, not to bomb the city of Kyoto, either by conventional means, let alone with the newer weapons. Because that was seen as too culturally significant. It was a kind of millennium old city of art culture and kind of heritage. And that wasn't a kind of nukes versus incendiary. That was like, please don't bomb that city, the US committee advising the US military on that. I think it was part of that same exercise to say what cities would be so-called legitimate targets for use with the nuclear weapons versus other kinds of attack. And I don't know that means that they thought the bomb was special. I think they thought it was new. And I think that was the idea of, for any new element in the collection, so to speak, you want to understand, its parameters, its characteristics. So I don't know that they thought it was somehow in a category unto itself. 
I think my sense of that is that it's a new thing. We want to learn about it, because we intend to use it a lot more. At least we have the option open that this will become a more common element of the arsenal. We'll talk a bit about that actually, again, on Wednesday as well. Even amidst the debates over whether or not the United States should pursue a hydrogen weapon, so much more powerful, the debate was rarely, as we now know, nukes or no nukes. It was fission versus fusion. And one leading reason that was put forward not to pursue fusion weapons was that it would slow down the production of lots more fission weapons. Because the idea, at least among certain influential people, including Oppenheimer, was that the US should have hundreds and thousands of these things at the ready. Because at the time, there were very, very few in the stockpile. So the concern wasn't these things are so terrible we should never use anything in this whole category again. People had that concern, but not these groups behind the fence doing the advising, we now know. Their concern was that making these things, which might not even work anyway, would interrupt too much the making of the things that we now knew more about, because of things like the usage against actual cities and the fact that those were not previously destroyed target environments. So you could learn a lot from these post-bombing surveys. Yeah, anyway, so that's a bit of a preview. We will talk a bit about that in Wednesday's class. Yep, yep, yep. No, that's an excellent point. Good, I think we'll pause there. That was a really interesting discussion. Thank you all for taking the time. It was optional during a busy part of the term. So I hope it was an interesting and worthwhile discussion. If you have other questions, of course, please don't hesitate. I'll have my regular office hours on Wednesday morning. Feel free to email me to make a separate appointment, of course, at any time. And otherwise we'll meet at our regular class time on Wednesday. So stay well, everyone. See you soon. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_14_Radar_and_the_Manhattan_Project.txt | [AUDIO LOGO] DAVID KAISER: OK. Let me start. I'll share a screen, and we can start today's material. So in Monday's class, we talked a bit about some of the kinds of work that was going on, largely within Germany or at least other parts of Europe, but a lot of it right within Germany itself, around topics like the discovery of nuclear fission and early efforts to put nuclear fission to more practical use, including both early ideas about nuclear weapons as well as power generation, nuclear reactors. And we saw that was deeply, deeply bound up or caught up in this head-spinning swirl of the onrush toward the Second World War, the rise of the Nazis, the soon overt fighting across Europe and, of course, much beyond that before long. So Monday's class was largely about developments within Europe. And today, we'll talk about a range of developments, mostly in the United States. And so we have, as usual, three main parts for the class. Here at the bottom is my note reminding us, again, as I just mentioned, about the film The Day After Trinity, and I wanted to add that note because the course material today, the lecture for today, is mostly going to be looking at conceptual ideas, physics and engineering ideas, as well as new institutions or institutional arrangements in which many physicists began to find themselves. And we won't be talking in today's class session about broader questions-- ethical, moral, contextual-- about the actual use of these new weapons. And that's partly because I want to make sure that we do have time to talk about that, at least to start those discussions, with our optional discussion section on Monday. So today won't be exactly a technical history of the wartime projects. But we'll get into a bit more of, what did physicists, mostly in the United States, find themselves being drawn into or being wrapped up with during the late '30s and throughout much of the 1940s? So we'll start, actually, by talking a bit about radar, which often gets kind of overlooked these days. The drama of the nuclear weapons tends to obscure many, many other full-tilt defense projects or weapons projects that physicists and engineers were really immersed in during the war. And so we'll talk about radar for a good chunk of today. And then we'll shift and talk about some aspects of the Manhattan Project, and the film will cover other kinds of aspects in addition. So let's talk first about radar. So radar had been around since before the Second World War. The first working units had been developed actually in many countries independently. Simultaneous discovery is a phrase that historians will often use. There were groups working independent of each other in different countries in Europe and the UK, some in the United States, in Japan, in the Soviet Union, and other places, it's been found, that all came across similar ideas-- came upon similar ideas in the mid-1930s. The idea for radar is to emit electromagnetic waves, Maxwell waves, classical radiation, let those waves reflect off of some target, some object, and then collect the echo, collect the rebound, the reflected waves that come back to your device. And then you can do things like use the fact that these are Maxwell waves. They're electromagnetic waves. They should be traveling at the constant speed of light.
And if you have very good electronics, good timing, then you should be able to measure the difference between when you send out your own pulse, when you generate your waves, and when you receive the echo. So that gives you the time. And given the constant speed at which those waves are traveling, you can then determine the distance to that object, your target. That was the original idea. There were working units, as I say, in many parts of the world already by the early and mid-1930s. More sophisticated units, developed during the Second World War itself, so by the early 1940s, built in a very clever addition, which was to measure not just the time of arrival of that echo but also the frequency shift. They had very quick Doppler analyzers to measure the shift in the frequency of the return signal compared to the signal that the unit had sent out. So then you could actually measure the speed-- at least the speed along the line of sight-- of the target object as well. So now these devices could measure both distance and speed of the targets, much of that before the start of the war. So both British and US-based researchers had developed these long-wavelength radar systems. And by long, that meant the wavelength was measured in meters, sometimes tens or even hundreds of meters, more like radio waves. And those were what had been operational before the start of the war. And then once the war actually broke out, once the UK declared war against Germany and the US began to mobilize even before it declared war officially, that was really one of the earliest experiences that many, many physicists in these parts of the world had with direct involvement in military matters. The radar project was, for many, many physicists and other engineers, physical-science-based engineers, their first experience working closely on direct military projects. So one of the first challenges, a real, genuine, hard research challenge, was to design new kinds of radars that used much shorter wavelength waves, to make the outgoing signal not meters or tens of meters but more like centimeters, or at most, say, tens of centimeters. They wanted to shrink down the wavelength of the outgoing signal. And that was because the nature of the challenge had shifted quite dramatically. Remember, with shorter wavelengths, you can resolve more: you can make sharper images of smaller-scale things. So if you only have a very long wavelength wave, you'll never be able to make out short-scale, short-distance phenomena. Why would they need suddenly to worry about centimeter-scale phenomena instead of meters? After all, airplanes are many meters long, as are large boats on the water. The problem was, beginning right around the outbreak of the war itself, the famous or infamous German submarines, the U-boats, had become an enormous threat, a very deadly threat to both US and UK shipping interests, both commercial shipping and naval ships as well. Now the boats were ordinarily underwater. They were basically impervious to this kind of radar. But if a part of the boat would reach the surface, often as little as just a periscope, just a little sighting device to allow the crew on the German subs to see their targets, if that breached the water, that would be a centimeter-scale target for these radars.
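To put toy numbers on the ranging and Doppler ideas from a moment ago, here is a minimal sketch. The factor of two (the pulse travels out and back) and the two-way Doppler relation are standard pulse-radar relations rather than anything spelled out in the lecture, and the sample delay and frequency shift are invented for illustration.

```python
# Toy radar arithmetic: range from echo delay, line-of-sight speed from Doppler shift.

C = 3.0e8  # speed of light in m/s (approximate)

def range_from_delay(delay_s):
    """Distance to the target: the pulse travels out and back, hence the factor of 2."""
    return C * delay_s / 2.0

def speed_from_doppler(f_transmit_hz, f_shift_hz):
    """Line-of-sight speed from the two-way Doppler shift, delta_f ~ 2 * v * f0 / c."""
    return f_shift_hz * C / (2.0 * f_transmit_hz)

# A 3-cm (10 GHz) set, with an echo arriving 100 microseconds after the pulse
# and shifted up by 20 kHz -- all hypothetical values.
print(range_from_delay(100e-6))        # 15000.0 m, i.e. a target 15 km away
print(speed_from_doppler(10e9, 20e3))  # 300.0 m/s along the line of sight
```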
So in order to have any hope against this now very deadly force of the German U-boats, the radar challenge became to find centimeter-scale radar systems, not just meters or tens of meters. OK, so by 1940, just a few months into the real heart of the U-boat campaign, physicists and engineers within the UK had really made a huge advance. They had developed what became called the cavity magnetron. And here's one shown in the image here. You don't get a clear sense of scale: this is just a couple of centimeters across. You can hold it in your hand. This is a handheld-scale device that could emit very high power, high intensity electromagnetic waves of this shorter wavelength; the early ones were about three centimeters in wavelength instead of meters. So you could create very powerful outgoing beams of short-wavelength electromagnetic waves. By that time, however, the UK was under bombardment from the air by the German Luftwaffe. The German Air Force was now doing very, very successful bombing runs, penetrating London airspace routinely and with devastating effects. So the Blitz, the bombing in London and other parts of Britain, was nearly constant. And so it seemed impossible to scale up this kind of bench-top level research into really full-scale research, development, and production. They knew how to make one or two of these devices. They didn't know how to make factories' worth, humming around the clock to make hundreds and thousands of these devices, because that kind of industrial capacity was under constant threat from bombardment. So in the autumn of 1940-- and remember, that's more than one whole year before the US even officially entered the war, long before the surprise attack on Pearl Harbor, which finally was the reason why the United States officially entered the Second World War-- a British delegation came over, quite a dangerous journey. They came over by boat, despite the German submarines. It was a delegation led by Sir Henry Tizard. It became known as the Tizard delegation. They came to the United States and set up a meeting in Washington DC to try to get US-based colleagues to partner and ultimately take over the lead on these next steps for radar development. They wanted to cooperate, but especially to have a safer home base or headquarters within which this work could then be expanded. They came over with lots of blueprints, lots of paperwork, and also, literally one, one cavity magnetron. They were so rare, and they were in such high demand back in Britain, they could spare only one to hand over to their US colleagues. And the idea was that between the paperwork, the specs, the blueprints, the technical reports, and literally the one working device, the hope was that these groups in the US could reverse engineer this thing, make many more of them, and improve the design. So at this meeting, which was held in a fancy hotel in Washington DC, it was a small group of US colleagues who attended and met with Tizard and his British team. But it included a heavy dose of MIT folks. In particular, it included Vannevar Bush, who had only just recently left MIT by that point to work full time in science policy and science advising in Washington DC. Until that time, he'd been the Dean of Engineering at MIT, and came from electrical engineering himself. Likewise attending this meeting was Karl Compton, the physicist, who at this point was president of MIT.
There was a dominant MIT presence in this top-secret meeting in this Washington DC hotel. And at one point, Compton actually literally stepped out of this meeting to place a phone call to one of his assistants who was back here on campus at MIT to see if MIT could spare space to be the place to build up the headquarters for this new Allied effort in radar. The biggest stumbling block was that there was a big faculty parking lot in the middle of campus. And you might know that then as now, faculty parking is super precious and rare. And so the concern that the president of MIT had was whether the campus could basically take over that parking lot to build this temporary laboratory to do work on radar. And the grudging response was yes. That's where MIT'S famous building 20 then wound up being built. None of you probably has seen building 20. Was torn down in the mid '90s. It was a temporary building literally built out of plywood. It was not meant to outlast the duration of the war. It was in constant use, in fact, for 55 years, much beloved on campus. It was only torn down to make space for what's now this Stata Center. And here's a picture, you can see the 1940s vehicles. It was literally a large-scale but temporary facility right in the middle of campus, famous building 20. That became the original headquarters then for the MIT-based Allied efforts in radar. It became known as the radiation laboratory or just the Rad Lab. OK, so this project became one of the earliest and largest projects sponsored by this new institution within the United States called the National Defense Research Committee or the NDRC. This was also a kind of brainchild of MIT'S Vannevar Bush. So Bush, because of his time in DC, he knew a US President, Franklin Roosevelt. He convinced Roosevelt and Roosevelt's immediate circle of advisors that the US, although it was still not officially at war, should begin mobilizing or getting ready. It looked like war could spill out and involve the US at any time. So Bush's idea was to make a meeting place, an institution that could help connect researchers in science and engineering, some at universities, some at industrial laboratories in the private sector, connect them with US military officials. The idea was that the military could come to Bush's organization, the NDRC, say, we really need a better this or a better that. Could you please get people to work on it? So the NDRC would be the kind of meeting ground to help arrange these contacts. So a little while later, roughly one year later, still well before the US even entered the war, Vannevar Bush convinced Roosevelt that wasn't enough. That just being able to arrange for research contracts was actually insufficient, and that in fact, there should be an even an expanded institutional base that became known as the OSRD, the Office of Scientific Research and Development and that Bush would lead that. So not only would the OSRD let out contracts like the older model, but it would actually have a much more active and ongoing role in production. It wouldn't just say, here's your contract. Tell us when you're done. It would have a steady oversight role to make sure these things were getting done on time and on budget. This next part is really just for Professor Gensler's benefit. But I find it fascinating. One of the things that stands out from Vannevar Bush's strategy was to use contracts rather than grants, let alone gifts, to make it look like these were equal partners entering into a business arrangement. 
I find this actually very fascinating. So the last thing Vannevar Bush wanted would be anything like a federal takeover or a federal bailout of higher education, that universities should be independent from the federal government. And so instead, if it looked like two equal partners coming to do business together, like in the private sector, we'll have contracts with overhead and all these kinds of affordances of a business-to-business style contract as opposed to a grant or a gift or things like that. So in actual fact, the universities were completely desperate for funds. This was now nearly a decade into the Great Depression. All of these universities were facing enormously difficult financial times. But Bush wanted to maintain the appearance of equal partners arranging contracts as opposed to anything else. I find that really interesting. OK, so what's going on then at MIT? Here's a photograph from the top. There was rooftop facility temporarily built not just in building 20, but even in part of the Infinite Corridor. This was now on the roof of building 4, part of the Infinite Corridor there. So they had many, many sites on campus. So the Rad Lab grew very, very quickly. It began once the green light was given and they could throw that plywood palace together, building 20. They began attracting staff to it. At first, they hired 30 physicists, most of whom were not previously at MIT. They came from other universities across the country. They had three security guards-- this was a top-secret effort right in the middle of campus-- two stockroom clerks and one administrative secretary, not so big. It was led by a nuclear physicist, not an engineer or even an electromagnetic expert. The physicist, Lee DuBridge, who at the time was at Rochester University in upstate New York, he was recruited to temporarily leave his job and move to MIT to be the scientific director for this. But it quickly grew well beyond physics or even electrical engineering. It included meteorology, experts in geology, what we've now called material science, even linguistics. How do you how do you identify signal from noise and so on? It grew very rapidly. In fact, after less than two years of operation, the staff numbered 2,000, not just 30. And it doubled again before the end of the war. So by the end of the war, it had 500 academic physicists, a very large fraction of all the PhD physicists in American universities altogether, a huge fraction were recruited to the Rad Lab. It was, by this point, spending $1 million per month. If you adjust for inflation to our contemporary currency, that's about a $15 million per month budget. They were burning through cash very rapidly. And this became the largest part of actually a very large suite of defense projects that were being done at MIT throughout the war totaling, again, in today's dollars, about $1.5 billion. MIT became, by a very, very wide margin, the single largest University contractor for war-time projects in the United States. And in fact, it was even a bigger contractor for these research and development projects than some of the largest industrial companies in the country, AT&T, General Electric, RCA, DuPont, Westinghouse. The research and development contracts from the OSRD that came to MIT were three times more than even those huge industrial behemoths. Now, those companies got huge contracts for production like building airplanes and engines and all the rest. But the actual R&D, MIT became an enormous, enormous node of this OSRD. 
So the Rad Lab staff-- now, they had 4,000 people by the late stages of the war, and they were very busy. They designed dozens of different radar systems, not just one type of system from that one cavity magnetron, but really dozens of variations. So you wanted to have, say, ground-to-air to get early warning about aircraft, air-to-sea, ground-to-sea, and so on. Now, these were all in the centimeter range, a couple of centimeters. So they were of that new type, but now adapted to different kinds of tactical needs. They would conduct tests, literally from MIT rooftops, to see if their beta versions could detect actual aircraft from nearby airports, what is now the Hanscom Air Force base in the western suburbs and what's now called Logan Airport, both Air Force and commercial aircraft. And they also trained nearly 10,000 active-duty service members from across the United States. They would come to campus for very brief, very intense training in how to use these new systems and then be shipped out to use them operationally. Now, at first, many of the theoretical physicists who were recruited to the Rad Lab were, even by their own lights, pretty arrogant. That's not just me saying so; many of them came to that conclusion themselves. After all, they came in saying, oh, radar. That's just classical physics. That's merely Maxwell's equations. How hard could that be? Many of them had been immersed in these very fancy, esoteric ideas about quantum theory or nuclear physics of the sort we've talked about in recent sessions here. They were quickly schooled. They learned very quickly that calculating the actual electromagnetic field configurations for real devices, not just the kind they assign to their students on problem sets, was, as we physicists say, non-trivial. That means it was really, really hard. This was not at all an easy task. So as you all know and are still learning, and as we still do in our own more mature research, oftentimes it's very, very important and very helpful to exploit symmetries to simplify calculations: imagine a spherical symmetry, or if you really must, a cylindrical symmetry, so two dimensions of space can be treated the same way and the third one's different. That was going to get you absolutely nowhere when it came to these real-world devices like radar. Here's an example of just some of the so-called components, these waveguide components, that were already in standard use by this time, by the mid-1940s. Few of these could be treated as a cylinder. None of them could be treated as a sphere. And these are just components. The actual parts of these radar devices, like, here's one schematic, would be putting all these together in these complicated forms. No symmetry argument is going to help you there. So what the physicists began to learn-- and again, many of them recalled this years afterwards-- was really a whole new way to think about their own calculations. And here, many of them credited the engineers, with whom they were suddenly and often for the first time working very closely, for helping them learn a whole new way to think about their own calculations: think in terms of effective circuits, don't start from individual basic parts, and even more basically, focus on input-output relationships. So you might have a particular circuit for part of that radar system that would have a bunch of resistors, some in parallel, some in series. You have capacitors. You have all these messy electronic components.
And although it is the case that one can simplify these mathematically and find an effective circuit using things like Ohm's law, the engineers would say, don't even waste your chalkboard time on that. Stick a lead on over here. Stick a lead on over here. What's the current flowing in? What's the current flowing out? Infer an effective overall resistance. Have an effective circuit based on the input and the output. And stop it. You don't have time to do this kind of thing, plus when you're faced with those crazy, crazy shaped waveguides, even these simplifying mathematics would be no help. So several physicists, including this gentleman here, Julian Schwinger, who spent the war working at the Rad Lab, they later recalled that this new approach to this engineering input-output approach to problem solving really shaped how they thought about research questions even after the war was over. And that's a little bit of a foreshadowing. We'll look at some of the lessons that Schwinger took from his radar experience when he returned to challenges in quantum theory. We'll look at that in a few class sessions. OK, so by 1943, so in roughly 2 and 1/2, three years into kind of full scale operations for the Rad Lab, these kinds of units were actually developed and deployed all over the so-called theaters of battle. They were not only used in ground-based scanning stations. They were also put onboard aircraft, onboard Naval vessels. They began finally to turn the tide against the devastating German submarines, the U-boat campaigns, as well as the Luftwaffe bombing raids over Britain. It turns out these systems were not only, in some sense, defensive. It wasn't only trying to get early warning of an incoming attack, though they turned out to be very effective at that. These also became very important for offensive weapons, for weapons that would go and attack the enemy. And so one of the most substantial was actually a related OSRD project, the so-called Applied Physics Laboratory associated with Johns Hopkins University near Baltimore, which was set up in a similar fashion to MIT's Rad Lab. And one of the most important things there was developing the proximity fuse. So this would actually embed-- it was really quite amazing-- embed miniature radar units in the warheads, in the tips of these artillery shells. So now each artillery shell that would be fired from these very large cannons would carry its own ranging device. So it could tell in real time how close it was getting to a given target. So you could then wire these things up to explode, to actually detonate, not any old time, but only when they were within some preset distance from the target. And that had an enormous impact on the offense. Previously, these anti-aircraft efforts, like shooting these big guns from a Naval vessel against incoming aircraft, they typically had to fire hundreds of rounds, these very expensive rounds, to hit a single fast-moving airplane. They had a very bad return on investment, so to speak. They were not very effective. Once these same shells now were equipped with these proximity fuses, they needed on average two, not hundreds, to successfully strike an incoming aircraft. So after the war, it became common, especially for veterans of the Rad Lab of the Applied Physics lab, to say that nuclear weapons of the sort that we'll talk about for the rest of today and on the film, that these weapons might have ended the war. But it was radar, they said, that had won the war. So let me pause there and take some questions. 
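One quick aside before the questions, to make that input-output habit of mind concrete: a minimal sketch contrasting the "reduce the network component by component" route with the engineers' "measure what goes in and what comes out" route. The resistor values and the measured voltage and current are invented for illustration.

```python
# Two ways to characterize the same black box of resistors.

def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

# 1) The chalkboard route: reduce the network component by component.
#    Say the box is a 100-ohm resistor in series with (200 || 300) ohms.
r_from_components = series(100.0, parallel(200.0, 300.0))   # 220.0 ohms

# 2) The Rad Lab engineers' route: stick a lead on each side, measure what
#    goes in and what comes out, and infer one effective resistance.
v_applied = 11.0    # volts across the whole box (a made-up measurement)
i_measured = 0.05   # amps flowing through it (a made-up measurement)
r_effective = v_applied / i_measured                         # 220.0 ohms

print(r_from_components, r_effective)
```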
I see the chat is filling up. So Fisher's right. So building 20 is indeed on the campus location where the Stata Center now is. And whether that's an ugly building or a beautiful building, we can all decide. But indeed-- it's a building of real legend. It was really supposed to last five years. And it was in constant use for 55 years, much beloved. There's a time capsule within Stata, so on the same physical location-- when we can all get back on campus, I encourage you to go take a look at it for those of you who might, like me, still be remote. There's a time capsule in Stata of Rad Lab materials and memorabilia. They wanted to have it on the same physical site. And it will be sealed until sometime later in the 21st century. And so anyway, so that's a large part of MIT'S role and kind of legacy during the Second World War. So Hastens asks, how is the original radar built if it was so non-trivial? Yeah, good, so the non-trivial part was mostly shrinking it down to short wavelengths and having a means of generating high power, high intensity waves of that short wavelength, and also getting very careful electronics to detect the echo, getting even fancier electronics to detect any kind of frequency shift, the Doppler shift. The original idea of getting a big basically a big radio tower, send out large multi-meter wavelength waves, people had been generating radio frequency waves since around 1900, the 1890s. Think about Marconi and other people. So just generating long wavelength radial waves and detecting them, that's kind of like what radio does. You have to have a little more sensitivity to get the right radar echo and get the timing right. But that part was indeed close to a kind of trial and error, or let's say, building on established engineering principles. But getting it really to work with short wavelengths, much finer resolution both for time and frequency. That was much more tricky. Gary tells us that his own father was experienced in the war. He was, say, a mere corporal. Yeah, so he found himself-- because you're getting ahead of ourselves here. So Gary, we'll see photographs of the sorts of experiences your father indeed might have had, and also, more in the film as well. By the way, much like Gary is reminding us of, there was a large number of people who were stationed at places like Los Alamos and other Manhattan Project sites who were part of what was called the SED, the Special Engineering Detachment. These were people who were drafted. They were not recruited especially for the scientific project. Many of them had a high school level education, had some aptitude for physics and math, but were not full-time science students or engineering students. But if they had any aptitude with using their hands, radio kits and so on, many of them were sent to the special engineering detachment to these high-powered labs. And that's where many of them got their first kind of real exposure to higher level kind of laboratory work. And several of them then decided to-- it changed their life path. And many of them then decided to go and study these topics more formally in graduate school to get a PhD to become professors, when before that, they had never dreamed of such things. So that SED detachment played a largely kind of unsung role at a lot of these wartime laboratories. So Alex is right. So not only, by the way, were the proximity fuses so secret. He's absolutely right. They could only be fired over the open water in the beginning. They relaxed that by the end of the war. 
But the fear was if one of these failed to detonate and it landed on the ground, could the enemy troops capture it and then reverse engineer it? So exactly as Alex very rightly reminds us, these proximity fuses, especially in the early rounds, literally rounds, were limited only to Naval operations so if it didn't explode, the enemy couldn't capture it. I think is just extraordinary. Fisher asked, what was the status of radar in Germany? They had some prototypes. Yeah, that's right. So again, that's one of the places where there had been early working units, even before the start of the Second World War in the '30s. There were, again, efforts to improve them. But much like we saw in Monday's class, everyone, especially the Germans, thought that Germany would win the war immediately. The blitzkrieg was just blindingly successful in the early months and years. So they figured, why bother sinking a lot of research? The one exception to that by the way, was peenemunde, the effort to develop rockets. That really was invested in very heavily. And, of course, it did have a real impact, even during the war itself. But for a lot of these other high technology, wartime projects, the Germans tended not to invest so heavily because they thought they'd win. And then when the tide turned, they said, oh, we're not winning. But now we have to divert resources to proven technologies. And then, yes, we also have Alex-- Alex reminds us we can also thank microwaves for burnt popcorn and so much more, like most of our lunches probably during the pandemic. Any other questions about radar or burnt lunches? OK, let's switch gears now. Let me go back to sharing screen. These are great questions. Let's go back now and we will shift now to the other most famous project of the OSRD, the Manhattan Project. So I do want to emphasize, over the course of the war, this Office of Scientific Research and Development with Vannevar Bush at its head oversaw literally thousands of defense projects for the military. It wasn't like there were only two or three. It wasn't just radar, proximity fuse, and Manhattan Project. There were literally thousands of funded projects. The biggest though, the largest in terms of personnel and budget, and arguably, in terms of impact on the course of the war, were indeed radar and then the one we'll turn to now with the project for nuclear weapons. So this other main project was officially called the Manhattan Engineer District or the MED. Of course, it was rapidly shortened to just the nickname the Manhattan Project. Why was it called the Manhattan Project? It actually started the first headquarters for the first very small planning office was in Manhattan. It was a joint project between this OSRD and the War Department, in particular the US Army Corps of Engineers, long-standing organization within the army. And the Corps then as now has these regional offices. There's a Corps to oversee things around, say, the Mississippi River. There's a Corps office for the Northeast and so on. There's a regional structure for the Army Corps of Engineers, then as now. And it was the Manhattan office that was made in charge of this early-on quite modest-scale effort to begin investigating fission for weapons purposes. So they were literally headquartered in New York City. And that wasn't entirely random. 
Some of the earliest scientific consultants on this project were at Columbia University in New York City, including Enrico Fermi, who, as we saw last time, had just left Italy upon receiving his Nobel Prize in December of 1938. By January of 1939, he was basically set up at Columbia along with many of his colleagues there. So that's why it was named for Manhattan. Very, very quickly, this project grew well beyond that small little planning office. In fact, it included more than 30 sites across both the US and Canada. Over the course of the war, it wound up employing more than 125,000 people. It was a massive, massive project, far, far larger than radar in terms of personnel. And of course, the overwhelming majority of those 125,000 people who were paid by the Manhattan Project during the war had almost no idea what the project was actually about. There was very, very tight control of information flow. And you'll hear more about that, even in the film, The Day After Trinity. So although these people were officially working on the Manhattan Project, very few of them had any idea what the project was even aiming to do, let alone the relevant details. Among these 30 sites, there are four that really were most critical to the project and that get talked about most often. And again, you'll hear more about these in the film. Those four are highlighted here on the map: Chicago, Illinois; Oak Ridge, Tennessee; Hanford, Washington; and Los Alamos, New Mexico. The film is largely about developments at Los Alamos, but it does give us insights into these other three main sites as well. We're going to take a brief tour today of some of the kinds of work being done at each of those four main sites during the war. We're going to start with Chicago. That installation, that part of this very large, sprawling project, was called the Metallurgical Laboratory. They weren't doing metallurgy; these names were always meant to throw off suspicion. It wasn't really about materials science at all, but they gave it a name that sounded kind of innocuous. By this point, Fermi had moved from Columbia to the University of Chicago. He moved pretty rapidly upon emigrating to the US. And he was one of the main leaders of the Chicago Met Lab, as it was known for short. The Metallurgical Laboratory took the lead on trying to further understand these still-quite-new fission reactions. That was what Fermi had been working on, even before he knew it, going back to his Rome days, and it was certainly one of his main interests once he arrived at Columbia. So the Met Lab was focusing on the physics and chemistry of fission. One of the first things they did was build this monster here. It was known by the code name Chicago Pile One. You can see, it's literally a pile. This was the first working nuclear reactor in the world. It was built, as some of you may know, under the stadium seating of the Stagg Field stadium on campus. There were underground squash courts, like racquetball courts, and during the war, in secret, they took over some of those underground facilities to start developing and building the first working nuclear reactor. No one above ground was told that there was a bunch of uranium underneath. So what was their goal? Each time a uranium nucleus underwent fission, additional neutrons were released. This was not so obvious at first from the experiments by Hahn and Strassmann in Berlin.
But this is exactly the thing that a lot of US-based labs and others elsewhere began to identify and then to confirm as soon as Niels Bohr arrived in New York and let his US-based colleagues hear about fission. Once people began replicating the basic fission reaction in their labs, they could then home in on the other reaction products. And so it was clear by Fermi's time at the Met Lab that more neutrons came out each time a single nucleus underwent fission. And that meant there was a prospect for getting a chain reaction. If you inject one neutron into the system and a nucleus splits and gives out more than one new neutron, each of those could split neighboring nuclei, which would then split more and more and more. It could become an exponential runaway process known as a chain reaction. So what was this pile? How did this reactor work? It consisted of 57 layers. You can count them up; I'm not sure all 57 were there yet. But it was literally stack upon stack upon stack of very closely packed ingredients. Most of the weight actually came from graphite bricks, dense carbon bricks. Those were not going to undergo fission themselves; they're not a nuclear source. They were actually there to slow down the neutrons. Each time a neutron flies out of one of these recently split nuclei, you want it to interact with some moderating material. And it turns out the carbon inside these graphite bricks was quite effective, not at absorbing the neutrons but at slowing them down. Remember, we saw that Lise Meitner and Robert Frisch had recognized that fission reaction rates in general should rise if the neutron is slowed down, because then its quantum properties would be stretched out more, to be comparable in size to the entire target nucleus. So the idea was to use some moderating material, some material that would slow down those neutrons over a few collisions, not absorb them, so that by the time a neutron found a neighboring uranium nucleus, it would be more likely to induce fission. So most of the pile is the graphite moderator. In between were little chunks of uranium metal, some of which would hopefully undergo these reactions. And the last and quite critical part of this were huge control rods, 14-foot-long rods of purified cadmium metal. Why cadmium? The idea here was not to slow down the neutrons but to absorb them. Cadmium will very readily absorb neutrons. So if you want to keep a chain reaction from, say, blowing up the entire University of Chicago, let alone the lovely sports arena here, if you want to slow down or halt a fission reaction, you start taking neutrons out of the equation. You absorb those neutrons before they can find a target nucleus. So the graphite bricks moderate the energy of the neutrons; the cadmium control rods take them out of the system altogether. And these were movable. In the earliest days, they could literally be manually pushed in or pulled out to control the average number of neutrons in play. And so with this arrangement, they got the very first self-sustaining chain reaction, not a runaway chain reaction but a controlled one, thanks to those cadmium rods. It went critical literally underneath the stands here on December 2nd, 1942, under Fermi's direction. That was not quite a full year after the surprise attack on Pearl Harbor. So you can see, the pace begins to pick up very rapidly once the US did actually enter the war. The bombing of Pearl Harbor was December 7th, 1941, the so-called day that will live in infamy.
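To put a toy number on the difference between a controlled and a runaway chain reaction: if each fission leads, on average, to k further fissions, then the number of fissions per generation grows like k raised to the number of generations. This is only a caricature of what the pile was doing, and the values of k below are illustrative, not measured figures from Chicago.

```python
# Toy chain-reaction bookkeeping: start from one fission and let each fission
# trigger k further fissions on average (k is the effective multiplication factor).

def fissions_after(k, generations):
    count = 1.0
    for _ in range(generations):
        count *= k
    return count

print(fissions_after(0.9, 10))    # ~0.35: below 1, the reaction dies out (rods pushed in)
print(fissions_after(1.01, 10))   # ~1.10: just above 1, controlled and slowly growing
print(fissions_after(2.0, 10))    # 1024: well above 1, runaway exponential growth
```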
And almost exactly a year later, Fermi's group in Chicago had produced the first self-sustaining chain reaction. OK, now independent of that, kind of in parallel, there were developments going on at other sites which would also eventually become absorbed into the Manhattan Project. Another very important one was at Berkeley, California, where a very young nuclear chemist named Glenn Seaborg, an assistant professor just a few years past his own PhD, worked with a small team. And they finally, finally succeeded in doing what Fermi had first thought he'd done back in the mid-1930s. Fermi, remember, we saw won the Nobel Prize for essentially a mistake. Everyone, including the Nobel committee, thought that Fermi had produced nuclei heavier than uranium by neutron capture, although it turns out Fermi's group was inadvertently inducing fission. Seaborg and his team, so to speak, were finally able to successfully produce transuranic nuclei by the same mechanism that everyone had thought was already going on. They could control it better and measure the products better. And so early on, he and his team produced neptunium by neutron capture followed by beta decay. And then a few months later, by very early 1941, Seaborg's team had produced the next element on this new extended periodic table, plutonium. This was actually a single neutron capture followed by double beta decay. So in the heavy uranium nucleus, after the capture, one neutron undergoes beta decay, so the proton count increases by one. Then a second neutron within that same nucleus undergoes beta decay, increasing the proton number by another step. So now you've made element 94. The next month, after first making very small trace amounts of this new element, Seaborg worked with his colleague Emilio Segrè-- Segrè had himself left Italy because of Mussolini. He'd been a member of Fermi's group in Rome, and he relocated to Berkeley. So the nuclear chemist Seaborg and the nuclear physicist Segrè then began studying the properties of this brand new chemical element, plutonium, and in particular found that plutonium really is subject to nuclear fission, much like certain isotopes of uranium. Within months after that, Seaborg went to what was by now this flourishing Manhattan Project site, the Met Lab in Chicago, to work more directly with Fermi and to continue measuring the properties of plutonium and its fission rates. So that's largely what's going on at Chicago during this time. Meanwhile, overseeing this entire project, the entire Manhattan Project, was a member of the Army Corps of Engineers, US Brigadier General Leslie Groves. Until that time, Groves's largest project had been overseeing construction of the Pentagon building. The building itself was quite new, from the late '30s, early 1940s. And Groves, who was an engineer, a rising member of the US Army Corps of Engineers, had been like the head contractor. He'd overseen the construction of this enormous, strategically important headquarters for the War Department. So he was seen as someone who could get things done on budget. He was then tapped to take over this newest project, the Manhattan Project. And in fact, as you'll hear more about in the film, Groves was very reluctant to do it. He actually was very eager to see combat. Once the US actively entered the Second World War, he wanted to be leading troops in battle.
And he thought this very abstract-sounding weapons project, something about nuclear fission, if it were to work at all, would have some long-term benefit or impact much further down the road. Groves was eager to see the scenes of battle directly. Nonetheless, he was more or less ordered to take this over. He was in the army; he followed his orders. So he was then put in charge of this entire project. One of his first maneuvers, which really just stunned people around him, was to ask the very young theoretical physicist at Berkeley, Robert Oppenheimer, to join him as the scientific director for this new Los Alamos laboratory. And you'll hear a lot about Oppenheimer in the film. It actually functions almost as a biography of young Robert Oppenheimer combined with the story of these projects during the war. So I won't say too much now about Oppenheimer, but just to say this was a stunning move on Groves's part, stunning at the time. So Los Alamos began operations in the spring of 1943. If you remember, the project was established in June of '42, and just getting things like the Chicago lab up and running was really the earliest priority. By the spring of 1943, then, not quite a year into this new project, this additional site, what would eventually become a kind of central coordinating site, at Los Alamos, New Mexico, was set up. As you'll see in the film, it took over what had been a very tiny boys' school, a small K-through-12 academy in rural New Mexico, a little mud-caked facility. Oppenheimer used to enjoy vacationing in that region, going on long camping and horseback riding trips. So it was actually Oppenheimer who recommended the site to Groves, saying maybe we can requisition this out-of-the-way place that could be well hidden and kept secret. So again, just a little bit about Oppenheimer. And here's the poster for the film where you can learn much, much more. There are many, many very good books about Oppenheimer, including this one that actually received the Pulitzer Prize. It's a spellbinding book as well as one based on enormously impressive research. There's lots more to say about Oppenheimer; I'll just be brief. He was a near contemporary of people like Werner Heisenberg and Wolfgang Pauli. He was roughly two years younger than Heisenberg, so a pretty similar generation. Oppenheimer was a bit of a prodigy. He went to Harvard very young for his undergraduate studies; he had skipped grades as a younger student. And then he studied for his PhD, and did a quick postdoctoral jaunt, in Europe, both in Cambridge, England and especially in Göttingen. He actually did his PhD under the direction of Max Born. So he was there just as the brand new quantum theory, quantum mechanics, was emerging. He got to meet many, many of those folks when he was a grad student there. He came back and was hired to teach both at Berkeley and at Caltech, a professor at two universities hundreds of miles apart. Both schools were so desperate to get him, they agreed to let him spend one semester at Berkeley and the next semester at Caltech. It was just extraordinary. And part of his role, what they hoped he could do, was build up US-based strength in theoretical physics, which was seen, I think quite appropriately, quite accurately, as really lagging behind the European schools by that point.
Before the war, he had basically no experience with either experiments or with any large-scale organizations, which is part of what made it so shocking when General Groves asked him to play this very large administrative role for the wartime projects. OK, so as soon as this new facility at Los Alamos began to get built up, Oppenheimer's own former student, his postdoc, Robert Serber, gave a series of initiation lectures for the new recruits who had been asked to come to this place but not told why. It was so top secret, people were basically not told why they should drop everything and move to nowheresville rural New Mexico. So Serber gave what was actually called a quote, "indoctrination course," as if they were joining a cult. There was another physicist, Edward Condon, who took notes and typed them up. And this 24-page document became literally the first technical report of the laboratory. It was classified immediately. It was a top-secret report. This was Los Alamos Report One, of which there would be thousands, in fact, probably tens of thousands then to follow. Informally, it became known as the Los Alamos Primer. Decades later, it was declassified and published. And in fact, you can actually just download the PDF of the original TypeScript on the web. It's not hard to find. So this is from the actual document, the actual primer. On page one, the first thing he tells these people when they meet together in Los Alamos is the following. "The object of the project is to produce a practical military weapon in the form of a bomb in which the energy is released by a fast neutron chain reaction in one or more of the materials known to show nuclear fission." Lest there be any doubt, basically, we are here to build bombs. That's literally the first thing he says in these notes. The next thing he does is very, very quickly go over the back of the envelope order of magnitude estimates for the energy released every time a single nucleus undergoes fission exactly of that we saw last time that Lise Meitner and Robert Frisch had been doing not so long before and that work out in more detail in those lecture notes for Monday's class. There's one really interesting shift, though. I find this pretty fascinating. Serber gets the same answer. But he chooses to write the energy associated with these nuclear reactions not in units associated with chemical or nuclear reactions like electron volts or maybe millions of electron volts. He writes down the energy in ergs. Remember, that's the unit of energy in the human-scale units, centimeters, grams, and seconds. He's now not thinking about individual reactions among one or two nuclei. He's already thinking about a human scale because the next maneuver he does in these notes is say, this is not very much on a human scale. A fly buzzing around your head expends more energy than this. However, that's for one nucleus that's undergone fission. There are 10 to the 25 nuclei in a single kilogram of this stuff. So suddenly, if you could get this runaway chain reaction that he refers to here, if you can get lots and lots of these nuclei each to undergo fission, they'll each release that amount of energy. Now you're talking about some enormous kind of reaction, not just one isolated nucleus that happened to fission, again, right on page of these notes. The next thing he does is compare that kind of energy release with the energy associated with conventional chemical explosives like TNT or dynamite, basically. 
Those were well known to release something like 10 to the 16th ergs per ton not per kilogram of chemical explosive but per ton. So then, again, right on page one, he then takes this ratio to say, this is why we're people have gathered here in Los Alamos. One kilogram of this stuff-- sorry, of a fissionable isotope would give off the energy equivalent of 20,000 tons of conventional explosives. Let me just pause here to say he's used a code here-- they were so worried about secrecy, even though they were in the middle of nowhere-- that in the notes, he never refers to uranium 235. He refers to substance 25. And the code was take last digit of the atomic number, so it's 92, the last digit of the atomic mass, five, that's your code. So U-235 is 25. U-238 is 28. Plutonium is 49, because it's element 94 with atomic [INAUDIBLE] and so on. So that's what Serber writes on page is the reason to have brought all these people in secret to the mesa. Now which material to use? By this point, there were several known fissionable materials. U-238 is actually mostly stable, as it had been clarified by this point. U-235 is the isotope of uranium that is most readily fissionable. However, it only exists in trace amounts in nature. So if you dig up Uranium ore out of the ground, whether in the African mines or in mines out in Western United States or elsewhere, most of the uranium that you'll dig up, the overwhelming majority will be this very stable isotope, U-238. Less than 1% of naturally occurring uranium is of this fissionable kind. And in the meantime, this newest element that Seaborg and his colleagues had made, actually synthesized in the laboratory, plutonium, that can be even more fissionable under certain conditions. And yet, it existed only in micrograms, not kilograms. So these are the challenges that Serber begins telling the recruits. This is, again, his hand-- actually, it's Edward Condon's hand-drawn chart, the chart that Serber showed. This is the level of detail and accuracy in the Los Alamos primer, literally taken from the primer. This is showing the reaction rates for fission in convenient units for these different types of materials. And again, you can see here is uranium 235. Here's the common kind, U-238. Here's plutonium, the four and the nine. So for slow neutrons of the sort that Meitner and Frisch were thinking about, ones that have been slowed by some moderator like in the Chicago pile, the highest reaction rate that had been measured at least was of uranium 235. The highest likelihood to undergo fission was way up here for 235. The problem was when the next nucleus fissions, the neutrons that come out of that are actually very fast. They're at these kind of nuclear energies or at least fractions of the nuclear energy. They're more up in this scale in a million times say electron volts or 10 million, not a tiny fraction. For very fast neutrons, it turns out, plutonium is even more susceptible to fission than U-235. The challenge is, how do you isolate this trace stuff from this, because you want to get a lot of this stuff in one place, kilograms worth. Or how do you scale up this stuff by a factor of a billion from micrograms to kilograms? Neither of these seemed at all straightforward. That's where some of these other facilities then come in. So we'll now look at the Oak Ridge facility very briefly, Oak Ridge in Tennessee. 
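Before that, it helps to put actual numbers on Serber's page-one estimate. This is a minimal sketch using standard constants rather than the lecture's rounded figures: the 200 MeV per fission is an assumed typical value, and the 4.184 billion joules per ton of TNT is the usual convention, neither of which is quoted in the lecture.

```python
# Serber-style back-of-the-envelope: energy from completely fissioning 1 kg of U-235,
# expressed as an equivalent tonnage of TNT.

AVOGADRO = 6.022e23              # nuclei per mole
MOLAR_MASS_U235_G = 235.0        # grams per mole
EV_TO_JOULE = 1.602e-19
ENERGY_PER_FISSION_EV = 200e6    # ~200 MeV per fission (assumed typical value)
TNT_JOULES_PER_TON = 4.184e9     # standard convention for one ton of TNT

energy_per_fission_j = ENERGY_PER_FISSION_EV * EV_TO_JOULE    # ~3.2e-11 J
energy_per_fission_erg = energy_per_fission_j * 1e7           # ~3.2e-4 erg: tiny on a human scale
nuclei_per_kg = AVOGADRO * 1000.0 / MOLAR_MASS_U235_G         # ~2.6e24 nuclei in one kilogram
total_energy_j = energy_per_fission_j * nuclei_per_kg         # ~8e13 J per kilogram
tnt_equivalent_tons = total_energy_j / TNT_JOULES_PER_TON

print(f"{energy_per_fission_erg:.1e} erg per fission")
print(f"{tnt_equivalent_tons:,.0f} tons of TNT per kilogram fissioned")   # roughly 20,000 tons
```

That factor of roughly twenty thousand is the whole point of the primer's first page; the catch, which is where Oak Ridge and Hanford come in, is getting kilograms of fissionable material in the first place.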
Here's an example of this enormous industrial-scale operation under the auspices of the Manhattan Project, scaled up by the US Army Corps of Engineers and now with more and more industrial partners as well. This was literally a classified city. It wouldn't even show up on maps until many years after the end of the war. And yet behind the fence, under classified conditions, they built the single largest factory on the planet at the time, the largest factory under one roof ever; if you walked from one end to the other, the single building was more than a mile in distance. This was to try to separate the kind of uranium isotope that was now really needed, the fissionable one, from the common one. Now, these are both atoms of uranium. So in terms of any chemical analysis, they will be indistinguishable. The chemical properties are the same; they're isotopes of a single chemical element. So people realized right away, and Serber lectured on this as well, that to separate them you have to turn to physical methods, not chemical ones. You have to exploit the very tiny, percent-level mass difference between this slightly lighter version and that more common isotope. So here's an example I'll go through very quickly. There's a marvelous treatment of this by my friend and colleague Alex Wellerstein, who's a real expert on the wartime nuclear projects. You can check out his very brief essay there. So here's what they wound up doing in this plant, in this huge, enormous factory: something called gaseous diffusion. You mix the uranium with fluorine to make a gas called uranium hexafluoride, UF6. This is incredibly poisonous, incredibly noxious. It will burn through many kinds of gaskets and rubbers and metals. It was really nasty stuff to work with, not just for humans, but even for the kinds of engineering parts one would build a factory out of. This was hard to work with. Nonetheless, it had the property that they could heat it up, and in equilibrium, the molecules of uranium hexafluoride that happen to include the rare, lighter isotope of uranium would come into equilibrium with the more common molecules that include the standard isotope, U-238. If they're in equilibrium, their energies should be about balanced; the kinetic energies should be about equal. But that means that the smaller mass of the lighter isotope must be multiplying a slightly larger velocity, that the molecules with the stuff you want will, on average, have slightly larger speeds in equilibrium than the more common stuff. So put the whole gas under pressure, force it through these chambers with a permeable membrane, and then, because you have a larger average velocity for the ones you want, for the lighter ones, they will diffuse through the chamber slightly more quickly. So after a short amount of time, the ones you want, these small black dots, the ones with the fissionable isotope, will have diffused through the chamber a little bit faster than the stable ones you don't want, though not by a lot. The enrichment from doing this one cycle is less than 0.4%. And so the idea was, well, let's just do that 1,000 times, literally, and scale it up like never before with the help of these experienced industrial partners. And so what this building has is literally thousands of these cubic-meter-scale gaseous diffusion units strung one after the other.
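Here is a minimal sketch of that single-stage arithmetic and how it compounds across a cascade, assuming ideal-gas behavior. The UF6 molecular masses (roughly 349 and 352 atomic mass units for the U-235 and U-238 molecules), the 0.7% natural abundance, and the stage counts are standard or illustrative values, not numbers quoted in the lecture.

```python
import math

# Ideal single-stage separation factor for gaseous diffusion of UF6: in thermal
# equilibrium the kinetic energies match, so the lighter molecule moves faster
# by sqrt(m_heavy / m_light) and diffuses through the barrier slightly sooner.
M_LIGHT = 349.0   # approximate molecular mass of U-235 F6, in atomic mass units
M_HEAVY = 352.0   # approximate molecular mass of U-238 F6

alpha = math.sqrt(M_HEAVY / M_LIGHT)   # ~1.0043 per stage at best
print(f"ideal per-stage factor: {alpha:.4f}")

# Each stage multiplies the U-235 : U-238 ratio by alpha at most (real stages did
# somewhat worse), so a cascade of N stages multiplies the ratio by alpha ** N.
natural_ratio = 0.007 / 0.993          # ~0.7% natural abundance of U-235 (approximate)
for stages in (1, 100, 1000):
    ratio = natural_ratio * alpha ** stages
    abundance = ratio / (1.0 + ratio)
    print(f"{stages:5d} stages -> U-235 abundance ~ {abundance:.1%}")
```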
You take the slightly enriched output from here and put it into another chamber, and then another, and another. So you do this 1,000 times. Meanwhile, the other main installation for the Manhattan Project was in Washington State, at the Hanford site. And by the way, a member of our group here, Tiffany Nichols, is a real expert on Hanford, so we should get her to talk about Hanford sometime as well. Hanford, during the war, was also a secret facility, an enormous, sprawling industrial site. Here the main partner was DuPont, although there were many other industrial partners as well. And the job at Hanford was not, first and foremost, to separate the isotopes of uranium, but actually to make more plutonium. So their job was to take Fermi's Chicago pile, take the insights that nuclear chemists like Glenn Seaborg had learned in the interim, and do that at an industrial scale: build enormous reactors that could induce neutron capture within otherwise relatively stable uranium and thereby produce plutonium. And so again, you can see the scale here. Just to put it in context, there are multiple reactor complexes. There was the B reactor, the F complex, and many others. All told during the war, Hanford required staggering quantities of concrete. Just think about the scale of what was going on. OK, let me pause there. So that's a kind of lightning tour of some of the technical things happening at the main Manhattan Project sites. And we'll talk more about what comes next. Let's see. So Fisher asked, by the time the transuranics were actually discovered, did people really know about antimatter or neutrinos? Oh, good. Yes, good, so people had ideas, but nothing like real evidence yet. I'm going to bracket that-- I won't spend too much time on it. But Fermi had actually done a lot of work creating a theory that included neutrinos and weak decay more generally. It was still entirely hypothetical, though, and Fermi himself believed these particles would never be detected. He was very skeptical at the time. We'll come to it, actually: it turns out that the first evidence that neutrinos exist came from these huge nuclear projects, not during the war but soon afterwards. People had to build enormous industrial-scale reactors or weapons, creating lots and lots of these things flying out, before there was any hope of trying to detect them. And so we can talk more about that. But the short answer is, people had ideas about what we would now call neutrinos and the beta reactions, but it was still quite fledgling. Johan asks, was there widespread protest against it [INAUDIBLE]? Very good. That's part of what we should talk about, especially on Monday and thereafter. The short answer is no. If you say widespread protests, an unequivocal no, no way, no how, partly because these projects were all deeply, deeply classified. Many of these sites literally weren't even on a map. You could not drive to the town of Oak Ridge. You wouldn't know it was there during the war, and the same with Hanford and many of the other so-called atomic cities, as they were nicknamed afterwards. So that's part one. Part two, what was going to happen with these things wasn't so clear. And we'll talk more about that soon. Part three, well, let me leave part three to our discussion. Johan, that's a fantastic question. And that's exactly what I want to be able to talk about, at least broach, both with the film and with our discussion on Monday. Good. Muriel shared a screenshot, excellent. Yes, Alex is right.
So part of why Oak Ridge was sited where it was is that it already had access to enormous, enormous sources of electricity. It was really just-- I mean, you saw the photos. There was a huge industrial output. Sarah says, will we talk about radioactive contamination? We will, yes, very good. And Sarah asked, did people worry about radiation from these materials? Again, Sarah, it's a great, great question. They were aware of it. It becomes controversial, I have to say, exactly who knew what when. So even with hindsight and declassification and lots more documentary evidence, it's unambiguously the case that many, many of these scientists and engineers knew there was something to think about with radiation at the time. It's also unambiguously the case that they were quite cavalier, not just with themselves, but also, and I think even more shamefully, with all these workers at these huge industrial sites who were handling extremely dangerous materials with minimal safety precautions or even basic information. There was actually a dissertation on that very topic by a grad student in our own department, in the STS program, some decades ago, again, based largely on declassified documents and so on. So there was knowledge that this radiation existed, that it was harmful to humans. It wasn't clear exactly how harmful at which doses, but it was unambiguously harmful in general. There were some precautions taken, but nothing like what would come later. So that's a huge, huge question, much more of which is learned actually after the weapons are used. There's a long-term longitudinal study of victims of the bombings in Japan, for example, that goes on for years. There are then other kinds of experiments and more controlled studies after the Second World War. And so we will talk about that. There's a second film we'll watch together, a second documentary we'll see in a few class sessions, that looks much more directly at the broader environmental impacts, including the radioactivity associated with these very, very messy projects. And by the way, just one more plug. I showed the book cover. My friend and colleague Kate Brown, who's now also a professor at MIT, wrote this really compelling, very moving book called Plutopia on some of the longer-term impacts of these things, not just during the 1940s but even afterwards. So that's a very important, very good question, Sarah. And we'll have a chance to talk a bit more about that. Were these commitments binding? Good. No. So Johan asks, were people threatened if they chose to leave? No. And yet, there was literally one person, one person who left Los Alamos before the project was completed, Joseph Rotblat is his name, because of what he cited as moral concerns, which goes back to Johan's question. So it wasn't that it was impossible to imagine the consequences of these things. Some people did. Some people thought about it and kept working on the project. Some people said, that's not my problem. Some people said, someone else will worry about it. And one person at Los Alamos said, this is my problem, and I don't like it, and I'm leaving. He went on to found the Pugwash movement, among other things. So again, great question. We'll have an opportunity to talk more about that soon. Let me press on. The last part of class is a bit more brief, but I don't want to run too long. So let me jump in to the next part. These are great, great questions.
So the last part is now, how do people actually construct a device that would explode? How do you make an actual weapon out of these esoteric-sounding things? So this last part is actually making bombs, very briefly. We saw each time a single nucleus undergoes fission, a couple extra neutrons are released. The problem, which is also right in Serber's initial primer, they knew this right from the spring of 1943, was that if you have too small a mass of this fissionable material, then on average, too many neutrons will be close to the edge. And so they're more likely to diffuse right outside of the active region than to stick around and cause more fission. So you have something like a critical size. If you have a larger volume-- the same density, same properties, just more of it-- then on average, most of the time, when a new neutron is released, it'll be more likely to encounter another target, another nucleus, rather than be close to the edge and kind of fly out and fizzle. So this introduces the notion of a critical size. From there, you can then calculate a critical mass. There's a huge story here you can learn some more about in Peter Galison's really fascinating book called Image and Logic, also this really amazing resource written by a team of historians and scientists, several of whom actually applied for and received top-secret clearance so they could actually read classified materials, even though what they wrote about it would then be subject to review-- it could only include what was safe to release. So you have some real insider experts who worked on this other book called Critical Assembly. And part of that team was Lillian Hoddeson, who wrote the main piece we read for today. So what was happening at Los Alamos was a series of hybrid computations carried out by human computers, almost entirely women. They were usually the wives of staff. So the people who were first hired to work at Los Alamos were almost exclusively men. We'll talk more about the kind of gender dynamics in the field around this time; we'll see that more squarely in a lecture or two. So most of the people who were trained in science or engineering in the US in this time were men. Many of them were invited to relocate to Los Alamos with their families. So there were many spouses, mostly women spouses, who came along. And many of them were then able to pick up work at the lab as computers-- that is, the people themselves were named computers. They were usually using these handheld mechanical calculators, not programmable electronic machines. Those were just at that moment under development. So you have these kind of hybrid human-machine computing teams that would break down complicated iterative calculations to try to do things like calculate the likelihood for a neutron to leave a region of active material or induce fission. Will it drift and diffuse outward or not? So with this series of early, what we could call, numerical simulations, just painstakingly slow, the scientists were able to estimate the critical size above which you're more likely to be in this regime than that. And that was about a radius of nine centimeters. If you had purified, all-fissionable U-235 and you had a sphere of radius nine centimeters, you'd be more likely to have that thing undergo a runaway chain reaction rather than this loss of neutrons from diffusion. That size then translates to a mass because you have a constant density of the metal there.
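As a quick aside, not from the lecture itself: here is a minimal back-of-the-envelope sketch of that last conversion, turning the quoted nine-centimeter critical radius into a mass for a bare sphere. The density of roughly 19 grams per cubic centimeter for uranium metal is my assumption, not a figure from the lecture.

```python
import math

# Back-of-the-envelope sketch (assumed values): mass of a bare sphere of
# U-235 with the roughly nine-centimeter critical radius quoted above.
critical_radius_cm = 9.0         # critical radius from the wartime estimates
uranium_density_g_cm3 = 19.0     # approximate density of uranium metal (assumption)

volume_cm3 = (4.0 / 3.0) * math.pi * critical_radius_cm ** 3
critical_mass_kg = volume_cm3 * uranium_density_g_cm3 / 1000.0

print(f"sphere volume: {volume_cm3:.0f} cm^3")                 # about 3,050 cm^3
print(f"estimated critical mass: {critical_mass_kg:.0f} kg")   # about 58 kg, i.e. the 'about 50 kg' figure below
```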
So the critical size was related to a critical mass of about 50 kg, over 100 pounds, of pure U-235 at a time when this existed in tiny trace amounts. So what had already been figured out, and Serber lectures on this in the primer, is you can actually get by with a much smaller size. You could get by with a size closer to this if you surround the active material, the fissionable material, with something called a tamper, a very heavy metal that is very inert to nuclear reactions. So it will neither absorb neutrons nor undergo fission. It'll basically just act like a mirror and bounce those neutrons back in-- a very heavy, very inert, stable nucleus, and they had ideas of what that might be. That was called the tamper: just put that heavy metal around the active region. Then you're going to reflect these neutrons back in. You can shrink this critical size down by a factor of about three. And then the critical mass becomes about kilogram scale, not tens or hundreds of kilograms. So now the question is, how do you get this thing to actually undergo a runaway chain reaction? So now you know roughly how much stuff you need for that critical mass. How do you get it to undergo this very rapid energy-releasing response, shall we say-- how do you get it to blow up? So here again, all of that was identified already in the primer. They knew about this very early. The idea was to get two subcritical pieces so you're not in danger of either of these pieces, the shaded regions here, undergoing a runaway chain reaction, each being too small. Neutrons, on average, will diffuse out before they cause too many fissions. So you get two subcritical pieces of this enriched fissionable material and literally shoot them together like from a musket, from a gun. So they knew there were existing army guns actually in use that could get muzzle speeds for projectiles of, I guess, hundreds of meters per second or more, so they could get these two subcritical pieces to be jammed together to make one critical mass within a tiny fraction of a second. You'd also need to actually induce then-- once they're together, you have to inject at least one neutron that can start this runaway chain reaction. They were already thinking about what are called initiators at the time of the primer. The idea was to actually have a natural alpha emitter, something that is naturally radioactive like radium or polonium, glue that onto one of these pieces, attach it to one of the projectiles, and have beryllium or some other target on the other piece. It would basically be redoing Chadwick's experiment, from which he had identified neutrons. If you have alpha particles smacking into some materials like beryllium, they will produce neutrons. So just do that really fast by gluing the two ingredients of Chadwick's experiment into these pieces in the middle of a bomb. I found that fascinating. They were so confident about this method. I just find this mind-boggling. They were so confident, they literally never even tested it. The first time any device ever underwent a runaway chain reaction from this U-235 gun-method assembly was when it was used against a population in the Japanese city of Hiroshima. And you'll see, of course, much more about the actual use of the weapon and its consequences in the film. So the very first time a device like this was even exploded at all was actually in a military usage on August 6, 1945.
Many of you might know, we just passed the 75th anniversary of these bombings this past summer. So here's what is now called the atomic dome. For some reason, it's still not so clear why, this one building near ground zero was mostly destroyed and yet this kind of dome structure, the skeletal girders of the dome, survived. That's now called the atomic dome. It was actually a kind of industrial management hall in the middle of Hiroshima at the time. So that was the gun method, the one they didn't even test. The other method was actually much, much more complicated. And again, you'll hear more about this in the film. And we can talk more about it soon too. This became a major challenge. And this is the subject of Lillian Hoddeson's piece that we read for today: the other kind of fissionable material, the material that was even more likely to fission than uranium, was this plutonium. But it had a high spontaneous fission rate. This was a naturally unstable element. That's why it doesn't exist on its own on Earth. So it is more likely to blow itself apart, in something other than a runaway chain reaction, faster than you can get two of those subcritical pieces to join. No matter what the muzzle velocity was, they came to recognize, only by summer of 1944, well past the start of the laboratory, that any of these assembly methods for this highly unstable plutonium would be too slow. So again, as Hoddeson tells us in the reading, what they wound up doing was pursuing something called implosion. This became a really very, very significant technical challenge. It leads to all kinds of moral challenges as well. I don't want to downplay those. I just want to say what occupied many of these folks during these very hectic days of the war. The idea was to then get a tiny little plutonium core, actually have it separated into tiny little subcritical pieces but near each other, surround that with a tamper, again, to reflect the neutrons back in, but then surround that with multiple kinds of conventional explosives that were shaped into what became known as shaped charges. So you want to set up-- here's the plutonium fissionable stuff here. Surround it with different blocks that are shaped very intentionally with different burn rates. So this kind of basically TNT, one kind of chemical explosive, would have a certain burn rate. You have a different burn rate here, and a different burn rate here, so you could actually shape the ingoing wave into a spherically symmetric shock wave that goes in instead of going out. So you want to, with very high precision, induce an ingoing wave that will then crush the plutonium core so that all these subcritical pieces are condensed into a single critical mass even more quickly, much, much more quickly, than any of those kind of muzzle-velocity methods of gun assembly. We can talk more about that. But that was the idea. Now, that created huge challenges, both theoretical and experimental. How do you calculate the appropriate shapes? How do you actually mix these materials to the appropriate purity? Lots and lots and lots of challenges there. Really, the leaders were not so confident this would work on its own. This they did a test of. This became known as the Trinity Test. The film that you'll watch before Monday is called The Day After Trinity. It's referring to this now-famous test called the Trinity test, which happened on July 16, 1945. You can see, here's the test bomb about to be exploded, after Norris Bradbury leaves the assembly. So this was arranged not too far from Los Alamos.
Plutonium was so rare-- remember, they were just barely eking out kilograms' worth after that entire industrial effort at Hanford-- that at first, the idea was to surround this test bomb in an enormous, very thick steel container literally called Jumbo. They had to build a special railroad carrier just because this thing wouldn't fit on standard tracks, and get it from where it was made, I think forged in Pennsylvania, to New Mexico ahead of time, so that if the bomb didn't work as expected, they could scrape off this very rare plutonium and try again. In the end, they wound up not using it. But that just gives you an idea of how experimental this was. Here's one of the rare color photographs of the Trinity test. It was so powerful, it fused the desert sand into glass. There was a special material that was dubbed trinitite, glass from the Trinity test, that covered the desert floor from the unleashing of these extraordinary forces. So three weeks after that test, and just three days after the surprise bombing of Hiroshima, a bomb of that kind, nicknamed Fat Man, a plutonium implosion bomb, was then dropped on the Japanese city of Nagasaki. Again, we just passed the anniversary. And you can see, just as a quick version here, there's much more we can talk about, the kind of impacts of this nuclear weapon on the city there. So many, many more questions to think about of the sort we were already beginning to raise in the chat now. And I want to-- these are important and very difficult questions. And we're going to take our time with them on Monday. Just some things to think about when you do watch the film: what got people to work on this? Did their own motivations change over time? Why were these things used? How was the decision made to use these new weapons? What really was the impact militarily or strategically on the course of the war as imagined then or now? How did people beyond these projects react once the secret was revealed? And so on-- many, many hard questions to ask there. So I'll stop there. Good, Alex shared some good resources here. Scott Manley's series is indeed excellent. And also, I encourage you to go check out Alex Wellerstein's blog as well, tons of stuff. So I'll pause there. Any final questions before we turn to our discussion together on Monday? OK, I'll pause there. Please remember, paper two is due this Friday. Good luck with the paper. Enjoy the film. It's a hard film. But I hope you'll appreciate the film, I should say. Watch that on your own. And then for those who are interested and able to spare the time, we'll meet together at our usual Zoom link Monday at 1 PM Eastern. Take care everyone. See you soon. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_18_Coldwar_Classroom_Teaching_Quantum_Theory_in_Postwar_American_Physics.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: So today, we're really kind of making a transition in the material from the overtly wartime focus we've had for a couple of classes in a row. And now we'll start moving toward the last part of the semester, asking about some implications or some effects from that very dramatic series of developments during the Second World War on the course of physics, and physicists, and research, and teaching in the decades after the Second World War. So today's lecture is a kind of pivot to help us think about moving from the focus squarely on the wartime projects toward a longer Cold War situation. So for the main lecture today, as almost always, it has three main parts. The first part is to talk about this phrase that got used pretty early on after the Second World War, and historians and scientists still tend to use it, this phrase called big science. So the first part, we're going to talk about what have people meant by this phrase big science. What are the characteristics that we can identify from our readings? And we already saw hints from our focus on the wartime projects. And that will be the first main chunk. I think that will be the longest part of our discussion today. And then two shorter follow-ups that begin to help us think through, what were some of the implications of this period of post-Second World War big science for the study of physics, largely in the United States, though we can talk about some comparisons or examples from other parts of the world? But most of my own research has been focused on developments within the United States. That's what I know best, and that's what I'll mostly focus on for the rest of today. So who was thinking about this question of a term that was also very widely used at the time, "scientific manpower," with what is, to many of our eyes today, an inescapably gendered notion of manpower, a certain kind of person who was being sought out to be trained up during the Cold War? And then lastly, the part that I wrote about for one of the assigned readings for today, can we trace through intellectual or pedagogical aftershocks or implications of these very significant changes in priorities and infrastructure? What does it mean for people entering the field of physics during this new and kind of fast changing scene? So that's our plan for today. So one of the classic and still kind of worth-reading articles on this topic of big science was published now more than 30 years ago by my colleague Paul Forman in this journal that used to be called Historical Studies in the Physical and Biological Sciences. Now the journal is called Historical Studies in the Natural Sciences. So Forman's article is very, very long. The article is nearly 90 journal pages. It's practically a little monograph unto itself. What Forman was tracking was basically anything he could count, as a way to characterize change over time for the US-based physics community. Much like me, Paul likes to count stuff. He's very good at counting things. And this is just a remarkably rich article, I think a very creative use of lots of different sources to put together what is otherwise a pretty consistent picture that Paul was really among the earliest to paint with some kind of comprehensiveness. So I've taken this plot from his article.
Anything we might want to count, the membership in the American Physical Society, the number of papers presented at the meetings or something I've done. Look at, say, the number of articles published in the main journals. Any of these things we can count starts growing exponentially during and especially after the Second World War. And the one that Paul really focuses on and what often gets associated with the term big science was budgets, was funding rates. And that's this third, that's his curve labeled number three in this plot of his. The funding for what at the time was often called basic research, meaning it wasn't to build specific devices. It wasn't tied to very specific so called mission oriented programs. It was for open ended, often academic or university based research in the physical sciences or in physics even narrowly. And one of the amazing things that Paul identifies is that over just a 15 year period from just on the cusp of the Second World War to just really not even a whole decade after, that 15 year period, the budget just in the United States for funding the so called curiosity driven or basic research in physics, the budget grew by a factor of 25. Not a 25% increase. That already would seem generous these days. It grew by a multiplicative factor of 25. That still boggles my mind. And that includes the extraordinary escalation during the war and then as we see, it never slows down or doesn't slow down for quite some time, even after the war. So the first thing that Paul helps us identify is this enormous change in scale for things like funding for research in physics. The next thing he does, this is really where the article gets especially detailed, is not just the scale of funding, but also the source. That was also changing quite dramatically during these immediate years after the Second World War in the United States. So four years after the war in 1949, 96% of the funds that represent this kind of curve here for academic basic physics research were coming not just from the US federal government. That was already new. But coming from the defense or military related branches. That included the Atomic Energy Commission, a nominally civilian agency that nonetheless was responsible for the nuclear weapons complex. So if you include AEC as defense related as well as what became called the Department of Defense, 96% of this exponentially growing amount of money was coming from now a unique source. And what Paul finds, I find this even more interesting, five years later in 1954 as the total budget has grown exponentially, the proportion coming from the defense related agencies of the US federal government, those had actually grown to an even higher proportion from 96% to 98%. So why is that especially noteworthy? Because in between these two dates starting in 1950, the United States established for the first time the fully civilian National Science Foundation. So even after there was a different non-defense oriented funding mechanism to support basic research in areas like physics and the life sciences and eventually some social sciences, even after there was a thoroughly civilian mechanism, nonetheless both the amount and the proportion of funds coming from defense related agencies was growing exponentially during this period. So often when people use the term big science, even back in this time period itself, let alone by historians since, they often mean big budgets. 
Kind of unprecedented expenditures often from federal, from nation state budgets, not just from local universities or associated industries. But enormous rise, fast rising budgets to support basic research across the sciences. That was big science kind of type one. And indeed here at MIT, that was absolutely the story of MIT in particular as a kind of microcosm for the broader trends. Here I'm showing the annual operating budget for MIT in inflation adjusted constant dollars. You can compare like with like. Here's the very end of spending on the radiation laboratory, the radar project here on campus. There was a brief demobilization after the war and then a kind of exponential rise, again, for the better part of two decades. That was by no means unique to MIT. That's reflective of these broader trends that Forman had identified in that long ago article. Now, what were people doing with that money? I love this photograph. This is from MIT's own once quite beloved synchrotron only recently decommissioned. And this is on the facility on Vassar Street that will soon be torn down to make room for the new building for the new College of Computing. But starting in the early and mid '50s, MIT had its own particle accelerator, this particular kind of device called a synchrotron. Here's a grad student working on it with his bow tie. I just love this. You're going to do your work with your sports jacket and bow tie. I don't know if that was because this person knew a photographer was coming or just how you dressed as a grad student in the '50s. I think it might have been the latter. Anyway, so we start getting larger and larger equipment. There's no longer benchtop experimentation. This is beginning to grow rapidly. It's about to engulf at least this one graduate student. And of course, that became just a pale, pale imitation of the kinds of machines that were also becoming more common in other parts of the country also supported by the federal government, almost overwhelmingly by the Atomic Energy Commission. This was, for a time at least, the largest in the world and certainly in the United States called the bevatron, or the billion electron volt accelerator at Berkeley. And here again, you see an individual working on it with necktie in the lab. This is a complete circle. This is a ring of magnets to accelerate the particles in a circle. So you can get a sense of the scale. They're really filling a kind of factory sized building. So what happens when the United States built many, many of these from the kind of modest sized MIT synchrotron up to these behemoths? Physicists were able to accelerate nuclear particles to unprecedented energies, rev them up by those electromagnets, basically, smash them together, and very soon they began finding an exponentially rising number of new particles. So with these larger and larger machines, physicists could probe energies, excuse me, interactions among particles at higher and higher energies. And not quite every single time they tried, but with remarkable rapidity, they began finding more and more new or previously unidentified particles zooming out of these collisions among nuclear particles. And so even by the early '50s, the joke was that basically every single month the community had discovered some new particle. And that's borne out by the data that a different colleague of mine compiled some years ago showing the number of published discoveries of seemingly new or elementary particles growing, again, exponentially over time. 
So the second meaning that's often attributed to this very common phrase of big science is big machines. Kind of conducting research on a scale of instrumentation that, again, had very few, if any precedents before the Second World War. So big funding, big budgets, and also big machines. There's a third kind of element of big science that really I've been most interested in, most curious about for a long time. I've written a lot about this in my historical work. And the third aspect of big science that I think often got overlooked in the early days was actually big enrollments, an enormous training mission to train lots and lots and lots of specialists in the sciences often in physics or the closely allied physical sciences in particular. So here's, again, a photo kind of close to home. Here's Tony French, who I think only very recently passed away. He lived to just about age 100. He lived a very, very long life. He actually had worked on the Manhattan Project originally himself and then made a very long, distinguished career teaching physics here at MIT teaching in a room that I'm sure many of very well, 26100. This is when this kind of picture for undergraduate physics classes no longer becomes unthinkable but in fact becomes the norm. Again, really across the country, not only at MIT. So the third kind of aspect of big science that I found most interesting, most wanted to dig into, was big enrollments. This kind of training mission. I want to take a few minutes now-- oh, excuse me. The last point here again to bring it back home to MIT. Again, MIT's enrollments grow exponentially during this period exactly consistent with the broader nationwide pattern. This is also the first period in the decades soon after the Second World War where the number of full time graduate students catches up and actually comes to exceed the number of undergraduates at MIT. Right now they're just about 50/50, but it had been a very small proportion in MIT's earlier history. Again, consistent with many places across the country. So you have a very rapid expansion in this mission to train lots and lots of new specialists, often in the physical sciences and related engineering fields. OK. So this plot I look at all the time. I stare at this picture really often. And I think most if not all of our teaching assistants have stared at this graph also, because I can't get away from it. So this is a plot of the number of physics PhDs granted in the United States per year starting in 1900. And this version of the plot goes through 1980. We can read a lot of information off the kind of structure of this plot. We can see that the enrollments fell very quickly during the Second World War, which makes sense. People were drafted. Many people who were drafted but were in physics left school to go work on either the radar project, the Manhattan Project, or dozens of other related military projects. During the war, enrollments plummet. Then they start growing very rapidly as soon as the war is over. They kind of reach a kind of what felt to many people at the time a saturation or a plateau. But then they come take off yet again exponentially very soon after the launch of Sputnik, the first artificial satellite launched by the Soviet Union late in 1957. So we have these periods of exponential growth and expansion. 
In fact, during the second wave here starting in the late 1950s between this take off and the peak, the number of university departments within the United States, physics departments, that began offering a PhD degree doubled. So it wasn't just that the same departments got bigger. They really did. But even the capacity across the United States to make more and more trained specialists doubled as well. The number of PhD granting physics departments doubled between 1958 and 1960 in the US. So many, many fields of study were growing rapidly after the war. This was a period of the so called GI Bill, which gave support at least to some returning veterans. Or now historians have identified some really quite extraordinary racial disparities and frankly just biases in the way the GI Bill was implemented. It was originally designed to help all US veterans of the wartime Armed Services get extensive support for things like going back to college, including also going back for advanced degrees beyond undergraduate degrees. So every field of study in the academy across the United States grew exponentially after the war. But look at these exponents. The rate of growth for physics was actually twice as high as for all other fields combined. So to make it easy to spot, I've taken all PhDs, that's this green line, and I've just normalized it to the same value as physics PhDs in 1945, just so we can see rates of change. I could have made a log plot. But anyway, this is just a way to say what's the steepness of this curve compared to that curve. So physics was growing twice as fast during a period of general exponential growth. Likewise, all the fields went through some contraction. None fell nearly as sharply or as dramatically as physics following this peak. This was not simply demographics. This wasn't just more kind of college aged people in the community. In fact, those trends were exactly out of phase. So this purple line shows the proportion of the US population in what we think of as a kind of common age range for graduate school. The bulk of PhD students would have been in the range ages 25 to 29 at the time. That proportion was actually falling during this exponential rise and then rising during the crash. So this wasn't just more baby boomers entering higher education. That's actually out of phase. The baby boomers enter higher education later. This is growing like gangbusters. It points instead to some very intentional shifts in policies and incentives, as we'll talk a bit more about today. But before we go on to say what was happening at universities and congressional funding and all that, I want to pause a bit more on this photograph and say, who is filling those seats? If we see, it's not only Professor French up front in these familiar multistage blackboards, which I personally dearly miss. I wish I could get back to those blackboards. But look at the folks filling the room. And even from a quick glance, we can see they mostly come from one demographic type. And this was, as I'll say in the next few slides that we'll talk about, that was not unique to MIT. It was not unique to physics, but it became especially accentuated during this period of exponential growth in the study of physics. More and more, the field narrowed in the enrollment trends around basically white males, whereas that had been not always the trend before and not exclusively the trend now. 
But in this intermediate period, it was really, really a period of real consolidation or contraction of the range of the types of people who were otherwise flooding into physics. A couple of examples that I wrote about some years ago in an article that I wrote up. And other historians have found similar things in their own archive work. Here's an example that I found of a letter of recommendation being written by a Harvard physics professor on behalf of his recent PhD student looking for a job at one of the new national laboratories. It's a letter from 1954. The student had a traditional sounding Japanese last name. The letter goes on to explain this was a child of Japanese immigrants. The student was actually born in the US in Honolulu. But nonetheless, the letter writer felt the need to emphasize that despite this kind of Japanese sounding name, the applicant for this job actually was not just a US citizen but a kind of honorable US veteran who had fought in the US Armed Forces during the Second World War. It had to spell that out because of a kind of anticipation of a kind of anti Japanese sentiment so soon after the war. There are other notes I found in Berkeley's physics department archives from 1950, a kind of way to try to figure out who should be hired as the new faculty, some new faculty members in the physics department. And these are these kind of informal handwritten notes that mostly emphasize personal appearances using some pretty kind of flagrant stereotypes. They often talk in these notes about Jewish noses, Jewish features, Jewish type, whether the person who looked Jewish was or was not too pushy. And that got much more space in these notes than what the actual topic of research was or what kind of teacher the person might be in the classroom. We saw some of that even in some lectures ago about a kind of widespread anti-Semitism in many, many US universities before the war. It certainly didn't vanish after the war. Here's another example also it turns out from Berkeley. This was also not unique. The department head was answering a kind of query from I think in this case a Dean or someone like that. I can't remember. And the department head for Berkeley's physics department says, we have in the department head's words, no minority group problems in the department. So far as race is concerned, we have never yet had a Negro grad student in the department, hence that particular problem, so called problem, has never arisen. I just find this extraordinary. Berkeley, as I think you know, was then, is now a public University in a very dense metropolitan area that also was at this point the country's largest graduate program in physics. They had well over 100 grad students at a time enrolled and that number kept rising during this period. And yet the department head could state, I don't want to say proudly, but could state matter-of-factly that they've never had a so called minority problem because they haven't had any minorities, at least no African Americans in the department. I just find that extraordinary. One last example. In case these were a little too subtle, I don't think they're very subtle, here's another one I found in the archives some years ago. A job ad for a new physics instructor at the US Naval Academy in Annapolis. And it actually said in the ad things that people literally can't write anymore. It said literally don't even apply unless you are quote "white male and an American citizen." 
So this was a kind of assumption throughout all of these notes that the people entering the field either as grad students or as young faculty would be male, they'd be men, but of a particular kind of background. And then we can ask what about people who weren't male? And here again, we had the reading from Professor Evelyn Fox Keller. And I just want to put that in a bit more context as well. Also again, research I've done and many other studies by now have found very similar things. So throughout the 1930s in the US physics community, women had accounted for about 1/6 of the students who earned undergraduate degrees in physics each year during the '30s, so before the Second World War. Not 1/2, but about 1/6 were getting undergraduate degrees. And that proportion fell by a factor of 4 during the postwar decades. So instead of 16% each year, it fell to 4% very, very rapidly after the end of the Second World War. Meanwhile, the proportion in the US of women who had earned PhDs before the war, PhDs in physics, had never been large. It had averaged only about 4%, held pretty steady throughout the '30s at 4%, 1 out of 25. And that already small proportion fell in half again during that postwar decade. So from a trickle to barely even measurable practically in terms of just the throughput through these now large, booming departments. There's also this part I found even more interesting, troubling. There was a kind of ambivalence among many physicists about what to do about women in or near or around physics. If there weren't very many women who were students formally in physics, what about the male students who were assumed to be heterosexual and having relationships with women? What about the women who were dating or married to the male physicists, either young grad students or young faculty? And again, there's a kind of back and forth. Some people say it's great that so many male physics students are married because it gives them stability. Now they can focus. I don't know where they get that idea from. That was one idea. On the other hand, you see these things like the women are cast as being too demanding. The wives want the husbands home early. I wrote a lot about these advertisements for physics positions in the main kind of trade journal of the time, Physics Today, which were mostly advertising the kind of great, comfortable, suburban middle-class lifestyle rather than, we have the best equipment, we study really interesting problems. It was all about how the male physicist, the so-called breadwinner, can make a nice life for the family while the wife presumably stays home with the kids. That was how you would advertise to get the best physicists. This photograph I love. This is from, again, Berkeley's archives. This was a get-acquainted tea for the wives of new physicists who had joined the Lawrence Radiation Laboratory. And what I love is that the greeting line here is the wives standing in rank order of their husbands at the lab. So I can never remember if it's here or here. But one of the endpoints was the wife of the director of the laboratory at the time, and this was like the wife of the associate director, and so on. Meanwhile, John Slater, who was for a very long time the department head of physics here at MIT, really helped build the modern department and did many things that he set in motion that we now kind of take for granted.
On the other hand, he was also capable of writing things like this in the late '60s in Physics Today, where he's not so excited about the presence of so many wives just seemingly distracting the male physics grad students. So he writes that present students, by which he means male physics grad students, find it harder to settle down to work. Wives and babies take up a lot of time that my generation put into physics. There's very little evidence of that-- marriage rates actually weren't very different at that point-- but this is his nostalgia. The wives, it is true, helped to type their husbands' theses. But in the older days, the necessity of doing this ourselves made us learn typing. I mean, that one just kind of makes me laugh. It's not even a benefit to have secretarial support. The men should learn how to type on their own. He's just frazzled by the fact that women are not studying physics but they're also not somehow leaving the male physics grad students alone. It's really remarkable. So I find that helpful context, then, when we come to this reading that we had for today from my colleague, Professor Evelyn Fox Keller, who is a remarkable scholar and has gone on to win more awards than I could list. She retired from MIT a couple of years ago. So she wrote this reminiscence kind of later in her own career of her experiences as a grad student in theoretical physics at Harvard starting in the late 1950s. I think she entered around 1958, give or take, if I remember correctly. So at that time, again, this is based on my own archival research, Harvard's department also by that point had grown to have about 100 PhD students enrolled total. Not 100 new ones per year but across the approximately five years of grad school, they had about 100 at a time. And during the 1950s, the number of women PhD students in physics at Harvard ranged between three and seven. So it was fewer than 10% in any given year. And this is something that Evelyn writes about in her piece and I'd found independently in the archives. The women who entered grad school to study for a PhD in physics at Harvard would apply to Harvard's department of physics. If they were accepted, then their files were transferred, in a kind of make-believe shell game, to the Radcliffe Graduate School. At the time, Radcliffe was, of course, a separate all-women undergraduate institution that did not have its own physics department and did not grant PhDs in physics. So the women applicants, like the male applicants, would apply to Harvard's department of physics for graduate study. And then their papers, their files, would literally be segregated to be shipped off to be physically housed in a different building. And as Evelyn recounts, that actually could matter. It wasn't just that they were treated differently. That could mean they wouldn't get certain announcements for when the final exam would be for a graduate level course, as Evelyn recounts in her piece. Of course, she talks about many other microaggressions, what we might even call mezzo aggressions. Some of them weren't so micro. And the kind of climate or atmospherics in the department at the time. So I just want to give background: she was not alone, and other archival materials from the time suggest similar kinds of experiences for other people in the field.
The American Institute of Physics compiled data in the early '60s and they found, again, maybe not surprising what we now know today, that the few women who did pursue careers in physics were paid much, much, much less than men in the field who had achieved the same level of education. And here are the other things that I found in some of my own archival work that still frankly kind of haunts me. There was a tradition in many departments, at Harvard's department and many around the country, including University of Illinois Urbana-Champaign, of having these informal skits. Sometimes the grad students do a skit to have a gentle roast to tease their own faculty. Sometimes the junior faculty do a skit to tease their senior faculty, which is not a good idea if you're tenure track. Anyway, so I found these transcripts of these faculty skits from physics from Urbana in the early '60s. And they were joking one year, seemingly innocently, just kind of blowing off steam, about how they should handle admissions to the PhD program. And they said the male applicants should submit their credit ratings. Do you have a strong FICO score? And the women applicants, excuse me, the girls, as written there, should submit photos of themselves in bathing suits and give their critical measurements. And again, that wasn't unique. You see the other ads from physics today for a kind of optics manufacturer that has really good polarizing lenses. And so instead of just saying we have great equipment and cool research projects, they put a so called pinup girl woman in basically sort of like a bathing suit to show off the optics of the company. So again, this was obviously not unique to physics departments in the United States at the time. It's reflective of but also kind of amplifying some very pervasive kind of cultural stereotypes or notions to which universities were hardly immune. And we see a kind of, I think, a kind of amplification of these things in physics departments during exactly this period. I find Evelyn's piece especially evocative of that, though hardly unique. So my last thing to say on this first part about big science meaning big enrollments. So it's really critical to keep in mind these were enrollments of a certain kind. Enrollments were growing exponentially faster in physics than in any other field across the American academy. And yet it was really as the numbers were growing, the demographics were actually winnowing very, very, very quickly and looked nothing like either what had come before or indeed what has slowly come to open up since. So let me pause there and ask for some questions and then we'll go to the next part. Any questions on any of that? DAS. Why is there a sawtooth pattern of the number of papers released? That might be referring to that early plot from Paul Forman about the number of papers at the meetings. I think there were a lot of disruptions for things like the Second World War and the Korean War. I'd have to go back to the exact dates, but I think there were other kind of worldly events that would affect the kind of annual attendance. But I can go back and double check on that. OK, other comments. Benjamin asks, did I just say that physics were expected to be in relationships? It's really interesting. So sometimes department heads would celebrate that more and more of their, again, overwhelmingly male grad students were getting married to women. And they thought this was adorable. They'd have little parties. 
The Berkeley department head called it an outbreak of marriage-itis, as if it was like a disease, but he was cute about it. We should have a party, we'll have cake. So sometimes they eyed this very as a cause for celebration. And other times they say no, no, no. Our students are too distracted. Back in my day, all we did as a kind of monkish thing, they would say, we were kind of celibate monks and only studied physics, which is not borne out by the existing data at all, but it was how it was often cast by not all but by some physicists who were then later in their careers thinking back to their own experiences from the '30s. Fisher, you say on the graph physics PhDs over time, what caused a massive drop? Yeah, we'll get to that. I would think Vietnam is correct. That is a large, large, large part of the answer. But we will talk about that actually a little bit later in today's class. It's a great, great question. Any other questions about that? I mean, one of the main takeaways is when people talk about big science, that might mean big money. And remember, it's not just money, budgets, but actually the source. And the US had really collapsed to these defense agencies at the federal level. Big machines. And I love those pictures of these enormous cyclotrons and synchrotrons swallowing up the people wearing their neckties. And then big enrollments. That enrollments part I really think we have to keep the asterisk in mind. Some people wanted lots and lots and lots of people to study physics. Let's ask a little bit more about who. Lots of what kinds of people? And so I should say that last part I meant to queue up. I have some ideas about why it was so narrowing demographically or stereotypically during that period, and that does have to do with the part we'll talk about next in class. So I don't think it was either accidental or unexplainable why that demographic narrowing happened amidst the exponential growth. And that's also I think at least in part a kind of Cold War story. DA says that it might have been Slater's own lifestyle. I read somewhere that he was not very social. That absolutely could be. There's a lot of personal idiosyncrasy to bear in mind with these reminiscences. But as I say, sometimes it's just fake nostalgia or real nostalgia even though it was never quite that way at the time. We can now independently check. And also it is the mixture of the usual kind of play of personalities and who's recalling what, when, and in what context. I absolutely agree with that, DA. I think that's exactly right. OK, let me press on to the next part. I did not assign a reading squarely in this next part, but I have written quite a lot about it. So if people are curious, I'd be glad to point you to some more readings if this just kind of gets you interested. So let's talk about this term manpower. And by now the man part might not be so surprising. And I do think that's relevant for this next phase to help us account for this really extraordinary, unprecedented growth in the numbers of people entering the field. Manpower was originally a military term. It really meant things like so called boots on the ground, the number of, say, members of the armed forces who could mount an invasion such like that. And this kind of military force term, manpower, was very rapidly taken over to describe what seemed like a very pressing at times kind of hysterical concern in the United States to make lots and lots of new scientists. That almost always meant physicists. 
Make lots and lots of new scientists so you could expand the ranks of so-called scientific manpower. Here's one example. Many examples beyond these. Henry Barton, shown here, was at the time the president of the American Institute of Physics very soon after the war. He was saying that in a time of national emergency, including this period after the war, this unsteady, developing Cold War with the Soviet Union, this country would think nothing of spending $1 million to survey and conserve a scarce commodity like natural rubber or tin. In other places, people compared physics grad students to pigs and cattle, which I thought was telling. Highly trained and able human resources, viewed as a commodity, are far more important. So he was urging the federal government to spend even more money or at least help the AIP do very extensive surveys of the kind of capacity within the country to make lots and lots of newly trained physicists very quickly. Likewise, Henry DeWolf Smyth, department head of physics at Princeton and an accomplished nuclear physicist, was by this point a leading member of the Atomic Energy Commission as well. And he gave a series of speeches and lectures on this topic to Congress, in public lectures, to civic groups. Again, saying things like scientific manpower was a war commodity, a tool of war, a major war asset, and hence had to be stockpiled and rationed, again, like rubber, tin, or gasoline during the recent war. And here he's referring to scientific manpower, by which he then goes on to mean things like PhD students in physics. So we come back to this plot that I was showing earlier, the PhDs in physics over time. Fisher already noted there's this very precipitous fall around 1970, '71 after it peaks. Let's look first, though, at this unbelievable rise. And I showed you earlier this is rising faster than any other field in terms of rates of growth across the American academy. The feature that this curve looks most like, at least to my eye, is actually something economists call a speculative bubble. It looks like a stock market crash. This exponential rise that is clearly not sustainable followed by an exponential crash. We see this happen on admittedly shorter time scales but with the same kind of characteristic curve all the time. So that got me thinking: how have economists or economic historians or sociologists described these financial or speculative bubbles? And is there anything we can learn from their treatment of these kind of characteristic patterns of prices of commodities that might help us make sense of this very unusual, very stark pattern in the training of young physicists in this country? So there are some lovely books. There's a book that I quite enjoyed by Robert Shiller, who's an economist at Yale. He actually received the Nobel Prize in economics a number of years ago. There's also a very cool book by my colleague Donald MacKenzie, who's a sociologist of science and technology. Many others we could point to beyond these that look at these kind of characteristic financial or speculative anomalies. Although unfortunately, they're not so anomalous. They happen kind of often. The price of some object gets bid up all out of proportion to any valuations that otherwise might have made sense, until the price becomes simply unsustainable, followed by a crash. Not just a market readjustment, but a catastrophic fall. And these have been identified at least as early as the 17th century.
The South Sea bubble, about which two of my colleagues are actually writing. Both Will Deringer in our STS program and also Tom Levenson in our comparative media studies and writing program are each writing really fascinating books about this. This one nearly bankrupted Isaac Newton. More recent examples, not just in the US, but around the world. We see this happen sometimes with great frequency. What Shiller has written about is, is there a way to make sense of this recurring pattern of exponential rise followed by exponential fall, as opposed to anything else that might be kind of tiptoeing around some supposed market equilibrium? This is clearly non-equilibrium behavior here. And Shiller identifies three stages that he thinks are consistent across each of these examples and others. A role of hype, of amplification, and then feedback. And I found that really helpful. I think that helps us make sense also of that plot of PhDs in physics as well. So let's talk first about what sets off this first phase of hype. During the early years of the Cold War, soon after the end of the Second World War, there were a series of efforts to assess just how many physicists and engineers were being trained each year in the Soviet Union after the Soviet Union had emerged as the number one rival to the United States, at least as seen by many political and military leaders in the US. So there were these three kind of classic studies undertaken in rapid order. You can see the years-- not much time between them. The lead authors were Nicholas DeWitt and then, separately, Alexander Korol. DeWitt and Korol actually shared a lot in common. They were both Soviet ex-pats themselves. They'd both been trained in engineering in the early years of the Soviet Union before they left and moved to the United States. They both settled in Cambridge, Massachusetts. So DeWitt was hired at the brand new Russian Research Center at Harvard, founded in around 1948 or so. Meanwhile, Alexander Korol was hired at MIT in the Center for International Studies, just set up after the war, part of the political science department. We now know both of these efforts were secretly bankrolled in part by the CIA. Although the documents they produced were fully open, not classified and so on. They were widely covered. And in fact, these reports were covered in the news. They were reviewed in widespread magazines and newspapers. What they both wanted to do was figure out how many scientists and engineers, mostly in the physical sciences, were being trained each year in the Soviet Union. And then you could ask, is there a kind of concern about a gap, a kind of scientific preparedness gap, between these now two rival countries? So both DeWitt and Korol, who had direct experience of the early years of the Soviet training system, had a lot of caveats. They were very explicit, if you read these thick books, that we should not get lost in what I think it was Korol who called the numbers game. Let's not just look at tabulations of enrollments or degrees conferred each year. Because the systems were very different, they argued. The educational systems themselves were actually kind of a bit like apples and oranges. So just comparing numbers might be misleading, was at least their conclusion. So for example, they both emphasized there was a large fraction of Soviet engineers who stayed in the Soviet Union who worked in administration or bureaucracy, not actively in research and development.
So just because someone got a degree in metallurgy, if they're not actually an applied metallurgist, does that change the kind of preparedness for either country? They both identified what they considered an extreme specialization in the courses of study. And I'll say more about that in a moment. They each also independently suggested, the evidence here is a bit more spotty, but they put forward in their reports that when the targets for the numbers of degrees in a given field per five year period look like they might be falling below the quotas, below the expected production quotas in the Soviet Union, that standards would be lowered. So you could force more kind of mediocre students through, which of course, we would never do at MIT, of course. But the allegation was made in each of these studies that the quality of instruction is quite uneven because the Soviets were more concerned about these famous five year plans, how many tons of pig iron produced per five years, how many undergraduates majoring in physics or mathematics per five years. And likewise, something became increasingly important in these later reports. The number of students in the Soviet system who were full time students was actually the proportion was falling. And that in fact, the number of students enrolled included as many as 1/3 in the early years greater than 1/2 by 1960 of part time students who held full time production jobs in a factory or some office and were doing basically correspondence courses to get their degrees at night. Which at least the authors argued would be quite a different kind of training than full time students in their chemistry labs day and night. In fact, many of these correspondence students had a ratio of 80 to 1, 80 students to 1 professor, including for seemingly hands on courses like organic chemistry. So not only were they never in a laboratory, but they had one overworked instructor grading 80 problem sets at a time. Which at least the assumption was this would maybe not be the best kind of education. For the specialization, they both point out things like at the undergraduate level at this time in the Soviet Union, a student would not major in the field of metallurgy. But actually in 1 of 11 subfields of the subfield of non-ferrous metals, metallurgy. So they argued there was this enormous specialization, nothing like a kind of GIRs or let alone a liberal arts model that was becoming so common in the United States. My colleague Lauren Graham has written a bit about this as well, more recently looking back, but even at the time both DeWitt and Korol emphasize these caveats. So with all those in mind, Korol, the MIT based author, refused to even tabulate enrollment data side by side. He puts numbers in his very lengthy report. It was published by the MIT Press, in fact. So his book is full of data, but he always put them on-- he never put them side by side because he wanted to avoid what he called unwarranted implications. Nicholas DeWitt did have tables kind of side by side, but he also always emphasized all these caveats about the kind of apples and oranges. What DeWitt found was consistent with Korol's numbers as well. I've now simplified it to make it just ratios. He actually has full kind of tabulations of enrollments and degrees conferred. 
If you compare the number of full-time university students pursuing undergraduate degrees or more advanced degrees in the US and the Soviet Union, there were actually three times more full time students in higher education in the US than in the Soviet Union at the time he wrote these reports. If you add in all those extension and correspondence students, there were still more students in US higher education than in the Soviet Union. Now the gap closes from 3 to 1 to 4 to 3. But there was this remarkable imbalance in the distribution of topics the students were studying. And in the Soviet Union, it was nearly 3/4 of the students who were studying something in science and engineering versus only about 1/4 in the American system. So when you combine these last two, include all those extension and correspondence school students, and look at this high imbalance in the distribution of topics, it looked like the Soviet Union was graduating two to three times more students per year in engineering and the applied sciences than the United States. So they were both saying, it's not the same education. The fields of study don't map onto each other the same way. There's all these part time students. But if you ignore all those caveats and just run the numbers, it looks like you have this gap of two to three times per year. Now, that's where we find Robert Shiller the economist's example of hype. That ratio, two to three times, gets pulled entirely out of context and repeated literally ad nauseam in briefings by members of the CIA, in congressional testimony, both by and to members of the Pentagon in various congressional committees, the AEC. You see it everywhere. It's repeated in mainstream media up and down. It was just everywhere. And again, I've written about this in a separate essay. And that was a dominant story even before the launch of the Soviet satellite Sputnik in early October of '57. So this notion of two to three times was put forward even with the first of DeWitt's studies in 1955. And that just gets even more hyped up much more dramatically after Sputnik. So let's look at that second step in Shiller's model. You go from hype to amplification. Again, I found these examples in various archives. And I found them just fascinating. Looking at physicists trying to respond to the surprise launch of Sputnik. So now this is people mobilizing literally in the closing weeks of 1957, just days and weeks after the launch of the Soviet satellite. We now know there was at the time a private briefing in the Oval Office for President Dwight Eisenhower by his brand new, still rather informal group that came to be called the President's Science Advisory Committee, or PSAC. It was actually formalized by President Eisenhower soon after the launch of Sputnik. And one of the leads of that was Nobel Laureate Isidor Rabi. Rabi actually knew Eisenhower. Eisenhower, as you may know, before becoming President of the United States was actually president of Columbia University, and that was where Rabi was a member of the faculty for many, many, many years. So Rabi knew Eisenhower personally. And we now know from notes of this meeting that were much later released, Rabi was pushing Eisenhower to use Sputnik as a pretext, as an excuse, to close what they already began calling the manpower gap. The notes make clear that Rabi, and some of Rabi's colleagues, didn't think there was actually a gap to worry about in the numbers of scientists and engineers trained.
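To see how those ratios combine into the famous two to three times figure, here is the simple arithmetic, using only the rounded ratios just quoted. This is an illustration of the logic, not DeWitt's actual tabulations, and the numbers are deliberately coarse.

```python
# Rough arithmetic behind the "two to three times" figure, using only the
# rounded ratios quoted above (illustrative; DeWitt's tables were far more detailed).

us_total, soviet_total = 4.0, 3.0            # total enrollments incl. correspondence, 4 to 3
us_sci_share, soviet_sci_share = 0.25, 0.75  # fraction studying science and engineering

us_sci_eng = us_total * us_sci_share              # = 1.0
soviet_sci_eng = soviet_total * soviet_sci_share  # = 2.25

print(soviet_sci_eng / us_sci_eng)  # about 2.25, i.e. roughly "two to three times"
```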
But that nonetheless, you could use this politically as cover to make a whole different kind of investment in making young physicists and engineers in the US. Likewise, right around the same time, another set of notes I found in the archives. Elmer Hutchisson, who by this point had become director of the American Institute of Physics, was chairing many committees, kind of outreach and public policy and lobbying committees. And he writes in memos to his committee mates that the launch of Sputnik presents the physicists with an almost unprecedented opportunity to influence public opinion greatly, with the same argument as Rabi was making, that we should be investing in training many, many more physicists now. Here's actually my favorite example, which I found in Hans Bethe's papers at Cornell. Bethe would go on to win the Nobel Prize. He was already a very prominent physicist in the United States context. He was an emigre originally from Germany in the '30s. But by the '50s, he was an outspoken leading member of the community in the US. He was already by this point a past president of the American Physical Society. So he gave a series of radio addresses to reflect on what the launch of Sputnik might mean. And I found the typescript from which he would read. And in pencil, in his handwriting, in the margins are all these questions about where this ratio of two to three times had come from. He said, what is it based on? How is it computed? But he never took it out of his address. He was like, I should look that up, and it appears he never did. So that kind of amplifying of the initial hype of two to three times is getting-- we see it unfolding in very concrete places and then, again, picked up broadly by journalists and policymakers. My colleague John Rudolph writes nicely about this in his book and I write about it in my own recent book as well. Now we come to the last step in that kind of three stage model from the economist Robert Shiller's I think really lovely book. And that's feedback. So again, it's actually kind of hard to believe, given our recent experience of US Congress these days and how slow it is to get anything to move, but US Congress moved unbelievably quickly in response to Sputnik and passed major, we might rightly say, landmark legislation less than one year after the launch of Sputnik. It became known as the National Defense Education Act, or often just called the NDEA. It was signed into law by President Eisenhower in September of 1958. It made available $1 billion from the federal government. If you adjust for inflation, that's around $9 billion today, over a short few years window. A concentrated kind of infusion of a lot of cash. And part of what's so interesting to me about this historically is that that was literally the first federal aid to higher education in the United States in about 100 years. There had been a long standing tradition between the 1860s, with the Morrill Land Grant Act, which actually helped found MIT and many of the public universities throughout the United States around the time of the Civil War, and this post-Sputnik bill of the NDEA. The tradition had kind of coalesced that education, including higher education, was a local affair. It was for local communities and states to handle, not the federal government. This was often seen as a so called third rail. The federal government had no business in higher education. And that was just shattered.
That tradition was overridden very rapidly in response to the launch of Sputnik with this billion dollar investment in higher education. That had some predictable and very rapid effects. So in the first four years after the bill was enacted, it basically doubled the number of graduate fellowships, fully funded graduate fellowships for the sciences in the United States. So 7,000 isn't very many until you compare it to the number of federal fellowships before then, which had been practically none. It also subsidized the training of half a million undergraduates as long as they were studying the physical sciences, engineering, or mathematics. It also offered block grants to states and to individual institutions. The whole point was to get more and more students in to address the so called manpower gap of so called scientific manpower. Since that time, several political and legislative historians have gone back to the passage of this landmark bill, the National Defense Education Act, and they've really picked apart the kind of backroom lobbying and the sometimes, let's say, shadings of the truth that were done by proponents to kind of ram this thing through so quickly. And as the historian Barbara Clouse has concluded from a really just fascinating kind of moment-by-moment legislative history of this bill, in her own conclusions I'm quoting here, that the "proponents were willing to strain the evidence to establish a new policy," that it was made to seem that the manpower gap was more important and more dire than indeed it seemed to be. And again, what explains these plots here is that all the aid from the NDEA was earmarked for so called defense fields in the language of the time, fields that were seen to be relevant for the National Defense. Science, math, engineering and area studies. Some parts of humanities and social sciences, like we need more people who are experts in the Soviet language or know about the culture of Ukrainians. So that would get some support as well. Now here's, again, something I've written about. I'd be glad to share the piece. If you go back to those reports from DeWitt and Korol, remember, these CIA funded, thoroughly unclassified, easy to find books still in our libraries today. If you go back and look at even just their data, let alone all their caveats about the different kind of educational institutions in the two countries, the numbers themselves really deserved a closer look. The columns in DeWitt's very densely formatted data that are in his book that were put together to form this ratio of so called two to three times were labeled in the press engineering and applied sciences. It actually meant engineering, agriculture, and health. And it explicitly did not include the natural sciences. It left out physics, chemistry, mathematics, and biology. And so what it was actually counting was a lot of experts in agriculture and agronomics, as it was called at the time, including this what I think most would conclude devastating tradition in the Soviet Union of so called lysenkoism, kind of an idiosyncratic non-genetics notion about how plants change, which is at least blamed by some historians as having led to just failures of many collective farms and widespread starvation. So yes, there were many people graduating with degrees in agriculture but not in physics or math. 
Likewise, it included health professionals, including nurses and dentists and so on, which is clearly an important measure of a society's overall labor force but was not the kind of defense oriented expertise that it was characterized to have been. If you go back to the thoroughly accessible non-classified data in these books and you drop agriculture and health, keep engineering, and add back in the natural sciences and mathematics, then actually the enrollment numbers come out to be basically a tie. The Soviet lead falls by a factor of 10. And that's still including all those extension school students and correspondence school students. So the so called manpower gap was really a kind of chimera. And that's still under the assumption that the nation's safety depended on having lots and lots of theoretical physicists around, which itself was an assumption that one might have questioned at the time. Just playing the numbers game was actually really kind of twisted or misread and used very strategically during the kind of fever years of the early Cold War. So we come back to this plot. Now anticipating Fisher's question from earlier, the same plot I showed before. We now have a better sense of what was driving this up so high. And it was kind of a series of political decisions and priorities, often at the federal government level. We can question them, but we can at least understand why they were made when they were. What about this really quite precipitous fall? And that was very much as Fisher had anticipated: a lot of that had to do with other changes in the United States and around the world by the late '60s compared to the early '50s. A lot of that was the escalation of fighting in the Vietnam War, which included things like the removal of draft deferments for full time students. So full time students, both undergraduate and graduate students, got deferments from being drafted to go serve in Vietnam up until the late '60s. And then because the war wasn't going very well for the United States, there was a kind of recalculation on the number of people needed for the kind of military manpower. So people who had gotten deferments for being full time students were now no longer eligible for them starting in the late '60s. There was also a lot of concern on campus and actually even within the Pentagon about why the defense agencies were funding all of this non-mission oriented, open ended, basic research. The argument came to be, among the Department of Defense's own auditors, their own accountants: if we want to get better kind of military ready instrumentation or weapons or defense systems, shouldn't we invest in projects that will make better defense systems as opposed to spending money on seemingly open ended topics in basic physics that sometimes have a kind of trickle on effect and lead to new devices but often do not. So the Pentagon itself began this internal audit that they concluded in 1969, saying that it's been a terrible policy, terrible return on investment, for the Pentagon itself to be paying all these students to pursue PhDs in physics if they're not doing defense related work. And at the same time, really because of the new kind of immediacy of the fighting in Vietnam, there were many, many critics of the Pentagon on college campuses, students, faculty, staff alike, who were very upset at the domination of spending on university campuses by the Pentagon. So both from within the Defense Department and from without, really heightened by the course of the Vietnam War.
This whole kind of framework for funding research and research students in the sciences, especially in the physical sciences, comes in for some really quite severe reassessment, all kind of cresting around the same time. A third factor that starts to impact things is huge federal deficits and cuts to both military and education spending at the federal level. So there's less money to go around and then fights over how to allocate that money. These photos are both taken, by the way, at MIT. You can see the Instrumentation Laboratory. That's what's now Draper Laboratory. There were these marches in November of 1969, throughout the month, basically fights not just against the Vietnam War but also against military spending on MIT's campus. And that led to fighting. You can see this was also-- I think this is right down the street from that, where it led to clashes with the local Cambridge police. Really, I mean, students and staff being clubbed and some police injured as well. It was a melee. It was a fight in the streets. And that became emblematic of the kind of deep questioning of the arrangement that had led to this extraordinary 20 year rate of growth. And you also see it in a very dramatic contraction in the field itself. So here's other information I found in the AIP archives. The AIP used to run a placement service to match up young PhDs finishing their degrees in the field with employers who were eager to hire young physicists. So they would help arrange interviews for jobs at the annual meetings, for example. And the AIP kept statistics on how many people, how many students, registered with the service and how many employers were coming with jobs on offer. And you can see, even after 15 years of exponential growth in the field, there were still more jobs on offer than students seeking jobs, at least through the AIP, right into the early and mid 1960s. I find that mind boggling. But then you see the fortunes begin to reverse quite dramatically. So the bottom falls out. And then by 1971, there were more than 1,000 students registered with only 53 jobs on offer. And that wasn't just in universities. That was across government laboratories, private sector industrial laboratories, and universities. So you have this unbelievable collapse of the physics job market as a coalescence of these forces, political, economic, cultural, institutional. So that together helps account for, I think, this very sudden and dramatic fall. Which again, every field of study went through some fall, but none so steeply or so quickly as physics. So we pause there and ask for some questions and we'll talk briefly about the last part. Oh, good. DA asks in the chat, how did the government cultivate interest in physics? Great question. Again, some wonderful things written about that. I've written some. Many colleagues have written about it. One of the things that I find really astonishing about this time period is that physicists, if you can believe it, became kind of rock stars. They were featured on the covers of Time Magazine. The word that was often used at the time was they were lionized, which is to say they were treated like celebrities, speaking to League of Women Voters luncheons, speaking to civic groups, paraded into school auditoriums or assemblies for K-12 students. In 1961, I think it was '61, Time Magazine named a dozen physicists, all white male physicists, people of the year. Time Magazine for years ran the Person of the Year award.
And in '61, it was physics as embodied by 12 faces of renowned physicists as the people of the year. That same year, maybe 1963, in Gallup public opinion polls of the American population, people were asked to rate the three most favorable professions. Three professions that are seen as most highly respected among the general American public. Number one was Supreme Court Justice. I don't know if that would be the same way today. But in 1963, that was seen as above the fray, the most kind of laudatory role someone could aspire to. Number two was medical physician. Number three was nuclear physicist. So it was a combination of a sense that nuclear physicists had single-handedly built the bomb and, therefore, won the war, each part of which I think we've seen in this class was maybe a bit of an oversimplification, combined with this really kind of hero treatment of young physicists, even those who had had no role whatsoever in the war. So there was a moment in time when physicists in the United States were treated really as kind of celebrities and also a very, very valuable resource in all too short supply. So you have a lot of these cultural reinforcements that then get overlaid upon this kind of more politically driven post Sputnik cry to get lots and lots of people in defense, because the Soviets are somehow ahead of the United States in military technology. So that was at least the allegation. Fisher says it looks like the number of PhDs over time to the 1980s-- yeah. So I'll show later on what the graph looks like later on. That's right. And Jade perhaps rightly or wistfully says, a brief shining moment. You may recall from the very first lecture for this class, literally the first day, I shared with you a quotation from Harper's Magazine in 1946 or '48. Late '40s. And the columnist had said that physicists are in vogue these days. No dinner party is a success without at least one physicist. I always say, boy, you know something's changed if that was what people thought in the late '40s about physicists as the ideal dining companions. So a lot was reinforcing. There were overt incentive structures for kind of political decisions. And they were often in tandem with or reinforcing a kind of broader, maybe more diffuse kind of cultural notion that physicists, in particular nuclear physicists, were somehow at the top. Let me go on to the last part today. This is the part that's closest to the reading. And so I'll go quickly through it, but I do think it helps make sense of some of the trends we've been thinking about in the class so far. And that's to ask what happens to the study of physics when the number of people pursuing it grows exponentially? How does the world of ideas begin to react or change, not just the kind of enrollment patterns or degrees conferred? I love this photo. I always show this photo. This is actually Nobel laureate Val Fitch, a particle physicist, who is about to be crushed to death by the Physical Review. It's like death by journal. Many of us think that scientific journals are deadly boring, but this is actually a weapon. So this is now stacked by decade. The journal began in the 1890s, 1900s, '10s, '20s, '30s, '40s, '50s, '60s, '70s. By the 1970s, there were 30,000 pages published per year across all the Phys Rev journals, as opposed to a tiny fraction of that in earlier decades. And you can see they're tracing out basically that same exponential curve as the number of new PhD students. Of course, they went together.
All those PhD students had to write a thesis on something and publish it. So I wanted to ask what's it like to be a physics student under these two very different regimes? So let's try to look at apples to apples as best we can. Here's one instructor who happens to be a rather popular one, named Richard Feynman, teaching at Caltech. The first one here, with his necktie on, in 1962, much like those photos of Tony French here at MIT. Very large stadium tiered auditorium seating with hundreds of students. And here, just 13 years later, after that bust, teaching admittedly a different course, what he used to call his Physics X course, with about a dozen students. Much has changed. He's lost his necktie; his collar and his pants cuffs are both much wider. Feynman's own hair is longer. I don't know what's going on with these feet on the desk. Things have changed, again, culturally as well as kind of numerically. What's it like to become a young physicist under these two quite starkly different sets of conditions? And I wanted to track that by looking at the training of first year grad students in a particular subject that's near, I think, to many of our hearts, quantum mechanics. This is exactly the period in time when at least one course in quantum mechanics became required of all grad students in physics in the US. It was an elective in many places until soon after the war. So we have one kind of way to, again, compare like with like. And so this is a topic that, again, as we've seen in our own class, had been the subject of or treated to overtly philosophical discussion by many, many leading physicists who helped invent quantum theory in the 1920s and '30s. We looked a bit at things like the Einstein-Bohr debate. It turns out they filled their own textbooks-- they didn't fill, but that philosophical mode of interpretation shows up in their explicitly pedagogical materials as well. Now, they would disagree about which philosophical tradition they found most helpful. Some turned to the German idealism of Immanuel Kant. Several turned, as Einstein himself had done, to Ernst Mach, all kinds of favorite philosophical or interpretive traditions. But they all agreed that this was part of what it meant to be a physicist and part of what it meant in particular to learn about quantum theory. Before the Second World War, and this part actually surprised me, that was also true in the United States. If you look at all the leading instructors in the new quantum theory in the US, the early textbooks, their own lecture notes that survived, student reminiscences, homework problems, and so on, there also was an explicitly kind of interpretive or philosophical approach to quantum theory. Again, they would often appeal to different philosophical traditions than many of the continental Europeans. Rarely would they appeal to Immanuel Kant. But there was a notion, an explicit one, set in the opening prefaces of the famous books by these authors and throughout the lecture notes from Oppenheimer's very famous course at Berkeley, that part of the job of reckoning with quantum theory is adopting an explicitly philosophical mode to try to understand these kinds of puzzles, things like Schrodinger's cat or the restriction to probabilities and determinism, things that we at least talk briefly about in our own class here. Well, my favorite source on that is a series of notebooks that are preserved in Caltech's archives. They span from 1929 to 1969. An unbroken 30 year period.
These were communal notebooks, passed down among the physics PhD students and then finally grabbed and preserved 30 years later. The students would narrate how they prepared for their general exam, which was an oral exam only for the PhD students. No written exam. So the students would say, I studied this and this, and here are the questions I was asked, so that you could pass it along and help your friends who would come to the exam a little later. So we can read these accounts of the exam at Caltech, the PhD qualifying exam in physics. And in the '30s, indeed, they were pressed to calculate certain things but also pressed to give kind of interpretive answers to some of these quandaries of quantum theory. And likewise, if you look at lecture notes, textbooks, book reviews in the '30s, you see that as well. What happens very quickly after the end of the war is that that just vanishes in the United States. I mean, it just vanishes very quickly. Again, many ways to measure this. One of my favorites, a statement from MIT's own Herman Feshbach, very, very prominent nuclear physicist, helped found our Center for Theoretical Physics, very prominent. And he wrote in '62, in his words, enough with this musty, atavistic to-do about position and momentum. He's like, enough fretting over the kind of philosophical implications of the uncertainty principle. We have all this stuff to calculate. And a kind of common phrase that was uttered kind of half jokingly at the time was the phrase shut up and calculate. Meaning don't stay up all night wondering about the grand mysteries of quantum theory. Learn how to do your problem sets, which now were winnowing to be almost exclusively calculation and not kind of verbal or essay like responses. So again, we can go back to those notebooks from Caltech, this kind of continuous record of the qualifying exam for the Caltech physics students. And you see some of these entries in the early '50s of students who are kind of shocked. They actually feel cheated because they'd studied years and years of these older accounts of the types of questions they would likely be asked on the exam. And then they didn't show up when those students in the early '50s actually sat for their exams. They write things like the effort invested in analysis of paradoxes and queer logical points was of no use in the exam. The student was only asked to solve certain problems. Another student says, the best advice is to memorize and rehearse the stock problems, or give what he calls the usual spiel. They will not ask you to wax poetic about the fall of determinism, whereas they had really been asking that question only a generation or two earlier. Likewise, the newer textbooks now get praised in the published book reviews for, quote, "avoiding philosophical discussion or omitting these distracting philosophically tainted questions." You see this very stark shift also in other written exams and problem sets across the country. So between the '30s and the '50s, something really stark has changed. How might we account for that shift? We might say, well, these were open questions in the '30s, but maybe they were just solved by the '50s. I mean, I chuckle because, of course, we know, as many of us have been discussing in office hours, they haven't been solved to this day. These are still open questions, subject now to very active research even in the United States, and even then in the '50s were being pursued actively by some physicists outside the United States.
So it wasn't that the philosophical challenges had gone away. Another answer that I find much more compelling, much more interesting, has been written by two of my own mentors. So I guess I have to like it, but I do. Both Sam Schweber and Peter Galison, among others, have written that maybe it was what happened between the '30s and '50s that could account for the change. In particular, this really massive, unprecedented effort by physicists and engineers and chemists and so on during the Second World War. If you're working in the midst of factory sized, kind of time sensitive, mission oriented projects like isotope separation, like radar, you don't have time for kind of philosophical niceties. And both Sam and Peter find just really compelling examples of physicists saying stop doing what we used to do. There's a war on. Get the numbers out was a phrase that Sam in particular would use to characterize this period. And yet I found from going through the archives and collecting unpublished lecture notes from physicists at a range of institutions during the '50s, not just during the '30s, that that wouldn't account for the shift. The folks I was looking at were often highly placed veterans from the wartime projects, people who had very extensive roles at Oak Ridge, Los Alamos, and the Rad Lab during the war, and yet went back to their home institutions after the war and taught quantum mechanics in this comparable kind of overtly philosophical vein like in the '30s. So it can't be that somehow the war was this unique explanation, even though I do think there's a lot of compelling evidence about a shift. Maybe we might say after the war, it was a change in who wanted to hire these physicists. After all, 1953 was the first year in US history in which more physics PhDs were hired in industrial jobs than academic ones. So industrial research in physics was really growing rapidly in the United States in the '50s. And again, maybe if you're working at Westinghouse or Bell Telephone or many, many others, maybe you don't need to worry about the kind of epistemological, philosophical questions about Schrodinger's cat. And yet because of the Cold War fears about where the physicists were and how they could be tracked, we have an enormous amount of information about this generation, where they were trained, where they got their jobs, who moved to what position when. So you can actually do a statistical sample and find there was literally no correlation, in fact some strong anti-correlations, between where an individual got a job, got a position after the PhD, and the type of course they had for quantum mechanics during their graduate work. This was not a kind of job market driven shift either. So what it led me to, and this is what you'll read in the piece, was to ask: is this a kind of legacy of bloated classrooms that were changing very, very rapidly? If so, how might we assess that? And here I'll boil down a lot that's written a bit more in the essay. One thing I did was collect all of the lecture notes I could find, mostly unpublished lecture notes, from the 1950s from, again, this same kind of course, first year graduate level quantum mechanics, to try to compare apples with apples. Then separately assess the enrollment of those courses. Sometimes it was actually in the archives. I could see Hans Bethe's grade sheet. I know which of his students got a B+ and I know how many students were in the class. Other times the university registrars still have records, or there are other ways to get this.
So you can look for correlations between class size and the nature of discussion of the quantum theory in those classes. And again, to boil down a lot, to be kind of overly precise, these numbers should be read with grains of salt. But a shift of a factor of 3 in the average enrollment size, a larger class by a factor of 3, was accompanied by a fall by something like a factor of 5, by a very dramatic, very noticeable shift, in the number of, say, pages of the lecture notes devoted to anything we might generously consider philosophical, speculative, or interpretive. So that's one moment in time over the '50s. Then I also wanted to look across time, what historians would call diachronically. So with the help of several grad student research assistants, I looked at all of the graduate level textbooks on quantum mechanics that were aimed at that same audience, first year grad quantum mechanics courses. All of them that were published in English during this 30 year period after the war. And then we looked at all of the homework problems that appear in those books. There are 6,000 homework problems. Actually there are, of course, many redundancies. The same problems get repeated over and over again. And again, we can look at trends in how many of those homework problems in the textbooks asked for any kind of verbal or interpretive answer. How many of them were short essay, write out an answer and reasoning in sentences and paragraphs, versus calculation and circling your answer at the end. So what I'm showing here is the proportion of the questions that asked for an essay like, brief short answer, paragraph structured response. And it hovers at around 10% while the enrollments grow rapidly. And then look at this transition, an absolute explosion, reaching almost 50% essay or discussion questions in these books published soon after that bubble had burst. A couple more kind of archival examples. Are these general trends kind of playing out on the ground? And here's just two more that I'll share with you before I wrap up. There's one example I found, again in the archives, at Berkeley in the early or mid 1950s. This was a young assistant professor, a tenure track professor, named Roland Good, who was interested in some of the kind of foundational aspects of quantum mechanics, including quantum field theory. He was ultimately not promoted. He was denied tenure. And that always leaves a large paper trail. It's actually hard to deny someone tenure because you write a lot of reports, then as now. And so if you look at the reports of why Roland Good was not promoted, they say that his choice of research topic was pedagogically inappropriate. He wasn't generating doable homework problems for his students, especially for his PhD students. And because Berkeley had such a huge population of overwhelmingly white male PhD students in physics, every faculty member had to be a kind of engine, a factory producing dissertations. And Roland Good's choice of topic, by being more overtly interpretive or philosophical, was a poor fit. And so he wouldn't be promoted even though he was a productive researcher and an effective teacher, according to reports. Last example of that. I looked just nearby Berkeley, at Stanford. Their department was always much smaller than Berkeley's. But in relative terms it went through, again, its own kind of boom and bust cycle. It grew much larger compared to its older kind of capacity very rapidly.
And again, you see this remarkable correlation in time as incoming cohorts grow from 10's to 20's in the earlier period up to nearly 50 a year, high 40's per year. The faculty get overwhelmed doing all these qualifying exams. They switch to a true false general exam. For a period of almost 20 years, you can get your PhD in physics at Stanford on the basis of a true false qualifying exam. I'll let that sink in. Meanwhile, just as it did everywhere, enrollments crashed, fell by more than half in about two years. And as the numbers fall, then kind of coincidentally, or at least correlated in time, the faculty say, no, actually our students should be better prepared to discuss these things in words. So 40% of the revised general exams become essay questions. Literally write out interpretive answers as opposed to true false and so on. So I find those trends really quite striking. So this is now my final set of slides. I'll wrap up very, very quickly. Going back to that early article that I mentioned by Paul Forman, Forman was convinced that this overwhelming military sponsorship of funding for basic research had changed what it meant to do physics. In his famous words, physicists pretended a fundamental character to their work that it scarcely had. They become merely instrumental to their military patrons. He says they sold out. This really is a kind of one dimensional analysis that somehow money from the Pentagon changed the world of ideas. It certainly had an impact. But it seems to me that what's missing from those kinds of explanations is the way the money gets used, the decisions about what one does with this suddenly very fast changing infrastructure. How do people learn in institutions as opposed to react to budget lines? And then we start asking questions about enrollment patterns, about the nature of the instrumentation, and about the kind of arrangement of the world of ideas. Then I think we can start asking about did the character of research and teaching start to shift during this Cold War period? So let me pause there. I'm sorry again for running long. I'd be glad to stay longer and chat if people would like. Any other questions? So if there are no questions, I see Jade has helpfully shared a link to a chapter I'd written that Peter Fisher shared with our department back in August. I'd be happy to share more. I wrote a lot about those reports on the Soviet training. And if that's of interest, I can share that too. Fisher asked, was there any renewed interest in experimental physics? Yes. So it's actually this period. Very good question. It's this period in the United States that theoretical physics really begins to break off into its own subfield, considerably later than we'd seen it happen in other parts of especially in Europe. Nonetheless, it was still a kind of predominant assumption during this time period that physics meant experimental physics, like we saw much earlier, and that maybe there should be a role for theory. The theorists were often considered so called house theorists. So you do theoretical physics by being kind of a closely coupled consultant to your immediate local experimental group. And that kind of shifts and gets renegotiated over this time period. Another example I cut for time but I think it speaks to this point. I found advice letters from Berkeley physicists to people who wanted to enter the field during this kind of heyday in the '50s. And in one letter, a presumably very well intentioned Berkeley physicist wrote back to a student. 
The student explained that he was in a wheelchair permanently, had some reason to be in a wheelchair permanently. And the physicist said, you can't have a career in physics, because he'd never heard of Stephen Hawking, who hadn't become prominent yet and hadn't been in a wheelchair yet. But also because the presumption was still that physics meant experimental physics, which meant getting around these factories, like at Berkeley with the Bevatron. I mean, it was a kind of collapse of who's legitimate to even consider a role for themselves in physics, which is an assumption about what we might now call ableism. There was a clear kind of bias against people with a non-standard physical situation. But also, I mean, that came coupled, in this person's mind, with the sense that physics equals experimental physics, which equals somehow lumbering around kind of large equipment like in a factory. So again, that's one data point, but I think it's illustrative of this kind of collapse in many physicists' minds at the time about what a proper role, a proper career in physics would be like. And of course, that changes. That changed over time, both about the kind of demographics, but also about theory as an autonomous thing. MIT's Center for Theoretical Physics was among the earliest founded as a standalone center for theory. And it was founded in 1967 or '68, I can't remember which. Maybe Julian might remember. But late '60s. So that's a kind of marker point for when this was seen as, oh, this is a separate thing. It needs its own space on campus. So anyway, that's a great question. OK, I've gone for a long time. Sorry to keep you all long. Stay well. Hang in there. And I'll see you all on Monday. Stay well. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_24_The_Big_Bang_Cosmic_Inflation_and_the_Latest_Observations.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: So today, we're going to pick up where we were in the most recent classes. So in the last few class sessions, we were looking at some changes in high energy particle theory with the development of things like quantum chromodynamics and so on and ways to try to make sense of very high energy interactions among elementary particles, or at least particles smaller than an atom. And then in our most recent class session, we looked at some of the shifts within the fields of study, including the emergence of this relatively new subfield called particle cosmology, which really came together starting in the mid 1970s partly because of some really exciting new ideas, but also, as we've looked at in some detail, because of some broader institutional, even geopolitical, shifts in the physics profession that really helped some of these new ideas take hold in a way that they might not have in other times or places. So today, we're going to focus on a kind of example of that new subfield, a relatively new subfield that's known as inflationary cosmology or simply cosmic inflation. So to make sense of that, we're going to first look at some of the work in cosmology before this merger of fields, before particle cosmology really came into its own. So we'll look at the coalescence of what came to be called the Big Bang model, still an enormously successful framework for trying to make sense of very large scale changes in our universe over a long sweep of time. And then we'll see that already by the 1970s and '80s, there were some curiosities or maybe inconsistencies with that otherwise quite successful model. And so people began thinking about shortcomings of the Big Bang model around the time that this new subfield of particle cosmology began to come together. And then we'll see how cosmic inflation emerged from that particular moment to try to address some of the shortcomings while retaining some of the successes. So it's a framework for trying to understand the evolution of our universe over a huge expanse of time, increasingly using tools at the interface, not just of Einstein's general theory of relativity, but also ideas about particle physics and high energy phenomena. [BEEP] So that's where we're heading today. And the asterisks are to remind you there's a set of strictly optional lecture notes on the Canvas site which go into a little bit more detail on some of these parts from the lecture. Again, strictly for your own interest as your interest and time allows. But if some of these things go by quickly, there's some more material there. And again, I'd be glad to chat more about this if questions come up beyond that. OK. So oftentimes, astronomers will describe the most salient features of our universe in terms of what they call large scale structure. It's really quite remarkable. And this [CLEARS THROAT] picture's been emerging really over a century, for 100 years or even more, that when astronomers turn their telescopes to the sky and look at many different length scales, many different characteristic lengths, either very, very, very large length scales-- this is a modern example from the Hubble telescope deep field survey-- where you can look at clusters and even super clusters of galaxies on enormous scales, tens or even thousands of-- let's see. I want to get my units right.
Basically billions of light years across. It's a huge scale. Or we can zoom in to the size of single galaxies like the Andromeda galaxy or our own Milky Way galaxy. Or if we zoom in even closer to home with the solar system or even really in human terms, there are concentrations of enormous matter and energy and activity separated by huge voids. I like to make the joke with my apologies to Tiffany that in the Cambridge example, we have all this stuff happening like by this data center, whereas there's nothing at all happening at Harvard. Is that right, Tiffany? Just a simple nod yes or no. Yeah, OK. Sorry, Tiffany. I'm teasing. The point is on scales from kilometers out to billions of light years and everything in between, we find this lumpiness, that there's a pattern to it. Matter is not uniformly smushed out in space. Thank goodness, right? There is actually teeming pockets of activity separated by large voids where very little matter or energy is located. And so one of the questions is, what could account for that structure across these scales from meters or kilometers up to tens of billions of light years? It turns out that ordinary gravity-- even Newtonian gravity, let alone Einstein's fancier version that we looked at in class, general theory of relativity-- that these gravitational frameworks are sufficient to help us make sense of this hierarchy of scales, of structure across large distance scales. [CLEARS THROAT] If we assume, to start, there's some initial, very tiny lumpiness to begin with, if we assume some very tiny inhomogeneity, a little bit of unevenness in the distribution of matter and energy at early times, then gravity will do the rest. Gravity will make those regions that happen to have slightly more matter or energy per unit volume than average-- the gravitational force will then attract more and more matter and energy to those local regions. So it'll become more and more dense, more pockets of activity. And meanwhile, the areas that happen to start out with slightly less than average matter and energy per volume, slightly under dense regions, will become more and more further evacuated. And so even in much more quantitative precision, much more detail, we can account for this array of structure from human size scales out to the super galactic using really only gravity as long as we start with some initial otherwise unaccounted for lumpiness. A tiny amount of unevenness in the distribution of matter and energy will grow more and more uneven over time. So a challenge for astronomers for a century or so has been to try to make that account more precise and more quantitative and compare it with more and more kinds of observations. And there have been two main conceptual ingredients, especially as we'll come to in the more recent versions of this in the era of particle cosmology. One of the sets of tools, the conceptual ingredients, not surprisingly, is some theory of gravity. And since [CLEARS THROAT] the early years of the 20th century, as we've seen many times in this class, the framework of choice has been Einstein's general theory of relativity, which, as we've seen, describes the phenomena that we associate with gravitation as really being nothing but geometry, being local deformations, local curvature in this almost physical type fabric of space time in response to the distribution of matter and energy. 
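Before turning to the second ingredient, here is a deliberately oversimplified sketch of the "gravity does the rest" point above: in a static, pressureless background of density rho, a small fractional overdensity delta obeys roughly d^2(delta)/dt^2 = 4*pi*G*rho*delta, so it grows steadily. The background density and time step below are arbitrary illustrative values, and the toy ignores cosmic expansion, which in a real calculation slows the growth to a power law. This is my illustrative supplement, not a calculation from the lecture.

```python
# Toy illustration: a tiny density contrast grows under its own gravity.
# Linearized, pressureless, static background: d^2(delta)/dt^2 = 4*pi*G*rho*delta.
# (Cosmic expansion, ignored here, would slow this exponential growth to a power law.)
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
rho = 1e-20       # background density in kg/m^3 (arbitrary illustrative value)

delta, ddelta_dt = 1e-5, 0.0   # start with a 0.001% overdensity, initially at rest
dt = 2e12                      # time step in seconds
for _ in range(1500):
    ddelta_dt += 4 * math.pi * G * rho * delta * dt
    delta += ddelta_dt * dt

print(f"density contrast after {1500 * dt:.1e} s: {delta:.3e}")
# The contrast has grown by roughly a factor of a few thousand from its tiny starting
# value; an initially underdense region (negative delta) would have emptied out instead.
```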
And the other main ingredient, especially, as I say, as refined in recent years with insights from high energy nuclear and particle physics, has been some prevailing understanding of matter, especially matter at very high energies and temperatures, matter that consists of things like electrons and photons, matter that nowadays people are pretty well convinced consists of things like quarks and gluons and even some more exotic particles like the Higgs particle that we looked at at least briefly in the previous class. So we have these two ingredients, the structure and behavior of space time as governed presumably by Einstein's theory or something perhaps similar to it, and then the stuff that's filling that space time, an idea about matter, especially how matter behaves at very high energies and densities. So with those two ingredients, the goal has been, again, for many decades, to account for the observational features of our universe even on very large scales. So as a reminder, the so-called field equations of Einstein's general theory of relativity take this deceptively simple looking form. This side is what he had to learn from his friend Marcel Grossmann. This is the geometry of a warping spacetime, the way to quantify things like gradients, rates of change of space and time. And this tells you where the stuff is. This is the distribution of matter and energy. Pretty soon after Einstein arrived at this form of his field equations in November of 1915, within a few years, Einstein himself and then soon several other colleagues began applying these equations not only to local phenomena like, say, the warping of space outside the sun, for which his friend Karl Schwarzschild had found an exact solution very early in this work, but also to global phenomena. Could one build at least a toy model of an entire universe that might satisfy or might be governed by Einstein's field equations? And actually some other colleagues were very quick to find that there were some exact solutions even on this global or universal cosmic scale that would satisfy Einstein's equation. They took three particularly simple forms. Depending on the amount of stuff, depending on the distribution of matter and energy, if you assume that the matter and energy was spread out perfectly uniformly as a toy model, a uniform density of stuff per volume, then depending on whether there was more than some critical value, less than some critical value, or exactly equal to some critical value, you get these Goldilocks situations for the global shape of space in response to how much stuff per volume was filling that toy universe. If you had more than that critical value, a critical value that came from the equations themselves, an overdense universe, space itself would warp back onto itself like the surface of a closed sphere. That would be a positively curved geometry globally. If you had less stuff per volume, if the universe were underdense compared to that critical value, then, in fact, the universe would open up away from itself. You'd have a hyperbolic solution, or an open geometry with negative curvature. And only if the amount of stuff per volume were exactly equal to that critical value would the sections of space be flat, obeying the rules of ordinary Euclidean geometry. And so you can have these global features, for example, on a positively curved geometrical surface. Parallel lines, lines that are parallel at the equator like these here, will actually converge at the poles.
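To put symbols to the "critical value" just described, here is a standard textbook statement of the relevant equations in the usual modern notation; this is a supplement, not something written out verbatim in the lecture. G_mu_nu is the geometric (Einstein) tensor, T_mu_nu the stress-energy tensor, a the scale factor of the toy universe, H the expansion rate, rho the density of stuff per volume, and k the curvature constant. (The discussion of parallel lines and triangles picks up again just below.)

```latex
% Einstein's field equations: geometry on the left, matter and energy on the right
G_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}

% For a perfectly uniform toy universe they reduce to the Friedmann equation
\left(\frac{\dot{a}}{a}\right)^{2} = H^{2} = \frac{8\pi G}{3}\,\rho - \frac{k c^{2}}{a^{2}}

% The critical density is the value for which the curvature term vanishes (k = 0)
\rho_{c} = \frac{3H^{2}}{8\pi G}, \qquad
\rho > \rho_{c}:\ \text{closed (positively curved)}, \quad
\rho < \rho_{c}:\ \text{open (negatively curved)}, \quad
\rho = \rho_{c}:\ \text{flat}.
```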
So lines that are parallel in some part of the space will not remain parallel forever. That breaks the Euclidean assumption, the fifth postulate about parallel lines. Likewise, on a positively curved surface, you can draw a triangle and add up the sum of the angles contained within that triangle and add up to more than 180 degrees, whereas Euclidean triangles have to have exactly 180 degrees. Likewise for open geometry, parallel lines will actually diverge. They'll get further and further apart from each other over distance instead of remaining parallel. And triangles will sum up to have less than 180 degrees of the internal angle. So there are these self-consistent, non-Euclidean geometries. And they could apply not only to local physics like the warping of spacetime outside a massive object like the sun, but even to these toy universes, these otherwise very simple models of a universe as a whole. Well, very soon after that, people-- not Einstein himself. He thought this was horrible. But some of his colleagues who began pursuing these cosmological solutions began to realize that the universe could not only have a shape at a given moment in time. But the shape could change over time, or the size could change over time. You could have expanding or collapsing solutions, also strictly consistent with Einstein's equations. Einstein thought that was horrible. He had a very strong aesthetic and philosophical preference for a universe that had no beginning that was simply static, that would look the same for any observer at any moment for an infinite expanse of time. But other colleagues showed at least it was consistent with his own equations to have universes that would change over time, that could either expand or contract. That was actually a prediction made by some of these colleagues even before some empirical evidence began to come in starting in the late 1920s with some at the time absolutely enormous telescopes-- now they're pipsqueaks compared to what astronomers have today-- but what was at the time some of the largest telescopes of available on the planet. Astronomers like Edwin Hubble in Southern California were able to collect information about not just the distribution of distant galaxies, but could also measure how rapidly they were moving with respect to us by measuring a Doppler shift, slight shifts in the spectral lines associated with those galaxies. And Hubble found this remarkable trend that the further away from us a given galaxy was, the faster it tended to be moving away from us further still. So the objects that were relatively close to us were moving away from us at one average speed. Objects that were moving further away from us now were moving even further away from us at a faster speed. So there's a remarkably close to linear relationship between the object's distance from us today and the rate at which they're moving further away from us that became known as Hubble's law, more recently amended to be the Hubble Lemaitre law because it actually predicted first by a theoretical physicist even before Hubble had found that data. Now we have, of course, as you know, the Hubble Space Telescope named in honor of Edwin Hubble, which has been able to extend this to extraordinarily far distances, not just the ones that Hubble could access with his ground based telescope. And the basic trend holds. There's some interesting deviations. 
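As a quick numerical illustration of that close-to-linear relation, and of the "play the filmstrip backwards" estimate discussed next, here is a minimal sketch. The value of H0, about 70 km/s per megaparsec, is an assumed round modern number rather than one quoted in the lecture, and 1/H0 is only a crude age estimate that ignores how the expansion rate has changed over time.

```python
# Minimal sketch of the Hubble-Lemaitre law, v = H0 * d, plus the crude age estimate 1/H0.
# H0 = 70 km/s per Mpc is an assumed round modern value, not a figure quoted in the lecture.

KM_PER_MPC = 3.086e19        # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7   # seconds in one year

H0 = 70.0                    # km/s per Mpc (assumed)

def recession_speed_km_s(distance_mpc):
    """Hubble's law: recession speed grows linearly with distance."""
    return H0 * distance_mpc

print(recession_speed_km_s(100.0))   # a galaxy 100 Mpc away recedes at ~7,000 km/s

# "Playing the filmstrip backwards": if the expansion rate never changed,
# everything was on top of everything else about 1/H0 ago.
H0_per_second = H0 / KM_PER_MPC
hubble_time_years = 1.0 / H0_per_second / SECONDS_PER_YEAR
print(f"1/H0 is roughly {hubble_time_years:.1e} years")   # about 1.4e10, i.e. ~14 billion
```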
But this is nonetheless evidence consistent with our universe expanding, not just having a shape to it, but actually stretching and getting larger over time. So you can actually then work backwards and say, for how long has our observable universe been stretching? When did this stretching or expanding phase begin? You can work it backwards and say, given the rate of expansion that can be measured today, whether with Hubble's techniques or now with the more modern ones with space based telescopes, work it back, and it's consistent with the beginning of that expansion being not quite 14 billion years ago. Our own universe seems to have been stretching and getting larger and larger. So this gentleman here, whose name I already mentioned briefly, Georges Lemaitre, was really at the forefront of this work, starting in the 1920s and throughout the 1930s. You might notice in this photograph he's wearing a Catholic priest's clerical collar. Georges Lemaitre is, I think, a fascinating scholar. He was indeed an ordained Catholic priest. He was also an MIT trained PhD astrophysicist. And he was originally Belgian. He studied briefly in Cambridge, England with one of the first converts to general relativity, Arthur Eddington. Then he came to MIT to finish his PhD and then was finding many of these solutions to Einstein's field equations even before Einstein did. And, in fact, Einstein came in thinking he must be wrong, and then Lemaitre kept being right. So they became very nice colleagues. But Einstein started off by always being frustrated that Lemaitre found solutions that Einstein found abhorrent or disgusting. [LAUGHS] Yet Lemaitre showed they were at least mathematically self-consistent and gradually became more and more relevant in the light of data like Edwin Hubble's about the expanding universe. Lemaitre was one of the first to start thinking about playing that filmstrip backwards, to say if things are moving further apart from each other on average today, and if the universe in general is expanding today, then was it, in fact, smaller at earlier times? You can imagine playing a filmstrip backwards and watching these galaxies actually approach each other as you look at earlier and earlier times, heading back toward that roughly 14 billion year old starting point. So it was really Lemaitre who began writing about this, both in technical papers and soon in some very charming, more popular books, that if the universe is getting bigger today, it must have been smaller in the past. And what if you ride that all the way back? Was there a primeval moment, was there a single moment when all the matter of the universe, at least all the matter that we can see, was actually on top of each other, that the universe should've started in a very, very hot and dense state and been expanding ever since, maybe infinitely dense? But either way, there was a moment when all the stuff that we see in the sky should've been closer and closer and closer together and been stretching and expanding ever since. So it's Lemaitre who begins thinking about what comes to be known as the Big Bang model. He was calling it a primeval atom, that there was this initial fireball from a very, very hot, dense state. And he was very eager to understand the early stages of that expansion. [CLEARS THROAT] That's where things stood really through the 1930s. As we saw, there were a number of disruptions when much of the world descended into the Second World War.
And then soon after the Second World War, new groups began coming back to these somewhat old questions. Some of the newer groups had experience with things like the Manhattan Project and in general were much better versed in things like nuclear physics than had been known even in Lemaitre's day. So the field had expanded, and some of these folks had direct experience from things like the Manhattan Project, one of whom was actually George Gamow. So one of the most active groups soon after the Second World War was based at the advanced-- excuse me-- the Applied Physics Laboratory. We looked briefly at this when we talked about the Second World War. That was another one of these US based defense laboratories built in a hurry to try to advance a bunch of wartime defense projects. It was much like the MIT Rad Lab. At the Applied Physics Lab, they worked on things like proximity fuses and so on. Starting very soon after the war, there was some unclassified research going on at that research lab as well. And George Gamow was advising two younger physicists, Robert Herman and Ralph Alpher. Here's a famous composite photograph. They're making a not so subtle gesture to the fact that Gamow was widely rumored at least to enjoy his drink. So his head is emerging from the vapors of Cointreau, of a liqueur. So this was a trio that began coming back to some of these questions about the very early universe inspired by the writings of Georges Lemaitre but now with a lot more knowledge about high energy interactions among elementary particles as well. And in a series of really quite ahead-of-its-time, farsighted work starting in the late 1940s, this trio and a small number of other colleagues around the world began trying to fill in this picture, this primeval fireball picture. And in fact, it soon became known simply as the Big Bang model. They realized that if the universe was very hot and dense at early times, then the conditions in which these elementary particles would find themselves should be quite different from what we find commonly around ourselves today. In particular, at early times, the ambient energy, the energy that any random elementary particle would likely carry, should be very, very high. Temperature, after all, is just a measure of kinetic energy of motion. So you have a very high temperature. That's like saying that the average kinetic energy of each constituent, each elementary particle, was very high. It could've been, for example, higher than the binding energy of stable hydrogen atoms. So if that were the case, then every time some positively charged nuclear particle like even just a single proton would approach or be in proximity to a negatively charged electron, they might begin to form a stable electrically neutral hydrogen atom, which, of course, is just a bound state of one electron and one proton. But before they could form that single stable atom, some ambient particles from the environment like a single photon would have such high energy, would come and zap them apart because the average energy of everything was higher than that binding energy of the Coulomb attraction of a hydrogen atom. So at early times, you would have an electrically charged plasma, that the universe would not be filled with electrically neutral atoms because they literally couldn't form yet. Every time a single putative atom got close enough to begin to form, the average jostling of all this high temperature junk in its environment would blast it apart.
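That "average energy versus binding energy" comparison is easy to put numbers on. A minimal sketch, using the standard textbook value of 13.6 eV for the hydrogen binding energy (a value not quoted in the lecture itself); the loop temperatures are illustrative.

```python
# Compare typical thermal energy per particle, ~ k_B * T, with the hydrogen binding energy.
k_B_eV = 8.617e-5          # Boltzmann constant, eV per kelvin
E_bind = 13.6              # hydrogen ground-state binding energy, eV (standard value)

T_equal = E_bind / k_B_eV  # temperature at which k_B*T equals the binding energy
print(f"k_B*T reaches 13.6 eV at T ~ {T_equal:.2e} K")   # ~1.6e5 K

for T in (1e6, 1e5, 1e4, 3e3):             # illustrative temperatures, kelvin
    ratio = (k_B_eV * T) / E_bind
    state = "atoms get zapped apart" if ratio > 1 else "neutral atoms can survive"
    print(f"T = {T:8.0f} K  ->  k_B*T / E_bind = {ratio:6.3f}  ({state})")

# Caveat: in the real universe the enormous number of photons per atom keeps things
# ionized well below this crude crossover, which is why neutral atoms only win out
# at a few thousand kelvin; the simple comparison above just captures the logic.
```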
So photons then become trapped between charged particles. They begin to piece all this together. At early times in cosmic history, the universe should've been opaque. You literally wouldn't have been able to see anything because the mean free path of any given photon would be very, very short. The photons would each be trapped, kicked like soccer balls between all these loose electric charges. Light can't propagate in a charged plasma because it's always bouncing between these very nearby free electric charges. So they could calculate and say when would that effect go away? Well, when the average or ambient energy fell below the binding energy of a single hydrogen atom-- and that would happen at a distinct moment in cosmic history. So again, they were trying to flesh out Lemaitre's picture of an evolving universe. It wasn't just hot and dense at one time. But that would mean that certain kinds of interactions among elementary particles would dominate. And then those would change over time. In particular, as this entire hot ball of gas of charged plasma is expanding, the average temperature should decrease, much like the temperature of a gas inside a balloon will fall as the balloon expands. So as the volume of space stretches, as you have an expanding universe as Lemaitre showed could be possible, the average temperature of all the stuff inside it should fall. It should fall in a quantitatively calculable way, again, using Einstein's equations. So again, they put numbers to that and say, well, at a particular moment in time, now using the modern values-- they had the right idea but different values for some measurements-- we now calculate at around 380,000 years, after the start of that stretch after that primeval atom begins to expand, the ambient or average temperature of all the junk inside that universe should've fallen below this Coulomb attractive energy for neutral hydrogen. So whereas at earlier times, earlier than 380,000 years, you would have a charged plasma in an opaque universe, at that moment the average energy per photon or per elementary particle would fall so that you could actually begin to form stable atoms of hydrogen. So only at that time, a new phase in the universe would begin to unfold. The universe would be filled with neutral atoms of hydrogen. And now you have a mean free path for light that's arbitrarily long. Light can pass through electrically neutral matter. It does so in our own atmosphere, let alone in empty space. So once you can have stable, electrically neutral atoms like hydrogen atoms at a particular moment in the cooling evolution history of our universe, only then would you have things like photons traveling macroscopic distances. So after that time, when the temperature has fallen below about 10,000 degrees Kelvin, photons are free. And then they can now travel large distances. And their energy continues to redshift. They lose energy as the universe continues to expand. So the energy of those photons would've started at the equivalent of around 10,000 degrees Kelvin and now today would be much, much, much lower than that because the universe has been expanding and draining that average energy per particle over time [CLEARS THROAT] so that today the universe should be filled with this remnant glow. This is all work that they predict around 1948, '49, '50, Gamow, Herman, and Alpher. So they argue that today, this bath of remnant radiation from that early hot, dense state should be filling the sky in every direction.
It should be a more or less even distribution, a uniform glow. But instead of it being very, very high energy X-ray or gamma ray radiation, it should be redshifted all the way down into the low energy microwave band. So this became known as the cosmic microwave background radiation. And they say this should be filling the sky. It should be everywhere in a uniform pattern. Here's an aside. I like to think about this. That's very abstract. I think about it as the evolution of a dance party. It turns out I don't attend dance parties very often. This is what the internet tells me they look like. So this, I'm sure, is accurate. If you just Google "dance party" and throw away the bad pictures-- anyway. So at early times, the DJ's playing some raucous house music, and everyone's just jostling around. The average energy per dancer is very high. So I'm told. That's like the charged particles where the mean free path is effectively zero. No one could cross that dance floor. And then at a calculable moment, if the DJ knows what she's doing, she'll put on some slow music. And you start having couples form like in Harry Potter at the Yule Ball. Again, that's what I assume school dances are like. I don't know. So at some later, calculable moment, the average energy in the room begins to fall. The DJ reads the room. And now you start having couples so that you actually could cross the floor because now you have a mean free path to cross the dance floor, unlike that very exciting early universe phase when everything's just a mash or a mush. This makes perfect sense to me. You can tell me whether it's accurate or not. Anyway, that's the analogy for what Gamow, Alpher, and Herman were putting real numbers to to try to make sense of these different phases of the very early universe. A very high temperature, early dense state should be qualitatively different in its behavior than a later, lower energy state. So they predicted as early as 1948 that there should be this remnant glow from the Big Bang, all those photons that only then at 380,000 years after the Big Bang were able to start streaming freely. And the question was, where is it? Well, almost 20 years later, 15 years later, these two radio physicists working at Bell Labs, Robert Wilson and Arno Penzias, were using a new horn antenna sensitive to microwave and radio band frequencies. This was basically left over from the early telecommunications age. Soon after the launch of Sputnik, lots of folks like private companies wanted to get into the satellite communications business. Often it was just bouncing radio waves off of reflectors in the sky in low Earth orbit and then bouncing signal back to Earth. Of course, it became more and more sophisticated than that. And they were actually given time on this telescope not only to fine tune the corporate program for telephonics, but also to conduct actual radio astronomy. They were interested in the evolution of nearby galaxies. They were not doing cosmology. They were interested in radio signatures of astronomically nearby things. But they found this remnant hum in their electronics. This should've been among the most precise instruments available on the planet for that band of the spectrum. And they couldn't get rid of a residual hum.
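As for why "microwave" is the right word: a quick Wien's-law check on a blackbody at roughly 3 kelvin, the temperature that comes up just below, shows where its emission peaks. The constant and the 2.7 K figure are standard measured values, assumed here rather than taken from the lecture.

```python
# Where does a ~3 K blackbody peak? Wien's displacement law: lambda_peak = b / T.
b_wien = 2.898e-3            # Wien displacement constant, meter * kelvin
T_cmb = 2.7                  # kelvin, approximate present-day background temperature
peak_wavelength_mm = 1e3 * b_wien / T_cmb
print(f"peak emission near {peak_wavelength_mm:.1f} mm")  # ~1 mm, squarely in the microwave band
```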
At one point, they climbed inside that huge horn antenna on their hands and knees to scrub out what they graciously called special dielectric materials from pigeons who had made a nest in there-- that, of course, meant pigeon droppings-- because they figured that might be messing with the electronics of some extra insulating layer. That didn't make any difference. Finally, they were put in touch with a group at Princeton that was independently rediscovering many of the ideas from George Gamow and his group, actually at the time unaware that Gamow had even done these calculations. And they likewise convinced themselves about this cosmic microwave background remnant radiation. So now the group here, these folks are in Southern Jersey. They were close to Princeton. They all got together. They said, oh, what you found is actually the remnant glow from the Big Bang. This residual hum in your receiver consistent with an energy of about three degrees above zero, three degrees Kelvin, was really the leftover photons, that remnant hot radiation from the Big Bang that had been streaming freely for the next 14 billion years. And the average energy per photon had fallen steadily since the time when they were first released in that early dance party. They were very soon afterwards awarded the Nobel Prize for actually detecting evidence of the Big Bang. So let me pause there and ask any questions. Any questions on that stuff so far? So far so good? AUDIENCE: This might be a dumb question. DAVID KAISER: Go for it. AUDIENCE: But guess I'm just wondering how if you have photons that are like-- I understand that the photons from the CMB, they're from us looking in the past. But I don't understand why they're still there, why they haven't passed this already. I guess are they constantly being emitted? Or why is it still there basically? DAVID KAISER: Yeah. Thank you. That's actually a really good question. Basically, the idea is that they should've been everywhere at once. So the idea was the whole universe is filled in the early times with very high energy particles that are at early, early times too high energy to form stable, electrically neutral atoms. So you have this huge plasma everywhere, not just in one corner, not just over there in the sky, but everywhere. And likewise, there are photons everywhere. It was at least the idea of more or less uniformly distributed with no real pattern to it. And so from every part of space, from every single direction in the sky, those photons began to move freely at this single moment in time or a very short lived moment in time. So basically, the universe should've been filled with light, originally a very high energy. And then actually the energy of that light should be falling as the container expands, so as the average energy inside that balloon goes down. So we're basically just moving through a bath of light. So the photons aren't coming from that direction of sky the way we think of with point-like sources. There's a galaxy there, a quasar, a particular bright star in our neighborhood. The photons were everywhere. And it's like sitting in a bathtub full of these photons. And they're just losing their energy as the overall size of space continues to grow. So the idea was that there should be-- so it's not just that they should have a particular temperature. They should be everywhere in the sky was the idea, a uniform pattern. 
So if they could point that radial telescope in any direction, and they should find more or less the same signal, which is indeed what they were finding. So think about sitting in a bathtub that at first is like boiling hot water. And then you're sitting there while the water continues to cool down, but you're still immersed in that. The photons are coming from every single region of space. And all that's changing is their average temperature per photon is the main idea. Another way of saying it is it's-- it's hard to get-- [LAUGHS] hard to get one's head around. Believe me. But instead of saying the Big Bang happened over there and things are stretching out from it, the Big Bang happens everywhere. So every part of space that we could see today once had been at the Big Bang, so to speak. The Big Bang happened there. It happened at x equals 0, and x equals 1, equals 2. Any place we could put spatial coordinates to on this model, those all experienced the Big Bang at the same time. So it wasn't the Bang happened there and stuff was flowing outward from it towards us. Again, think about being inside a balloon or let's say a bathtub. So there should be a uniform set of properties filling that space. And we're just immersed in it trying to measure it as we flow through. I'd be glad to chat more about that. But I say all that as if that's obvious. It's not obvious. I'd be glad to chat more about it. But that's the kind of reckoning that people like Lemaitre got very comfortable with starting in the '20s and '30s. And it took other people a lot longer to try to get their heads around. Alex rightly puts in the chat that Edwin Hubble was actually pretty lucky. It's true. There's been a tremendous amount of controversy right to this day or let's just say earnest disagreement over what's called the cosmic distance ladder, which is to say it's actually pretty easy now, relatively easy to measure the speeds with which objects are moving away from us because that has to do with spectroscopy. We can measure these atomic transition lines with great, great accuracy. And so you can just do a Doppler shift to say, oh, I would expect that line to be here. It's over here. And the difference is a direct measure of the relative speed, the recession speed. What's not so easy is to figure out how far away from us that thing is right now, the actual distance right now. We can get the velocity from these very fancy spectroscopic-- I say we. The people who do it are actually astronomers. They kindly share their information. I don't know how to do it. But one can do it well. Now, whole teams can do it very, very well. What's hard to do is to calibrate what's called the distance ladder. How far away is that object right now, let alone how quickly is it continuing to move away? And so this was especially off compared to modern day values in Hubble's time. In fact, his value was-- well, let's see. 500 versus 70. So what we now call Hubble's parameter is roughly 70 in appropriate units. And he measured 500. So he measured a much quicker average rate of expansion than what we have mostly settled on today. But the basic picture was there. The picture was enough to get a small number of people to pay attention to Georges Lemaitre's otherwise quite obscure mathematical solutions. So Lemaitre. There's also a Russian and by then Soviet physicist Aleksandr Friedmann, who was doing very similar work. A few of these mathematical physicists were using Einstein's equations in ways that Einstein thought was awful, as I say. 
He was not at all a fan of this early on. But to the few people who paid any attention at all, it looked like just a mathematical curiosity. And it was only when paired with observations most famously from Hubble-- Hubble actually had a number of assistants, and other groups began contributing as well. But it was Hubble's data that got the biggest splash, that made the biggest impact on the community at the time. And regardless of whether we agree with the number he inferred of the actual rate, it seemed pretty clear to many people at the time that this was consistent with an actual overall expansion, with the change over time. And that made these seemingly pure mathematical solutions like Friedmann's and especially the follow up work by Georges Lemaitre, that made those mathematical solutions look much, much more curious and interesting than they had prior to Hubble's data. Although, Alex, you're absolutely right. If one took his value literally it looked like the universe would be younger than things inside it. And that's not good, right? So how could the Milky Way galaxy or even the planet Earth be older than the universe in which it resides? So it did lead to these kinds of puzzles. And that was eventually smoothed over or made more consistent by the 1950s. That took a while. It's an excellent point. Any other questions on that? OK. So this is a, I think, safe to say, remarkably successful set of ideas that eventually becomes called the Big Bang model, going all the way back to really Einstein, but especially people like George Lemaitre, a huge boost to try to give real quantitative teeth to these internal phases or intermediate phases by people like George Gamow and his younger assistants. And it starts to match observations really quite well. And yet we're not done. And people began to get worried about some of these features of the Big Bang model not only to cherish its successes. So for this next part, it's actually really helpful to adopt convenient coordinates. If we accept the notion that there is a universal stretching of space, then it's actually helpful to adopt coordinates that take that into account. So what astronomers have done really for generations is to adopt what are called comoving coordinates. And then a physical coordinate at any given moment in time would be scaled by this universal stretching, this so-called scale factor. So Hubble's data was consistent with galaxies sitting still at some fixed comoving location. You could plot down the Andromeda galaxy at r equals 7. And then it's moving away from us because all of space is stretching in between. So you can have galaxies that are more or less sitting still locally but are receding from us just as we are receding from them because the space in between is stretching. And you can accommodate that by inserting this one universal stretching function called the scale factor. So the distance between, say, the Milky Way and the Andromeda galaxy at any given time, the physical distance really would change. It is getting more distant over time because the space in between is stretching. So if you plot things in terms of what's called comoving distance, you scale out, you take into account that universal scaling, then it would just look like Milky Way at r equals 0, and Andromeda is stuck at r equals 7 in these convenient coordinates. Then you have to be a little more clever to adopt your clock to make things, again, actually really simple. This is actually called conformal time. 
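To put symbols to the bookkeeping just described, here is a sketch in standard FRW notation; the symbols a, r, and tau are the usual textbook conventions, assumed here, not the lecturer's own slides.

```latex
% Comoving coordinate r, scale factor a(t), conformal time tau.
\begin{align}
  d_{\mathrm{phys}}(t) &= a(t)\, r ,
  \qquad d\tau = \frac{dt}{a(t)} , \\
  ds^2 &= a^2(\tau)\left(-d\tau^2 + dr^2\right) = 0
  \;\Longrightarrow\; dr = \pm\, d\tau .
\end{align}
% First line: a galaxy "sitting still" at fixed r still recedes physically as a(t)
% grows, and conformal time is the clock rescaled by that same stretching factor.
% Second line: a radial light ray in (tau, r) coordinates runs on 45 degree
% diagonals, which is exactly why these coordinates make the plots look Minkowski-like.
```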
That might remind you of our beloved friends, the 19th century Cambridge Wranglers. At least my beloved friends. We looked briefly at the Wrangler stuff, at these conformal mappings. We're really doing a Wrangler-ish thing here, a very similar idea: to adopt coordinates for time, for the rate at which we think clocks should tick, that are convenient and that also take into account that changing stretching rate over time. And so what we call conformal time, often labeled by the Greek letter tau, is basically a variable tick rate that is really convenient because then we can start making a dynamical, changing spacetime look just like the spacetime of special relativity, look just like a Minkowski diagram. Why do we do all this funny stuff I had to wave my hands around, for comoving distance and conformal time? Because when we make spacetime plots where we use comoving distance rather than physical distance for the spatial part and conformal time, this variable clock rate rather than what's called cosmic time or sometimes simply called physical time, then actually we get back to some simple looking arrangements: light rays, once again, travel on 45 degree diagonals. If we were to try to take into account the changing stretching rate of space, these paths of light rays would become these very complicated, bent, twisted paths. But in these special, very simple coordinates, light just travels on 45 degree diagonals. We sit still at a fixed value of comoving location. And then light comes to us at 45 degrees. That sounds very abstract. We do that all the time. This is just an example in space time of a conformal mapping of a sort that we all use every day like a Mercator projection. So what this does is it inserts certain kinds of location dependent artifacts like Antarctica looks huge on a Mercator projection or Greenland for that matter. The surface area of Antarctica is nowhere near the surface area of Africa even though it looks so much larger. But that's because we've had a stretching. And the amount of stretching increases toward the poles if we do a two dimensional spatial conformal map. We're doing the same thing here. We're stretching our time coordinate so time gets more and more stretched out towards earlier times. We can take that into account. We know how to use a Mercator projection. And it makes other relationships remarkably easy. Likewise our conformal maps here make the paths of light, for example, very easy to follow. When people begin using these convenient coordinates, they also go back to some questions about or features of the Big Bang model, and they start having new questions. So we talked briefly last time about Robert Dicke. He was, again, a veteran of some of the wartime projects. He was an expert in microwave electronics, a radar Rad Lab veteran. And after the war, he went back to Princeton and about 10 years after that became very interested in general relativity and cosmology. And he began retooling his whole research group around questions related to things like the Big Bang. We saw that in 1961, he published this alternative to Einstein's own theory of gravity, the Brans-Dicke theory of gravity, which we looked at last time. It's the same Robert Dicke. So he introduced this conundrum in 1969, so soon after the discovery of the cosmic microwave background radiation when people began to take the Big Bang model more and more seriously, including Robert Dicke.
He goes back to what I mentioned briefly before, that according to Einstein's equations, as clarified by people like Alexander Friedmann and Georges Lemaitre, you can have these very simple geometries. At any given moment in time, the shape of space could either have a positive geometry where it closes back on itself like the surface of a sphere, an open or negatively curved geometry, or a flat geometry. What controls which geometry you have is this ratio of the actual amount of stuff per volume, the actual density of matter and energy per volume, compared to some critical value. So it became common after Dicke to just introduce the Greek letter capital omega simply to refer to that ratio. What's the ratio of the actual stuff per volume in our universe compared to that critical value? And only for that Goldilocks solution, where we have exactly the balanced amount of stuff per volume, would you expect to have space obey Euclidean geometry on large scales. If omega is larger than 1, you have more stuff per volume. You have a positive curvature. If omega is smaller than 1, you have less stuff per volume than the critical value. You expect an open or hyperbolic geometry. So far so good. Then Dicke plugged this quantity into Einstein's own equations. So this is a time dependent quantity. After all, we're talking about densities. The density should depend on the volume. It's stuff per volume. If you have a universe expanding over time, the volume should be changing over time. So the density should presumably go down. The density should fall. If you have a fixed amount of stuff in a space time and you stretch that space, the density falls. If you throw four marbles into a bucket, you have a density of four per liter, let's say, for a one liter bucket. If you double the size of the bucket, then you have still only four marbles, but now a larger volume. The density's gone down. And so what Dicke demonstrated is that according to Einstein's equations, this solution that looks like the Goldilocks solution, the spatially flat solution where you have just the right amount of stuff per volume, is actually an unstable solution. It's an unstable equilibrium point of Einstein's equations. A universe should generically become more and more different from flat over time. And if you just plug in the notion that the density is falling as 1 over the volume, the volume should go as the cube of the spatial dimension, so as a cubed. So in fact, this part comes just from Einstein's equations, 1 over a squared times rho, a being that scale factor, like the radius at any given moment in time, and rho being the density of stuff per volume. Well, if rho is going as 1 over a cubed, then this whole quantity here should grow with the scale factor. So the difference from a flat universe, the deviation from spatial flatness, Dicke shows, generically should grow over time. If a universe started out being close to but not identically equal to flat at early times, it should look nothing like spatially flat at later times. Depending on the sign, it could either become more and more like a hyperbolic saddle or more and more like a closed sphere. What it should not do is stay looking anything like the flat Euclidean solution.
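A compact version of Dicke's argument, written out in the standard notation (the symbols here are the usual textbook ones, assumed rather than copied from the lecture's slides):

```latex
% Friedmann equation and the deviation from spatial flatness:
\begin{align}
  H^2 = \frac{8\pi G}{3}\,\rho \;-\; \frac{k}{a^2},
  \qquad \Omega \equiv \frac{\rho}{\rho_{\mathrm{crit}}},
  \qquad \rho_{\mathrm{crit}} = \frac{3H^2}{8\pi G}, \\
  |\Omega - 1| = \frac{|k|}{a^2 H^2} \;\propto\; \frac{1}{a^2\rho}
  \;\propto\; a \qquad \text{for matter, where } \rho \propto a^{-3}.
\end{align}
% Any small deviation from Omega = 1 grows along with the scale factor:
% exact flatness is an unstable equilibrium, exactly as described above.
```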
So if you extrapolate this backwards to early times, the time of nucleosynthesis, for example, or even to the times of when the cosmic microwave background radiation was released, you find that you have to fine tune, you have to have some reason why the amount of stuff per volume was not just in the neighborhood of that critical value, but, in fact, was exponentially close to it you. As Dicke points out, you have these exponential fine tunings for the universe to be even remotely close to a spatially flat or Euclidean-like behavior today, which is looking more and more consistent with observations by the '60s and '70s and '80s. Even if it wasn't compellingly equal to flat, this parameter omega was, say, 0.3. It wasn't 10 to the minus 70. It wasn't 5. It was 0.3 or 1.1. It was in the rough vicinity of 1 to the extent that the measurements could converge the observations. And yet a measurement of anywhere near 1 today suggested that it had to have been exponentially close to 1 at early times. And that seems like this very strange or unexplained fine tuning. If the universe has been stretching for 14 billion years, what set it to be so exponentially arbitrarily close to spatially flat given that is an unstable equilibrium point? That became known as a flatness problem. That was introduced by Bob Dicke in 1969. Dicke actually was really thorough. He was thinking about other things too. So 10 years later, he introduced the next big real conundrum for the Big Bang model. And this one he did with his younger student, by that point, his collaborator, James Peebles. You might know Peebles' name. He actually just received the Nobel Prize in physics a little over a year ago for much of this work. Peebles had done his PhD with Dicke at Princeton. So 10 years later, Dicke and Peebles introduced the second big conundrum. And this one's called the horizon problem. So now let's go back to this very convenient, conformal diagram. So I'm going to use the same funny coordinates I mentioned before. I'm mapping the history of the universe using comoving distances. So I've taken into account that universal stretching of space and that variable clock rate, that conformal time. So now, according to this very lovely picture that people like George Gamow and Ralph Alpher and Robert Herman put together, we should be receiving these microwave photons today from literally every direction in the sky. This goes back to Steven's question. Imagine you have a three dimensional version of this. Everywhere in the sky, you see these photons heading toward us. Some of them are heading away from us. But we're just immersed in the bath. The ones that we see have been heading on trajectories toward us since they were first released, since that moment 380,000 years after the Big Bang when photons could begin streaming freely, when they could travel macroscopic distances because the universe is filled with electrically neutral matter. There's some moment after which the photons begin traveling large distances. They've been traveling that whole time until some of them enter our antennas and our satellites today. So remember, in these coordinates, we sit still at some fixed value of comoving location, r equals 7 or whatever you'd like. You can call it r equals 0. And then light travels on these lovely, convenient 45 degree diagonals. So from this corner of the sky, it's like pointing your telescope over there. 
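A rough numerical version of that fine-tuning, assuming (as a simplification) that the deviation grows in proportion to the scale factor all the way back to the time the CMB was released; the redshift of about 1100 for that era is a standard value, not one quoted in the lecture, and going back further, where radiation dominates, only makes the tuning more severe.

```python
# If |Omega - 1| grows roughly like the scale factor a, then a value near 1 today
# forces it to have been extremely close to 1 at early times.
omega_deviation_today = 0.7      # e.g. Omega = 0.3 today means |Omega - 1| = 0.7
growth_since_cmb = 1100.0        # a_today / a_recombination ~ 1 + z (standard value)
deviation_at_cmb = omega_deviation_today / growth_since_cmb
print(f"|Omega - 1| at recombination had to be <= ~{deviation_at_cmb:.1e}")   # ~6e-4
# Extrapolating to nucleosynthesis or earlier pushes the required closeness to 1
# to many more decimal places, which is the "exponential fine tuning" in question.
```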
From this corner of the sky, looking in the opposite direction, we receive this uniform bath of photons that have been traveling towards us this whole time. Well, here's where Dicke and Peebles start raising some questions. We received this remarkably uniform signal on the sky today from opposite sides-- matter of fact from any direction that we point our radial telescopes both on Earth or now from satellites. There's a comoving distance delta r across which we receive these photons. And they look remarkably uniform. However, those photons were emitted at a finite age when the universe was only a short portion of its current age. It was only 380,000 years old as opposed to nearly 14 billion years old. Light can only but travel at a fixed speed at least according to Einstein's theory. So if the universe has only been around for so long, then light could only have traveled so far. That's called the horizon distance. What's the furthest possible distance that a light beam could've traveled traveling at that constant speed of light for as long as it was able to? So even though an actual physical light beam couldn't have traveled because the universe was optically opaque, any information, any physical signal, any force, anything that is limited by Einstein's speed limit should only be able to travel up to and limited by the speed of light. That means there's a furthest distance according to which any information or influence or physical force or anything could've traveled since the Big Bang up to the time when that radiation was first emitted. That's called the horizon distance. And as they show, for any finite age, for any universe that has this beginning a finite time ago, at any moment in time, there's a furthest distance that anything traveling at the speed of light could possibly have gotten to yet. That's called the horizon distance. So what they show is that at the time that the microwave background photons began their journey when they first began to free stream, the universe was still so young that the furthest possible distance that any causal influence should've been able to travel was a tiny fraction of the distance across which we actually measure remarkably uniform signals on the sky today. So the horizon distance was actually a factor of 100 shorter than the smoothness scale across which we receive remarkably uniform information. How could that be? If this portion of the sky never had a chance to become in any kind of physical equilibrium or even exchange a single tweet, to have absolutely no information of this part of the sky about what the average conditions are in this part of the sky, how could they have become indistinguishable in the signals we receive today? That became known as the horizon problem. And this was heightened as more and more data came in, more and more careful observations of the microwave background radiation. It became clear that signal really is uniform to one part in 100,000. It's remarkably uniform to a tiny fraction of a percent. It was at 1,000th of 1%. That signal is uniform across every direction we look in the sky today, even though it's coming from all these regions that when that light was emitted couldn't have possibly had any physical interaction with each other or had even a single status update saying, I'm going to release my photons at this temperature. You should do the same. So the time that the light was emitted, the smoothness scale was much, much larger than the causally self-connected scale. That becomes known as the horizon problem. 
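In formula form, the horizon distance just described is the standard integral below; the notation is assumed, not the lecturer's own.

```latex
% The farthest distance any signal moving at the speed of light could have
% covered by time t, quoted as a physical distance at that time:
\begin{equation}
  d_H(t) \;=\; a(t)\int_0^{t} \frac{dt'}{a(t')} .
\end{equation}
% Dicke and Peebles' point: evaluated at t ~ 380{,}000 years, this comes out
% roughly a factor of 100 smaller than the patch of sky across which the CMB
% is observed to be uniform.
```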
So why on Earth would the CMB be so uniform today to this exponential accuracy from regions of sky that were never ever in causal contact? Plus, why on Earth do we have this distribution of scales that I started off with in the beginning? Where does large scale structure come from? Oh, I'm sorry. I skipped ahead. Sorry. This is my anthropomorphic analogy for why the horizon problem should give you pause. This usually works better when we're meeting in person in an actual classroom. But imagine we're all sitting in a nice big comfortable room socially distanced. And everyone has come in the room, and I've stolen your cell phones. Sorry. You'll get them back. I've temporarily taken your cell phones and your laptops. I've blindfolded you all and put in earplugs and handed every single one of you a ping pong ball. I won't actually do this, but just imagine. And then without any prior coordination, I say, please throw your ping pong ball at the same speed at the same time to one part in 10 to the 5 without any chance to coordinate, without anyone saying, ready, set, go or you being able to say to your neighbor, here's my plan. Let's coordinate. That's what it's like to have these causally disjoint regions emitting these photons not just with the same energy, but with the exact same energy released at the exact same time. That's the point of this series of ping pong balls distributed through space. Now, as I was saying, what about the lumps? This entire Big Bang model has still had to assume by fiat with no real explanation that there was some initial lumpiness, there's some inhomogeneity that over time could then grow to become this cascading hierarchy of scales, which is why we'd have super clusters of galaxies separated by huge voids and all the rest. So the Big Bang model had some amazing successes but some pretty stubborn quandaries as well. So I'll pause there again and ask for questions about that. Any questions on the shortcomings of the Big Bang as people began articulating them throughout the '60s and '70s? Feel free to jump in or use the chat or either way. And again, there's more on the quantitative details of that in that optional primer you can find on the Canvas site. So Fisher asks, is it useful to think of the universe as spherical still? Yeah. These pictures get pretty hard. So basically, we can imagine choosing some point of interest and drawing some sphere around it and asking what's happening in that region? Will that region itself grow over time? And is that region representative of some larger sample from which it's taken? So we can still ask about the behavior of some randomly drawn sphere even if the global shape of space might not be spherical. You can imagine filling a perfectly rectilinear Euclidean space with a bunch of representative spheres whose behavior we could study. Now, it could be that the entire universe has a global shape to it. We could be living in a closed universe where on the largest scales it actually looks like a sphere. But we could, again, do the same trick. We could still ask about it locally, let's fill it with some representative shape and ask about the average behavior within that shape. So as for what we mean by spherical: we can continue to use spheres usefully even if we live in a flat universe as long as we're careful to distinguish a sample volume versus the global properties. It's a good question. Alex asks, what about the monopole problem? Yes, very good. So I left that one out.
That was something that exercised Alan Guth in particular because-- maybe that's getting ahead. Maybe I'll talk a bit about that actually in the last part of the class. That's a good question, Alex. Any other questions about that, about the shortcomings of the Big Bang? OK, let me press on, because these are actually good questions. Alex is already giving us a segue to the next part, which is great. So let's go to that last part for class today. I love this photograph. This is to me priceless. This is what Alan Guth looked like circa 1980. I think MIT used to have a law that you had to dress so as to match your own blackboard. I think we relaxed that rule. But anyway, he was blending into his surroundings. His room, like the universe, should've been perfectly homogeneous and isotropic. Anyway, here's a very young smiling Alan approximately 40 years ago. He was wondering about these questions as well as we'll see in a moment. He was, however, coming at this having been trained at MIT in particle theory. He was not trained in relativity or cosmology. He was much more of the same generation as Tony Zee, whose work we talked about briefly in the previous lecture. When Alan was in graduate school, he was studying high energy physics and therefore not gravitational cosmology. He wound up doing a series of postdoctoral studies. He, like Tony Zee, accidentally heard some talks that led to some of his early work in gravitation. In particular, he heard some lectures by Robert Dicke when Dicke was on the lecture circuit talking about these curiosities or shortcomings of the Big Bang model. And that really stuck in Alan's mind. He was not originally asking questions about the cosmos, but he was haphazardly encountering some of those questions, again, very much like Tony Zee around the same time. What Alan was interested in was things like spontaneous symmetry breaking and the Higgs mechanism. That was all the rage for a lot of particle theorists in the early and mid '70s by then. And he was wondering about shapes for the potential energy function of that Higgs field that might have an extra structure. There might be a kind of dimple to that energy function. We could imagine the Higgs field getting temporarily stuck at some metastable state at the origin of its own potential energy function where there's a barrier in any direction, but it's not an infinitely high barrier. So according to quantum theory, that Higgs field should eventually decay to the genuine global state of lowest energy, anywhere along this so-called vacuum circle. And Alan was realizing upon hearing Bob Dicke's lecture that that could have remarkable cosmological implications. If there were a time, even a short time, during which the matter that's filling the universe could be temporarily stuck in a metastable state in which it had some non-zero potential energy, but it couldn't release or relax that energy arbitrarily quickly because it's stuck in this metastable so-called false vacuum, then that could have implications for the global shape of space and not just for the behavior of elementary particles. I highlight the date. Thank goodness for historians. Alan is unbelievably anal retentive and writes everything down and has pretty neat handwriting. A lot of people write things down and have egregious handwriting, and there are more people who don't write things down at all. Alan, although he likes to blend in with his blackboard, writes things down with neat handwriting. And so I note the date.
41 years ago to the day-- today is December 7-- to the day, he was up very late, as is his wont, piecing together his ideas about these Higgs-like functions with these funny metastable states where the energy density of the universe could get trapped temporarily at some large non-zero value. And he was putting that together in his mind with the lectures he had literally just heard from Robert Dicke not long before. And he calls this a spectacular realization. So my request number 3 to you scientists is both to write things down and to use neat handwriting. And when you do something cool, tell us. Tell us that you're excited and put it in a box because when we're going through your notebooks, honestly, most of it's just garbage. We just don't care. But he actually did us a favor and put it in a box. Pay attention to this. His notebook's actually now on display in the Adler Planetarium in Chicago, literally this page of notes. He realized that this kind of feature could actually lead to a cosmologically distinct kind of evolution, that if you have a period of time even briefly during which the energy density, the amount of stuff per volume, gets stuck, gets stuck at some non-zero value and can't change quickly, then the energy density could remain constant. If the energy density, the stuff per volume, remains constant, then very counterintuitively, you have a runaway growth in the size of space. That stretch function, the scale factor, going back just to Einstein's equations, will grow exponentially quickly, will have a period of accelerated expansion during which the universe won't just get bigger. It'll get bigger faster if you have this counterintuitive, even temporary, phase during which the stuff per volume stays constant even as the volume grows exponentially. That could happen, Alan began wondering, if you have this weird state of matter that was at least hypothetical and of real interest to particle physicists, because they were worried about things like symmetry breaking and the Higgs mechanism. That does not happen with marbles in a bucket. It does not happen with electrons or quarks or protons. It happens for certain kinds of elementary particles, including things like these very simple fields like the Higgs field or Higgs particle, which for other reasons could have some funny shape to their potential energy function. Alex, I'm going to skip the monopole problem, but it comes from this discussion as well. And I'd be delighted to chat more about that if you'd like afterwards. But in the interest of time, Alan was worried about some exotic features from these Higgs fields that can get twisted up in some topological shape. But he was really just wondering what happens if the universe gets stuck even temporarily such that the matter that dominates it, fills it, can't release or relax its potential energy arbitrarily quickly. That's called a metastable state. And if you go back to Einstein's equations exactly in the form that he began learning from Bob Dicke from that series of lectures, then you have these very different solutions for the average size of space. It grows exponentially quickly. And as Alan and others were quick to confirm, this happens very naturally, or at least it's a kind of feature that one stumbles upon readily, when studying these exotic Higgs-like fields from particle physics. It does not happen with spin one half particles, for example.
It does not very easily happen even with photons or gluons; it happens most naturally with these Higgs-like scalar particles. Soon after that, actually within a few months, a number of other colleagues, some in the United States, some in what was still then the Soviet Union, were finding similar behaviors in even more generic or simple arrangements. So Paul Steinhardt was working with his then PhD student Andy Albrecht. They were at the time at Penn, University of Pennsylvania. Meanwhile, in Moscow, Alexander Starobinsky and Andrei Linde were working quite independently of Steinhardt and Albrecht. And they were, again, realizing that if you study the dynamics, the behavior of these exotic quantum fields like a Higgs field in a stretching space time, if you take that stretching of space seriously, then you don't even need to cook up those exotic Higgs-like potentials that Alan was first thinking about. Quite generically, you'll have a damped oscillator behavior. If you look at the evolution of some field like the Higgs field, its self-consistent change over time, its equation of motion, includes a damping factor like a damped oscillator. This comes from the fact that space itself is stretching. And that alone it turns out is enough to find these self-consistent solutions in which the field moves very slowly. You can imagine it rolling down this hill, rolling down-- sorry-- rolling down slowly as a function of time because it's like a frictional, overdamped oscillator. Again, there's more of that in the primer if you're curious to see more. Even that's not literally fixed behavior-- it's literally changing, just changing slowly enough. That will lead to a slowly enough changing potential energy trapped in that field that you'll still get these nearly exponential-like solutions. So you can have inflation happening even more generically, as these folks began to find very soon after Alan, even without worrying about a very particular shape for the potential energy function, just when you think about these fields, like a Higgs-like field, in the early universe. So then you come back to those quandaries that Alan had first heard about from Bob Dicke. And you ask, how would these things look if you now take into account this very early, very brief phase of exponentially fast stretching of space? Go back to the equation that Dicke had first written down for the flatness problem. But now we have a phase during which the scale factor grows exponentially-- e to the something times t, so it grows very fast in time-- while the energy density remains nearly constant. So instead of that falling with volume, it temporarily remains nearly constant. Now you'd see this expression, the deviation of the universe from spatial flatness. That deviation should rapidly fall to 0. The universe today should look indistinguishable from a flat universe because the difference from flatness was driven to 0 dynamically. By having even a very brief phase of exponentially rapid accelerating expansion, you drive the universe towards a flat shape rather than having it flow away from a flat shape. And again, I go through that in more quantitative detail in the primer. So the latest measurement from the Planck collaboration using a satellite is that this parameter in our actual universe today is 1 to better than a percent level accuracy. Now, let me pause. I know I'm going to run long today, but I just can't help myself.
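In the same notation as the flatness sketch earlier, the fix works out like this, assuming a nearly constant energy density during the inflationary phase:

```latex
% With rho ~ const, the Friedmann equation gives H ~ const, so
\begin{equation}
  a(t) \propto e^{Ht},
  \qquad
  |\Omega - 1| = \frac{|k|}{a^2 H^2} \;\propto\; e^{-2Ht} \;\longrightarrow\; 0 .
\end{equation}
% The deviation from flatness is driven exponentially toward zero instead of
% growing, which is the dynamical explanation described in the paragraph above.
```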
When I was in graduate school not super long ago-- kind of long ago-- I was friends with a bunch of observational astronomy grad students in the dorm. And they were basically me. They would tease me. Like, you work on inflation, but we know that omega is 0.3. So you're a loser. Why do you waste your time on this? Some were nice, but a lot of them were actually kind of mean. I was like, oh, no. Go look for more stuff. You're missing two thirds of the stuff out there. Just try again. So maybe I was mean too. But they were meaner. They were more of them. So when I was in grad school, it looked very much like omega was 0.3. And if you squinted at it, you could maybe make it 0.35. It was not 1 according to the best observations around the world. Today, [LAUGHS] it's 1 to better than a percent level accuracy. And they can just stick it. So I like sending them holiday cards saying, thinking of you. Omega's 1. See you next year. Anyway, this is a remarkable shift even over the course of 20 to 25 years, let alone since the days of Lemaitre and Hubble. OK. What about the more subtle one that Bob Dicke and James Peebles worked out called the so-called horizon problem? Go back to our funny map, our conformal map. Now, remember, the horizon problem was originally phrased because we thought there was an origin to all of time, this Big Bang surface, at tau equals 0. And if you add up the time between tau equals 0 and when those photons begin to travel freely, there was only a fixed horizon distance. It was much smaller than the smoothness scale that we could measure empirically. Well, if inflation happened, there should've been a very brief period before what had previously been called the Big Bang. So we're adding more real estate along our time axis. We're unfurling a little bit extra time that hadn't been taken into account in the standard Big Bang model. So if you allow for more time before what you would call the Big Bang, you can continue tracing those past light cones further and further back and say there should be some time earlier than what we were starting from during which all the past light cones from the entire region we see today would indeed have overlapped. So then it would at least be plausible there's at least now a causally self-consistent mechanism by means of which the universe could have similar conditions everywhere because they actually were causally connected at a time before we had previously taken into account. So therefore, you could have the horizon distance, the maximum causal distance, becomes actually much larger than the smoothness scale that we measure. So now the horizon distance is larger because there was more time that we hadn't yet taken into account. Any kind of causal influence would've had more time to propagate than we had previously accounted for. So now you get the ratio at least in the right order. You can have a horizon distance that is larger. In fact, it could be much, much larger, exponentially larger than the smoothness scale we observed. Now, remember, this is a funny coordinate. It takes an unbelievably short amount of physical or cosmic time, the time that we measure on our wristwatches, to accomplish that. In fact, it takes about 10 to the minus 36th of a single second. That's all it takes for this inflation. If the universe expanded exponentially just for that sub, sub, sub blink of an eye, then all of a sudden, the causal structure of the entire observable universe is turned upside down. 
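A quick arithmetic sketch of how much stretching that blink of an eye buys, using a representative overall growth factor of about 10 to the 30 (the "30 orders of magnitude" figure quoted just below; the exact number depends on the model, so treat these as illustrative values):

```python
import math

# How many e-foldings of expansion does a factor of 1e30 correspond to?
growth_factor = 1e30
n_efolds = math.log(growth_factor)            # natural log of a_end / a_start
print(f"e-folds of expansion: ~{n_efolds:.0f}")   # ~69

# If that happens in ~1e-36 seconds, the implied (roughly constant) expansion
# rate during inflation is enormous:
duration_s = 1e-36
H_during_inflation = n_efolds / duration_s
print(f"H during inflation ~ {H_during_inflation:.1e} per second")
```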
You basically erase the horizon problem because there actually was a time when all the stuff we see would've been in causal contact, very comfortably in causal contact. And the universe in that tiny blink, a billion, billion, billion billionth of a second, grew by about 30 orders of magnitude. It didn't keep doing that. It wasn't doing that during the rest of this Big Bang evolution. There's this tiny blip. And taking that into account suddenly rearranges the causal ordering and basically addresses these out of order causal conundra. So here's a plot in more familiar coordinates, going back now to time measured in seconds and space measured in meters rather than comoving distance and conformal time. You can see that as you trace backwards from today, instead of going back to saying the universe at early times should've been on the order of 1 meter, you say at those early times, the universe was actually exponentially tinier than you had thought. It grew exponentially quickly to map onto where we see today. And so the universe was so tiny, it could very easily have been in a kind of equilibrium or at least a causally self-connected state. So during this tiny blink of an eye, the universe grew exponentially quickly and then mapped onto the standard Big Bang evolution. And that alone is enough to address the flatness and horizon problems. It turns out it does more than that as well. This is what gets I think even more exciting and what has occupied much of the community ever since. That Higgs-like field that was driving inflation, that was very slowly evolving in its potential should've been subject to the uncertainty principle just like all matters should be. And this became clear to people about a year or so after Alan and Paul and Andy and all those folks began writing the first papers on inflation. By 1982, '83, pretty early on, people realized that not only would you have a gross feature of the evolution of those exotic particle physics-like fields. They should also have quantum wiggles because how could they not because they should be subject to the uncertainty principle. These are quantum fields evolving in a dynamical spacetime. I still can't believe it. But it is the case that you can study the evolution of those quantum fluctuations with a remarkably simple looking oscillator equation. I've hidden all the hard stuff in this term. I made them look easy. But you can actually take into account that frictional damping, the stretching of space, and the reaction of that jittering trampoline back on the evolution of matter. We can solve these equations to unbelievable accuracy and realize that we should have a prediction today for tiny seeds, tiny unevenness, in a very early distribution of matter and energy because the universe was filled with quantum fields. And as we've seen a number of times now, quantum jitter, the uncertainty principle, means that we can never specify the energy of that field to arbitrary precision at any time. At any given moment, that field would be subject to slight, slight quantum fluctuations in the distribution of energy across space. That starts to yield this tiny little fluctuation in why there's slightly more matter and energy in this region of space than the other one. So now those very tiny quantum scale fluctuations get stretched as the whole universe stretches. 
As the scale factor grows exponentially, you have the average length between the distance between crests of those tiny wiggles get stretched to galactic and even super galactic scales all within that blink of an eye. So now you have a reason why there's a primordial inhomogeneity. And you also have a reason why it's on the right length scales. It's going to seed galaxy formation, not mess around with your atoms like a Lamb shift because you have matter in an early quantum state as the universe is stretching exponentially. So you can go back to now a much more modern picture of a very tiny lumpiness captured in that microwave background radiation. This is from the Planck satellite team. It's exaggerating with false color imaging the slight one part in 100,000 offsets between the regions of sky that are slightly higher energy photons in the CMB and slightly lower energy photons. And the idea now is that the regions of the sky from which these photons were emitted are telling us about the very, very tiny unevenness in the distribution of matter and energy at the moment those photons were emitted. There's a tiny, tiny little excess gravitational potential. There was a little more stuff per volume at that region. So the photon then had to spend a little more energy climbing out of that gravitational well. We should receive it today as being a little less energy than average, very slightly less. Meanwhile, other photons would've come from regions that were slightly evacuated, a little less dense than average. So the photons we receive today had to spend less energy gravitationally to overcome that very tiny gravitational potential. They should have slightly more energy on average today than the average. So we can actually map the quantum fluctuations which leave an imprint in this dynamical fabric of space and time that then maps to this distribution of these very tiny unevenness in the CMB. They have been mapped by three generations of satellites above the ground with increasingly precise ground based measurements as well. And each of these came out 10 years apart with an increase of about a factor of 30 in the angular resolution of the sky. I was a senior in college a long time ago when the first of these released their data in September 1992. I was a senior that year. And the [INAUDIBLE] team led actually in part by our own Rai Weiss, who later became very famous for his work on gravitational waves-- Rai was one of the science leaders for this early mission, a NASA mission. They were the first ones to measure these tiny, tiny fluctuations on the order of about one part in 100,000 but over huge scales. It was like they had very poor eyeglasses. It was very fuzzy, very poor resolution. Roughly 10 years later, another NASA mission called WMAP was able to increase the resolution by a factor of 30. And they released their data in 2003. Then the European Space Agency collaboration called the Planck satellite released their data 10 years after that starting in 2013 with another factor of 30 in the spatial resolution. And so we can now make plots like this. The solid green line is the generic prediction from the simplest models of inflation, what's the pattern of bumps and wiggles on the sky you should see today-- it's basically fancy Fourier transform more or less-- what's the power on different angular scales that you should see today. And you can actually measure many, many quantities, many features of that distribution. The red dots are the actual observations from Planck team. 
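To put that "one part in 100,000" in everyday units, here is the simple arithmetic, using the roughly 2.7 kelvin background temperature from earlier in the lecture:

```python
# Typical temperature contrast of the CMB hot and cold spots.
T_cmb = 2.7            # kelvin, present-day background temperature (approximate)
contrast = 1e-5        # "one part in 100,000", as quoted above
delta_T_microkelvin = T_cmb * contrast * 1e6
print(f"typical spot-to-spot variation ~ {delta_T_microkelvin:.0f} microkelvin")  # ~27 uK
```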
And in many cases, the error bars are expanded so we can see them with our naked eye. This is such a precise set of measurements that, in fact, sometimes we have to make the error bars larger. So now not only do we know we live in a universe that is indistinguishable from flat, as inflation suggests we should, but the actual pattern of those wiggles, the pattern of the very slight early unevenness in the sky, matches predictions to, again, better than a percent level accuracy. I find that astonishing. Let me take a few more minutes. There's one more set of things that were found, or I should say that were predicted. So inflation should not only make these early primordial density perturbations, where the photons should be slightly more or less energetic depending on the quantum fluctuations of that Higgs-like field. There should be primordial gravitational waves as well. These are now much like the waves that Rai and his huge team found locally from the collision of, say, black holes. There should be primordial gravitational waves excited by inflation as well. And these are waves that actually stretch and squeeze space in a two-dimensional pattern. So these are a mathematically more complicated structure. You should have this periodic squeezing and stretching, and then the pattern inverted by 90 degrees. So a version of these was found by the LIGO collaboration and announced early in 2016. These are not primordial. These are from local effects like the collision of black holes in the relatively nearby universe. Inflation says similar kinds of things should've been happening in the earliest moments everywhere in space through this very violent, rapid stretching of space. So go back to that dance party I was telling you about before. At the moment when the electrically neutral atoms start to form, if there were this sea or bath of gravitational waves, then the high school auditorium-- imagine that's where this dance is happening-- should've been subjected to this periodic, very particular pattern of squeezing and stretching. So while the atoms are forming, gravity waves would be rippling through them. That should yield a characteristic twisting or curl pattern of polarization in that cosmic microwave background radiation. Not only should there be slightly hotter and colder spots in the sky. If you zoom in by a factor of another 20, you should actually see a corkscrew pattern, that the hotter and colder regions actually have this twisting pattern, which really is like that container of spacetime being stretched and squeezed as the gravity waves ran through it. In March of 2014, a team using the BICEP telescope at the South Pole announced they had actually measured exactly that corkscrew pattern. This is from their now famous or infamous paper. We had a celebration here at MIT. This is me [LAUGHS] cheering on Andrei Linde and members of the experimental team. We had a toast with non-alcoholic cider. I want to be clear, it was the middle of the afternoon. We were getting drunk on the ideas, but not on hard cider. Unfortunately, pretty soon after that, it turned out the BICEP team had measured data consistent with local noise. Many of my friends on the BICEP team managed to find out the Milky Way galaxy is dusty, which we knew. [LAUGHS] So basically, the signal they had hoped to measure was actually swamped by foregrounds they had not yet been able to control. And this was found by a number of very sophisticated analyses soon afterwards.
So it remains an open question to this day whether these primordial curling, twisting patterns really can be detected. Maybe they're of such small magnitude that they'll evade our detection. We don't know. There are ongoing efforts to this day. BICEP is now souped up, and they have a much more sophisticated series of telescopes. There's another team also at the South Pole, and there's the Planck satellite. There are new efforts being built in the Atacama Desert, at very high altitude in Chile. So stay tuned. We'll hopefully learn more about that final prediction from inflation before too long. Let me wrap up. Cosmic inflation arises from types of matter and interactions that we now know exist, that are the heart and soul of this particle cosmology community, things like the Higgs boson. And it addresses several of these long-standing conundra about the standard Big Bang model and makes specific predictions for what we should see on the sky today, including very minute statistical predictions for things like the cosmic microwave background radiation. And the simplest models fit to unbelievable accuracy, despite what my mean dormmates used to say in the mid '90s. Now we have extraordinary agreement with many, many of these predictions, albeit not the final one. So why is the universe lumpy? Why is this cascade of scales? Because spacetime is wiggly, and matter is jiggly. Now, there's an alternate hypothesis, my final set of slides. I mentioned this last time. And I just want to make it clear. I mentioned that Alan Guth has been working on this since around 1980, since December 7, 1979, in fact. As you also know, he's won many awards, including the award from The Boston Globe for the messiest office in Boston. This was published in The Globe at the time that he won first place. They also published these photographs. These are all shots from his office at the time. I used to have to walk through that just to try to meet with my PhD advisor. It was not OSHA certified. So an alternate hypothesis for why the universe is so messy is actually because Alan's been generating the mess in his own office, and it's expanded to cosmic scales. So I'm going to close with that. And I'll be glad to stay a bit longer if people have questions. Again, I'm sorry for running late. Feel free to drop off if you need. Any questions on that? The photos of Alan's office are on Canvas. So if you want to study that part of today's lecture, it's probably the most important lesson you'll ever take away. You can study those at your leisure as well. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_4_Waves_in_the_Ether.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: OK, so today we're going to be talking about waves in the ether-- a great topic, I think. Welcome back to 8.225/STS.042, Physics in the 20th Century. We'll come back to Maxwell and his really quite extraordinary work from the 1860s, where he unites, as we've seen a few times, really, the study of optics, of light waves, and brings that to be very intimately connected to the study of electric and magnetic fields in the ether. And then we'll be looking, for much of that first section, at how some very smart people tried to generalize Maxwell's work in the years after Maxwell's publications or, indeed, after his treatise on electricity and magnetism from the 1870s. So we'll look, in that case, at some of the work by Hendrik Lorentz, a remarkably influential mathematical physicist from the Netherlands. Then we'll look in the second part at one of the kinds of experiments that was making a lot of researchers around many parts of the world really scratch their heads about how to make sense of this question of waves in the ether. And we'll see what was going on with that experiment. That was, of course, the reading. The main reading you had for today was one of the publications by Michelson and Morley about their work. And then we'll come back in the last part for today and see how Hendrik Lorentz responded to the Michelson-Morley experiment as well as to the more mathematical aspects of Maxwell's work. So that's where we're going today. And as I say, for some of this stuff I'm going to skip some steps in the derivations. And the lecture notes will hopefully help fill those in. So we've seen a few times by now that starting in 1865, James Clerk Maxwell came to the conclusion that light waves were nothing other than transverse undulations-- that is to say, a certain kind of wave of electric and magnetic fields propagating in the ether. And we all now have the possibility to purchase our own set of Maxwell's equations on a T-shirt. Some of you might actually already own one, in which case, I'm quite jealous. So what I want to do first is walk through, at least briefly, why did Maxwell think that? What was the basis for Maxwell's conclusion that light was nothing but these electric and magnetic waves in the ether? So first, you may recall from previous studies of electromagnetism that there are four different kinds of fields that appear here on this side, on the left-hand side of these otherwise quite succinct Maxwell's equations. We see a D, a B, an E, and an H. Actually, as you may remember, those aren't all independent of each other. And in fact, D, which is often called the displacement field, is, in most instances, just proportional to E, the electric field. And likewise, H, which is sometimes called the magnetizing field, is also, under many circumstances, just proportional to the magnetic field B. So really, what we're looking at for most applications of interest are four equations governing actually two physically distinct quantities-- the electric and magnetic fields. Now, these terms of proportionality in modern parlance: epsilon, this Greek letter epsilon, that relates the displacement field to the electric field, that's what we would call the dielectric constant. Sometimes it's called the permittivity. We saw, we had a little indication in the previous lecture, that to Maxwell and his circle, that was really like a spring constant.
It was a way of characterizing the responsiveness of either the ether itself or some other material that might be placed within the ether, like some insulating or dielectric material. Likewise, we will call mu-- we today call mu the magnetic permeability. For the Maxwellians, it was also, again, a different kind of-- basically a spring constant, a kind of elastic feature of the medium. Now, what's important is that for Maxwell and his generation, within the ether, these took on simple numerical values. They just became constants with reference values. This was a number with a particular value, and that was the value of, say, the permittivity of the ether or the magnetic permeability of ether. So epsilon just became some constant epsilon 0. Mu likewise became some constant mu 0. Moreover, Maxwell realized, if you look at the source terms, the right-hand side of some of these equations-- sorry-- then you see that there are some terms referring to either the electric charge density rho or an electric current capital J. Well, Maxwell began to apply his own new equations before they were written in this very nice convenient form that we inherit from Oliver Heaviside. Maxwell began applying his equations to a region of space that was filled only with ether, a region of what we would call empty space. To Maxwell, it was never truly empty because, of course, it was filled at least with this all pervasive luminiferous ether. But he wanted to, in particular, consider a region where there was no bunch of no clump of electrically charged matter-- so rho would vanish in that region of space. There was no electric current flowing, and so J would vanish-- capital J. Then, it takes only a couple of steps, the kinds of steps that he was very good at as second wrangler at Cambridge. It was only a few steps to then rearrange these four equations on the T-shirt into actually two identical equations, to a pair of equations-- one for the electric field and one for the magnetic field. And so you could manipulate those four equations by taking the divergence of one and so on and the curl of the other, the kind of manipulations that wranglers are very good at. And he boiled down his four equations in this simple case of considering the electric and magnetic fields in regions that were filled only with the ether. He could form two versions of the same equation. Both the electric and magnetic fields would obey the same form of the equation. Again, just as a reminder in case you haven't seen it before, this upside down triangle-- sometimes called nabla-- that's a shorthand for the second derivatives in the spatial directions. That's like d by dx squared plus d by dy squared plus d by dz squared. How is this quantity varying in space by taking the second derivatives of spatial variation? And what Maxwell found is he could relate those spatially varying quantities to the way that same quantity-- either E or B-- would vary over time. So you see, related to the second derivatives of time with some proportionality that turned out to be the product of those two spring constants-- the dielectric constant and the magnetic permeability when they take their simple constant values in the vacuum. So this is why Maxwell started getting quite excited in 1865. He realized largely because of his Cambridge Wrangler or math tripos training that this form of the equation is very, very familiar. This was, as he already well knew, the form of a wave equation for a traveling wave. 
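In modern notation, the reduction just described comes out as the same wave equation for both fields; this is a sketch in today's conventions rather than Maxwell's original component-by-component form:

$$ \nabla^2 \mathbf{E} = \mu_0 \epsilon_0\, \frac{\partial^2 \mathbf{E}}{\partial t^2}, \qquad \nabla^2 \mathbf{B} = \mu_0 \epsilon_0\, \frac{\partial^2 \mathbf{B}}{\partial t^2}, $$

valid in a region with no charges (rho = 0) and no currents (J = 0), with the product of the two "spring constants" mu_0 and epsilon_0 playing the role of 1 over the square of the wave speed.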
Again, I go through this in a little more detail in the lecture notes if this is new for you. We'll just go through. We'll take a quick look here, unpack that a bit further. So every good Cambridge Wrangler knew, in fact, even some of the not such good Cambridge students knew by this point that there was a general form to represent a wave equation for, say, a traveling wave. And it would take this form where you have the second derivative over space-- nabla squared or del squared. You have second derivatives in time, and you have this proportionality given by the speed with which the wave was traveling. Say v was the speed of the wave. So if you have some wave-like quantity, like, say, the amplitude of the wave, it would vary over time and space. And it would vary according to this differential equation where the key part was v is the speed of the wave. Now we go back-- oh, let's just simplify. Consider motion in a single direction of space. So instead of considering x, y, and z, let's just consider, to make it simpler, motion only along the x-axis. Then we can very quickly solve this equation where you've all probably done many times. And the solutions in general will be a series of sines and cosines. It's just a periodic oscillation. It is, indeed, wave-like behavior, as we'd expect. So I've introduced some notation here again, which might be very familiar for you. K is really just an abbreviation. It's called the wave number. It just goes inversely with the wavelength, as you might have seen before. The lowercase Greek letter omega-- this one looks kind of like a w. That's the omega letter. That is the frequency. In fact, it's the angular frequency. And that just goes inversely with the period of the wave. And so, again, in a way that's probably familiar for you, when you have a function that satisfies this kind of wave equation, then the solutions can be thought of as waves in both space and time. So here, what I'm doing is presenting a snapshot at one moment in time-- let's say t equals 0. How is that wave spread out over space? If you can imagine just to freeze time and look to your left and your right, you'd see some regions where that wave reached a peak, at a crest, other positions in space, other values along the x-axis where the wave reached a trough or a minimum, and it would periodically rise between crest and trough. The distance between neighboring crests or peaks was simply the wavelength lambda inversely proportional wave number. Likewise, we could stand at a particular location in space-- let's say, the origin x equals 0-- and just clock in how high is the wave as it passes us over time. So now we're saying staying in one position in space and clocking how the amplitude changes as a function of time. We once again see this very simple oscillating pattern where, at one moment, we see a peak of the wave passes by. Some time later, at that same position in space, we see a trough and vice versa. And the time duration between peaks is the period, capital T, inversely proportional to that frequency omega. That's very likely familiar. If that's not so familiar, there's a little bit more write-up in this lecture notes. But that's how wranglers had learned to handle things like the traveling wave mathematically for a long time in preparation for the Tripos exam. Now, let's go back to what Maxwell had found, as I showed you in the previous set of slides. 
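For reference, before returning to Maxwell's result, the one-dimensional traveling-wave form just described can be collected in one place (A is simply the amplitude of the wave):

$$ \frac{\partial^2 f}{\partial x^2} = \frac{1}{v^2}\,\frac{\partial^2 f}{\partial t^2}, \qquad f(x,t) = A\sin(kx - \omega t), \qquad k = \frac{2\pi}{\lambda}, \quad \omega = \frac{2\pi}{T}, \quad v = \frac{\omega}{k}. $$

Substituting the sine solution gives -k^2 f on the left and -\omega^2 f / v^2 on the right, so the wave travels at speed v = omega/k, with crests a distance lambda apart in space and a time T apart at any fixed location.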
He found an equation for the electric field and an equation of identical form for the magnetic field when he applied his four equations. In fact, it was more complicated with all the components-- what we would now consider the four simple equations for electromagnetism. When he applied them to a region of only ether, no nearby charges or currents. And he found an equation of exactly the same form that would be obeyed by both the electric and magnetic field. And not only that, when he plugged in the best-known values for these kinds of parameters of the ether, these spring constants, what we would now call the dielectric constant and the magnetic permittivity or permeability-- excuse me-- he found the numerical coincidence that the product of those two constants actually was very, very close to what was already known to be the speed of light. In fact, he found it went 1 over the square of that speed of light. So this quantity looked a lot like 1 over the square of the speed of that traveling wave as it would look for a generic mathematical treatment of waves. Now, the speed of light had been measured for 200 years by this point, both in optical experiments, but often in astronomical ones. So the speed of light was actually a constant, a number that Maxwell knew about, he'd heard about. So he was prepared when he began substituting in these seemingly totally separate parameters or constants-- epsilon and mu. And he found really, to his surprise, this numerical value that their product went 1 over the speed of light squared. That's what drives him to conclude, starting in 1865, that light waves-- waves that travel at the characteristic speed c-- were literally nothing but traveling waves of electric and magnetic fields in the ether. These were these transverse undulations that he began writing about. That's what gets Maxwell to that conclusion. I want to give this a little more example of his thought process for why he concludes that light was nothing but these waves of electric and magnetic fields in the ether. OK, so now Maxwell's whole analysis there, as really quite succinct and elegant as it was, had made a number of simplifying assumptions. You might have noticed he was assuming that both the emitter of those waves, the source of those waves, and also the receiver of them were both at rest, not only with respect to each other-- neither was moving with respect to the other-- they were both at rest with respect to the ether. Let's imagine that we have some source-emitting light that's just sitting absolutely still in the ether, and it's going to generate those sine and cosine-like waves of electric and magnetic fields. And let's say, at some other location, fixed at rest in the ether is something that's going to measure or receive them. So an obvious next question to ask is, can you generalize Maxwell's treatment of optics to the case where either the sender or the receiver, or maybe both, are moving not only with respect to each other, but even more important for these folks, moving with respect to the ether, with respect to that light-bearing luminiferous ether? And that was a challenge that many, many leading mathematical physicists recognized one of the next obvious things to try downstream from Maxwell's result, and this became basically the challenge of the electrodynamics of moving bodies, a title that we're going to come to many, many times. For much of today's class, we're going to look at how a particular mathematical physicist, Hendrik Lorentz, tried to make sense of this. 
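Before turning to Lorentz, the numerical coincidence Maxwell noticed is easy to check with modern reference values for the two constants; this is just a sketch of the arithmetic, since the nineteenth-century measured values differed slightly from today's:

```python
import math

# Modern SI reference values; Maxwell worked with the measured values of his day.
mu_0 = 4 * math.pi * 1e-7        # magnetic permeability of the vacuum, H/m
epsilon_0 = 8.8541878128e-12     # permittivity of the vacuum, F/m

# The wave speed implied by Maxwell's equation: v = 1 / sqrt(mu_0 * epsilon_0).
wave_speed = 1.0 / math.sqrt(mu_0 * epsilon_0)

print(f"1/sqrt(mu_0 * epsilon_0) = {wave_speed:.4e} m/s")  # about 2.998e8 m/s
print("measured speed of light  = about 2.998e8 m/s")
```

Running this gives roughly 3.0 x 10^8 meters per second, the already well-measured speed of light, which is the coincidence that convinced Maxwell that light was an electromagnetic wave.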
So he took on this task, as did many of his peers, this task of trying to generalize Maxwell's treatment to the case of moving senders or receivers of light. Now, on the one hand, this looked like a pretty straightforward challenge. Lorentz, like every trained mathematical physicist of his day, knew how to perform things like coordinate transformations. In fact, it was the Galilean coordinate transformation. Galileo had worked it out in the early 1600s. It was hardly cutting-edge research. The idea was, as you may remember or you may know, Galileo imagined how to relate the coordinates of, say, someone riding on a boat moving at a constant speed down a river, let's say, and how that person might relate her coordinates to those of an observer standing on the shore or the side of the lake, standing still, they would have said, so that you could relate the one set of coordinates to the other by taking into account the relative motion. If we imagine that we're the observer on the shore watching this boat drift by, then we can relate how the coordinates of the person on the boat-- let's say, the origin that person calls x equals 0-- how that will shift with respect to our coordinate system. We realize the boat is drifting at some speed v. So at later moments in time, the position x equals 0 on the boat will have drifted away from what we call x equals 0 on the shore. And we just take account of the relative motion with this speed v. This is what's called the Galilean Coordinate Transformation, and this was indeed quite familiar. It got codified by Newton in the later part of the 1600s. This was really pretty old, literally textbook stuff by the time people like Maxwell and Lorentz came along. But then Lorentz found something that he did not expect. When he used the totally standard way of relating coordinates when there's relative motion between, in this case, say, the emitter and the receiver of light, and he put those into Maxwell's equations, which related, remember, derivatives or variations in space to rates of change over time, he got something that looked a bit more like a mess. So when he tried to apply the coordinate transformation to Maxwell's equations for the propagation of an electric field in the ether, he got a form that no longer looked quite so simple as Maxwell's original form. Why would that have bothered someone like Lorentz? Because that meant the solutions to this transformed equation would no longer behave as simple sines and cosines. And now, you're all on mute, but I can imagine you're screaming, much like this great Edvard Munch picture. That's how I feel. How could it be? Why was it such a headache for Lorentz? Why was it so alarming? Because people had been measuring light on Earth for a long, long time and had already figured out that light behaved more or less like sines and cosines. In many, many, many applications here on Earth, in laboratories, in increasingly precise optical laboratories on Earth, really throughout the 19th century, people had measured properties of propagating light waves, or more generally electromagnetic waves of various wavelengths. And they routinely found that these waves were behaving very, very much like sines and cosines, exactly as Maxwell had found. Why did that matter? Well, Lorentz knew that he was doing these measurements, or his colleagues were, on the planet Earth. He knew the Earth was moving through the ether. It was moving in its orbit around the sun. There's no reason to think even the sun was at rest in the ether.
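As an aside in modern, one-dimensional notation, the trouble just described can be sketched as follows. The Galilean transformation and the chain rule give

$$ x' = x - vt, \quad t' = t \;\;\Rightarrow\;\; \frac{\partial}{\partial x} = \frac{\partial}{\partial x'}, \quad \frac{\partial}{\partial t} = \frac{\partial}{\partial t'} - v\,\frac{\partial}{\partial x'}, $$

so the simple wave equation becomes

$$ \frac{\partial^2 f}{\partial x'^2} = \frac{1}{c^2}\left( \frac{\partial^2 f}{\partial t'^2} - 2v\,\frac{\partial^2 f}{\partial x' \partial t'} + v^2\,\frac{\partial^2 f}{\partial x'^2} \right), $$

and that mixed x'-t' derivative is the "mess": its solutions are no longer the simple sines and cosines. Lorentz's eventual fix, described just below, was to also transform the time coordinate, his "local time," which to first order in v/c reads t' = t - vx/c^2, chosen precisely so that the equation keeps its simple wave form.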
The Earth was certainly very likely to be not the center of the universe anymore in the years well past Copernicus. So it seemed all but certain that the planet Earth was in some state of motion through the ether with this light-bearing ether. So we should have to take into account this coordinate transformation because the light waves being either emitted or received on Earth were not being treated-- were not at rest with respect to the ether. They were on Earth. The Earth is moving, so we're in some relative motion with respect to the ether. And yet, in optics labs around the world, people really did measure properties of light as if they were just simple sines and cosines. How could you square these two things? So Lorentz came up with one part of what turns out to have been a two-part response. So we're going to look at both of his responses today. This is going to take up much of our class today. His first response was really mathematical. He was, after all, a remarkably accomplished mathematical physicist. So he published this first in the 1890s, this first part. He just introduces a new time variable. He calls it local time. So if you recall in the Galilean transformation-- let me bring it back up for a moment-- the standard way to relate coordinates, I should have emphasized earlier, you take into account a change between x and x prime. You watch the boat drift, so you see that the boat observer's x equals 0 has moved with respect to your x. But you assume, as Galileo assumed, as Newton assumed, that there's no reason to change your time coordinate. Your clocks shouldn't be affected by the fact the boat's drifting down the river. That was the assumption. So typically there was no change in the time coordinate, even when you were considering relative motion. Well, Lorentz, in a move of really desperation-- cleverness, but also desperation-- said, well, what if we revisit that? What if we introduce a new time that he calls local time t prime where that also becomes a function of both the time of the original coordinate system, the location, say x, and also the relative speed of motion between them. If he then reverse engineers what form of transformation would he need for this new time coordinate t prime, then he could actually save the form of Maxwell's equations when he transformed both x and t-- not only x-- in a particular way, he could make sure that the Maxwell's wave equation was invariant under this new set of coordinate transformations. He could save the form of Maxwell's equations. Now, Lorentz always, always thought this was just a trick. He absolutely did not think this was a physical change to how clocks would measure time. He did not think that this was a real effect in the world. It was a clever way to try to relate coordinate systems that he left really as mathematical. In fact, he himself calls it fictitious in his own papers. This was just a way to try to say, let's refer everything back to the ether rest frame, and we'll have to do a little more complicated mathematical jujitsu to understand why light should continue to behave like sines and cosines. So I'm going to pause there and ask if there are any questions on that part so far. Any questions on any of that? I see the chat has a few items. So indeed, ether has been spelled many ways. I see that in the chat. That's definitely true. And thank you, Tiffany and Julia, for weighing in on that. Good question. 
He was definitely fixated on this mathematical coordinate called t, and that had been the time coordinate since Galileo's time, if not before. He was intentionally adjusting how he would compare the time coordinate for, let's say, one state of motion versus another. So that part wasn't so surprising, either for himself or his colleagues. It was the t coordinate. T stood for the time read by a clock. That's what Galileo meant. That's what Newton meant. That's what Maxwell meant. What he wanted to be very careful about was to make sure his readers knew he didn't take this literally. It was really a mathematical move to say, if we somehow have to relate our own local coordinates to the ether rest frame, which is the physically relevant comparison, as far as they were concerned-- these were disturbances in the ether-- then he wasn't sure what to make of the fact that the t coordinate for this person might not be the same t coordinate for someone at rest with respect to the ether. He would agree it was all about how we measure time on a clock. He just was convinced it was a kind of illusion. It was a fictitious thing. But if you do that, then you can make Maxwell's equations look like the way one wants to. Ah, good. Gary asked the question, why did Einstein later say he built his theories on Maxwell rather than on Lorentz? That's a great question. I can invite you to ask again in about one or two class sessions. We'll talk quite a bit about that, actually. Yeah, that's an excellent question, Gary. But I'm going to pause on that one. Any other questions on what Lorentz was doing in the 1890s? What was the nature of the challenge or this question about the behavior of that wave-like equation that Maxwell found? It looked so pristine and powerful until you asked the next obvious question, what if either the sender or receiver of light is in motion with respect to the ether? Uh-oh, the usual tricks for relating coordinates seem to lead to some real difficulties. That was the main point of that first part. Any other questions on that? If not, I'll jump into the next part. OK, good. Let's go on now to the next part, which is all about the reading you had for today, the Michelson-Morley experiment. So we've seen many, many times by now Hendrik Lorentz, like pretty much all active mathematical physicists and experimental physicists of his day-- Lorentz knew, just knew, that light propagated in the ether. These were these transverse undulations in the ether. So another natural question to ask is, could you ever detect the fact that our Earth, our own home laboratory, was moving through the ether? As I mentioned on that slide a few slides ago, the Earth was moving around the Sun. That was very clearly established for this generation long ago well past the age of Copernicus. So there's no reason to think that the Earth is at rest in the ether. Maybe even the sun isn't at rest. There could be all kinds of complicated motions that we on Earth are going through with respect to the universal ether. So could we ever detect our own motions through the ether based on the behavior of light? Light are these waves in that all-pervasive ether. We're moving through the ether. We measure properties of light all the time. Could we recognize that we're actually in motion? And the idea was an analogy much like you might think of these days if you're lucky enough to get some fresh air and go for a bicycle ride. 
So first, imagine you step outside on a day like today, where, at least where I'm sitting, I can see out my windows there's very little breeze or wind. If I were to walk outside right now, I would feel no particular wind in my face. The air is still. If I just stand still outside, I don't feel anything on my face because the atmosphere is not in any kind of state of motion. If, however, I were to either run really fast or get on a bicycle and pedal really fast, I would feel a wind on my face. I would feel the breeze. Even if I had stopped pedaling and stood still again, that breeze would go away. I would feel the effect of my motion through the medium a headwind, not because the medium was in motion, but because I was moving with respect to the medium. Even on a still day with no breeze, the leaves on the trees are still, the blades of grass are not being blown around. If I move rapidly through that medium, I will literally feel it. I'll feel a breeze on my face. I'm belaboring the point probably quite familiar to you, also like you might see on this cartoon. So the question was, could we measure the effect of our own headwind? Could we feel that effect of our own motion through this physical medium-- the medium, in this case, being the all-pervasive elastic luminiferous ether if we, like the bicyclist, are in a state of motion through it? So that was a challenge that many people thought was a neat question to ask. One of the first to really tackle it in a very systematic experimental way was this remarkable figure named Albert Michelson, who began in earnest to try to tackle this in the 1880s. I should just say, I think Michelson is really, really amazing and fascinating as an individual. He was born to a very poor Jewish family in Central Europe on the border of what would soon become the borderland between Germany and Poland. When Michelson was two years old, his family emigrated to the United States. They actually made their way to California in the midst of what became known as the Gold Rush in the late 1840s, early '50s. And so his father became a merchant to try to supply the people chasing their fortune in the Gold Rush in California. They moved around from one little tiny mining town to another. Michelson actually wound up getting a fellowship to the Naval Academy, so he got a free university education because he qualified for Annapolis. And that's where he really fell in love with physics and mathematics and began thinking about optics, in particular-- more to be said, totally crazy, fascinating story. Michelson then got a fellowship to study after his undergraduate days in Germany. So he was able to learn some of the latest innovations in electromagnetic physics from some of these experts who had studied with people like Hendrik Hertz and others who were deeply, deeply enmeshed in Maxwell's equations and the propagation of light in the ether. So Michelson sets himself this task-- could he design an experiment or a device with which one might measure this headwind of the Earth's own motion through the ether? And what he did was design a really quite novel, very ingenious device called an interferometer. And there's a lot on this in those separate lecture notes. You get a little taste of it from the Michelson-Morley reading for today as well. So we'll talk a bit about what this instrument was like. So Michelson's work is also interesting not only because of his own quite dramatic life story. He was a championship boxer when he was at the Naval Academy. 
I often joke, perhaps unfairly, he was probably the only championship boxer who also won a Nobel Prize in physics. If you know a counterexample to that, please let me know. He's the only one I know about. anyway, interesting guy. Not only was he interesting because of that, he was also interesting historically because the work that we'll look at in this part of today's class was really among the first examples of research in any of the natural sciences, including physics, that really started getting the attention and earning the respect of some of the very elite scientists in Western Europe. There were a few others in Michelson's day, but this was pretty new. The United States was still seen, often rightly, as a intellectual scientific backwater, at least as reckoned by the experts in Western Europe. Michelson's work begins to change that. And in fact, as I mentioned, or as you see here, he becomes the first physicist based within the United States to win the Nobel Prize. He wins this Prize in 1907. And he wins it for the kind of work that we'll talk about now. So to get our heads into what was going on with this interferometer, I like to think of this analogy in terms of swimmers. And again, we'll go through it here. Some of the extra steps of these derivations, you can find in the separate lecture notes. So what Michelson was really doing-- as you know from the reading, Michelson, especially with his partner, Morley, was using the interference of light waves to conduct tests and experimental tests of really unprecedented precision and accuracy. So before we talk about interference of light waves, let's talk about a race that we can imagine being conducted by two swimmers in a river. So we'll sit with this analogy for a few moments here. So imagine we have a race where one swimmer is going to leave from point A, swim directly to point B a distance L, and then swim directly back from B to A. That's swimmer 1. Swimmer 2 is going to cover the exact same round-trip distance, but she's going to swim across the river instead of directly up the coast here. So swimmer 2 has to set out from point A, swim across the river a distance L to point c, and then swim back to point A again. The question is, who will win? Which of those two swimmers will get back to point A first? And the other thing to bear in mind is that the river is flowing. There's a current in the river of a constant speed, v. So there's a downward current here of a constant speed. So let's consider swimmer 1. Swimmer 1 is leaving from point A, swimming directly against the current to get to point B. So for that half of her lap, she's swimming at a net speed of c minus v. As you can see in the text here, each swimmer is required to swim at a constant speed with respect to the water. And swimmer 1 is swimming at speed c with respect to the water, but the water itself is flowing back against her with a current of speed v. So her net speed is measured, say, from the shoreline, from, say, the judges sitting here on the shore or the banks of the river. Her net speed for that part of the journey is c minus v. So she takes a time to get from A to B. That's the distance traveled divided by her speed. So her time to go from A to B is just the distance L divided by her net speed, c minus v. That's probably pretty clear for you to go through the reasoning. On the way back, now she gets the boost. Now she's swimming with speed c with respect to the water. 
But the water itself is flowing and helping her out because she's now going with the current. So her net speed as measured, say, from the shore or the riverbank for the return journey is actually quicker. It's now c plus v, same distance L. So just a few lines of algebra-- and again, you can go through that a bit more on your own or with the notes. Her round-trip time, her lap time to go from A to B and back to A again turns out to have this relatively simple expression. It depends, of course, on the distance traveled. There's a 2 here because she has to go there and back. She covers total distance 2L, her speed with respect to water and then, it turns out, going directly against the current and directly with the current with a little algebra leads to this correction factor here, 1 over 1 minus the square of v over c. That's how long it takes swimmer 1 to complete one lap. Now, what about swimmer 2? Swimmer 2 knows that she has to accommodate the fact that she's going to be encountering this current of the river. So she has to leave point, and she has to actually land at point C to qualify for her race. That is a distance L across the river. To get there, she has to swim at this diagonal. So while she's swimming across the river, the current will nudge her downward. So by the time she crosses the river, she winds up at point C. If she just set out and tried to swim straight across river, she'd get knocked off course by the current. So she has to travel along the hypotenuse of a right triangle. So she's traveling along this path, capital R, during the time that she's heading from A to C, that tAC, she's drifting down in this direction a total distance of v times that time. That's the distance that she'll that she'll be nudged in this direction by being in the water for this duration of time and being nudged by the current at that speed. So thanks to the greatest invention of ancient Greece-- which I often say was neither democracy nor the epic poem, it was, in fact, the Pythagorean theorem-- we now know how to relate the squares of the lengths of these sides of the triangle. The direction the length of this side R squared is, of course, equal to the sum of the squares of the other two sides of the right triangle, this side squared, and this side squared. We have a little extra information we can use because we know the length of this line segment R is the speed with which she swims with respect to the water times the duration that it takes her to cross the side. So now we can plug in a new value for capital R as well as for this drift displacement, vtAC. And now, again, just a few lines of algebra, which I know you can do, and there's a little bit more steps in the notes, but just a few steps. We see that her lap time, her total time to go from A to C and back to A again, looks pretty similar to swimmer 1, but not identical. It also is proportional to the total length. That's a 2L. It, of course, involves her speed with respect to the water and has some correction due to the flow of the current. OK, so I'm going to put these both back up side by side, the lap time for swimmer 1, the lap time for swimmer 2. The question is, who wins the race? Who gets back to point A first? Now, it might not be so clear if you just eyeball it. So let me define a very helpful quantity, a quantity you might have seen before, a quantity we'll see many, many times in the coming lectures, including throughout today. Typically it's abbreviated by the Greek letter gamma. 
So I'm just going to define gamma as a convenient combination of v and c. I'll define it as 1 over the square root of 1 minus quantity v over c squared. And here's a plot of what a gamma looks like as the ratio v over c gets closer and closer to 1. In fact, it diverges. It actually becomes infinite if v is exactly equal to C. I just truncated my plot here. But you can see it stays it stays pretty close to 1, but greater than 1 for any nonzero speed. And it rapidly becomes very large at larger speeds compared to the speed C. So let me ask again, who wins the race? If I do a little extra algebra, making use of this new quantity gamma, we can rewrite the total lap time for swimmer 1, who goes from A to B and back. Her time is proportional to gamma squared. Meanwhile, the total lap time for swimmer 2 goes across the lake and back, across the river and back. Her lap time is proportional only to gamma. Now I just emphasized for you, gamma is always greater than or equal to 1. If there's any current flowing, if v is nonzero, the gamma is a quantity bigger than 1. So now hopefully it's a little more clear to see swimmer 1 will actually lose the race. Swimmer 1, time tABA, is actually a larger quantity in general than the time for the return journey for swimmer 2. tABA is actually bigger than time tACA. There's a time difference. That means there's a clear winner. They have different times to complete the laps. And it turns out, the person who goes directly against the current and then directly with the current will take overall more time to complete her lap than the swimmer who goes diagonally across the river and is with that drift displacement from the current. Not only is there a difference in time that depends on this quantity gamma, if we expand gamma for speeds that are small compared to c, if v over c is a small quantity, then we can just do a Taylor expansion in that small quantity. We see the difference in time is what's called a second-order effect. That's just a fancy way of saying that the difference in time goes like the square of that small quantity, the ratio of v over c. So there should be a winner if there's any current flowing in the river. If v is nonzero, then there should be an absolute clear winner in this race. It should go like the second order in the ratio v over c. So now, why did I talk about this crazy swimming race? Because that's pretty close to what Albert Michelson designed for light waves. And this is his instrument called the interferometer. It's really quite brilliant. So the way the interferometer worked was instead of swimmers, he had a bright source of light of nearly-- it was nearly monochromatic. He was using sodium arc lamps in the beginning. So it was shining very brightly with basically one characteristic color or wavelength of light. That was important. He shines that light from a single source onto a half-silvered mirror. As the name suggests, that's an object that lets about half the light pass straight through like a window and reflects about half the light. So it's not totally reflecting. It reflects, on average, half the light that falls upon it. So from this single source of light S, this sodium arc lamp, half the light will pass through like it just sees a clear window and travel a path, a distance L, until it encounters a fully reflecting mirror, this top mirror up here. That mirror then will reflect all of the light that hits it. 
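Collecting the race results from above in one place, since they carry over directly to the two arms of the interferometer:

$$ t_{ABA} = \frac{L}{c-v} + \frac{L}{c+v} = \frac{2L}{c}\,\gamma^2, \qquad t_{ACA} = \frac{2L}{\sqrt{c^2 - v^2}} = \frac{2L}{c}\,\gamma, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, $$

so the difference between the two lap times is

$$ \Delta t = \frac{2L}{c}\,\gamma\,(\gamma - 1) \approx \frac{L}{c}\left(\frac{v}{c}\right)^2 \quad \text{for } v \ll c, $$

which is the second-order effect mentioned above.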
That light will then bounce back to that half-silvered mirror, a portion of which will, again, now be reflected and hit some screen. That's like swimmer 1 starting at point A, traveling to point B, and coming back to point A. Meanwhile, half the light that encounters this half-silvered mirror will not pass through. It'll actually be reflected. And so this becomes like swimmer 2. So half of the incoming light that is incident upon this half-silvered mirror will be deflected along path 2. It will travel a fixed distance L until it encounters a fully reflecting mirror at the end of that path. That will reflect the light back. Half of that light will then pass through the half-silvered mirror. So now we've set up a race. We have a single light wave that starts off at the same time. Swimmers 1 and 2 both start at point A at the same time and set off on their distinct journeys. If there's any difference in the distance they need to travel-- or excuse me-- if there's any difference in the time it takes them to travel along either path 1 or path 2, those light waves will come back no longer in phase. The crest of one will no longer line up with the crest of the other. You could get interference. So if the light waves that travel these distinct paths take different times, just like the swimmers took different times to complete their laps, the waves will come back out of phase with each other. And you should see interference when the two waves can be joined back together on some screen. You should see a characteristic interference pattern. The amount of offset will depend on the amount of time delay between traversing these two paths. So that was the idea. Michelson first built a 1-meter version, where each of these path lengths, capital L, was about 1 meter in distance between the half-silvered mirror and the fully reflecting mirror. And then he actually got external funding-- partly from his father-in-law, or an uncle, or something like that, and from other sources-- to build a super-sized version. And this is the second version he did with his colleague Morley once he was now on the faculty at Case University, which is now Case Western Reserve University in Cleveland, Ohio. They built a ginormous version of this interferometer. The lengths between the half-silvered mirror and the fully reflecting mirrors were 11 meters, like 33 or 34 feet-- enormous paths. And to try to damp down vibrations from things like not only horse-drawn carriages, but early trolley cars outside the laboratory, to try to dampen any kind of source of systematic error, they put this entire 34-foot-long optics table floating on a vat of mercury, which I do not recommend, by the way. They were breathing in horribly, horribly poisonous fumes throughout this experiment. Don't try that part at home. So this is a figure taken from that paper-- we have the paper in our reader for today-- where they then finally tried to put together, was there any offset? Did the interference fringes-- when those two light beams came back together on that collecting screen, was there any evidence that one path required a different amount of time than the other? And the short answer was no. They found no compelling evidence that there was any time delay, any time difference, between the path taken by light that traveled path 1 versus the light that traveled path 2. There's a little wiggle to these curves. They were convinced that was almost certainly just experimental noise, systematic noise.
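To get a feel for the numbers, here is a quick estimate of what Michelson and Morley were looking for, using the swimmer formulas above. The 11-meter arm length is from the description above; the sodium wavelength and the Earth's orbital speed are standard reference values the lecture only alludes to, and the factor of 2 from rotating the apparatus (so the two arms swap roles) is the usual textbook convention, so treat those as assumptions of this sketch:

```python
import math

c = 2.9979e8          # speed of light, m/s
v = 3.0e4             # roughly Earth's orbital speed around the Sun, m/s (assumed)
L = 11.0              # arm length of the 1887 interferometer, m (from the lecture)
wavelength = 589e-9   # sodium arc-lamp light, m (assumed)

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Lap times, straight from the swimmer analogy.
t_along = (2 * L / c) * gamma ** 2   # arm pointing along the supposed ether wind
t_across = (2 * L / c) * gamma       # arm pointing across the ether wind
dt = t_along - t_across              # ~ (L/c) * (v/c)**2, a second-order effect

# Rotating the apparatus 90 degrees swaps the arms, doubling the expected shift.
fringe_shift = 2 * c * dt / wavelength

print(f"(v/c)^2               ~ {(v / c) ** 2:.1e}")   # ~1e-8, one part in 100 million
print(f"time difference       ~ {dt:.1e} s")
print(f"expected fringe shift ~ {fringe_shift:.2f} of a fringe")
```

This gives an expected shift of roughly 0.4 of a fringe, which is why the upper limits quoted below, a twentieth or a fortieth of the expected value, translate into a displacement of about a percent of the fringe spacing.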
In fact, as they wrote, the dotted curves that you were comparing to are actually 1/8 of the size of the effect they would expect. And even that dwarfs the measured variation they actually managed to measure. Their instrument was sensitive to the square of v over c. Remember, there's a second-order effect according to those swimmers. So they should have been sensitive to a ratio of speeds one part in 100 million. If you think about the likely speed, say, of the Earth through the ether just taken as an order of magnitude estimate, the speed with which the planet Earth moves around the sun during the course of its annual orbit-- take that as a characteristic speed for our motion through the ether, compare that to the speed of light in the ether. These are incredibly small velocities. And yet, he should have been sensitive to even very, very tiny displacements. There should have been a measurable shift in the interference fringes, even from such a small relative motion. And yet, trying this for days, and days, and nights, ultimately for months, and months, and even years, they find what becomes known as a null result. And here's a quotation from their paper. "It seems fair to conclude from the figure--" meaning from this plot here, "--that if there is any displacement due to the relative motion of the Earth and the luminiferous ether, this cannot be much greater than about 1% of the distance between the fringes. The actual displacement was certainly less than the twentieth part of this expected value, probably less than the fortieth part. It was consistent with no offset at all." They found no shift in the interference fringes that they could have attributed to the motion of the Earth through the ether. So they found little wiggles that were basically consistent with no wiggles at all. So they didn't just try this once. They wondered about maybe try a day/night effect, depending on which arm of this big L-shaped interferometer happens to be moving directly into the ether wind at a given time. That might shift day versus night. It might depend on the time of the Earth's orbit around the sun. So they would check fall versus winter versus spring. And it was this incredibly precise instrument with data collected very, very, very carefully and with great order and regularity. That's what began grabbing the attention of even some very, very elite scientists in Europe. And despite all their efforts and all their attempts to replicate it, they kept finding results consistent with a tie, with no time offset at all. And here is, I think, the saddest part. Michelson lived for decades after the 1880s. He died in 1927. He was the first US-based physicist to win the Nobel Prize in 1907. And yet, 20 years later on his own deathbed, he still considered this work to have been a failure. I say, may all of us win Nobel prizes and still not be satisfied. This is a sad, sad thing. So let me pause there and stop sharing screen. Any questions about that part? I see some things coming up on the chat. So Jade clearly has read ahead or has taken other courses. Jade asks, is there any connection between this correction factor that involves a square of v of a c and the Lorentz factor? They look very similar. They do, Jade. You're exactly right. In fact, we're going to come to that even today, and we'll continue seeing this factor gamma both today, in the next class, and the class after that. And to give away the story, you're right. It should look familiar if any of you have had any coursework on relativity. 
We'll see that factor does come up, and we'll see exactly what Lorentz thought it would mean. And then we'll later see what people like Einstein thought it would mean. So when we do this experiment today, Alex asks, and we use lasers, does using sodium lamps make any difference? Good, excellent question. Today we do use lasers, both because, first of all, lasers are awesome. Let's just agree on that, because they're cool. But also more important, slightly more relevant, lasers are what we call monochromatic. They emit nearly all their light at one frequency, one color. So Michelson did not have access to lasers. No one on the planet did in the 1880s. So what was really state of the art was to use sources that emitted most of their light at one dominant frequency. And often, they would use these sodium arc lamps. They could measure the output-- we'll come to this actually in a few lectures when we think about early quantum theory. The researchers at the time were getting quite good at measuring how much energy came out of a certain emitter in different colors, in different frequency bins, so they could characterize what we call the spectrum actually quite precisely. So they knew that certain kinds of emitters were pretty close to being monochromatic, and sodium was one source that had become pretty common to use. So it certainly could have mattered if you were doing-- it would have mattered in an interferometer if the light waves were of very different wavelengths to start with. But in that case, you would still be able to use the interferometer because what you would worry about would be whether the interference pattern would shift. If the light that travels the different paths is of inherently different wavelength, then you would not expect to find zero interference fringes. There should be interference fringes even at rest. Then what you look for is a shift in that pattern due to the Earth's motion through the ether. So you could still use interferometry successfully, even if you don't have a perfectly monochromatic source. It's a great question. But they were aware of that, and they were able to make pretty strong arguments even without lasers. Good, another question. Abdulaziz asks, do you need nanoscale precision to make sure the tracks are of exactly the same length? Great question. Again, it comes back to the same kind of answer I just gave. In the cartoon version I described, you would think you would need to have absolute hyper-precise control over those path lengths. It turns out, even if the path lengths are grossly different-- let alone nano scales-- if they're inches or even meters different, then you would have some basic reference interference pattern, and then you could still check to see if the interference pattern changed due to relative motion. So what you really are sensitive to are shifts in the pattern of the interference fringes. And that is what's sensitive to the square of v over c. So in fact, you don't have to machine these parts to one part in a nanometer, thank goodness, right? You actually are sensitive to a shift in the existing interference pattern. And that, again, they could measure with really quite extraordinary accuracy with optical frequency light. Thickness of the half-silvered mirrors could introduce errors. These are excellent questions. Again, what it came down to is none of these effects, they were convinced, would affect the behavior of the interference patterns as the whole apparatus moved through the ether. 
They could accommodate an existing zeroth-order baseline interference pattern, which could arise from all these very excellent, excellent observations you're making. And the question was, did that baseline interference pattern or a starting pattern of fringes,l did that shift due to, say, the Earth's motion? And that's what they kept finding no evidence of. So we're going to talk-- and I I see other questions about relativity in 1927. Oris asks-- Oris, I see you have your hand up. I also see you in the chat. Is it the same question? Or do you want to ask your question directly? Same question, OK. Were people just not taking relativity seriously? And we're going to spend a whole lecture on that coming up in not too long. The short answer is relativity of the form that we would recognize, of the form that you'll analyze for 1-- due October 2, don't forget-- that was published in 1905. It was certainly not considered a standard or universally recognized result in 1905. It was pretty well accepted by many, though not all physicists, before 1927. But there is still lingering question, including by people like Albert Michelson. So it was not universally accepted, even in Michelson's own lifetime. The balance had certainly shifted by 1927. By 1927, Michelson was a member of a minority, a Nobel Prize-winning minority, but a minority nonetheless. In 1910, that would have been quite standard to either have never heard of Einstein's relativity or think it's probably wrong, or irrelevant, or trivial, or not worth paying attention to. We'll come to that kind of stuff pretty soon, and same with general relativity. Julius asked an excellent question. How did they know the ether is moving at velocity v? So I probably explained it poorly. They assumed the ether was totally at rest. The ether, they assumed, had no inherent motion at all. It was, on average, just sitting perfectly still. But things could move through the ether, including things like us on Earth. So the idea was the ether set an absolute reference frame. Things could be at rest with respect to this comparison substance, the ether, which they assumed was sitting, on average, perfectly at rest. But then things could move through it. Planets and stars could move through it. And if the Earth was moving around the sun in our own solar system, then maybe either the Earth alone or the entire solar system was actually moving through this elastic medium. So they were convinced the ether was the reference point with respect to which our own motion might show up, much like if we step outside on a still day. When we're standing at rest with respect to the medium, we don't feel a breeze on our face. When we start moving on our bicycle, then we feel a breeze in our face. The air hasn't had to move for that to happen. Our motion through the air, the atmosphere, will create that kind of headwind. So how do they zero the interference? Good. So Julian asks, if they were convinced the Earth was moving, then how could they ever get rid of it? Again, the answer is they didn't have to get rid of it. They had to carefully note what the baseline interference pattern was and then measure any tiny deviations away from that. And deviations would happen because they assumed the Earth was moving. So you set a initial interference pattern. The spacing between interference fringes, actually a circular pattern, usually. There's little circles within circles-- concentric circles. 
And you can measure the space in between those bright spots, the fringes, actually quite accurately even in the 1880s. And the question then became, did the spacing between those tiny fringes expand or contract, a shift in the interference pattern? So a great question. So again, partly, the big problem with everyone in this class is you've all heard of Albert Einstein, which is not such a problem. Remember that he was born in 1879. He was barely on the planet during this stuff. He certainly wasn't active yet. So part of what we have to remember is that that wasn't even in play yet. And part of what I find actually so fun-- it's challenging because of all that we've learned in between, but it's actually pretty fun to try to put our heads back into say a pre-1905 or pre-1927 mindset. What was an obvious question to ask? What would really smart people like Hendrik Lorentz really want to spend their time on? The behavior of light waves when either the emitter or receiver is in motion with respect to ether. What did Michelson set his life's work to be? Really, really precise measurements of optics to go after this big question about things like the electrodynamics of moving bodies. So more specifically, DA, you're absolutely right. There were many ways to try to account for these null results. Michelson himself thought of several. Michelson himself redid this experiment throughout his lifetime. He helped encourage other really quite world-class experimentalists to redo the interferometry tests well into the 20th century, well after the publication of relativity because it did seem, for at least some of that time, to be quite reasonable, maybe even compelling, alternate explanations. These experiments were tricky. The interferometer is an exquisite instrument, but theoretically it works one way. Anyone who's taken Junior Lab, or will soon take Junior Lab, or done any experimental work will know the instruments don't always work the way the manual says they're going to. That's what happens with me all the time, by the way. And so there are all kinds of alternate explanations that actually seem quite compelling to many of these folks that might account for the null result. That's why I find it so tragic, not that Michelson considered himself a failure in 1887. It's that 40 years later, after many, many more efforts to test this and really pursue these alternate possible explanations, that he still considered himself a failure, even after a variety of new developments had come up in between. So there's a question here about Ligo. Was the interferometer an inspiration for Ligo, Muriel asks and the short answer is yep. And so part of what the ironies here, again-- and we will come to some of this a bit later in the term-- is that the Michelson-Morley experiment was done really to try to test for the existence of the ether. As we're finding, Michelson and Morley themselves find no compelling evidence for the ether. And yet, roughly 100 years later, a very similar kind of experiment could be used to now try to test relativity, which denies the existence of the ether. So the instrument has a remarkable intellectual continuity, even as the theories it's somehow designed to be testing or in conversation with have changed quite a lot. So we'll come to that pretty soon. So let's see. Johan asks about these constants like c, mu 0, and epsilon 0, they were measured separately and independently. That's right. Ah, good. 
So basically, Maxwell, in 1865, was really impressed by just how close the product of these values mu 0 and epsilon 0 happened to come to 1 over the square of the speed of light. It didn't equal exactly what the textbook answer for the speed of light was, but it was really pretty close-- what seemed to be too close to be merely coincidence. Maxwell, by this point, actually did have a lot of experience with laboratories. He actually became the director of the Cavendish Laboratory for experimental physics at Cambridge around that time. So he was quite aware that there would be an unavoidable jostle around any of these experimentally determined numbers. But in principle, there was no reason why the product should be anywhere near 1 over the square of the speed of light, let alone within actually a pretty compelling small interval. So he really takes that as indicative that this probably isn't coincidence. And then that sets up new efforts to measure the speed of light with more accuracy. That's actually the first thing that Albert Michelson did that really got attention on the continent. He led new efforts to measure the speed of light to new accuracy, and he came within amazing closeness to the modern value. So that's one thing that the Maxwellian work inspired: can we measure any of these constants to more precision and then pursue theoretical implications if it really is not a coincidence that these things fit together? It's a great question. I'm going to go back to share screen. These are great questions. Let me go back for that last part of today's prepared material. And as usual, if questions come up, please do put them in the chat. OK, last part for today. So we saw that Lorentz had this mathematical response for the behavior of Maxwell's equations when he applies them to bodies moving with respect to the ether. Lorentz absolutely was also following the Michelson-Morley work. You see it in his citations, in his own articles. When the Michelson-Morley stuff starts getting published in the 1880s, Lorentz is very eager to keep up with the latest experimental work. And this really gives him a second reason to keep pursuing it, to dig in even more on this topic of the electrodynamics of moving bodies. Partly, he wants to actually respond even more directly to this null result from Michelson and Morley. And he comes up with this idea that, to this day, we still call Lorentz Contraction. If you remember-- I'm sorry, I don't remember who put it in the chat. Someone had asked, hey, that factor gamma looks familiar from Lorentz. It sure does. And here's how Lorentz came to it. Lorentz argued that we'd actually left out part of the balance of forces when trying to analyze things like the Michelson-Morley interferometer. He argued that the molecules within one arm of the interferometer, the arm that's heading directly into the ether wind, would actually feel a force on it. There would be a net force squeezing literally the matter-- the molecules in that one arm of the device-- squeezing them closer together because they're moving through this physical resistive medium. I like to think of this in my mind as trying to take an inflated beach ball and dragging it underwater in a viscous resistive medium. The shape of that ball will change. It will get flattened in the direction of motion. It'll become oblate or prolate. I think it's oblate. Anyway, it'll get squeezed in the direction of motion so that the beach ball might be perfectly spherical, at least to our naked eye, when it's outside of that medium of the water.
Put it underwater and drag it at some high speed. The water will exert a force literally on the matter of that beach ball. And it'll squeeze it along the direction of motion. Lorentz says the same thing must be happening to every single molecule that makes up that interferometer arm of length L of, say, 11 meters. The arm that's heading directly into the ether is getting the full brunt of that resistive force, and it should shrink. What if it shrinks by exactly this factor gamma? What if it doesn't just shrink by any old amount? What if the amount of contraction is controlled by that ratio of the object's speed v with respect to the speed of light c? Then the arm of the interferometer that's heading directly into that ether wind would have every molecule shrunk by just such a small amount, by a nonzero but tiny amount given by gamma. And there would be a contraction in that direction. In that case, if you think back to the swimmers, let alone the light waves, that's like saying that swimmer 1, who heads directly into the current for the first part of her journey and then directly with the current for the second part, the literal distance she has to travel between points A and B has been shrunk by a little bit. It's been contracted by one power of gamma. And he hypothesized. He wonders, is that the case? Is there just enough force exerted by this resistive elastic medium to actually make it an unfair race? Could swimmer 1 actually have a shorter total distance to travel? If you go back to what we calculated earlier, how long her total lap time would be to go from point A to B and back, and you adjust the length she had to travel-- that length of shoreline had been shrunk because of the effect of the current-- then actually you cancel out one of those factors of gamma because her length is shorter. And then it should actually be a complete tie. So if you take into account this physical contraction, a physical shrinking in the direction of motion, then the total duration for swimmer 1 should now be predicted to be identical to the total duration for swimmer 2. It should be a tie after all. And so if there's a physical effect from our motion through the ether, if there's a force exerted by that physical elastic resistive stuff, like that bowl of jelly that Lord Kelvin asked us to put our hands into-- it acts back on our device-- then you would predict 0 total time offset between the two paths. The race should be a tie, just as much for those two light waves as for the two swimmers. So Lorentz is now trying to respond literally directly to the Michelson-Morley results. He says, this could account for why they actually find no shift in the interference fringes. So now Lorentz has two different motivations for reconsidering how to handle coordinates. We had that first one we looked at in the first part of today's presentation, this local fictitious time where he's really just trying to figure out how does the wave equation transform? Do I have to do something funny with time or not? And now he has a second, more experimentally inspired or empirical reason to keep drilling down on that same question about how do we transform our coordinates, now with this notion of a physical length contraction. So keeping that same factor gamma in mind, which already looked familiar to some of you, Lorentz introduces what we would now call the Lorentz Transformation. It is to replace the Galilean Transformation.
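To see that cancellation in symbols before we write down the transformation itself, here is a sketch of the standard round-trip bookkeeping in modern notation (not Lorentz's own), for two arms of the same rest length L:

```latex
% Round-trip times for the two interferometer arms (modern notation):
t_{\parallel} \;=\; \frac{L}{c-v} + \frac{L}{c+v}
            \;=\; \frac{2L}{c}\,\frac{1}{1 - v^{2}/c^{2}}
            \;=\; \frac{2L}{c}\,\gamma^{2},
\qquad
t_{\perp} \;=\; \frac{2L}{c}\,\frac{1}{\sqrt{1 - v^{2}/c^{2}}}
          \;=\; \frac{2L}{c}\,\gamma .
% Without contraction, the arm along the motion loses the race by one factor of gamma.
% If that arm is physically contracted, L -> L/gamma, then
% t_parallel -> (2L/c) * gamma = t_perp, and the race is a tie.
```

That is the one power of gamma that the hypothesized contraction cancels, which is exactly the tie just described.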
As you can see, and as you might have learned in other classes, this ubiquitous factor gamma, 1 over the square root of 1 minus quantity v over c squared-- that now shows up both when we relate our spatial coordinate and theirs and when we relate our temporal coordinate and theirs. We have a gamma factor, both for x prime and for t prime. And also, both the x prime and the t prime involve the direction of motion: our comparison of the spatial coordinate along which the relative motion occurs, and our comparison of the clock rates, t and t prime. Both of those depend on the relative speed as well as on the positions. So you have a drift or an offset, both in time and in space, the amount of which is governed by that new quantity gamma. And so here's that factor gamma again. And just as we had done before, we can expand this quantity in general for a small ratio of the relative speed v compared to the speed of light. It's a second-order quantity to lowest order, which would explain why Galileo, or Newton, or anyone in between their time and Lorentz's time had never measured or noticed these new kinds of ways to relate our coordinates. Galileo's transformations had worked perfectly well quantitatively for every example they'd been able to test up until that time precisely because the relative speeds involved had always been, it turned out, much, much slower than the speed of light. And so you were worried about a square of an already small quantity. The correction to the Galilean transformation would have been minuscule in nearly every application until you get to light. Until you think about light and either a moving emitter or moving receiver, then, Lorentz says, there's a physical force that will contract any object made of any real atoms and molecules in one direction, the direction of motion. It's a small effect, but a nontrivial one. When we incorporate that, we both get back to the expected form for Maxwell's wave equation for electric and magnetic fields, and we can account for these latest experiments like the Michelson-Morley. So as I had hinted at a moment ago, when Lorentz used his own new system for relating coordinates that incorporates that gamma factor as well as the t prime, his so-called local time, now even in this new set of coordinates, even if either the emitter or receiver or both are moving with respect to this still ether, the way that electric and magnetic fields should behave goes back to looking the way we'd expect. It goes back to being the simple traveling wave expression so that the solutions, even in a moving laboratory-- like for us on Earth-- really should travel at the speed of light, really should behave like oscillating sines and cosines. And so now just to wrap up this part for Lorentz, and we'll have some time for some more discussion in a moment. Lorentz addresses these two puzzles about the electrodynamics of moving bodies-- a mathematical one just about the behavior of a certain kind of equation as he handles coordinates in various ways, and also an experimental or empirical one. He really was following Michelson and Morley's work quite carefully. As I was just emphasizing, Lorentz says there's a physical force, dynamics-- there's a force being exerted by the ether on this object, like the arm of the interferometer, that's going to cause a physical effect, a contraction. The quantitative effect is governed by that new factor gamma, which barely differs from 1 for small speeds compared to the speed of light.
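For reference, since the slide itself is not reproduced in this transcript, here is the transformation being described, written in the modern notation you may have seen in other classes (Lorentz's own notation and his route to it differed in detail):

```latex
\gamma \;=\; \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
x' \;=\; \gamma\,(x - v\,t), \qquad
t' \;=\; \gamma\!\left(t - \frac{v\,x}{c^{2}}\right), \qquad
y' = y, \quad z' = z .
% For small v/c the correction is second order:
%   gamma ~ 1 + (1/2)(v/c)^2 + ...,
% which is why the Galilean rules x' = x - v t, t' = t had worked so well
% for every application involving speeds far slower than light.
```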
So as I say, the strategy for Lorentz and indeed for his whole generation, as we'll see for several classes to come, is you begin with dynamics. You begin with a study of all the forces that are relevant for the physical system you hope to analyze. And you use that to figure out how are things going to move. What are the impacts on what we call kinematics? Start with dynamics. Ask about the full range of forces at play, and use that to solve what we call the equations of motion to say how will objects move through space and time, or kinematics. As we'll see-- and some of you probably have seen already-- there was a different person who was just coming onto the scene right around this time, a rather unknown patent clerk by the name of Albert Einstein, who had a bunch of other ideas about that. So I'll stop sharing screen there. We have time for a couple of more questions. So Oris writes quite, quite accurately, this sounds a lot like length contraction in special relativity, but with a strikingly different interpretation. Totally right. I agree. So we're going to see, starting in the next class, Einstein starts coming up with the exact same equations, in many cases, as Lorentz had done. And not only Lorentz-- others had found these things too. Sometimes it's called Lorentz-Fitzgerald contraction. There was a researcher in Britain, Fitzgerald, who found the exact same form of gamma right around the same time as Lorentz. Poincaré in France was doing very similar things. So we'll see that Einstein comes to many, many identical equations. And yet, he reads them as telling us quite different things about how the world works, much as we saw with Maxwell's equations then, and now, and so on. Julian had a question about Lorentz's educational background. Very good. I don't remember the details. We could each look it up. Lorentz had certainly become a world premier mathematical physicist by this time. So his training would have been similar to the Cambridge Wranglers'. He didn't literally go to Cambridge University, but he was very well steeped in the latest tools of mathematical analysis. He was very, very good at solving complicated differential equations very quickly-- a lot of stuff the Wranglers were drilled in. And so his background would have been pretty similar to, say, James Clerk Maxwell's. He was a little younger, but he would have been drilled in pretty hard math, including differential equations quite a lot. Did Einstein refer to Lorentz when he found the equations? Or was he aware of Lorentz's work? Ha-ha, good question. One of the things that you might consider if you were to review Einstein's paper is whether he makes adequate reference to the work that had come before. Did he cite his sources appropriately? When you tackle paper 1, which is due October 2, your job will be to pretend/imagine you're an editorial assistant at the journal, the Annalen der Physik, to which Einstein had submitted his paper. One of the questions a referee should ask, then as now, is whether the submission accurately reflects the existing literature. Was Einstein appropriately citing the work that had come before him? I'll pose that as a question, and then I'll answer no. The answer is no. He was not citing the stuff at all appropriately, as we'll see actually starting even in this coming next class session. Gary asks, other than the poor citation practice, did Einstein know of Lorentz's work? The short answer is he probably did.
As we'll see in the coming class sessions, Einstein was very avidly reading a lot of this work-- probably not Michelson's work, actually, but certainly the work by people like Lorentz and Poincaré in Paris. He was not citing it very carefully. But he and his buddies were encountering this in their off-hours reading group. This is what they did for fun, which I recommend. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_21_Teaching_Feynmans_Tools_The_Dispersion_of_Feynman_Diagrams_in_Postwar_Physics.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: So today we're going to continue the story that we were looking at in on Monday's class, some of these early developments after the end of the Second World War to grapple with quantum field theory, trying to piece together a quantized treatment of things like the Maxwell field or electrons and positrons. And we saw last time there were a lot of puzzles, some of which had been identified as puzzles well before the war, going all the way back to the 1920s and '30s, like virtual particles that could borrow energy for temporary periods of time, which led to these infinite quantum corrections that when people tried to take into account quantitatively the impact of those virtual particles, their equations would blow up. They would encounter these divergences. And those were a big, big puzzle really for decades. And it was only very soon after the end of the Second World War when a newer generation of physicists came back to those older questions but now with some new skills, sometimes literally new instruments, and new ideas about how to organize their calculations. So near the end of last class, we looked at some of Julian Schwinger's efforts in that, and today we'll look at some efforts by his contemporary, Richard Feynman and Feynman's circle. So today's class draws on a book I wrote 15 years ago. It's actually hard to believe it's been out for 15 years. And so we'll talk about the book and how it fits into the kind of series of developments we've been talking about throughout the semester. And so as usual, the class has three main sections for today. The first is how did these new tools, in this case Feynman's particular approach to quantum field theory, how did these new tools enter into circulation? How do they become kind of second nature or a standard technique ultimately for essentially the entire profession? So how did these new tools begin to get picked up by people other than Feynman himself? And we'll spend a good chunk of time on that today. And then we'll see that once some people did start using them, they would often deploy these new tools, these paper techniques, pencil and paper techniques often in quite different ways than what Feynman himself had had in mind. So it's not just that new people had to start using the tools. New people began adapting the tools toward new or different ends. And what did that mean for the continuing conceptual puzzles they were wrestling with as they tried to continue to make sense of high-energy physics phenomena? And then finally, we'll ask just very briefly near the end why did this particular set of tools actually stick? Why was it that these tools really became the standard way to this day that physicists all around the world try to make sense of nature when it wasn't so clear they were going to take off at all? So why did this tool come to seem like such a natural or winning strategy? And that's our kind of three-part scope for today. So some of you, many of you, by now might recognize these things. You might have already had them in some classes. If not, you've probably seen them around, these things that we now call Feynman diagrams. They are really unavoidable today. They are literally ubiquitous. In fact, not only are they all over the place, they've become automatic or automated. 
So there are now computer routines that will both evaluate tens of thousands of them now even on not very fancy laptops. So you have these algorithms to compute very large numbers of Feynman diagrams quantitatively and also routines to draw them. They literally have-- the way they look has now become automated or subject to computer routine as well. They're all over the place in virtually every subfield of the discipline or many, many subfields anyway. And yet, of course, they have a history. It wasn't always that way. Our kind of present day saturation with these things is of relatively recent vintage. And when they were first introduced in the years through and after the Second World War, some people thought they didn't make any sense. Others thought they must be all wrong, and a small number of people began using a couple of them at a time. So it was a publishable feat to evaluate five of them by hand, not 10,000 on your laptop. You could get your PhD dissertation by evaluating a few dozen at most by hand into the early 1950s. So these things weren't these kind of automated tools from the beginning. In fact, they were kind of suspect, at least to some people, for quite a while. And as I mentioned already, they're kind of single-use. The fact that now you can train computer algorithms to evaluate these things unambiguously that, again, took quite a while to settle into place. And in fact, for two decades or more, even arguably about close to three decades, what the diagrams were used for, how they would be turned into elements of calculations and quantified, that was actually pretty malleable for, to me, what seemed at least to be a surprisingly long period of time. They were adapted as well as adopted. And so before I started working on this kind of series of puzzles, how the diagram spread, why did they come to be used, and so on, very little had been written about the spread of the diagrams. I used to tease that there had been more written at that point about the search for Richard Feynman's actual van. This is Feynman and his then very young children with his van that he used to drive around Southern California and with larger than life Feynman diagrams on the side. There was more written about his van than about the diagrams that bedecked the van. And so how was it that this was even something you'd paint on your van in the first place? Not only because it was the groovy '70s, and you're in California. Why would these become something to paint on your personal vehicle? And so that's what I was really kind of curious to try to learn more about. So to make sense of that, I wound up thinking a lot about this word dispersion. I think it's-- I still think it's helpful to organize the kind of different layers of this study. So disperse means many different things, the verb to disperse, where there are two main clusters of meanings. If you go look up, for example in the Oxford English Dictionary or your favorite Merriam-Webster or whatever you like, there are kind of two basic sets of meanings for the verb to disperse. One of them means to put into circulation. Think about dispersing things to people. And that's really the first part for today's class that we'll focus on. But disperse also means to cause to become basically more and more different. Think about optical dispersion, taking light and dispersing it into the distinct colors of the rainbow. 
And I think that also captures a lot of what was going on with these new theoretical tools, not just for a month or a year, but really over decades even after Feynman had introduced them. So that's what set up these three questions that I mentioned earlier. How do they spread so fast? What do people actually use them for? And then given that variety, why did they stick? So let's talk about how they start entering circulation in the first place. Let's dig into that a bit more. These were introduced at the second of these three annual meetings that we talked a little bit about in the previous class. You might remember on Monday, I talked a bit about the June 1947 Shelter Island meeting. In fact, this photograph was from that Shelter Island one, from the first of them, held on a tiny island near Long Island at the Ram's Head Inn. I've never been there. I don't know if it still exists. You've got to go to the actual bed and breakfast where they all stayed. And that was the meeting at which 24 kind of personally selected participants, selected by Robert Oppenheimer, got together for a week to talk about theoretical physics, including these puzzles of quantum field theory. That's where Willis Lamb first presented his results on what would come to be called the Lamb shift. Isidor Rabi talked about the new experimental measurements on the anomalous magnetic moment of the electron. And there were three in a row, one each year. The second of them was held not on Long Island but in the Pocono area, so due north of Philadelphia, due west of Manhattan, a short bus ride out of Manhattan, again, organized by Oppenheimer, very similar format, roughly 24 or 25 kind of hand-selected participants, many of whom had come to the previous one. This was the first meeting of its kind after the experimental results that seemed to suggest, at least to some people, that virtual particles might actually have some heft to them. They might actually affect measurable quantities. So it was at this meeting, the second in the series, where Feynman introduced his new techniques. He was one of the informal speakers. This is actually from his first published example. We know from notes that were taken, he was scribbling very similar things on the blackboard, and he was trying to motivate for his colleagues how they could try to understand at a quantum mechanical level why it is that, for example, electrons repel each other. We all learned in high school that like charges repel and opposite charges attract. He said, well, why is it, if we think at this quantum mechanical level, that charges with the same sign of their electric charge would push each other apart, would repel each other? And so here's the kind of story that we could imagine unfolding. There would be some electron coming in here from the right hand side. It would have some likelihood to travel from here to here. It has some separate but calculable likelihood to emit a virtual photon at this point in space. You can't turn that off. In principle, the vacuum is roiling with virtual particles. By this time, that had become a kind of familiar set of ideas. So he wanted to have a way to calculate these effects. A little while later, that virtual photon would have zoomed across space, and it could smack into this other electron.
The second electron, which had been otherwise traveling all by its lonesome, unsuspecting, going from, say, point 1 to point 5, all of a sudden will get smacked by this virtual photon that is carrying some energy and some momentum. It's absorbing that momentum. So after it absorbs that virtual photon, it will be knocked off its course. Imagine getting hit with a dodgeball. Not that I want to trigger bad memories of elementary school, but anyway, you get knocked off your course. You get hit with some object carrying momentum. So its direction of travel will change. Meanwhile, the original electron has shot out that virtual photon. It recoils like a hunter firing a rifle. It's going to have some equal and opposite kickback, so its direction of travel will change as well. So Feynman says here's the kind of fundamental process, the reason quantum mechanically why like charges repel: because they're trading back and forth these virtual force-carrying particles, in this case, the photon, the virtual photon. Now he could use this kind of map of that interaction and relate every single part of this little squiggle, this line drawing, to particular parts of the integrals. And so he would map this, and he tried to explain this at the blackboard. The electron has some likelihood to travel unimpeded from point 2 to point 6. Here it is. The other electron, electron A, let's say, has some likelihood to travel from point 1 to point 5. They have a likelihood to both emit and absorb a photon, each of which is proportional to the charge of the electron. The photon itself has a likelihood to travel from point 6 to point 5. You just piece it all together, kind of like LEGO bricks. So now you have a kind of road map to the corresponding equations to map out this series of steps between electrons and their traded virtual particle. And Feynman said this is going to be important, not just because this is one thing that could happen, but because all kinds of things could happen when we start thinking about virtual particles. It had already become clear well before the Second World War that these virtual particles could borrow any amount of energy and momentum, consistent with paying it back sufficiently quickly given the uncertainty principle. But it was even more complicated. The electrons could trade more than one virtual particle. So instead of just saying the amount of energy that this one virtual particle borrowed was uncertain and in principle could be infinite, what if the electrons had traded two photons, two virtual photons, each of which had temporarily borrowed energy and momentum from the vacuum? Well, now it starts getting pretty tricky because there are actually several topologically distinct ways in which two incoming electrons could trade two photons and leave only two electrons coming out. So instead of only one storyline that you have to map into a line of algebra, you actually have all these different ways in which two electrons come in, two electrons go out, and in between, they've traded two virtual photons. And so the diagrams were literally a form of bookkeeping. And again, you could walk through the same series of steps to map every element of these slightly more complicated Feynman diagrams to the corresponding algebra. And you can do the same with three virtual photons or four. In principle, there could be an infinite number of virtual photons, each of which could carry, in principle, an infinite, albeit short-lived, energy and momentum. So you see the bookkeeping becomes mind-boggling, right?
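Just to make the LEGO-brick bookkeeping concrete, here is how the simplest exchange--two electrons trading a single virtual photon--gets written in today's textbook notation. This is a modern schematic, not Feynman's 1948 blackboard notation: one factor of the electron charge e for each emission or absorption vertex, a propagator for the photon carrying the borrowed momentum q, and spinor factors for the electron lines.

```latex
\mathcal{M}_{\text{one photon}} \;\propto\;
e^{2}\,
\big[\bar{u}(p_{3})\,\gamma^{\mu}\,u(p_{1})\big]\,
\frac{-g_{\mu\nu}}{q^{2}}\,
\big[\bar{u}(p_{4})\,\gamma^{\nu}\,u(p_{2})\big],
\qquad q = p_{1} - p_{3}.
% Each piece of the drawing maps to one factor: a vertex gives e, the internal
% wavy line gives the photon propagator, the external straight lines give the
% spinors u and u-bar. (For identical electrons there is also the crossed
% diagram, with p_3 and p_4 exchanged, entering with a relative minus sign.)
```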
So this is a kind of way to march through the corresponding equations. Make sure you don't omit any terms. Make sure you don't double count certain terms, which had been bedeviling these calculations throughout the '30s. And so now you have a map to make sure you at least have the equations, the correct set of equations to worry about. And then by this point in late spring or early summer of 1948, Feynman could show that if you have the right bookkeeping, you see some of these infinities actually exactly cancel each other. So some of the infinities that had cropped up in these earlier calculations, not all, but some of them would exactly cancel, even before you got fancy with renormalization, just because you have to remember that for every diagram like this, there's a corresponding one like that and so on. And some of these have exact cancellations. So the challenge of taming those infinities would become a bit more manageable if you could make sure you had the right set of equations to work with from the start. So how did that go? That sounds awesome. You have this amazing, kind of intuitive, very simple looking technique. This used to occupy literally 30- and 40-page journal articles. When people tried to do this without the diagrams throughout the 1930s, they'd always find these infinities. You think, oh, our hero's in town. Finally, help is on the way. It didn't play out that way for our poor MIT alumnus Richard Feynman. The first thing to keep in mind is that his presentation followed an eight-hour lecture by Julian Schwinger. I want to let that settle in for a second, an eight-hour lecture. That's like unlawful imprisonment. Now, I often run long with my lectures here, and I feel bad. I go like three minutes late, and you're all busy. But you know, anyway, there are limits. So why was Schwinger given so much space? In between the Shelter Island meeting and this follow-up one in Pocono, it was Schwinger who had first found a way to tame these infinities. He calculated first with his new techniques, which did not involve any of these diagrammatic methods. He not only found a way in principle to respect the symmetries of relativity and find finite answers with his renormalization program, he'd also, right in between these meetings-- in December '47, roughly six months after Shelter Island-- published the first kind of successful demonstration that renormalization could yield finite results that actually matched unbelievably closely these latest high precision experiments. Schwinger was the person of the hour, or rather, the eight hours. So it was after that they had a brief break for lunch, and they came back, and Schwinger talked for another four hours afterwards. And then Feynman had 30 minutes at the blackboard at the end of that day. All right, so it's not like a great setup. Moreover, he was completely unprepared. He did not have a well-polished presentation in hand. He just wanted to wing it, which is not a good thing to do at the end of a busy day. It was so-- it seemed at least to some of his listeners to be so scattershot and so disorganized, that not only was he interrupted all the time. At one point, Niels Bohr, who had been invited to participate, one of the few older European physicists, got out of his chair, walked up to the blackboard, took the chalk out of Feynman's hand, and said you don't understand quantum physics. Not great. God, I wish I'd been able to see that. So Bohr says it's not just that this is a disorganized presentation.
He says your whole framework violates the uncertainty principle. What does it mean to draw a picture of an electron moving from here to here? Bohr says, in effect, you've assumed that the electron has a well-defined trajectory. But no quantum object has a well-defined trajectory because to do so, it would need to have arbitrarily precise position and momentum at the same time. But we can't have that because it's [INAUDIBLE]. Bohr is like, this is a nonstarter from the beginning. Feynman's a little flustered. It's not literally a trajectory. Oh, let me go on. OK. Paul Dirac was there. Dirac says, well, when you calculate the probabilities with these new diagrams, do they obey unitarity? To which Feynman's response was, what's unitarity? Not good. OK, this was not a great presentation for Feynman. Unitarity is, again, some of you might know, you might have found it in other classes, really a fancy way of asking, does the sum of all these probabilities add up to 1? There's a 100% chance that something will happen, and it might be a 35% chance of this and a 42% chance of that. Do you have to add all these up? Does the sum of all these probabilities obey the conservation of probability? Does it add up to 1? That's the simple version of the fancy question Dirac was asking. And Feynman hadn't even thought about the question. He certainly had no answer ready yet. Edward Teller was there. He says, what about when you have a pair of electrons in these intermediate states? Two virtual electrons, let's say-- do they obey the Pauli exclusion principle? Another kind of cornerstone of quantum physics. And Feynman's response was, that's a really interesting question. I'll go think about it. Again, he had no answer. So in general, what this very esteemed but small kind of elite audience kept asking during this scattered presentation of Feynman's was, what rules govern their use? It really looked like he was kind of making it up as he went along, and he was not able, at least at that moment, to articulate the kind of rule-bound techniques that would perhaps yield a set of calculating techniques comparable to Schwinger's very pristine, very precise algebraic approach. So by many, many accounts, Feynman left this meeting disappointed, maybe even depressed. It really was not a great start for these new techniques, and he was really down. Now, one might have thought that all he had to do was kind of collect himself, write up some clear articles. Just say, point by point, first do this, then do that. Here's where this comes from; give a derivation, give a demonstration. He could have cleared this up, one might have thought, by writing some kind of clear articles. Well, he did write some clear articles, and yet what, again, I found really fascinating was that this kind of confusion about the diagrams lingered for years afterwards, not just for days, weeks, or months, even years after his articles had been in print. And so I won't go through all of these examples, just some examples I write about in more detail in the book. But you have people either writing directly to Feynman saying, I can't do this calculation. Will you do it for me with your trick and tell me what you get? And then they thank him in the acknowledgments. There are some interesting correspondence circles, not including Feynman, among very, very elite, very well-trained experts in the field: three of them try to do the same calculation with the same diagrams, and they keep getting different answers from each other.
So you have this wonderful series of letters because it was pre-email, and they were, in fact, in different countries. They weren't going to make long distance telephone calls. So you have these letters of them trying to figure out what keeps going wrong when they each try to do the same calculation. That's as late as 1950. A few years later, my favorite example actually comes from a letter of recommendation for a young physicist who was finishing his PhD at Stanford, and the student's advisor, Leonard Schiff, wrote to recommend the student, saying you should hire my student for the new professorship because he actually does understand Feynman diagrams and actually used them in his thesis. That was five years after this Pocono conference, roughly four years after Feynman's attempt to clarify the techniques with his articles. So if four or five years later, your advisor says my student is unusual because he does understand them, that seems to suggest these were not universally clear to everyone. So they were clearly not so obvious or automatic, even after Feynman did have a chance to try to clarify. And yet, you can see evidence that they're taking off like gangbusters. If you go through the Physical Review-- and in the book I talk about similar searches in other journals throughout many parts of the world, in fact, not just in the United States-- you see an exponential rise as these diagram techniques are getting picked up. The doubling time I've forgotten-- something like two and a half years, or three years-- where they're really growing literally exponentially over the first half decade or so. So on the one hand, you have people saying, I don't get it. I can't do it. And yet you find this growing evidence that at least some people have become early adopters of the techniques. You can then do the usual trick and not just ask how many articles use the diagrams, but who wrote those articles? Where were they? What were they up to? And you start seeing some pretty interesting traits, some common features among the people who were beginning to use these techniques in their own papers. So first of all, they were all young in their careers, which is to say they were all either graduate students or postdocs, the vast majority. A few of them were kind of assistant professors. They were all still fairly early in their training or careers. They were all theoretical physicists in this early period. Nowadays, these diagrams are just as ubiquitous among experimental physicists as theorists. But in this early phase, this early kind of exponential rise, it was young theoretical physicists. And the last part was that they were really in contact with each other. You can do a network study and trace it through by co-authorship, by acknowledgments, by unpublished correspondence that I was able to find and piece together, that they knew each other. It was a network that had come together to learn these techniques, and then they began to spread out. So something pedagogical is going on. The majority of these early users were still in the formal phase of their training. They were literally still being taught to be physicists. And yet it wasn't a story of write a textbook, teach some classes, it'll spread. All of the data points shown on this early plot represent articles that came out before the very first English language textbooks had been published that incorporated the new techniques.
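If you want a feel for how quickly a doubling time of roughly two and a half years compounds, here is a tiny illustrative sketch; the starting count is an assumption made up purely for illustration, not the actual data from the book.

```python
# Illustrative only: exponential growth with an assumed doubling time.
# The doubling time is the rough figure quoted in lecture; the starting
# count n0 is a made-up number, not a real count of articles.
doubling_time = 2.5   # years
n0 = 5                # assumed number of diagram-using articles in year 0

for year in range(0, 11, 2):
    n = n0 * 2 ** (year / doubling_time)
    print(f"year {year:2d}: ~{n:.0f} articles")
```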
So the first textbook treatments of how to calculate with these new Feynman diagrams weren't published until 1955, by which time, there was already evidence of this fast-growing exponential rise. So how does that happen? A lot of it comes down to this gentleman, Freeman Dyson. Dyson, you might know his name, he just passed away this past March, just before quarantine in fact. He died in very early March at the age of 96, quite a remarkable life and career. At this point, he was quite a young person. He was, in fact, still in graduate school. He was originally from Britain. He got a very competitive fellowship very soon after the end of the Second World War called a Commonwealth fellowship reserved at the time for British students to study abroad, mostly in Commonwealth countries, like Canada or Australia, but also the United States. So he got a fellowship. He came to Cornell University because he really wanted to study with Hans Bethe, whose work even in the '30s had seemed super exciting. And this young grad student said I want to go study with Bethe. So he gets the fellowship, comes to Cornell. He starts working with Bethe but also very quickly meets Feynman, who at this point was a very young professor whom Bethe had recruited as soon as they were done at Los Alamos for the war. So Bethe and Feynman had interacted at Los Alamos. Bethe as the senior theoretical physicist at Cornell hired Feynman. This even younger physicist Dyson meets Feynman there. The summer after his first year of grad school, Dyson and Feynman drive cross-country together. Dyson wanted to just see the country. He was from Britain. He was sending these really kind of delicious letters back to his family, often writing to his mother on a manual typewriter, a small portable manual typewriter, often more than once a week. And he just wanted to explore the country because no member of his immediate family had ever visited the United States. He was very curious. And Feynman wanted to go back to New Mexico. He had a girlfriend out there. He maybe was going to do some consulting at Los Alamos. So they basically drove cross country together that summer of '48, which is to say just weeks after the Pocono meeting. So Dyson had not been at Pocono. He'd been hearing about these things in Ithaca from his new friend, but now he had a several days long drive to just pick his brain. Meanwhile, once they get to New Mexico, then Dyson took the bus to go to Ann Arbor, Michigan. Again, he was like being a tourist. He then spent about six weeks at University of Michigan attending summer school lectures by none other than Julian Schwinger, where, again, he had a chance to talk directly with Schwinger at some length and some detail, where Schwinger was lecturing on his own approach to quantum field theory and renormalization. So at this point, by the end of the summer of 1948, Freeman Dyson was the only person in the universe who had spent extensive close informal personal time with both Richard Feynman and Julian Schwinger, just at the time they were each working on their separate and very different looking approaches to quantum field theory and renormalization. Then he takes this now famous bus ride back from Michigan, back to the East Coast where he was going to start for the second year of his fellowship at the Institute for Advanced Study in Princeton. By the way, he never finished his PhD. To the day that he died, he was Mr. Dyson. And he thought PhDs were a horrible waste of time. He might be right. I don't know. 
Anyway, so he spent one year as a grad student at Cornell and one year on fellowship at the institute and then was hired as a full professor at Cornell like four years later, this guy. Anyway, on the bus ride, he has this amazing epiphany that he starts seeing how the two very different looking approaches might fit together, Schwinger's very formal looking approach to a kind of algebraic, almost axiomatic approach to quantum field theory and renormalization and Feynman's diagram-based, bookkeeping, more pictorial, somewhat more intuitive approach. The two of them, meaning Feynman and Schwinger, couldn't understand what each other was doing yet. And so Dyson is the one who actually puts it together, a good chunk of it literally on the bus heading back east. Once he gets to the institute, he then submits two articles that autumn. He sends both of them to the Physical Review. These were published before Feynman's own articles had come out. Feynman was actually quite slow to write up his articles. They came out a little bit later, also in 1949. In the early years, Dyson's papers were cited more often than Feynman's. They were getting a lot of attention, in part because he was so methodical in laying out a kind of point by point almost recipe book, a rule book for how to use them, exactly the stuff that seemed to be missing from Feynman's presentation at least at Pocono. Dyson was able to ground the diagrams in more first principles derivations and really go through worked examples. And it was in these papers where Dyson demonstrates the mathematical equivalence of the three separate methods that by this point were on offer, those by Feynman, Schwinger, and then separately by Shin'ichiro Tomonaga, whom I mentioned briefly last time. Tomonaga was still in Tokyo. Japan was under US occupation after the war. Tomonaga had worked out a method remarkably similar to what Schwinger later came up with. They were working totally independent of each other, partly because of the war. Tomonaga had gotten there to many parts, not all the things Schwinger did, but very similar framework and many of the same insights along the way. Tomonaga had done this starting in 1943, but the results were not known outside of Japan, partly because of the Naval blockade and so on. One of the first packages sent overseas out of Japan after the end of the war was actually a package of manuscripts and reprints that Tomonaga sent to Robert Oppenheimer. And it was actually ferried in official kind of US delivery was from US occupied Tokyo back to the United States because Tomonaga had read in one of the new libraries in a new subscription to Newsweek magazine about the Lamb shift because the magazine had covered everything about Robert Oppenheimer. He was amazingly famous in the United States after the war. The fact that he was talking about the Shelter Island conference became newsworthy. He mentioned the Lamb shift. This gets all the way back to Tokyo through this kind of crazy convoluted system. Tomonaga reads about it and says, hey, wow, and digs back through some of his old wartime manuscripts and reprints and sends materials to Oppenheimer, who then had them effectively mimeographed and shared with all the participants who'd been at Shelter Island. So the US-based and some of the Western European physicists who had heard about the Lamb shift and Rabi's group's measurements and so on, they learned in the days after Shelter Island about some of Tomonaga's work. Then they helped arrange for English language publications. 
They were very, very eager in fact to try to get Tomonaga credit, partly because it was remarkable, partly, I think, because there was a sense that this was a dramatic development coming from the city that had just been subjected to relentless firebombing during the war. But nonetheless this work somehow got done in the midst of horrible conditions. So Oppenheimer worked hard to get Tomonaga's work into circulation. So Dyson learned of it. And Dyson was able to show in this pair of articles from 1949 that these three distinct approaches to quantum electrodynamics were, in fact, mathematically equivalent very, very much like what had happened 20 years earlier between Heisenberg's and Schrodinger's approaches to quantum theory. So Oppenheimer at this point had become director of the Institute for Advanced Study. He was now basically, in effect, Dyson's supervisor for the second year of the fellowship. Oppenheimer was not that impressed with the Feynman diagrams at first. In fact, he used to come to Dyson's informal presentations and interrupt and basically show that he was still the smartest person in the room. So the postdocs at the institute asked Dyson to repeat each of his many hour seminars in private, and they just wouldn't tell Oppenheimer. So Dyson would give these announced seminars. Oppenheimer would come and essentially kind of heckle and disrupt, and they'd have Dyson give the exact same talk the next day in secret. And finally, after Hans Bethe intervened, Oppenheimer said, OK, OK, I relent. In fact, he literally wrote "nolo contendere," like I don't contest the charge. I give up. And by a few weeks into Dyson's fellowship and after that, the diagrams really began to take off at the institute. So to understand what's going on in this very specific setting at the Institute for Advanced Study and all these new postdocs, again, it's important to step back and understand this is happening in a very particular institutional setting. And again, I found this really kind of fun to dig into when I was working on the book. The idea that young scientists will typically do one or two or now, not infrequently, three separate postdoctoral appointments after their PhD, but before they take on either a faculty job or a research job in industry or anything else, that's now pretty routine today across the sciences, in fact, even in more and more of the humanities and social sciences. The postdoc stage is pretty kind of self-evident, or we've gotten used to it by and large. But again, that has a history. And it's not a very, very long history. The idea of the postdoctoral fellowship really dates to the early years of the 20th century, very soon after the end of the First World War, but they were still pretty rare. They were very elite. It was a big deal if you got this fancy postdoctoral fellowship after finishing a PhD. And in fact, even after the Second World War, only roughly 16% of the US-based PhDs actually did a postdoc, just did one postdoc, let alone two or three. But the numbers began to grow very rapidly after the war. They became more and more common. The idea going right back to the 1918 proposals and 1920s and so on was these postdoc positions were designed to foster a different kind of training compared to the PhD. That was the hope, the ambition that they should foster what historians and sociologists often call tacit knowledge, not explicit kind of book learning, but actually stuff that you know by doing. 
Even for theoretical physicists who aren't like building apparatus or glass blowing or eventually learning to solder, there's a kind of knowledge that comes at a kind of intuitive level by practice, which is why we work on problem sets all the time, rather than only reading textbooks and dealing with kind of formal levels or explicit levels of instruction. So the postdoc was meant to complement the kind of more formal, more book learning explicit training of the PhD with this more informal tacit knowledge. Moreover, especially in the United States, they were often funded by private foundations and, again, a kind of patriotic ambition to improve and build up US domestic scientific talent and to share the wealth. So you'd have these young people getting PhDs at one school. They would get a fellowship to become a postdoc at a second school. They could bring what they've learned to the new place. They'll learn new things at the new place themselves. Then they'll hopefully get a job at a third place. So you start building in an emphasis on circulation. So that it was hoped would help knit together the US scientific community even more tightly and build in this kind of sharing of skills, especially this tacit knowledge stuff. So after the Second World War, this became especially pertinent for training young physicists in theoretical physics in the United States, which it's still been, it was felt at the time, kind of lacking and not nearly so developed as in other parts of the world. Mostly they had their eyes on Western Europe. So the most important place where this begins to really change soon after the war is this place where Dyson had wound up, the Institute for Advanced Study in Princeton, New Jersey. It's near Princeton University, though it's independent from the university. Oppenheimer moved there just soon after the end of the war. He became the new director for the institute starting in 1947. Partly so he'd be closer to Washington DC, he was still doing a lot of consulting for the government and so on. So one of the first things he did when he moved was jack up the number of theory postdocs by 60% in one year. He really said I'm in charge now. And rather than support other things that he might have done with the institute's budget, he said the most important thing we can do for the scientific community is really kind of rapidly expand the number of slots for young postdocs in theoretical physics, not coincidentally his own field. A lot of people at the institute didn't like that at all. But that's what he did. And he did that year after year. And he was-- I love this letter he wrote to Wolfgang Pauli, who at this point was back in Europe. Pauli had spent a good chunk of the war years at the institute, in fact, visiting and avoiding some of the worst parts of fighting in Europe. By this point, he was back in Europe, and Oppenheimer explained that the institute by this point is not a school in the sense that even the younger people are not listening to lectures or working for doctor's degrees, but it is a school in the sense that everyone who comes learns parts of physics which are new to him. And it turns out that was especially happening once Dyson arrived among the very first of these new cohorts, the very first of these rapidly expanded groups of 8 to 10 to 12 theory postdocs at a time, as opposed to only one or two as before. 
So by the time Dyson and his first cohort arrived as the new postdocs, although Dyson was actually not a postdoc, he never did a doc, but he was joining these postdocs. The new building that would house him actually was still under construction, so they all shared one office. In fact, they shared Oppenheimer's office. The director's office was big enough for 12 postdocs. So some things don't change. Oppenheimer was in Europe for much of that fall. So they were literally kind of sharing desk space. They were completely kind of elbow to elbow informally sharing space. And then the postdocs would circulate through what Oppenheimer loved to call his intellectual hotel. You don't move there. You stay there for a visit, and then you go somewhere else. They would tend to stay for two-year stays. And while they were there in these early years, Dyson would coordinate these kind of little teams or groups of calculations because he was coaching them in these new techniques for the Feynman diagrams. And again, the word of that began to spread, so Pauli writes back from Europe, what are Dyson and the rest of the Feynman school working on? It was already becoming clear that they were working very hard on that. So when you go back to that curve of these exponential rise of who's using the diagrams, it starts to make a little more sense. It really was a social network. The overwhelming majority of those early users were either direct members of these postdoc cohorts at the institute, who learned and practiced the techniques directly from Freeman Dyson, just as he was working them out himself, or they were students or immediate colleagues of those postdocs, once they took up their first teaching jobs and moved to other universities around the country. The remaining 20%, I mean, not much more than a rounding error of those early articles that used the diagrams had come from Feynman or from Feynman's immediate circle. So we really have kind of two lightly connected networks that dominate the early adopters. And so you get these airline flight maps. This is not literally the Feynman diagram case. I just took it from Google Images. It is a map of domestic air travel route. But it was very similar how the diagrams themselves spread out. You can trace through who's publishing with the diagrams. We have their articles. We can see where they were from their kind of byline. Then look up where were, when did they interact with, say, Freeman Dyson, when did they attend this lengthy set of summer school lectures, and so on. And you start getting this kind of connected graph of how the diagrams spread throughout the United States. Let me say a little bit more about the spread, and then we'll pause for some discussion. So that works-- I think that helps account for how these techniques spread so kind of anomalously quickly within the United States. Go to the Physical Review. At that point, most articles in the Physical Review were contributed by physicists in the United States. That's certainly not the case now, but back then, that was a pretty good reflection of US-based physics. We get a sense of how the US-based users of the diagrams began to fan out. But very quickly, the diagrams were showing up in the journals from other countries as well. So it turns out in the United Kingdom, we have a replay of the same story. The rules of that fellowship that had allowed Dyson to come over to Cornell and to Princeton and the institute required that the recipients return to Britain for a couple of years. 
So you could go for two or three years on the fellowship, but you had to return to Britain, and you had to stay in Britain for at least two years. Those were the rules. So when Dyson's fellowship expired, he went back to Britain. He began teaching first at Cambridge, then eventually at Birmingham, again, no PhD. He was a professor with no PhD. And when you look at who was using the diagrams in Britain, there were people who learned from Dyson once Dyson went back to Britain. So like no mystery there. It's kind of the same story. This next part I found really interesting. I didn't know about this until my friend Kenji Ito helped me a lot with this, because he's originally from Tokyo. He knows the language, and he's also an expert in the history of quantum physics. We read some of Kenji's stuff I think for the paper 2 assignment. So Kenji helped me a lot with this. I really learned working with him. So what happens with the spread of the diagrams in Japan after the Second World War? Here's Tomonaga, whose work I was describing earlier. It turns out Tomonaga was making, I think, just extraordinary, kind of superhuman efforts to try to rebuild the Japanese community in theoretical physics after the war. The university where he was hired had been leveled. Most of the buildings had been destroyed in the saturation firebombing of Tokyo. So he and his little group of students and advisees would meet in his home, but his home had been bombed out. So they were meeting in his Quonset hut. He literally had temporary shelter that was not meant to outlive the war. And soon after the war, amid shortages of food and paper and clothing and much else, he was working to get his own students back on track, and they would meet together in his residence, which itself was literally a kind of temporary shell. And they returned to questions they'd been thinking about together, even during the war, things like quantum electrodynamics, virtual particles, and infinities. And they had developed soon after the war their own kinds of pencil and paper diagram bookkeepers, which Kenji found for me, and we wrote about them together; these were in some of the kind of mimeographed early postwar publications that circulated in Japan. And these were in momentum space. They weren't space time diagrams. But they were trying to do similar things. You have all these ways that virtual particles could be involved. Here are examples of a single electron that's interacting with two separate virtual photons. How do you keep track of it? So Tomonaga and his group had invented their own techniques. Then they began getting news back from the United States after the war about Dyson's new work, not only from reading Newsweek magazine in the new occupation libraries but also because the more senior Japanese physicist, Hideki Yukawa, was actually invited to spend two years at the Institute for Advanced Study, at the personal invitation of Robert Oppenheimer. While Yukawa was visiting is when he won the Nobel Prize for his work on nuclear physics, and he was able to send preprints and news back to his younger colleagues in Japan. So Yukawa was learning-- he was among the people who could learn directly from Freeman Dyson and this very active group at the institute and was able to share news back with this incredible, tight knit, but somewhat separated group in Tokyo.
And they basically, after a few months, kind of ditched their own momentum space diagrams and began adopting the Feynman diagrams because they were already kind of primed for it. They were already immersed in a lot of these details. They already recognized that some kind of diagrammatic bookkeeping would be important, and then they were kind of quick to go. So now you have a group in Tokyo that's really getting very intensively into this stuff. How do you spread it throughout the rest of Japan? Again, remarkable kind of coincidence. Under US occupation, the general headquarters, which is what the occupying force was called, they issued a decree right at this time to try to weaken the hold of the traditional, the so-called imperial university system throughout Japan. So the US ally occupying authorities kind of by Fiat said that every prefect, kind of like roughly speaking every state within Japan will have at least one new university. Much like in the United States, most states have at least two public universities, Michigan State and University of Michigan and so on. They wanted to do a similar thing in Japan partly to help rebuild the country and expand higher education, partly as a kind of political move to break the hold of the elites at these imperial universities. So suddenly, you have a tenfold increase in the number of universities that need to hire young physicists because they suddenly have to teach young students physics. So basically, Tomonaga's students disperse all through Japan because they're now getting hired in these newly created physics professorships. And again, you can chart when do the diagrams show up in Osaka. Oh, that's because this person just moved from Tokyo to Osaka or Kyoto and so on. Last example. What about in the Soviet Union? This is now the-- Feynman introduces these diagrams, and Dyson really helps make sense of them exactly the time that the kind of broader Cold War rivalry begins to really harden into a kind of standoff that would come to dominate the next roughly 20 years or more. So this is exactly the time. In other words, when it was very hard for journals to get back and forth and even much more difficult for individuals to travel back and forth. In fact, in this earliest period between 1948 until the death of Stalin-- he died in March of '53, and about a year later, there was the first kind of tentative person to person exchanges again between scientists. There was no possibility for the kinds of personal contact that had been so important for spreading the diagrams in the United States and Britain. And maybe therefore, it helps us make sense of the fact that no diagrams were published in any of the Soviet physics journals for several years. There was a few months delay in Japan, which makes sense. And we saw how that was kind of overcome or what changed. It was actually multiple years in the case of the Soviet Union. When they did start showing up, there were just a trickle. There were literally 12 articles over the span of two years at the time when there were more than 100 in the US journal. And it turns out six of those 12 were submitted by one physicist. This was not an exponential rise in the same way at the same time. That one person who did submit them, Aleksei Galanin, had very particular reasons to learn how to calculate radiative corrections to Compton scattering because he was working on the top secret H-bomb project and had to worry about shielding for that high radiation pressure inside various H-bomb designs. 
He had, let's just say, special incentive to learn from Dyson's articles. When he did-- here's an example of one of Galanin's early papers-- he stuck to the script, and this is not to fault him; he did exactly what Dyson's articles had prepped him to do. But he didn't do any of the-- neither Galanin nor any of his immediate colleagues in the Soviet Union did any of the kind of broader, more, let's just say, improvisational or more adaptive uses of the diagrams that were already becoming quite common in the United States, in Britain, and even in Japan. So you have a kind of transmission by text, it takes a lot longer, and then the range of applications remains rather narrow. OK, let me pause. It was a long chunk, but let me pause there. I see some questions popping up in the chat. Lucas tells me the Ram's Head Inn is still in business. So once any of us can travel, we should all arrange a field trip. I'd love to see that place with my own eyes. Thank you, Lucas, for confirming. Alex says, "It's like when you get a speeding ticket, and you have to take the class." I'm not sure what that was about, but yes. Fisher says, I'd finally received [INAUDIBLE]. Oh, very good. So Feynman was, indeed, you know, [INAUDIBLE]. So Feynman was a professor at Cornell. He had finished his PhD. It's actually a little funny. He had finished the work for his PhD before leaving Princeton. He was in grad school at Princeton before leaving for Los Alamos before the war, but he hadn't formally written up his thesis yet. And then he got really busy working in the theory group at Los Alamos. So he didn't formally file his PhD dissertation until basically a few weeks after the end of the war, very soon afterwards. So he officially got his PhD, but everyone knew it was all but done. Remember, postdoc stages were not all that common. It was not that unusual to be hired straight into a faculty position from one's PhD. And everyone knew at that point that Feynman's PhD was essentially done. Moreover, he had really impressed people like Hans Bethe and Robert Oppenheimer during the war. And so he was actually being multiply recruited. I found letters-- he was made multiple faculty offers. So Oppenheimer first went back to Berkeley. He tried to hire Feynman. Bethe basically made a better offer. So he was in high demand, as many, many of these young physicists were. So he was starting formally as a member of the faculty. He was still rather young. And he was relatively close in age to Freeman Dyson because Dyson had also had his studies kind of interrupted by the war. He'd worked for a couple of years in the British kind of military, basically part of the Royal Air Force, doing statistical analysis of bombing runs. So they hired a lot of mathematically gifted young people to say what's the most effective way to use bombers, what's the highest kind of kill ratio, so to speak, and what yields the least losses to the British planes. So Dyson was doing a lot of statistical stuff during the war. So his own studies were delayed. So they were pretty close in age, even though they were at kind of different career stages. And Alex confirms that having to take the class would be like having to sit through Schwinger's eight hour lecture. That's right. I've seen the lecture notes that I think it was John Wheeler took on Schwinger's lecture there, and they really were-- they were beautiful, polished. Like Schwinger had thought this through. He wasn't hemming and hawing for eight hours.
So the contrast with Feynman just like winging it for 30 minutes was, I think, all the more stark. But any other questions on any of that material, the early postdoc cascade or anything like that? OK, I will press on. But as usual, please jump in if other questions come up. I'm going to talk next about what were people actually doing with these diagrams, and this will connect with one of the articles that I assigned for today's reading. So remember that second meaning of dispersion is to get more and more distinct from each other, like the single beam of light separating into the distinct colors of the rainbow. So this picture of how they spread doesn't really capture what people were doing with them. And in fact, as we'll talk about in this part, people got actually pretty creative in putting the diagrams to a range of different uses. And so that had a few different seeds, what was driving the kind of distinctiveness of what people did with the diagrams. There were a couple ingredients. One of them was that Feynman and Dyson themselves actually held pretty different ideas about what the diagrams really represented or how they should properly be interpreted. So Feynman always talked about them as intuitive pictures, and I think that's fair. We can see where it's coming from. This is a story that he would often act out as I tried to do in my little Zoom share. The electron spits out a photon. It recoils here. They were very kind of animated tales of things unfolding through space and time. Dyson, whose original training was actually in pure mathematics, not in physics, and throughout his career was always much more formal, much more a mathematical physicist than Feynman, Dyson cautions in his very first article on the new techniques, that these are merely graphs on paper. That's the phrase he uses. These are not pictures. They're not Minkowski diagrams. These are not literal depictions of events unfolding in space and time. What's relevant is their kind of topology that some two lines connect here in a vertex. Two other lines connect over here, and there are only so many ways to arrange various lines and vertices. It was for him more like a kind of mathematician's graph theory. So from the beginning, you're getting kind of mixed signals about what these diagrams are all about. Even more important-- and this is what we'll talk more about in this next part-- was that the questions that seemed most pressing toward which many, many people tried to put these diagrams for use actually were not the problem area for which they'd first been introduced. So these were introduced in the context of quantum electrodynamics, meaning electrons, positrons, photons, every interaction of which is governed by the electric charge. When I say that the electron has a certain likelihood to emit a virtual photon, that likelihood quantitatively is governed by what we call its coupling constant, which is just to say its electric charge. So what that means pictorially is every one of these red dots where, say, electron lines meet a photon line, every time these lines literally connect in a vertex, the corresponding algebraic expression is multiplied by one factor of that coefficient, the electric charge. And then the lines meet again here. So there's a second factor. This is in e squared diagram, we say our second order diagram. This one is a more complicated diagram. There are four spots where electron lines hit photon lines, or vice versa, four red circles. That's an e to the fourth diagram. 
The algebra in front of the long-- excuse me, the coefficient, I should say, in front of the long algebra would have four factors of this constant number e. Here is an example of a tenth order diagram, where you're trading five virtual photons, so that's an e to the 10th. Now, it turns out that in natural units, the charge of the electron is small. Electricity is a weakly coupled force. In particular, e squared is about 1 over 137 in these kind of natural units. So it's smaller than 1 over 100. So this diagram is multiplied by a number that's 100 times smaller than this diagram. No matter what the algebra is, its coefficient is parametrically smaller, which means this diagram is exponentially smaller still. So that's why these things became so useful in QED. You didn't have to calculate to infinity. These two electrons could have traded an infinite number of virtual particles, but the more complicated the interaction, the more places where blue electron lines would connect with these kind of neon green photon lines, the more powers of that small number would come out in front. So as a practical matter, you could stop. This is a perturbation theory calculation, kind of Taylor expanding, and each additional contribution weighs less quantitatively. OK, so that's great. It's a great bookkeeper. None of that works. None of that works when you apply these to strongly interacting particles. And guess what most physicists in the United States applied them to. Strongly interacting particles. It makes no sense at all. They applied it to the scattering of, say, protons off of pi mesons or neutrons off of other particles coming out of the new accelerators. The equivalent, the analog of the electric charge for those nuclear forces was strong in the same natural units. In fact, it's larger than 10, which means that this diagram counts exponentially more than this diagram, and there's an infinite number of them. So they're not doing perturbation theory. And yet they're still using diagrams. That is pretty strange. So that leads to this kind of real explosion in how people adapted the diagrams, often even just the pictorial form of the diagrams themselves, because they weren't just doing one cookie cutter thing. They weren't all doing Feynman-Dyson perturbation theory for electromagnetism. They were mostly applying it to nuclear forces. You can't just do the same trick for nuclear forces because the coupling constant is so large, but people still found useful things. So the diagrams are taking off exponentially, even though they're applying them to new and different kinds of applications. So this looks kind of scattershot. It's meant to be scattershot. We begin to get some order once we come back to asking about who knew whom. In particular, who was learning from whom? Who was studying with whom? So I call these kind of family resemblances. In each of these colored rectangles, the diagram that's a little elevated on the left comes from the PhD advisor of the person whose diagram appears on the right. These are literally mentor-student relationships. I used to try to play this kind of parlor game when I could go to parlors, like choose any random Feynman diagram from the Physical Review, at least up through the mid '50s. And with reasonable accuracy, I at least would claim, I could tell you who drew it or at least what PhD program they came from. It was pretty good. Now, there may be more useful life skills to train oneself on, but that was my way into this.
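As a quick numerical aside (this sketch is added here and is not from the lecture itself): if you keep only the powers of the coupling in front of each diagram and ignore the algebra entirely, a few lines of Python show why truncating the series works for QED but not for the nuclear force. The strong-coupling value used below is just a placeholder consistent with "larger than 10."

```python
# Illustrative sketch only: compare how much successive orders of diagrams
# "weigh" when the coupling squared is small (QED) versus large (nuclear force).
# The real algebra in front of each diagram is ignored; only the powers of the
# coupling are kept, which is the point being made in the lecture.

QED_COUPLING_SQ = 1.0 / 137.0   # e squared in natural units, roughly 1/137
STRONG_COUPLING_SQ = 14.0       # placeholder for "larger than 10"

def diagram_weight(coupling_sq, n_pairs_of_vertices):
    """Crude weight of a diagram with 2n vertices: (coupling^2)^n."""
    return coupling_sq ** n_pairs_of_vertices

for n in (1, 2, 5):  # the e^2, e^4, and e^10 diagrams mentioned above
    print(f"e^{2*n} diagram:  QED ~ {diagram_weight(QED_COUPLING_SQ, n):.1e}, "
          f"strong ~ {diagram_weight(STRONG_COUPLING_SQ, n):.1e}")

# QED: each extra order is roughly 100 times smaller, so you can stop early.
# Strong force: each extra order is more than 10 times bigger, so you can't.
```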
So you could literally see oh, that's a Richard Feynman student, but this over here, that's actually a Norman Kroll student, or this one over here, that person was trained by Robert Marshak, and so on. You can see these local adaptations because these different departments, or members of these different departments, were actually doing different things with the diagrams, often set by local demands or priorities. So for example, at Rochester in upstate New York, not too, too far from Ithaca, but far enough, they had just received, much like MIT had done, funding to build their own small particle accelerator, much like MIT's synchrotron. So the theorists there, their main task was to help make sense of all the new kinds of particles that kept shooting out every time they turned that machine on. They weren't worrying about these very fancy virtual particle corrections. There were no single virtual particles, none of these so-called loop corrections that led to all the trouble. It was really a way of classifying what goes with what. When these two things smash into each other, what kinds of detritus comes out? It's classification, rather than kind of dynamics. These people were applying the diagrams not to high energy physics but actually to many body theory and what would become known as condensed matter physics. What happens when two things are near each other and the same two particles keep trading force carrying particles over and over again? How do you add up that effect? That might be something like a kind of bound state or some stable system, a different kind of challenge they were working toward. So you can start seeing the kind of local variation in who's using the diagrams toward what ends. And now let me talk briefly about one of the even further kind of excursions or even more creative reinterpretations that I wrote about in one of the articles for today, a little bit later in time. And this focuses on the work of Geoffrey Chew. He's the one shown here at the blackboard. He was the kind of main theoretical high-energy physicist at the University of California Berkeley over the '50s and into the '60s, most of the '60s. So he was, like many of his colleagues, really interested in nuclear forces, not electrodynamics, but all these many particles coming out of these big accelerators and all the different ways they could interact with each other. He knew as well as all of them did that faced with a large coupling constant, you can't do this kind of perturbation theory. And yet there are all these other things that one might try to do to make sense of all the new empirical riches coming from the new machines. So he actually becomes so frustrated with quantum field theory, precisely because it seems not up to the task of handling these strong nuclear forces. He says quantum field theory is dead. In fact, he says, it's sterile and destined not to die but just to fade away. And yet the diagrams, he says, hold a real key to moving forward. So he tries to lift the diagrams out of quantum field theory and toss away what had been used actually to derive them in the first place. That's a pretty interesting move. In its place, he starts talking about something he called-- he and his students called a nuclear democracy and eventually the bootstrap. And the idea was to treat all of these nuclear particles-- by this point, more than 100 of them had been identified-- treat them all as being on an equal footing. Why was that so radical?
According to ordinary quantum field theory, you start out with a very different picture. You have certain so-called elementary particles, and some of them might stick together and make bound states or composites. But some are more special than others. He actually called that an aristocracy or a kind of elitism or hierarchy. And he said maybe with 100 plus nuclear particles, let's not try to sort them into which ones are really special and which ones aren't. They all seem to fly out together. Let's assume they're all equally special, treat them all on the same footing, a democracy. Now, this was unfolding, as I wrote about in the essay, at a time when he was very concerned about other forms of democracy, a slightly more literal form involving people and the fair treatment of people, amid the early stages of domestic anticommunism, the so-called McCarthy era, which we talked about some weeks ago in this class. In fact, he was the first professor to resign from the physics department, and perhaps the first one from the entire University of California Berkeley campus to resign, in 1949 over a kind of loyalty oath controversy. He got clearance during the war. There was no question of any kind of communist association in his past, but he thought, on principle, it was simply un-American in his estimation to demand that people swear a kind of political allegiance. That was a violation of people's free political consciences. So he resigned because the university started requiring these anti-communist loyalty oaths, and he said, forget it. He was immediately hired at the University of Illinois, Urbana-Champaign. And he got more and more involved with political efforts on behalf of similar physics groups. He testified before the US Congress and so on. And what's interesting is if you go through his testimony and his op eds and his other publications from this period, you see that same language recur about a kind of democratic treatment. No one and no nuclear particle deserves special status. They should all be treated as equal partners under the law. And I write more about that in the essay, but that's the kind of gist. He's using this language of treat everyone the same in a lot of domains around the same time. So why does he think these Feynman diagrams might be the way forward? What does he want to do with them in this new kind of so-called democracy? Well, he says that the diagrams seem to hold more content than quantum field theory. So let's see where the diagrams themselves will lead us. In this orientation of a simple-looking Feynman diagram-- the rho meson here had actually just been identified by some of his experimentalist colleagues not long before-- this shows two pions exchanging a rho meson. So the rho is a kind of force carrier, much like, say, a virtual photon would be a force carrier in a QED diagram. But if you just rotate that diagram, literally turn it on its side, then you tell a very different story. You see two pi mesons brought together. They were attracted because they felt a force. Once they approached each other, they actually can create a temporary bound state of the rho meson. It's an unstable particle, so the rho meson later will then decay into a pair of pions. And that just comes from rotating the same diagram. Meanwhile, once you create some rho mesons this way, they can be involved in all kinds of other interactions where they would appear as a so-called elementary particle.
So the status of the particle type seems to have changed just by rotating these very simple looking line drawings. And so Geoffrey Chew said basically let's take the diagrams at face value. If they make no distinction between these different kinds of roles or hierarchies, what's truly elementary versus composite, then why should our equations? So he wanted to build a new, quantitative approach to the nuclear forces following these kind of diagram-based symmetries, rather than quantum field theory. And they went even further, saying what if every single particle was a kind of self-consistent state? Each would pull itself up by its own bootstraps, using a well known kind of American idiomatic expression. They would all be kind of self-made people. No one was born more elementary or more elite than the others, was the idea. So what if every particle like that rho meson would generate the forces that would give rise to its own production? That's pretty cool. Could there be one unique self-consistent solution to all these equations? Going back to those kind of rotated diagrams, in this example, you have some expression, some quantitative expression for the force that this rho meson exerts, an attractive force between these pi mesons. That expression will depend on the mass of the particle that's exchanged and this coupling constant, the factor that appears at every vertex. Rotate your piece of paper by 90 degrees. Now you see that once these two pions have attracted each other, thanks to that rho meson, they've formed this bound state. The odds for that to happen also have some distinct expression, which still only depends on two quantities, the mass of the bound state they produce and this coupling constant. So now you have two equations for two unknowns. So you can look for self-consistent solutions. And in fact, he and his students were able to publish the first compelling theoretical account of why these nuclear particles had the quantitative properties they did at a time when no one else could come even close. This is far outside the regime of perturbation theory, and here was this kind of very clever self-consistent way of using their so-called bootstrap to try to account for why the rho meson was particularly heavy and why it had this particular interaction strength with pions. So let me pause there. Any questions on that? It's just too awesome. You're stunned. You're just shocked into silence. I mean, I get that way, too. It's OK. What I mostly want to emphasize with that part-- hopefully it was clear-- was that even after Dyson did such a kind of celebrated job of clarifying exactly what Feynman diagrams should mean, exactly how they should be used in a calculation, in the years after that, that led to more different uses, rather than collapsing to only one. I found that really pretty cool actually, pretty neat. And then some people got even more ambitious with them, like the more extreme example from the Berkeley group. But even at Rochester or Urbana or in Cambridge, England, people were being very kind of-- they treated the diagrams as pretty malleable for a long time. And that was neat. It's hard to get computers to do that. Computers do one thing with these diagrams now. But for much of their history, the diagrams meant a variety of things that could be used in a variety of ways. Yeah, good, Fisher. So Fisher asks, when we're talking about perturbation theory, are we referring to everything beyond the second order, beyond e squared, or everything including e squared?
I guess technically, we would even include the e squared. The first term would be one, which is that nothing happens, or I should say nothing changes. So your first term in what's called the s-matrix expansion is 1, plus everything else. And so the next term, the first term to enter, would then be e squared. The first correction to nothing happening would be that two electrons could scatter. Then they could have scattered in more complicated ways. So it is really like Taylor-- you can think of it like Taylor expanding. And it is remarkable because it works, because you have a controlled parameter. You can really do a controlled perturbation expansion. So even without calculating an arbitrarily complicated diagram, you can estimate how much it would matter quantitatively. What would be your error budget if you left it out, so to speak. And you can't do any of that using these techniques for the nuclear force. All right, let me press on. I'll talk about this last part. It'll be pretty quick. I'll try not to go for eight hours. No promises. But this last part is: so why do they stick? If they're being used in so many different ways, why do people stick with them instead of designing other techniques along the way? So within perturbation theory, when doing these electrodynamic calculations, I think it's pretty clear, and people spoke of it even at the time, that these were just an extraordinarily useful tool. Dyson remarked years later, the calculation he first did for Hans Bethe as a grad student before learning about the diagrams took him several months of work and several hundred sheets of paper. I've been there. Dick Feynman, Richard Feynman, could get the same answer calculated on the blackboard in half an hour, and that was indeed what became more and more common. Julian Schwinger said a bit more snarkily-- Schwinger and Feynman were kind of like frenemies. He said Feynman diagrams brought computation to the masses, which you could almost see Schwinger sniffing, like my students really learn how to calculate, but any old slob could calculate using Feynman's techniques. That's the spirit in which I take that remark, that these really were effective, and many, many people could calculate, even people who weren't blessed to have had Julian Schwinger as their PhD advisor. This cartoon here ran in Physics Today many years ago, basically saying if you're going to go through these dangerous thickets of perturbation theory as if you were a kind of explorer in the Amazonian jungle, it helps to have a reliable map. And so the Feynman diagrams would be a reliable map to this complicated, seemingly dangerous, or forbidding terrain. So I think that makes sense, except, as I was just emphasizing, that can hardly be the whole story, because most physicists weren't using them in the early days for what they were most good at, for what the diagrams were most efficacious for. So sure, that would explain why people use them in weakly coupled situations, but what about all these other applications? So to make sense of that, I found it very helpful actually, very fun, to go back to some classic studies in art history. These are, after all, a kind of visual representation scheme, and art historians have been arguing for a long, long, long time about why various styles in art, modes of depiction, have come to seem natural or have a kind of staying power and what happens when they change.
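To put the answer to Fisher's question a moment ago in symbols (the A_n shorthand here is added, not notation from the lecture; it just stands for the sum of all diagrams at each order):

$$ S \;\approx\; 1 \;+\; e^{2}A_{1} \;+\; e^{4}A_{2} \;+\; e^{6}A_{3} \;+\; \cdots, \qquad e^{2} \approx \tfrac{1}{137}. $$

Cutting the series off after the e squared term leaves an error that is down by roughly another factor of 1/137, which is the sense of "error budget" above; with a nuclear coupling squared larger than 10, each successive term would be bigger than the last, and no such cutoff is controlled.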
So the art historian Ernst Gombrich wrote this really lovely, very influential book many years ago, where he says that basically, his catch phrase was that painters-- how does it go? They see what they paint. They don't paint what they see. They have an idea of what they want to convey, and that structure is how they even take in the world. He kind of flips the story. You don't just look out your window and say, oh, I'll paint a landscape. You learn techniques, and then you come to see the world through the lens of those techniques, not unlike what, say, the historian of science Thomas Kuhn had argued around the same time. So these are examples of the kinds of things that Gombrich talks about. These were from training manuals in the early modern period to learn how to draw a human face. You break it down, and you literally practice drawing an eyeball over and over and over again until you see actual people through this kind of schema of how you draw an eyeball, or if you're employed by the local church, which most of them were, here's how you draw the adorable little cherubs and little baby angels. You practice drawing these chubby-cheeked faces, and you break it down and you practice and practice, like doing problem sets before you go paint the kind of latest picture to decorate the local church. And so it seemed a similar thing was happening with these scientific images, that the physicists' kind of prior habits helped them see in a new way just as if you stare-- if you practice this technique over and over again, you structure your newer experiences with a kind of scheme that you've already worked hard to master. And so these pictorial conventions can actually help us map a kind of pedagogical lineage, not necessarily a conceptual one. They were often kind of irrelevant to the new uses, but they were more a kind of social or pedagogical value. They helped you get up to speed or helped you make sense of the newer things you then wanted to do. The most significant, at least as it makes sense to me, was that people were slotting these things into a very well established tradition by that point these Minkowski diagrams, which, of course, many of you had already seen probably before this term. We spent some time looking at them together early in this semester, and we talked about special relativity, literally space time diagrams. Feynman was not shy about adopting the exact same conventions when he began he began drawing his little doodles at first kind of private sketches, not just that you have one dimension of space and one of time. He oriented space along the horizontal. There's no law that says you have to do that. He tacitly starts scaling the speed of light to be 1, so that light travels along 45 degree diagonals. He's not literally drawing space time diagrams. These have been completely kind of pedagogical second nature for generations of physicists in many, many parts of the world, not just the United States before Feynman and Dyson introduced their work. This was a kind of context in which students could then encounter Feynman diagrams. These are examples from Feynman's own work. This one and this one from his early publications, this is my favorite. This comes from one of his lecture notes at Cornell when he was lecturing in 1949 in the department. What I love is that he actually gets his own Feynman diagrams wrong. He's so enamored of this kind of storytelling space time structure that he draws diagrams that literally could not be calculated in QED. 
They have the wrong number of legs entering each vertex, but he got caught up trying to tell his students about how you could have pair production in QED and trying to narrate that story through space and time, and we can forgive him for forgetting that in the theory for which he'd eventually win the Nobel Prize, only two straight legs and one curvy leg can meet at each point. Oops, because there's so much bound up, I think, for Feynman in this kind of space time narrative and the trajectories of thinking moving through space and time. It wasn't only Feynman. If you look at the earliest textbooks, they keep repeating the same scheme even when they move into momentum space for which there's absolutely no meaning, no relevance whatsoever to the kind of 45-degree diagonal. That only means something if you're actually working in space time and scaling the speed of light to be one. It means nothing if you're in momentum space, and yet in these examples and many of these, physicists working explicitly in momentum space nonetheless kept using the space time Minkowski conventions. And I argue that was kind of what they were used to, and it also probably helped some of the students get kind of comfortable with them as well. There was another visual feature that, again, in terms of which these physicists were immersed at the time, and that comes from this explosion of a very large and rapid production of particle accelerators after the war funded by the Atomic Energy Commission, where many, many universities, like MIT, had their own local atom smasher. So what became common was to draw these freehand reconstructions based on the bubble chamber photographs. This was not meant to be a Feynman diagram, but it became clear because it was actually fairly expensive to keep trying to reprint the photographs. You would just reprint these often hand drawn line sketch reconstructions, which consist, like these Feynman diagrams, of propagation lines and vertices. So there was, in some sense, a kind of reinforcement of a kind of realism that had nothing to do with Freeman Dyson's very careful kind of derivation or anything about perturbation theory. So there was a kind of realism. And you see that in this quotation for example, some years later in a very influential textbook that these Feynman diagrams, Richard Mattuck wrote, are so vividly physical looking, that it seems a bit extreme to completely reject any physical interpretation whatsoever. So he does this funny kind of cheat. We will therefore talk about the diagrams as if they were physical, but remember they're not. So he can't help himself but describe them as real things moving through space and time. Other textbooks at the time reinforced that visually. If we could look, if we could really kind of zoom in with our own eye and watch these elementary processes, we would see Feynman diagrams. We wouldn't see cathode rays scattering on a scintillation screen. We would literally see nature as Feynman diagrams if we had a big microscope. OK, so let me wrap up. So scientists then and as now always have to practice using their tools. Our tools don't come for free. That's why it takes many years of training, many problem sets, many late nights, many lab reports. And the tools aren't automatic. The tools themselves can change and can be put to new uses. How do they spread, how these new techniques spread, writing a really great textbook or nowadays maybe making a great viral YouTube video, that can certainly help. 
But at least in these early days, it took much more than only kind of text-based means of propagation. It really took very specific social institutions, like postdocs and their dispersal, to help move these techniques around. Moreover, even the kind of fanciest new techniques will be incorporated, will be made sense of in the context of what generations had already become comfortable with. So I think we can make sense of the particular rhythms of the history of these diagrams by thinking about pedagogically what had previous generations already become used to. And so in that sense, you can see the diagrams and their users kind of being forged together, that the early users of the techniques were themselves being molded in these formal training stages of their career, and they're often molding the diagrams at the same time. So I will stop there. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_13_Physics_under_Hitler.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: Today we're starting this new unit for the class. So the first main unit-- we had a kind of warm-up unit of a kind of 19th-century legacy, as you probably remember. And then we spent a good chunk of time right up until the previous class session on looking at physics really during the first 20 or 30 years or so of the 20th century-- the real focus on what were some of the kind of approaches conceptually and intellectually that physicists in many parts of the world were using to try to make sense of nature. And that got kind of bound up with what we now call the origins of modern physics-- relativity and quantum theory in particular. We were also looking at the kinds of institutions. What kinds of settings were many of those investigations taking place in? And so with today's class and for the next several to come, we're now looking at the next main unit, which is when physics and physicists start interacting much more directly with statecraft, with formal overt politics, with governments-- nation-state governments and even international relations. And that, as we'll see, brings up a new host of questions, intellectual challenges, institutional relationships and all that for physicists throughout-- what we'll mostly focus on-- throughout various parts of Europe and then the United States. Although, there's lots to be said about other parts of the world too, and I'd be glad to talk about that if you have questions. So today we're launching this next main unit of physics-- physicists and the state, meaning through governments. And we're going to start by looking actually at developments in Germany where so much of that work on what we now call modern physics had unfolded. So that's our job for today. We have, as usual, three main parts for the class. We're going to recap or revisit some material that we actually looked rather briefly at a few sessions ago when we were talking about Einstein and the general theory of relativity and this movement that was called Deutsche Physik, which is usually translated into English as Aryan physics. A literal translation would be German. But it really meant this kind of race or racially based notion of what a proper approach to physics would be. So we'll talk about Deutsche Physik. Then we'll pivot and talk for a good chunk of today about some developments in nuclear physics, which is quite, quite new in this period. And in particular, the ideas about nuclear fission. And as you see there, again, optional lecture notes on the course Canvas site to dig into some of that a bit more. I'll go through quickly some things, but there's some more details in the optional notes. And the last part for today is really looking at the collision or the union of these first two topics. And we'll look at how did Werner Heisenberg himself and many of his immediate colleagues who stayed within Germany after the rise of Hitler, what did nuclear physics mean for them? And what did they think they were doing as they worked very, very hard during the war years on topics related to nuclear physics and nuclear fission? So that's what we're going to talk about today. And then on Wednesday, we'll turn and look at some developments in Britain, and then especially in the United States around similar timescales. OK. So this is just a reminder-- something we talked a little bit about when we talked about general relativity. 
As early as the spring of 1920-- I believe the very first rally was April or so of 1920-- several political opportunists within Germany took advantage of the fact that Einstein himself, as well as his general theory of relativity, had become kind of overnight sensations. Einstein was everywhere in the news after the dramatic eclipse expedition results. He was heralded around the world as this amazing genius who had toppled Isaac Newton's physics. And so some opportunists within kind of war-ravaged Germany took advantage of Einstein's fame to get their own message out. Their message was not really about physics, so they used physics-- or the debates about some ideas in physics-- to stage what was really ultimately a kind of political movement. So they began staging anti-relativity rallies in places like sports arenas and opera houses and music halls and so on. And the kind of public face-- the faces that were most often headlining these events-- were these two German Nobel Laureates in physics, both experimental physicists-- Johannes Stark and Philipp Lenard. And again, there's all kinds of ironies here. Stark had actually invited Einstein back in 1907 to write a review article on Einstein's own work because Stark thought it was interesting but not getting sufficient attention. Philipp Lenard won his own Nobel Prize in 1905 for Lenard's experiments on the photoelectric effect, which of course, triggered Einstein's imagination directly. These folks had all kinds of physics interests in common in the early years of the 20th century. But by the 1920s, they had really come quite far apart. So the rhetoric of what became known as the Deutsche Physik, or Aryan physics movement, was that of the tatmensch, the man of action, in their terms, that Newton and Galileo and Michael Faraday, according to people like Philipp Lenard, had all been Aryan. They'd all been of this what Lenard and Stark considered the kind of purest kind of racial stock, that they were just like the kinds of people the Nazis wanted to further elevate, according to Lenard, even though Newton and Galileo and Faraday themselves would certainly not have recognized that as such. And Lenard argued that these people, unlike people like Einstein, these older heroes of physics had partaken in the same kind of active-- man of action kind of spirit as Adolf Hitler himself. Really quite striking to go back and read some of this material. And as I mentioned in the previous time, in one of Lenard's books from the 1940s, he would include these portraits of people like Isaac Newton shown here to try to make the point that these people did not have so-called Jewish features, that even literally the shape of their face, their nose and these things proved, according to Lenard, that they were of appropriate pure racial stock, unlike the people that Lenard now was denigrating so vehemently. And so the group began on the fringes. This was really a fringe effort starting in 1920. But just a little over a decade later, they had moved squarely into the center within Germany, especially after the Nazis achieved power in January of 1933. Again, as you may know, Adolf Hitler was elected chancellor of Germany in late January of '33. And the entire Nazi party then began to take over or be put in charge of a series of German government ministries. And so in particular, they took over things like the Education Ministry. 
As you may remember from early in class, in Germany in particular, especially after there was a single unified Germany, there was one federal level, government level ministry that would place professors in basically every open slot within this kind of state-run university system. So the adherents of Deutsche Physik, who had begun as early as 1920, were suddenly basically in charge of things like every single professorial appointment in all the universities in Germany. Of course, much beyond that as well. So Hitler comes to power in January of 1933. By April of that year-- very quickly-- the Nazis had begun to implement so-called civil service laws. These were basically race-based requirements for people to hold government positions. And this included things like university faculty. These were state-run universities. So the so-called civil service laws, which forbade people of non-Aryan descent, like Jews and others, from holding government jobs, this starts to trigger a very, very rapid exodus of scholars out of Germany. Either they were fired, or they were not personally fired but were concerned about the direction things were heading, and they left in protest over the way their colleagues and students were treated. And so this triggers about 100 physicists and mathematicians who leave Germany pretty quickly starting in around 1933. And many of them head toward both Britain and the United States. They move. Some of them move permanently. Some of them move for the duration of the war and then relocate back to Germany. But many of them never went back. So most famously, Einstein himself left Germany, renounced his position with the Prussian Academy of Sciences, and he moved to the Institute for Advanced Study in Princeton, New Jersey. In fact, he was the first faculty member hired there. The Institute itself was brand new. So Einstein moves to the United States. Erwin Schrodinger, who was not Jewish but nonetheless was very concerned very quickly at the direction of these so-called civil service laws, he resigned his position in Berlin. He moved to Oxford and then settled in Dublin for the duration of the war. Emmy Noether, who was an immensely influential mathematical physicist from Gottingen, she left Germany and she was hired at Bryn Mawr, a women's college in Pennsylvania. Max Born also left Gottingen. He went first to Cambridge, then to Edinburgh for the duration of the war. Younger scholars, like Hans Bethe, left positions in Germany. And he moved to Cornell and had a very, very long, long, long career at Cornell. He arrived at Cornell around 1935, and he stayed there. He was a professor there for maybe 60, or even more than 60, years. He had an enormously long career. James Franck moved to the University of Chicago, Felix Bloch to Stanford. Viki Weisskopf eventually makes his way to MIT, first by way of Rochester and so on. It's literally 100 of these cases that have been well-documented. And what's really important to keep in mind is that these were by no means easy transitions. For some, they were pretty easy. Einstein was welcomed immediately at the Institute for Advanced Study. It was a great boon for that new young institute. But many of these other folks, who were neither so famous as Einstein nor entering such brand new institutions, for many of these other folks it was actually not an easy move. The United States itself, like many parts of the world, not just in Europe or North America, was deep into the Great Depression. This was already several years in.
So there were many US-based scholars in this case looking for university positions, as well as all kinds of jobs. Unemployment was rampant throughout the United States. But even beyond that, there was an entrenched, and again by now well-documented, anti-Semitism throughout many, many US universities, including sometimes most rabidly among the most elite universities-- the Ivy League and others like that-- so that sometimes it was actually hard to place these people who were extremely eminent and already decorated in their fields. But many US universities balked or dragged their feet. An example of that involves Robert Oppenheimer-- someone we'll be talking quite a lot about in the coming lectures. Oppenheimer was born in New York City. He wasn't someone who had to flee Germany. But when he was hired at Berkeley in 1929, before the rise of the Nazis, the department had had to work extra hard to get him hired in a department that already had something like 60 faculty-- it was a huge department-- because the department already had one other person of Jewish background in the department. And so the department chair had to fight against the notion of having, quote unquote, "too many Jews" in a huge department by hiring this second one in. And that was even before you have this kind of exodus with the rise of the Nazis. Likewise, at Dartmouth College-- my own alma mater, so I'm not picking on a stranger here-- again, once this outflow began in the early '30s, correspondence turned up showing that they were perfectly happy to bring in a refugee faculty candidate as long as the candidate, quote, "shouldn't seem too Jewish." So there was a kind of entrenched anti-Semitism among even very, very elite, or especially very elite, US universities. And that compounded the difficulties of placing some of these people, many of whom had to flee under very dire circumstances. OK. That's for the people who did leave after the rise of the Nazis. What about some who stayed behind? So within Germany, especially once the Nazis had taken over, Nazi officials, which included the German Education Ministry and beyond, began criticizing physicists who were not themselves of Jewish background, but who seemed to demonstrate what at least the Nazi officials considered insufficient loyalty to the regime. And one way that these folks supposedly demonstrated insufficient loyalty was that they continued to teach what was by that time branded so-called Jewish physics, which meant either physics by people who were of Jewish background, like relativity, or physics that struck some of these Deutsche Physik acolytes, like Stark or Lenard, as being somehow too mathematical, too abstract, too removed from kind of proper forms of reasoning. Often, it was actually just more crude. Was that work done by someone who was Jewish? Then it's Jewish physics. Often, it was just a simple conflation. This reached its real apogee-- the real highlight, or the highest point of this kind of attack-- in 1937, so several years into the regime, when in fact the acolytes of Deutsche Physik, who now had the ear of the German Education Ministry, managed to block the appointment of Werner Heisenberg, who everyone within Germany and beyond had just simply assumed would be appointed, would get a big promotion, when his own main mentor retired. So as you may remember from some lectures on so-called old quantum theory a few sessions ago, Arnold Sommerfeld was the kind of head professor, the ordinarius professor, in Munich for theoretical physics.
He trained an enormous number of gifted disciples, including Heisenberg, Wolfgang Pauli, Hans Bethe-- just a huge list. And by this point, Heisenberg was clearly Sommerfeld's most famous, most accomplished student. Heisenberg had already received the Nobel Prize by this point for his work on quantum theory. Sommerfeld retired after a long career. And everyone just assumed, including Heisenberg himself, that the central ministry would simply appoint Heisenberg as the successor. Instead, the Deutsche Physik kind of ideologues organized a press campaign against Heisenberg labeling him what was called a white Jew. And again, that was their term. They said, we know he's not personally of Jewish background, but he behaves in too friendly a manner, in their reckoning. He was too supportive of things like Jewish physics because he kept teaching relativity, for example, in his classes. So they labeled him a white Jew and began a real kind of press smear campaign in the Gestapo-controlled press. Things could really have gotten worse. In fact, there was a real concern that Heisenberg himself might have been sent off to a concentration camp, or certainly could have faced much worse treatment than only being denied a promotion. And what finally stopped the attack was that Heisenberg's mother interceded directly with a close family friend, who happened to be the mother of Heinrich Himmler. Himmler's shown here in this circle. Himmler, by that point, was the chief of the SS, one of these paramilitary Nazi forces. Here's Himmler with Hitler and other Nazi officials just a few years before they kind of took over. So it was really kind of an accident of who knows who. Heisenberg's mother calling up the mother of Himmler because they had known each other when they were each younger and basically said, can't our boys get along? It was quite extraordinary. It took that level of backroom negotiations to make sure that not worse happened to Heisenberg than only getting passed over for a fancy promotion. OK. So that's usually seen as the kind of apogee of the power of this Deutsche Physik movement within Germany. Obviously, the Nazis weren't done in 1937. As I'm sure you know, the war dragged on until 1945. But this was really the high point of this kind of power of Deutsche Physik. So not that the Nazis went away, but this notion of having a kind of racially pure or kind of ideological tests for physics, that begins to wane very soon after this very dramatic showdown over Heisenberg's promotion. And in fact, what many historians have come to conclude is that the regime-- the Nazis in power said, we actually could have real uses for all these physicists, at least the ones who weren't of Jewish background, that physics might not be only associated with kind of philosophy or ideology or kind of political talking points. But there was something stirring which got even to the highest levels of the Nazi party by the late 1930s that maybe physics and physicists could be useful, could be manifestly useful to Nazi aims and not merely something to be kind of policed as a kind of thought police. And what had changed really was summed up in two words-- nuclear physics. So let me pause there. We're going to have some questions and discussion. And then we'll look at some of the work in nuclear physics that they were talking about. Any questions about that? Had anyone heard that story before about Heisenberg and his mother and Himmler's mother? I just find that astonishing. Talk about small world networks. No. OK. 
I'm happy to press on. We got a lot of juicy stuff we can talk about for nuclear physics. But if any questions come up about Deutsche Physik, of course, chime in. But if not, I think I'll press on. OK. Obie, did you have a question? Was it-- Oh, OK. OK. Yeah, if only somebody called Hitler's mom. Yeah, exactly. Can you imagine? Gary says, if all these remarkable physicists stayed in Germany, what would have been the result of war? Yeah. Gary, thank you. In fact, we'll be coming to that. That sets up much of the rest of today's class. Yeah. Seeler says if only he'd been accepted at art school-- meaning Hitler, not Heisenberg. So some of you may know, Hitler was himself a kind of aspiring painter as a young person. And he felt slights at every turn, including that he was never admitted to hoped for art school. The world was out to get him, as far as he was concerned. Alex asked a very intriguing question. Why did Heisenberg stay behind? Very good. And again, we'll talk a bit about that. But that will come up in some of the themes in the next part. But as a preview, it's important to clarify. Heisenberg never ever joined the Nazi party. He never expressed anything like a clear sympathy with the most vile parts of the Nazi worldview. So I want to be very clear. On the other hand, Heisenberg was very, very patriotic. He was someone who had a deep, deep pride and a longer view of German learning and German culture. And again, we'll come to that actually pretty soon. So he was a patriotic German and not a Nazi. And he believed, as many of his colleagues did-- many, many German scientists stayed behind. He believed that the Nazi-- he hoped, in the early years-- he believed that this would be a temporary aberration, that the Nazis were so counter to what Heisenberg himself considered the kind of highest points of German learning and German philosophy and German culture, that this would be a temporary kind of fever. The fever would break. The Nazis would be run out of town soon, he hoped or believed. And therefore, there should be some kind of intellectual leaders who stayed behind to rebuild. So he thought there would be a need for many, many smart, devoted Germans who weren't Nazis, but who were proud of the kind of longer heritage of German learning and culture. Think about all the composers and the poets, and they had this long list of which they were very, very proud. So they thought they'd have to stick around because this Nazi thing was going to go away soon, they hoped. And then Germany would have to rebuild. And that was-- and so it was a kind of patriotism rather than Nazism per se. But that became for many other people who did leave Germany, either because they felt they really had to for personal safety, or people like Schrodinger who were critics of the regime but were not in the same kind of personal danger-- that kind of argument didn't convince everyone. Obviously, not everyone thought this-- thought that-- some people thought Heisenberg was kind of fooling himself, or the people like Heisenberg who chose to stay. So it was not at all obvious that this was-- even at the time, this was the best course of action or the morally appropriate one. But it was tricky. And so the short the shorter version is Heisenberg was deeply patriotic and not a Nazi. And he hoped he could help kind of rebuild the country he loved so much. And he hoped it would come soon. That's a good question. Any other questions on that? That's actually a great segue to the next part. OK. Let's press on. 
Let's see what else-- what might have helped convince even the ardent Nazis that there were other reasons to think about physics in new ways? So we're going to step back a little bit and look at some of the conceptual developments and experimental developments that had been going on really just in remarkable synchrony with the Deutsche Physik movement and the rise of the Nazis. So throughout the late '20s and into the early '30s, several research groups-- certainly throughout Europe and beyond-- were working on radioactivity. That dated back to the 1890s. But these groups, by this point, a few decades in, began to suspect that there was more going on within atomic nuclei than only protons, that there might actually be a whole second kind of particle within these nuclei. And again, to us this sounds like, no, duh. How did they know that? Well, we know that because they worked so hard to figure this out. They thought there could be an electrically neutral particle whose mass was at least pretty close to that of the proton but would not respond to electric attractions or repulsions the way a proton does. And they had all kinds of reasons for thinking along these lines. Among the real world leaders on that topic was a husband and wife team, or a wife and husband team, Irene and Frédéric Joliot-Curie. Irene Curie was one of the daughters of Marie and Pierre Curie. So she entered the family business. She married Frédéric, who was also a nuclear scientist. And they also set up a world-class laboratory in Paris, really kind of taking the mantle from Marie and Pierre. And one of the things that they were especially adept at studying was something called artificial radioactivity. They really meant induced radioactivity. So Marie Curie and Pierre and members of their generation, even younger folks like Rutherford, they'd been excited about natural emitters-- substances that were radioactive without you having to do anything to them. They would emit these radiations-- alpha particles, beta particles, gamma rays-- on their own because of some kind of natural radioactive properties. By this next generation, literally the next generation, meaning the daughter Irene and Frédéric, they were wondering could they actually create radioactivity or induce it by taking materials that were not on their own radioactive, irradiating them-- having a radioactive source bombard them with radiation, alpha particles, beta particles, whatever-- and have that new target that they shone this stuff onto, could that then become radioactive as well? That became known as artificial. It really means induced radioactivity. And this became the next big frontier for trying to understand radioactivity in general, and by means of that, trying to get more clarity on the structure of atoms and nuclei. And as you see, Irene and Frédéric shared the 1935 Nobel Prize in Chemistry for their work. They really had already become world leaders in that. Just a little bit before they won the prize, when they were already very clearly at the top of the game in that field, one of their colleagues, James Chadwick, who was in Britain, followed up on one of their suggestions. In fact, he basically redid an experiment that the Joliot-Curies had done almost exactly. But he thought they weren't interpreting it quite right. So he wanted to zoom in on this.
So Chadwick had been a student of Ernest Rutherford's at Manchester, and by this point was himself now a more senior researcher at the Cavendish back in Cambridge. So this, again, really was an experimental design almost entirely coming from Irene and Frédéric Joliot-Curie. But Chadwick gave it a little tweak to try to clarify some things. The idea was to take one of these natural radioactive sources-- in this case polonium. And that's what we call an alpha emitter. So all on its own polonium will be radioactive, and its form of emission is alpha particles. So then they would shine the alpha particles onto a target, a block of beryllium. And then some different kind of radiation would come out. They had now induced radioactivity. Beryllium on its own was not radioactive. They could induce a radioactive kind of response by irradiating it with alpha particles. Some as yet unknown stuff came out. Chadwick's idea was then to shine that onto paraffin wax, a very hydrogen-rich substance, each atom of which is very, very lightweight. And then protons came out. And then he could measure basically the kind of stopping power of those protons. So what Chadwick clarified-- this was one of the few times that Irene and Frédéric Joliot-Curie had not kind of just nailed it the first time, so Chadwick came in and clarified-- was that this unknown radiation, the stuff that came flying out upon irradiating beryllium with alpha particles, was this long suspected neutral particle, electrically neutral with a mass pretty close to that of protons. So the way that Chadwick made sense of this reaction was that there were alpha particles coming out from the polonium and encountering the beryllium, which would convert into a stable carbon atom and have this unknown radiation be the neutron. And again, this is probably familiar notation for you. The lower number-- the subscript number here-- is what's called the atomic number. That just counts the number of protons. The placement of any of these items on the periodic table is determined by the number of protons in the nucleus. And the superscript, the raised number, is the atomic mass. So how many basically proton masses worth did that item weigh? So an alpha particle has two units of electric charge-- twice the charge of the proton-- but is four times as heavy as a proton-- beryllium, carbon [INAUDIBLE]. So the neutron would have had no electric charge. It had zero times the charge of a proton but had about the same mass as a proton. And that's what Chadwick finally clarified, in part because of the paraffin here. He was able to have the particles of this unknown radiation collide-- basically have two-body collisions-- with the very proton-rich targets in the paraffin. And then he could measure the energy with which the protons came out with a kind of stopping power, much like we saw, say, Lenard do for the photoelectric effect. So with that, he could infer that each individual particle coming out here had a mass pretty similar to that of the protons because of the recoil pattern. And that was the first really compelling empirical evidence for this new thing in the nucleus called the neutron. And so Chadwick's work was also recognized very, very quickly. In the same year that the Joliot-Curies shared the Nobel Prize in Chemistry, Chadwick won the Nobel Prize in Physics. This was capturing people's attention right away.
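To make that bookkeeping concrete, here is the reaction written out in standard isotope notation-- a reconstruction of what the lecture is describing, since the symbols themselves are not read aloud in the transcript:

\[
{}^{4}_{2}\mathrm{He} \;+\; {}^{9}_{4}\mathrm{Be} \;\longrightarrow\; {}^{12}_{6}\mathrm{C} \;+\; {}^{1}_{0}\mathrm{n}
\]

The subscripts, the atomic numbers, balance (2 + 4 = 6 + 0), and the superscripts, the atomic masses, balance as well (4 + 9 = 12 + 1), which is exactly the counting exercise just described.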
Among those people whose attention it captured right away was Enrico Fermi, another-- at this point, still pretty young-- full professor in Rome. Fermi was very eager to get the Rome group kind of on the map. Everyone was kind of jealous of Paris and the Cavendish and some of these other centers of research. And Fermi thought it was time to get his group really up to the same kind of quality. So Fermi had just only recently been made a full professor and given a kind of research lab, well-funded group in Rome. And he realized that neutrons, unlike alpha particles because the neutrons are electrically neutral, they might be able to do even more effective induced radioactive reactions because you're not going to have a kind of electrostatic repulsion of the alpha particle with its positive charge getting repelled by the target it's trying to strike, like the positively charged nucleus of a target substance. So the neutron, since it had no electric charge, Fermi realized, could maybe be an even better inducer of these nuclear reactions because it can somehow get perhaps even closer into the action, so to speak, without suffering that Coulomb repulsion. So he and his group were very, very methodical. And they basically tried to get purified, elemental sources of almost every single element on the periodic table starting practically with hydrogen, helium-- starting very, very early on, certainly beryllium. And marching just one by one up every single known chemical element, or as many of them as could, they got to the very end of the known chart. They got up to uranium, which was, at the time, the heaviest known most massive element on the periodic table, the one with the largest number of protons in its nucleus-- atomic number 92. And they would do the same trick. They would irradiate purified samples of each of these elements, including uranium, with a source of neutrons. And they would often-- not every time-- but often, they would be able to induce radioactivity much as the Joliot-Curies had been doing, and they could measure the response with Geiger counters and all the rest. What Fermi found was they got especially strong reaction rates. They would really get this uranium start acting like a radioactive material when Fermi placed a block of paraffin, this lightweight wax, between the neutron and the uranium target. So now, with Chadwick the paraffin was to interact with the neutrons that came flying out after the neutrons had already come from a target. Now, Fermi places the paraffin wax between the source of neutrons, and in this case, say, the uranium target. So what the Rome group assumed they were measuring-- and you can see in this one of the whole series of papers that they published-- this one by Fermi himself. Others had about five or six co-authors-- they thought they actually had made new elements. They thought they had gone beyond the then highest known atomic number. As you see here, his paper in Nature was the possible production of elements of atomic number higher than 92, meaning atomic number beyond uranium. So they thought what they were doing was taking a uranium target-- so each atom which had 92 protons, a total mass of 238. They would irradiate it with neutrons. And the uranium target would do what would soon be called neutron capture. So it would absorb that incoming neutron. So what happens then? The atomic number has not changed. This has zero proton units. So the 92 plus 0 remains 92. You still have uranium there. But the actual mass has increased by one unit. 
So neutron capture does not change the chemical identity of the target. It's still uranium. It still has only 92 protons in the nucleus. But you change it to a different isotope-- same number of protons, different total atomic mass. In this case, one more neutron. And then after some time, the neutron that had been absorbed would itself undergo a radioactive decay, a so-called beta decay, where the neutron would transform into a proton-- sorry-- transform into a proton. And then a beta ray, an electron, would come flying out. It was actually Fermi who built upon suggestions by people like Wolfgang Pauli, saying that it actually it has to be more than just a beta decay to get the energy and momentum to balance, to work out. So an electron and some as yet unseen extra particle that they eventually called the neutrino-- now we call it an antineutrino-- stuff comes flying out. The main thing is that the neutron transformed into a proton. So now, inside that nucleus you have 92 plus 1 protons. You've actually perhaps, possibly made an element that's chemically distinct from uranium. You've actually pushed it up one place on the periodic table Because now it seems to have a total of 93 protons. And yet, since the proton and neutron have basically the same mass, the atomic mass stays the same. So the idea was Fermi and his group were convinced they had conducted neutron capture followed by beta decay. They thought they had produced the first transuranic element. This would eventually be called neptunium. Just like in the solar system the planet Neptune is the first planet beyond Uranus, this would be the first element beyond uranium. I actually got to see these materials the last trip I was able to make before the pandemic, although I didn't know it at the time. I actually was in Rome for a few days-- a glorious few days this past January, not quite a year ago. And I got to see in this brand new Fermi museum. On the very site of his laboratory, there's now a museum that in principle is open to the public, though it's sealed up now I'm sure. And this was the actual lead-lined case in which they had their radioactive sources. This is the block of paraffin that Fermi used. It's like, I couldn't believe it was right there. I've been reading about this thing since I was a kid. And here are some of the examples of these purified elements-- the targets that they would irradiate with the neutrons that had gone through the paraffin wax. And here, this is what happens if you start filming in the streets of Rome, you get pulled over by the police. So it was actually-- they're working on a documentary with NOVA about Fermi and neutrons and neutrinos, and the police didn't realize we had all the proper paperwork. Here's Rosemary Cafferty, the production assistant, saying no, no, nice police officers. We do have the permits. Anyway. So I got to see lots of things in Rome, including Fermi's actual experiments. That was pretty awesome. OK. So Fermi's work actually triggered a lot of reactions in the human realm and not only in the nuclear realm. So this work was, again, seen as just unbelievably interesting and important. He starts publishing this in 1934. The main body of work comes out throughout 1935. By 1938, he again-- this work had been recognized as worthy of a Nobel Prize. Here's Fermi getting permission-- excuse me-- winning the Nobel Prize, receiving it from the King of Sweden December of 1938, just three or four years after starting that whole series of investigations. 
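To keep the bookkeeping from the last few paragraphs straight, here is the chain Fermi's group believed they had produced, again written in standard isotope notation as a reconstruction of what the lecture describes rather than a quotation from Fermi's papers:

\[
{}^{238}_{92}\mathrm{U} \;+\; {}^{1}_{0}\mathrm{n} \;\longrightarrow\; {}^{239}_{92}\mathrm{U} \;\longrightarrow\; {}^{239}_{93}\mathrm{Np} \;+\; e^{-} \;+\; \bar{\nu}
\]

Neutron capture raises the mass number from 238 to 239 while the 92 protons stay put; the subsequent beta decay converts one neutron into a proton, giving 93 protons-- the presumed first transuranic element, the one later named neptunium-- with the mass number unchanged at 239.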
Now, that timing is pretty remarkable. As again, many of you probably know, Italy by this point, since the '20s, had been ruled by a fascist dictator, a self-pronounced fascist dictator, Benito Mussolini. Mussolini was not in the earliest days as rabidly anti-Semitic as he would become. But after Hitler attained power in Germany, Hitler and Mussolini made a series of pacts. And Mussolini's policies began mirroring or getting much closer to those of the overt anti-Semitism and so-called racial purity of the Nazis. So by the early and mid-1930s, Mussolini's regime in Italy was becoming as dangerous for people of Jewish background as Hitler's was in Germany. That mattered a great deal to Fermi. Fermi himself was not Jewish, but his wife Laura was from a Jewish family. And even though Fermi was this big fancy professor getting lots and lots of support from the central government for his research, it was becoming more and more clear that this was no-- this was not going to remain a kind of easy existence for them with a Jewish member of the family, as Mussolini began aligning policies more and more closely with those of the Nazis. So what Fermi arranged, upon learning that he would receive the Nobel Prize, was that his entire family, his immediate family, could travel with him to Stockholm to win the prize, and they basically snuck out. So Mussolini couldn't help but let Fermi leave the country. For Italy's grandeur, it was important to let Fermi receive the Nobel Prize. And then basically behind Mussolini's back, Fermi had arranged in secret to have his immediate family skip town, to leave almost directly from Stockholm, get on a steamboat out of, I think, the UK, and sail right to New York City. It's a lot like-- if you've seen the famous movie, The Sound of Music, it's a lot [AUDIO OUT] a ceremony full of fanfare and just escaped. And so Fermi and his immediate family moved and resettled for a time in New York City. So this is a reminder of the timing. We have this exciting new work in nuclear physics, new particles found, new kinds of nuclear transformations. The world scientists are getting very, very excited about this at just the moment when fascists are taking over many, many parts within Europe. Now, it turns out other groups were actively working on exactly these kinds of nuclear transformations. After all, Fermi won the Nobel Prize partly because everyone-- many, many people in the field agreed this was really, really kind of hot stuff. This was important stuff. So many groups were working on either replicating Fermi's series of experiments with this neutron capture, or trying to irradiate other sources, find other isotopes and possibly even new elements. And one of the most active in this group was a team based in Berlin. What's really interesting about this Berlin group is that they were explicitly multidisciplinary, even more so-- certainly more so than Fermi's group. Even arguably more than the Joliot-Curies, or similar, at least, to them. So the group included a theoretical physicist, Lise Meitner, as well as two very accomplished nuclear chemists, Otto Hahn and Fritz Strassmann. And for today's readings, we have a little-- an excerpt that's drawn from this really quite amazing biography of Meitner by Ruth Lewin Sime. Again, for those of you who might be interested, I can't recommend this book highly enough. I just think this book is gripping and moving and beautifully written-- Sime's biography of Meitner. 
So I couldn't assign the whole book, but I do encourage you to read even more from Sime's biography if you have the time. There's a little taste of an article that Ruth wrote with some colleagues in Physics Today, and then a kind of short piece inspired by her book by the very talented science writer Maria [INAUDIBLE].. But I think there's no substitute for this biography. Meitner, I think, is just endlessly fascinating. So what do we learn from Sime's biography? Meitner-- Lise Meitner grew up in a Jewish family in Austria. And she was therefore only allowed to attend formal school until age 14. At that point, the rules in the late years of the Habsburg Empire, girls couldn't even finish high school. They weren't allowed to finish high school. They could go to school through age 14. And during Meitner's kind of later teenage years, there was a series of reforms-- nationwide reforms in Austria that actually relaxed those rules. So then Meitner rushed through all the years she'd missed of the remaining years of what would have been a standard high school curriculum mostly through self-study. She finished high school effectively in a few months. And then even more astonishing, passed the entrance exam to study physics at the very elite University of Vienna, which otherwise would never have been allowed to her until these reforms had come into place. So she then studied very hard, and she was among the first women anywhere to earn not just an undergraduate degree, but actually, a full PhD in physics. She earned her PhD in 1905. And then she very quickly began to collaborate with Otto Hahn in Berlin. She had a few short-term kind of fellowships. And then eventually was hired-- sort of hired-- in Berlin. She was allowed to work with him. Hahn was glad to work with her. But she was only allowed to go into the basement of this Institute in Berlin because women were literally not allowed into the main institute, not just that there weren't women's restrooms. They weren't allowed on the first, second, third floor. So Hahn, to accommodate this very, very brilliant collaborator, colleague, agreed to work in what was essentially a kind of shop-- a kind of woodworking shop in the basement of the otherwise very fancy institute in Berlin. And that's how their collaboration started. Meitner, as a young and upcoming theoretical physicist, and Hahn, as an accomplished chemist. And here they are many years later working in their lab together in the later '30s. So what happens is in 1933, when Hitler is elected and the so-called civil service laws go into effect by that spring, people who are of Jewish background and German citizens could no longer hold university positions or other government jobs. Lise Meitner was not a German citizen. She was an Austrian. So it's quite astonishing that she was Jewish. Everyone knew she was Jewish. And she was able to keep her job because according to these new laws they didn't apply to her. They applied to German citizens of Jewish background. That changed in the spring of 1938, or beginning in the spring of '38 because, again, as some of you may know, by that point, there was what is called the Anschluss, which is when Nazi [AUDIO OUT] welcomed it. But in any case, Austria became absorbed within the German-- the Nazi Reich. So now Austria, and therefore, Austrians, were subject to the kind of German laws, including these so-called civil service laws, these anti-Jewish employment laws. 
So only in late spring, early summer 1938, rather than spring of 1933, was Meitner no longer entitled, according to these new laws, to keep her job. So she actually had to flee in a hurry several years after this kind of exodus had begun. So Meitner actually got a temporary position in Stockholm. She was able to flee to Sweden. And in the meantime, Hahn and Strassmann, in their Berlin laboratory, continued these neutron bombardment experiments, again, doing just what the Fermi group had gotten so famous for. They redid Fermi's experiments multiple times, even to the day at which Fermi was literally shaking the king's hand and receiving his Nobel Prize, right throughout the month of December. But unlike Fermi, and unlike the Nobel committee, Hahn and Strassmann concluded that neither Fermi nor they in their own lab had actually produced these elements beyond uranium. So literally, while Fermi was receiving the prize for having made transuranic elements, Hahn and Strassmann convinced themselves that no, he didn't, and neither had they. So this is an example of a periodic table from 1938. You can see it ends at uranium. So whereas Fermi and the Nobel committee and nearly all the experts-- there were a few detractors, but nearly all the experts-- had assumed that Fermi had nudged these nuclei up by one or maybe two places here beyond uranium, what Hahn and Strassmann kind of grudgingly conclude, using their knowledge as chemists, not their intuitions as physicists, is that, in fact, the uranium target had been split. It hadn't been nudged to one step larger, one step beyond uranium on the periodic table. It had actually broken into two much smaller pieces midway down the periodic table-- into, for example, a barium fragment and a krypton fragment, where again, the atomic number would add up. You had 92 protons to start. You have 92 protons at the end, but not by having made one slightly larger blob of, say, neptunium or some transuranic. But, in fact, by splitting that initial target nucleus into two much smaller pieces. And they wrote this up in December. It was published almost immediately in some Prussian Academy publications in January of 1939. And they conclude this with this very famous closing. They say, as chemists, we must actually say the new particles that result-- the products after this reaction-- behave not like radium, but in fact, like barium. It looks like they really had been knocked all the way down here, not in the vicinity of uranium. So as chemists, we say we found barium. As nuclear physicists, we cannot make this conclusion, which is in conflict with all experience in nuclear physics. There was no known nuclear transformation to date, after 40 years of studying such things, in which there had been that large a leap either up or down the periodic table. And I talked a bit more about this in the optional lecture notes. All the known transformations-- alpha decay would knock you down two places, beta decay would bring you up one. You would be moving one or two places in your immediate vicinity, not halfway down the table. So while she was now on the run-- she had just arrived at a kind of temporary position in Stockholm-- Meitner received an update from Otto Hahn about these latest experiments indicating the presence of barium, which again, to emphasize, Hahn and Strassmann as chemists knew how to do proper chemical analyses to test for barium. And they were more and more convinced that's what was there. So Meitner gets the update from Hahn.
She has a little break with her nephew, another theoretical physicist, Otto Robert Frisch. He often went just by Robert. Frisch was actually at this point a postdoc in Copenhagen. So he was also in Scandinavia. He was able to come see his aunt. They had a few days together outside of Stockholm in a little ski vacation-- cross-country skiing. And it's while there that Meitner had just received this letter from Hahn. Frisch comes, and they spend the day talking about how Hahn and Strassmann's results could possibly be true. And while away from any kind of workspace, while literally spending the day out in the woods-- snow-covered woods-- they work out the first ever physical model of nuclear fission. It's quite extraordinary. And again, there's plenty more in the optional notes to spell this out in more detail. They began to argue or to convince themselves that slow neutrons were the key-- that, remember, Fermi was finding increased reaction rates when he put that block of paraffin, that wax, between the neutrons and the target uranium. And they argue that that must have been because the paraffin was slowing down the neutrons. There'd be enough scattering and recoil that the neutrons that made it through the paraffin would have lost a significant amount of energy from having scattered off nuclei of comparable mass in the wax. And so they should be slowed down by their travel through that moderator, through that material that would slow down their kinetic energy. And then Meitner and Frisch-- unlike Fermi at first-- Meitner and Frisch realized, well, if the momentum of these neutrons has been reduced, then the quantum-like behavior should have been exaggerated. If you go back to the de Broglie wavelength-- remember, we saw this inherent waviness associated quantum mechanically with any matter-- it's proportional to Planck's constant, but inversely proportional to the momentum. So if the neutrons are being slowed down through, say, that paraffin wax, then the velocity would be small. The waviness, the characteristic size of this quantum wave, would be enhanced. So maybe even though this uranium nucleus is nearly 240 times bigger than-- or more massive than-- the inbound neutron, if that neutron had been slowed so that its quantum properties have been stretched-- the quantumness, so to speak-- the wavelength might be comparable in size to this entire uranium target, and maybe you could set the entire target wobbling coherently to get a kind of coherent or collective response to the single bombardment by the incoming neutron. And they began, again, working out this kind of picture that this nucleus might be like a kind of liquid drop-- that was a model that Niels Bohr himself had been working out for a while-- this kind of barely stable equilibrium between a kind of surface tension keeping it together and a volume pressure that would work against the surface tension. And there might be just a balancing point when you get to very large nuclei like uranium. So once this kind of stretched-out, slow neutron encounters the nucleus, it gets the whole drop wobbling. And in fact, you could actually have this thin neck appear. And then finally-- because each lobe is now a dense pack of positively charged protons, two separate densely packed regions of positive protons that can now repel each other-- maybe this neck will rupture, and you'll actually get two smaller pieces. So maybe the one large nucleus could split in two because it was hit by a slow-moving neutron.
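As a rough numerical check on that lambda = h/p argument, here is a minimal Python sketch; the two neutron energies are illustrative assumptions of mine (a fast fission-type neutron versus a slowed-down, roughly thermal one), not figures from the lecture:

import math

h = 6.626e-34      # Planck's constant, J*s
m_n = 1.675e-27    # neutron mass, kg
eV = 1.602e-19     # joules per electron volt

def de_broglie_wavelength_m(kinetic_energy_eV):
    # lambda = h / p, with p taken from the nonrelativistic kinetic energy
    p = math.sqrt(2.0 * m_n * kinetic_energy_eV * eV)
    return h / p

for label, energy_eV in [("fast neutron (~2 MeV)", 2.0e6),
                         ("slowed-down neutron (~0.025 eV)", 0.025)]:
    print(f"{label}: wavelength ~ {de_broglie_wavelength_m(energy_eV):.1e} m")

print("uranium nuclear radius, for comparison: ~1e-14 m")

The fast neutron's wavelength comes out already about the size of the nucleus, and the slowed-down neutron's wavelength is vastly larger still, which is the sense in which slowing the neutron lets it engage the whole uranium target at once rather than striking one point inside it.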
They go on-- again, this part is done in much more detail in the notes. I'll just go quickly here. Still on the ski holiday, while basically sitting on a bench or leaning against a tree, they try to wonder, would this work? Could they get a kind of order-of-magnitude estimate of this kind of process? And they realize that the energy scale, the rough energies involved if this splitting of a uranium nucleus were to hold, they could estimate by the kind of Coulomb repulsion of all the pieces-- all the protons within that nucleus. They have about 100 protons-- 92. So order of magnitude, about 100 protons, each with unit charge. And they're packed within a very small volume. The average nuclear radius, they knew by this point, was on the order of a 10,000th of an angstrom-- around 10 to the minus 12 centimeters. Meanwhile, the typical energy scales for chemical reactions come from basically moving one or two electrons across distances of an atomic size, not a nuclear size. So if you take the ratio of the typical energy scales that seem to be involved in these nuclear transformations, compared to typical energy scales for chemical ones, the nuclear ones should be about 100 million times larger, just from counting how many charges you are moving around and across what kind of distances. It's a very, very rough, order-of-magnitude estimate that they're doing literally on a bench while taking a break from skiing. Meanwhile, after splitting apart, if this really were to happen, you have two roughly equal pieces of what had once been a single nucleus, and each piece-- I replicate that algebra in the notes-- each piece following the splitting would carry about one third of that starting energy. So this nuclear energy of the starting blob of roughly 100 charged particles, after splitting, each piece would only carry about a third of that. So you have two pieces. You have one third of that kind of raw energy still to account for. And that would be the energy released every time a single, large, unstable nucleus undergoes this splitting. So you have one third of this enormous energy scale available or released every time-- they estimate-- every time a single nucleus is split in this fashion. It turns out that estimate is consistent with what people who then filled this in within a few months found using a different way of estimating the energy scales-- not based on a kind of classical Coulomb repulsion, how many protons over here are repelling how many protons over there, but actually based on E equals mc squared: the total mass of the uranium nucleus before it splits is actually a little greater than the combined mass of the barium and the krypton fragments. There's a bit of mass left over-- a difference in binding energy. And that is what's released when this initial thing splits, and that mass difference times c squared gives you, again, the same kind of estimate as you get from this classical kind of electrostatic repulsion. That was worked out in detail by Bohr himself with another colleague. So Frisch returns to Bohr's institute in late December. He tells Bohr all about this. Bohr was very excited about nuclear physics by this point. Frisch was Bohr's postdoc. Frisch tells him all about Meitner's and Frisch's ideas, that maybe this big nucleus is just barely stable and could actually be split apart into two small pieces. He asks Bohr to keep it to himself.
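For anyone who wants to see that arithmetic spelled out, here is a minimal back-of-the-envelope sketch in Python in the spirit of the estimate just described; the fragment charges, the nuclear radius, and the 3 eV chemical scale are my own illustrative assumptions rather than numbers taken from Meitner's and Frisch's notes:

e = 1.602e-19        # proton charge, C
k = 8.988e9          # Coulomb constant, N*m^2/C^2
eV = 1.602e-19       # joules per electron volt

Z = 92               # protons in a uranium nucleus ("about 100")
r_nuclear = 1.0e-14  # roughly a 10,000th of an angstrom (10^-12 cm), in meters

# Coulomb energy of two fragments, each carrying about half the charge,
# separated by a nuclear distance -- the rough quantity being estimated
nuclear_energy_J = k * (Z / 2) * (Z / 2) * e**2 / r_nuclear
nuclear_energy_MeV = nuclear_energy_J / (1.0e6 * eV)

chemical_energy_eV = 3.0  # a typical chemical-bond energy, a few eV (illustrative)

print(f"nuclear energy scale  ~ {nuclear_energy_MeV:.0f} MeV per splitting")
print(f"chemical energy scale ~ {chemical_energy_eV:.0f} eV per reaction")
print(f"ratio ~ {nuclear_energy_MeV * 1.0e6 / chemical_energy_eV:.0e}")

The printed ratio comes out around 10 to the 8-- the "about 100 million times larger" figure quoted above-- and the nuclear number lands at a few hundred MeV, the right order of magnitude for the energy released per fission.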
He and Meitner were going to keep working on it, maybe even perform some laboratory tests and so on, which in fact, Frisch did wind up doing. Meanwhile, Bohr was set to leave practically the next day-- very soon afterwards-- to sail to the United States. He was scheduled to spend a sabbatical at Princeton-- Princeton University, right near the Institute for Advanced Study. So he does that. He sails to New York. He's actually met at the docks in New York City by Enrico Fermi, who had just fled. He had just left by way of his Nobel Prize ceremony. Younger folks like Sam Goudsmit, whom we heard about before-- we'll hear more about soon-- Goudsmit had also fled the Netherlands and had moved to the United States some years before. Bohr basically gets off the boat and tells them, you won't believe what I've just learned. Nuclear fission is possible. So even though Frisch had asked him to keep it quiet, Bohr basically can't help himself. He's spilling the beans to his physics colleagues practically on the docks as soon as he arrives in New York City. Then he gets to Princeton. He tells other émigré physicists like Wigner, Albert Einstein, and John Wheeler, who's an American physicist but had done his own postdoc with Bohr some years earlier. They knew each other very well. Within days, several laboratories up and down the East Coast had actually verified this reaction. And for them, it was easy because they knew what to look for. The hard part was having these chemists, like Hahn and Strassmann, really do the kind of chemical assays to find barium. Once you know to look for barium, then even physicists, with some chemists' help, could verify that some of the fission products were, in fact, barium-- way down the periodic table, not things near uranium. So during that sabbatical, Bohr then kept working with Wheeler. And they worked out a more detailed quantitative analysis building very directly on Meitner's and Frisch's work. By this point, Meitner and Frisch had published a few very short letters about their work that are duly cited by Wheeler and Bohr. And they work out a kind of lengthy, detailed quantitative theory of this nuclear fission. In fact, building again much on Meitner's work, it's Bohr and Wheeler who finally conclude that the fissionable, the most unstable isotope of uranium, is not the most common kind you find in the ground-- that's U-238. The fissionable one, the one that most easily undergoes this kind of splitting reaction, has a few fewer neutrons than the common one. That's U-235. What's kind of chilling or stunning to me is their article was published in the Physical Review literally on the same day that the Nazis invaded Poland, which was the final kind of trigger for the start of the overt fighting of the Second World War. Again, we see the collision of these time scales. So everyone in physics-- everyone in physics knew immediately that nuclear fission could lead to bombs. This was an enormous release of energy each time a single nucleus split. The energy scales were nothing like typical chemical scales-- chemical reaction scales. So everyone knows that. And then their second thought was, oh, the Germans must know this, too. After all, fission had been identified in a Berlin laboratory. And as we were just saying a few minutes ago, although many researchers fled Nazi Germany, it still retained some of the world's leading experts in nuclear physics, or nuclear science, more broadly. Heisenberg, as we said, had stayed behind.
Otto Hahn, Hans Geiger, who invented the Geiger counter, Walther Bothe, Max Planck, Max von Laue-- a long list of these folks were still in Germany, let alone at the forefront of laboratory tests of these things. So again, just to give a sense of how rapidly these things were all kind of colliding-- the conceptual work, the laboratory work around these new kinds of nuclear transformations, and the headlong rush into a world-spanning war. So I mentioned the Anschluss, where the Nazis basically absorb Austria. That was in March of '38. By December of '38, we have these developments of Hahn and Strassmann identifying barium, Meitner and Frisch working out the first real physical explanation. Bohr arrives in New York City. That spring, the Nazis then occupy Czechoslovakia. And Poland is what finally triggers the announcements, the declarations of war, by many other countries. So within days of the invasion of Poland, Britain, France, Australia, New Zealand, and Canada-- notably, not the United States-- many of these other countries explicitly declare war against Germany. The Soviet Union invades Poland soon after that. This is all happening at essentially the same time. What's also happening is that in many, many of these countries, on multiple sides, multiple fronts of what would become the actual warring parties, physicists are consulting with government officials to say there could be a vastly new type of weapon based on these nuclear transformations. So this is just a handful of these, again, happening with kind of lightning speed. As we now know, as early as April 1939, barely four months after the very first indication from Hahn and Strassmann that nuclear fission could happen, the German Reich Ministry of Education started holding secret meetings on military applications of nuclear fission, meaning weapons. They were beginning to be briefed about the possibility of nuclear bombs. And they began banning the export of uranium. They figured they would need a lot of this fissionable material. That same month-- it was much less well known, though historians have by now documented it quite clearly-- the Japanese government began its own secret nuclear weapons project. It was codenamed Ni. I'm not sure what that stands for. This, unlike the German one, was really underfunded. It was not seen as a high priority for the current war. In fact, what it turned out to be mostly was a kind of way for senior physicists to keep younger physicists out of direct fighting. It functioned more or less like, instead of being drafted, you could do some research. But there was a formal Japanese nuclear weapons project founded as early as April 1939. Very soon after that, Britain starts considering nuclear weapons, and they begin ramping up, especially after Robert Frisch-- who then had moved; once the Nazis took over Denmark he was no longer safe there, so he resettled in Britain-- as did Rudolf Peierls, who had had to leave Germany. They compiled a top secret memo to the British government, again saying not only are nuclear weapons possible, but you only need a comparatively little bit of this fissionable, rare isotope of uranium. They actually, as we now know, underestimated how much fissionable uranium you'd need. So it looked even closer to being feasible, and that convinced the British government to start real efforts.
And at that same time in the Soviet Union, another physicist, Igor Kurchatov, starts informing the Soviet government about nuclear weapons and so on. Much like in Japan, in the Soviet Union we now know this was a low priority at first. But nonetheless, it was a formal project. And then again, perhaps most famously in the United States, Einstein himself really signed a letter. He didn't compose this letter. Some of his émigré friends and colleagues in physics wrote the letter in hopes that Einstein would sign it. And he did. They convinced him to. The letter was addressed directly to President Franklin Roosevelt. By this point, Einstein was such a worldwide celebrity, they had channels to get this literally into the hands of the President of the United States. And you can see the letter here. You can Google it. You can find the text where Einstein basically says, I've recently come to learn this nuclear fission thing is possible. I've also learned that Germany is now kind of hoarding uranium. This could be a very serious development. All these folks recognized very quickly that nuclear fission could have very immediate worldly effects. So I see Obie asked, how do they contain the heat of a reaction? Obie asked about the nuclear fission experiments. Good. What's important to recognize is that these very earliest experiments were never leading to anything like what we would now call a chain reaction. We'll talk more about chain reactions in the next class. So they were never getting unlimited numbers of nuclei to undergo fission. Thank goodness. They would have blown up the laboratory. So the heat released when one or two or three-- a small number of nuclei-- fission was not remarkable. If they had expected that, they could have maybe instrumented their laboratory to just barely measure it. But I bet it was not even measurable, given the fission rates they were encountering. We'll see soon, of course, that that's no longer true when these reactions get scaled up. And we'll talk more about that in Wednesday's class. Iyabo asks, why did Chadwick win the Nobel Prize for physics as opposed to chemistry? Ah, good. In the reading by Crawford, Sime, and Walker, they identified that one reason it was denied the Nobel Prize was that radioactivity was considered a chemistry project. Iyabo, thanks. It's a great point. So I think the reason that Chadwick won the Nobel Prize in physics was because he was identifying a new physical particle. I mean, that's what he was credited with. The reason they thought this work was so important was not only that he was dealing with radioactivity, which was actually winning prizes both in physics and in chemistry. Marie Curie's first prize was actually in physics and then in chemistry, or vice versa. She wound up winning in both. But for Chadwick, it was actually identifying a new physical particle, which I think was then seen as a kind of physics domain, more or less, as opposed to the kind of chemical transmutations, or the transitions in the identities of chemical elements, like what the Joliot-Curies were doing. And they, indeed, won the prize in chemistry that year. But the question is a good one. It points to a larger theme. There was an awfully fuzzy line then as now, but especially then. What would count as chemistry versus physics often could become kind of political.
I mean, in the sense that the very small circle of members of the Royal Swedish Academy of Sciences that decided these things, they could frankly push one way or the other based on other motives. Did they not want to give a prize to one person, or often a representative of some country? We get very nationalistic. Don't honor this country. Do honor that. They could frankly kind of bend the rules or move that move that boundary to suit many kinds of purposes. So it's not that there were clear criteria separating, say, what would count as a physics prize versus chemistry. And there's a lot of evidence, including from the first author of that article. Elizabeth Crawford was really immersed in the Nobel Prize archives for much of her career-- much as we learn from Crawford's work. So it's a good point. But I think for Chadwick, the argument would have been he found a new kind of piece of matter, a new piece of nature. And that, I think, struck him as being more like the other physics prizes. Very good. Other questions about that-- any of the nuclear physics concepts, or again this kind of amazing, we might call it a tight coupling between events in various laboratories and kind of physicist networks-- who's writing to whom, who's taking the steamer ship to where, with these kind of worldly geopolitical shifts. Amazing conjunction. Do we know if Einstein ever spoke with FDR? He did not. Thank you, Gary. So in fact, there was a bit of a delay. FDR didn't actually receive the letter for a couple of weeks. He did read it. We know it was handed to him literally in the Oval Office, and he read it. What's important, though, is that the myth that Einstein invented the Manhattan Project with this letter, that is just crazy, crazy, dramatically overblown. Einstein signed the letter. He didn't write it. He signed it. He thought it was important. And he was glad to have some intermediary get it to FDR, where it was delayed and had almost no impact. And so we'll talk more about this actually in Wednesday's class. There was a little kind of study group that was put together a few months after Einstein's letter was received, not-- well, certainly, not only because of Einstein's letter. By that point other science advisors had the ear of Roosevelt and said, this really does look like it's worth paying attention to. The British were already now much more engaged as well after the Frisch-Peierls memo. So there were many reasons for the US federal government to begin paying a little attention. We'll see they paid actually a little attention to questions about uranium and fission and weapons around the time of Einstein's letter, but not only because of Einstein's letter. And then we'll see in some more detail on Wednesday that the real kind of ramp-up came only quite a bit-- with quite a bit more of a delay. So Einstein never spoke directly with FDR about this, and he never followed up. He wrote-- he signed the letter and then-- and that was that. So the notion that Einstein kind of jump-started the Manhattan Project, which one can still find with what I'll call lazy googling, that's really kind of totally out of proportion. Good. Any other questions on that? Now, there's one last part I want to talk about today I think is very juicy. Let's launch into that and then some more time for discussion as well. OK. So let's talk about Werner Heisenberg and what was going on within-- for those who stayed within Germany with this constellation of events. 
So by September of 1939, right around the start of the-- the overt start of the Second World War, the German army ordnance office took over the Kaiser Wilhelm Institut fur Physik, this really quite beautiful, funky building in Berlin, or just in the outskirts of Berlin-- Berlin-Dahlem-- exactly to coordinate research on nuclear fission. They'd been briefed. They'd had these secret briefings since that spring. Fission is a thing. It could lead to weapons. So the army took over the Physics Institute-- took over control of it. A little while later, Heisenberg was actually placed in charge of it. So Heisenberg, from the very early days, became a member of what was called the Uranverein, which is the uranium club, the little informal group of nuclear physicists and chemists who were working on fission trying to learn more. Heisenberg personally advised the military multiple times about possibilities of nuclear fission, both for weapons-- this could lead to explosive release of explosive power in a bomb-- but also for civilian power generation-- what we now call reactors. And, again, this has been shown now in a lot of detail that Heisenberg and other from the small circle of colleagues were actively advising the army ordnance from as early as '39, '40. And like I say, within a few years, then Heisenberg himself was put in charge of the entire nuclear effort. At the same time, Heisenberg was sent on these diplomatic missions throughout neutral countries, or especially occupied countries, including, for example, Denmark. And now this is right on the heels-- this is only two years or three years after he had been denied this promotion by the Deutsche Physik movement. So you can see how rapidly Heisenberg's star had risen yet again among leading German government officials. By 1939, '40, he was seen as actually quite useful in the context of nuclear fission. So what he would do is basically go on diplomatic missions to basically be one of the public faces of the German government, not to proclaim pro-Nazi slogans, but rather to show off the kind of grandness of German accomplishments in higher learning. Here's this very young Nobel Laureate who knows about the atom and these mysterious things. So he'd give these very well-attended public lectures usually in places that the Nazis had just taken over and occupied, and sometimes in neutral countries. He-- as I say, to many colleagues who heard him, colleagues who had known him for years, he sounded-- he often sounded explicitly nationalistic. He was never spouting the kind of most what I would consider grotesque or most obvious Nazi kind of speaking points, but he was certainly proudly German. And at some points, even seemed to suggest, at least to some of these colleagues heard him, that maybe it would be good thing if Germany ruled all of Europe-- not the Nazis, but if Germany really extended its rule. Because after all, this was the high point of European culture and learning. This is what inspires this play-- this amazing play, Copenhagen, which hopefully some of you are familiar with. There's a link on the Canvas site you can actually watch for free through the MIT library site, a really quite beautiful BBC production, a filmed production, of this play Copenhagen by Michael Frayn. It stars Daniel Craig who plays the young Heisenberg-- the same actor who would then go on to play James Bond. It's a really high-end production. So this play is really fascinating. I encourage you to watch the film or read the play. 
And it swirls around one of these real-life visits where Heisenberg was sent on one of these diplomatic missions, in this case, to Copenhagen very soon after the Nazis had occupied Denmark. So while he's in Copenhagen giving these fancy lectures, he visits with his own mentor, almost kind of father figure, Niels Bohr. Now, they were afraid that by this point Bohr's house might have been bugged, might have had recording devices placed in it by the Nazis. Bohr was well known to be not at all sympathetic to the Nazis. And so Bohr and Heisenberg take these long strolls, as they always used to do, in the gardens near Bohr's house, away from any inside microphones. And what the play does, I think just beautifully, very evocatively, is try to reimagine scenarios of what they possibly could have talked about away from the microphones-- sometimes with Margrethe Bohr, who we know was almost always part of Bohr's kind of conversation-- scientific, political, and otherwise. So the play has these three characters-- Margrethe Bohr, Niels Bohr, and Werner Heisenberg. What do they think is going on with the world? What do they think the scientists' responsibility is, and so on? It's a marvelous play. OK. So we know that Heisenberg advised the German military authorities, and multiple times actually, that nuclear bombs were possible, but probably not during the present war. And that was as much because it looked like the Germans would just win very quickly. This was the period of what was called the blitzkrieg, the lightning war, where everyone expected Germany would win right away. They invaded Poland on September 1, 1939. Everyone figured the war would be over within a year or two because it started going so well. The Nazis suffered very few setbacks militarily once open warfare had started. But the authorities, nonetheless, saw a future promise for this kind of weapon. They were imagining, remember, a thousand-year Reich. They had a long view. And so they continued to fund Heisenberg's effort and also do things like seize the Belgian Congo-- the so-called Belgian Congo-- the territory within Central Africa that was known to be very rich in uranium ore. And so they wanted to get more raw uranium and fund Heisenberg's efforts and eventually install him as the head of the Kaiser Wilhelm Institut. Members of Heisenberg's team, of this group, then began working on what it would take to scale up these nuclear fission reactions. One member, Walther Bothe, estimated that for the moderator-- the material to slow down the neutrons, like the paraffin wax in the early experiments-- to do that with carbon or graphite, you'd need kind of ultra-pure carbon. Any impurities would absorb neutrons and not slow them down. If you absorb the neutrons, you stop the fission reactions. You stop a chain reaction. So Bothe, as we now know, overestimated how hard it would be to do this with carbon. So he advised instead that they turn to heavy water-- water that's made not with ordinary hydrogen, but with deuterium, with hydrogen atoms that have an extra neutron in the nucleus. So you have what's called heavy water. There was this kind of amazing commando raid to blow up a heavy water plant in Norway that the Nazis wanted because they wanted to steal large amounts of heavy water. Literally, like parachuting in under cover of night-- crazy, crazy stuff. Meanwhile, Heisenberg began on the theoretical side to estimate how much of this fissionable isotope U-235 you would then need to have a runaway explosion-- the so-called critical mass.
And he actually overestimated, by a factor of about 10, how much you'd need-- at the same time, though, in ignorance of the underestimate by this Frisch-Peierls calculation. So as the war dragged on, the bomb project gets lower and lower priority within Germany, at first because they figured they'll just win by conventional means. And later, as the war begins to really bog down, and the Nazis do get turned back militarily at various points, then the Reich needs to actually direct resources to the short-term, immediate military priorities. So it's a low priority at first because it looks like they'll win. It remains a low priority later because they have other short-term priorities. So the physicist Sam Goudsmit, who helped introduce the notion of quantum spin-- he had emigrated from the Netherlands actually in the late '20s, well before the rise of Nazism. His family, of Jewish background, stayed behind. And in fact, they later perished in Auschwitz. He led the Allied reconnaissance missions inside Germany-- the Alsos mission-- to learn about the German, the Nazi, nuclear effort, and to literally kidnap German nuclear scientists before they could flee to either the Soviets or anywhere else. This is happening before the end of the Second World War. There are other crazy stories-- that a pro baseball player named Moe Berg-- this really happened-- was basically drafted into what would become the CIA. It was the Office of Strategic Services at the time in the United States. He spoke German. He was sent to neutral countries near Germany, like Switzerland, to listen to these public lectures by Heisenberg. The baseball player was armed with a pistol. And he was basically an amateur spy. And if it sounded like the Nazi bomb project was getting too advanced, Berg was ordered to assassinate Heisenberg. He didn't, because it didn't sound like they were that close. It's one of these amazing, pretty ridiculous or crazy stories. Meanwhile, the Alsos mission is successful. They gather enormous documentation from the German efforts. And they also capture 10 German nuclear scientists in the spring, even before Germany formally surrenders. They capture them as basically prisoners of war, and they ferry them out of the country to Farm Hall, this quite lovely country home-- country house in rural England, not too far from Cambridge University. This was called Operation Epsilon. Once again, the house was bugged. The conversations were constantly audiotaped, transcribed, and translated. And you have an excerpt in the reader-- I'll let you read through here. Let me just say very quickly-- I know I'm running late on time-- the first reaction upon hearing that nuclear weapons had been made and had actually been used, in this case against the city of Hiroshima by the American forces, the first reaction is utter disbelief. Heisenberg couldn't believe that anyone, let alone the bumbling Americans, could possibly have gotten so far along in this project, which his own group had made only halting progress on. A little while later, as the transcript reveals, it's a different physicist, Carl Friedrich von Weizsacker, who says, well, we didn't make a bomb because we didn't want to. We didn't do it, as he says, on principle. If we'd wanted Germany to win the war, we would have succeeded. And Heisenberg then says, ah, I was convinced of the possibility of making a reactor-- for power, not as a weapon. But I never thought we'd make a bomb. And at the bottom of my heart, I was really glad it would be a reactor and not a bomb.
They began making-- trying to make sense of what's happening so quickly around them. Again, just to go quickly here. Sorry for the-- I'll post the slides, of course. You can see them. The idea that Heisenberg had actually purposefully resisted Hitler by dragging his feet, by slowing the project, was not a position that Heisenberg himself ever articulated. But other people began saying it on his behalf. Heisenberg said, we'd worked hard on reactors, which was true. He just chose not to emphasize they had also had ideas about weapons. But other people speaking in some sense on behalf of Heisenberg, put together a story that Heisenberg had purposefully dragged his feet so as to deny Hitler a nuclear weapon. And we have this amazing correspondence of Heisenberg writing privately to these journalists saying, you've got it all wrong. Here's an example. I would not want this remark to be misunderstood, as saying that I myself engaged in resistance to Hitler. And so I'll stop there. I apologize for running a bit long. I have time for a few questions. I'd be glad to stay on longer if people would like. Of course, feel free to head off to your other classes. And again, the slides are on the Canvas site. So anyway, here's part of, again, this kind of unsteady mixture of really cutting-edge nuclear physics unfolding in real time with this kind of really fast-changing series of political and kind of military and bureaucratic maneuvers all getting wrapped up together. Any questions on that? So I encourage you to go back to the Operation Epsilon excerpts. We do have the excerpts of the actual Farm Hall transcripts, which I put on the Canvas site, including the fateful day of August 6-- their reactions to the BBC reports of the bombing of Hiroshima and lots, lots more to talk about that as well. We'll see some hints of this in the documentary film. We can talk more in our informal discussion next week. And in the meantime, we'll pick up this story then on Wednesday with what does some of the physicists outside of Germany do with these same set of ideas about nuclear fission and weapons prospects. So we'll talk more about Allied efforts during the Second World War on Wednesday. So sorry for running long. Stay well. Good luck with paper two, and I'll see you soon. Bye, everyone. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_22_Quarks_QCD_and_the_Rise_of_the_Standard_Model.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: Today, we're now sort of in the middle of our unit of our last main unit on a kind of quarks to the cosmos, trends in high energy physics, astrophysics, and gravitation and cosmology. And so today, we'll be picking up part of the kind of threads that we were looking at before the Thanksgiving break, and we'll be looking in particular how was it and over what kind of time scale was it that most physicists came to be convinced that many, many types of matter are actually formed of quarks, that there's a constituent elementary unit within many kinds of matter. How did people come to be convinced of that and over what kind of time scale? So that's what we'll look at for part of today. So the three parts-- we'll revisit some of the trends that we already began looking at briefly, again, in the class sessions before the Thanksgiving break, in particular this what seemed to be a really unexpected and at times almost overwhelming proliferation of the number of particles that seem to come out practically every time that the physicists turned on their often brand new particle accelerators. We saw there was a huge spate, a kind of building boom coming out of the Second World War of making larger and larger more and more powerful accelerators. MIT had our own synchrotron in operation for many years. Many, many individual universities had them, and of course, there were larger national laboratory facilities as well, all a kind of carryover into the post-war period of the wartime Manhattan Project. These were Atomic Energy Commission facilities by and large. So what were some of the responses to this proliferation of new kinds of particles that seem to be popping up all over the place? We'll see that that was really on many people's minds when they began thinking about what we now call quarks. The idea about quarks itself has a really, I think, fascinating kind of up-and-down history where not everyone was on board at the same time for the same reasons. There was a kind of long ambivalence or confusion or-- what are they? Are they real? Are they not? And so we'll look at that for a good chunk of today's class session. And the last part will be a kind of what has become called the so-called standard model. We'll look at a key part of that called quantum chromodynamics, or QCD, which you've probably heard of, at least. You probably recognize the name. We'll talk a little bit about where that comes from and how that fits into the kind of larger picture today, and it's drawing on this many decades coming before it that we'll look at for today. So that's what we're heading to. Quick reminder-- as I just mentioned, after the war, pictures like this were becoming more and more common. So by 1955, Berkeley's Bevatron, a billion-electron-volt accelerator, was filling kind of factory room floors. Again, here's a human operator to give you a sense of the scale-- absolutely enormous, certainly for its time. And as I mentioned, that was driving this exponential rise in the numbers of seemingly new, seemingly elementary nuclear particles. When physicists could achieve higher and higher interaction rates, they found not just higher energy stuff flying out but new kinds of stuff flying out, new kinds of particles among the decay chains and all that. So that was often called the particle zoo. 
We saw in the previous class session one of the first challenges, kind of intellectual challenges, to try to make sense of this very large number of nuclear particles is that they interacted strongly with each other. In fact, we now call this the strong nuclear force. It's called strong because the analog to the electric charge, which we often abbreviate just by the letter g, the coupling constant, is very big. So whereas the electric charge is small in kind of appropriate units-- it's roughly 1/137-- you could do a perturbation expansion, kind of Taylor expand, and more and more complicated contributions just count less because they have more and more factors of a small number in front. The opposite seemed to happen for these strongly interacting nuclear particles, where an arbitrarily complicated interaction, a Feynman diagram with many, many vertices, would actually weigh more in your final answer than the simple ones, and there was, in principle, an infinite number of complicated ones. It really didn't make any sense how to calculate in this strong nuclear force regime. There was a second challenge, which was maybe a bit more meta, which was really this question of, could all these hundred or soon 200 or almost 300 seemingly elementary particles-- are they all really elementary? Are they all kind of to be understood on the same footing? Is there any pattern or any underlying order that one might be able to bring to what otherwise looked like really just a kind of scattershot display of soon hundreds, many hundreds, of seemingly elementary particles? How do you make sense of all that stuff? So we saw, just a brief recap for last time, one response to this, very creative response, which was really dominant for a better part of 15 years or so, was associated with the particle physicist Geoffrey Chew, and we talked a bit about him and read about some of his work last time. He introduced the idea of nuclear democracy and then, with some of his students, the idea of the bootstrap, and they were really being very creative with these simple Feynman diagrams, where if you simply rotate the diagram, turn it on its head by 90 degrees, then the roles that these particles seem to play could swap. And so maybe, Chew and his very active group-- maybe, they wondered, all these many hundreds of particles are actually, in a sense, equally elementary because they're all bound states of each other. They're equally composite and equally elementary, and the labels we apply to them really depend on which orientation of a kind of scattering diagram or Feynman diagram we happen to consider. So it was all about-- Geoffrey Chew's program was all about dynamics. Is there one self-consistent set of forces with which they could make sense of all these many types of particles and their scatterings and interactions? So one response to the strongly coupled particle zoo was to look for a single, kind of self-consistent bootstrap that might encompass more and more examples of this nuclear domain. So what we'll mostly talk about today is actually a distinct approach, a kind of not even rival but a complementary approach that started to gather more and more attention around the same time period, during the 1950s and 1960s, and that was to basically ignore or kind of bracket for the time being questions of forces or dynamics and just say that's beyond us right now but focus instead on classification.
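To put rough numbers on that contrast, here is a minimal sketch, with the electromagnetic coupling taken as roughly 1/137 as quoted above and an illustrative strong coupling of order one standing in for the nuclear case; the exact values are not the point, only how quickly the higher-order terms shrink or fail to shrink.

```python
# Rough illustration of why perturbation theory works for QED but not for the
# strong force: each extra pair of vertices in a diagram brings another factor
# of the coupling, so terms shrink fast only when the coupling is small.
# 1/137 is the electromagnetic fine-structure constant; the "strong" value of
# 1.0 here is just a stand-in for "order one," not a measured number.

alpha_em = 1 / 137      # electromagnetic coupling (dimensionless)
alpha_strong = 1.0      # illustrative "order one" strong coupling

for order in range(1, 5):
    em_term = alpha_em ** order
    strong_term = alpha_strong ** order
    print(f"order {order}: EM term ~ {em_term:.2e}, strong term ~ {strong_term:.2e}")

# The EM terms drop by ~1/137 per order, so a few simple diagrams give accurate
# answers.  The "strong" terms do not shrink at all, so arbitrarily complicated
# diagrams matter just as much as simple ones -- the problem described above.
```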
Were there patterns among those dozens and soon hundreds of nuclear particles that came out of the accelerators? Were some more like others, more like each other and distinct from others? Are there family groups? Are there symmetries? Are there ways to group this plethora in a kind of ordered fashion, so to classify? So in some sense, that approach was really also not very new. As we'll see, it actually reached back to some ideas from the earliest days of nuclear physics. In the early 1930s, one idea in particular I'll talk about now was introduced by Werner Heisenberg as early as 1932. Heisenberg-- it was a series of articles under the title "On the structure of atomic nuclei," the first of which was received in June of 1932, you may remember, though it was several weeks ago. James Chadwick presented the first really kind of compelling evidence that the neutron exists, that atomic nuclei might have more than one kind of particle within them, both protons and neutrons. He presented that evidence very early in 1932, January, Februaryish, pretty early in the year. Within not even half a year later, Heisenberg had really picked up that idea and run with it and introduced this idea called isospin. So what was the idea that Heisenberg had? The early experiments with both protons and neutrons in these early months after Chadwick had kind of isolated evidence for the neutron itself. The early follow-on experiments suggested that protons and neutrons were kind of interchangeable when it came to nuclear reactions, that they seem to have, for example, the same scattering strength, that what would later be called the coupling constant g, the analog to the electric charge. It looked to be the same, so you get very similar patterns, reaction rates, and so on whether you considering proton-proton scattering, proton-neutron scattering, or neutron-neutron scattering. It looked like it was just one kind of particle, not two very different-seeming particles. There was a kind of symmetry in the way these particles were experiencing the nuclear force. So Heisenberg suggested in this series of papers back in 1932 that maybe it's actually not two separate particles after all. Maybe it's not protons and separately neutrons. Maybe there's one kind of nuclear particle inside the atomic nuclei, but it had different internal states. And so it would be one particle that could be in one kind of condition or another but not actually different kinds of particles altogether and that the symmetry, one particle that could show up as either proton or neutron-- that symmetry would be broken. We would notice the difference when we put those particles in external electromagnetic fields. Then we'd see quite obviously, for example, that the proton has one unit of positive electric charge, whereas the neutron is electrically neutral. But we only notice that if we're measuring electromagnetic effects, if we're putting these particles in an external field. So Heisenberg said, that could be just like an electron, which could be either spin up or spin down. Its internal angular momentum could be spin up or spin down along some direction of space, but we'd only notice that if we put that electron in some external magnetic field like a Stern-Gerlach device. So we don't say there's two separate electrons. We say there's one electron that could be one of two internal states. Heisenberg was saying maybe these nuclear particles were similar. So maybe it's not protons versus neutrons. Maybe there's one kind of particle. 
After the war, it became common to call that single type of particle the nucleon, which could manifest as either a proton or a neutron, depending on its internal state, but one could imagine them as not being literally separate or distinct particles. So this is a kind of internal symmetry that would be distinguished by a new quantum number, not the same set that people had been thinking about through the '20s, an additional way to characterize that state called isospin. And the proton we would assign to, say, plus 1 unit of isospin, and the neutron would be isospin down. But maybe they're just internal states of a single nuclear particle. So that was an idea that Heisenberg suggested very creatively in the earliest days of nuclear physics, and after the Second World War, a number of theorists kind of went back to that and said, let's look at that again. Let's maybe take that a bit more seriously because there were now many, many more particles to try to sift through and make sense of. So one of the first patterns that a few physicists began to notice soon after the Second World War was that certain of these new particles-- they were often dubbed "strange" because they were unexpected or unfamiliar. These strange particles didn't show up one at a time. They seemed to come in pairs. They seemed to come together. This became known as associated production. They were produced in particle scatterings in association with each other, so fancy way of saying you didn't get only one of these strange particles. They seemed to come often in pairs. And so the idea then was could one assign to these, again, yet another internal quantum number, not isospin but a whole new kind of charge or quantum quantity? And so independently, Murray Gell-Mann, who was a very young particle theorist at the time, and Abraham Pais, a little bit older, still pretty young in his career-- they separately came up with the idea that maybe to make sense of these patterns that these kaons and hyperon particles, for example, were always produced together or, in a similar way, the sigma particles were produced with kaons but not separate from these kind of other strange particles. Maybe there's a kind of charge called strangeness, and some particles were strangeness neutral, like the familiar proton, neutron, and pion, whereas other particles carried 1 or minus 1 units of strangeness. And strangeness had to be conserved. And so it was out of the way to make sense of these patterns. And so a few years after that, the better part of a decade after that, Gell-Mann and then, again, separately in this case, a theorist based in Israel named Yuval Ne'eman-- in 1960, they separately introduced a new way to try to account for these kind of patterns. Remember that no one yet is talking about forces or decay rates. They're really just trying to say, is there a kind of pattern to this very large number of new particles? Are there new ways to make sense of why some particles seem to appear with others in these large experiments, for example? So they introduced something that came to be called hypercharge. So take a couple of these as yet entirely hypothetical internal quantum numbers. They just keep inventing new ones that help us sort what goes with what. One of them they called baryon number, and that applied to many familiar particles like protons and neutrons. It would also apply to many of these new strange particles. So one kind of charge would be baryon number or baryon charge. 
Some of those particles would also carry this so-called strangeness charge. So the hyperon particles or the sigma particles would have one unit of baryon charge and one unit of strangeness charge, whereas the more familiar, the not strange baryons would have 1 unit of baryon charge but 0 units of strangeness, that kind of thing. So it's a new way to try to sort out this large kind of Sudoku puzzle of new particles. And then, again, Gell-Mann and Ne'eman showed in 1960 that if you pick up on Heisenberg's notion of isospin, this internal state that could distinguish, for example, a proton from a neutron, and then add in 1/2 of the hypercharge, this new one they had just basically made up, then you could, again, start making kind of sense of these patterns of these particles. So if you go back to the familiar nucleons, the proton, according to Heisenberg, we could assign as this isospin-up state, so its isospin is plus 1/2. It has one unit of baryon charge but is not strange, so its hypercharge is plus 1. Plug in over here-- you see, well, then you'd expect, in appropriate units, it should have 1 positive unit of electric charge. Conversely, the other state of that nucleon, the neutron, would be an isospin-down state. It also has 1 unit of baryon charge but 0 strangeness, and so you would expect it to be electrically neutral. And then you can play that game dozens and dozens more times to see, is there a kind of single, self-consistent pattern with which you could start grouping like with like among the even less-familiar particles, not just protons and neutrons? And so in 1960, Gell-Mann presented these groupings, these group theory structures, that seemed to make sense of families of particles, and he could array them by mapping them along two axes of hypercharge, this new internal quantum number he'd invented, he and Ne'eman invented, so hypercharge versus isospin as opposed to other ways one might try to group particles, electric charge, and mass and like that. He was finding order by placing them in these abstract kind of mathematical spaces, grouping them based on their hypercharge and isospin, and he found these very distinct patterns. Some of them were eightfold patterns. Others were tenfold or decuplets. So he labeled this first one the eightfold way. He was being very playful. Actually, he was just kind of showing off. Gell-Mann loved to the very end of his life-- he only died a few years ago. He lived quite a long life. He loved showing off his knowledge of many languages, many literatures and cultures of the world. We'll see more examples of that even today. So he very playfully borrowed the term for this from the Buddhists' eight-step path to achieving Nirvana, which, for a long, long time, had been known in English translation at least as the eightfold way. Gell-Mann said that's just as important or just as evocative a term for these eightfold particle groupings he was finding when he mapped some of these nuclear particles in hypercharge isospin space. So here are the familiar neutron and proton. They have 0 strangeness charge. They're not strange because they've been known for a while. Here are the sigma particles. Here's the neutral hyperon. Here are other particles with 2 units of strangeness charge instead of 1 and so on. So we call that the eightfold way. He went on to show there are other groupings, other very specific group theoretic structures, again, in this abstract hypercharge isospin plane, which no one in their right mind otherwise would have thought to use.
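The rule being sketched here is usually written, in textbook notation, as Q = I3 + Y/2, with the hypercharge Y equal to baryon number plus strangeness. Here is a small check of that relation under the standard assignments for a few of the particles mentioned; the particular particle list is just for illustration.

```python
# Check of the charge rule described above: Q = I3 + Y/2, with hypercharge
# Y = B + S (baryon number plus strangeness).  The assignments below are the
# standard textbook ones for these particles.

particles = {
    # name: (isospin projection I3, baryon number B, strangeness S)
    "proton":  (+0.5, 1,  0),
    "neutron": (-0.5, 1,  0),
    "pi+":     (+1.0, 0,  0),
    "K+":      (+0.5, 0, +1),
    "Sigma+":  (+1.0, 1, -1),
}

for name, (i3, b, s) in particles.items():
    hypercharge = b + s
    charge = i3 + hypercharge / 2
    print(f"{name:8s} I3={i3:+.1f}  Y={hypercharge:+d}  ->  Q={charge:+.1f}")

# The proton comes out at +1, the neutron at 0, and so on, matching the
# observed electric charges from the hypercharge and isospin assignments.
```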
But there were some times not just eightfold patterns but tenfold, decuplets, again, of particles that seemed to be kind of associated with each other based on their varying hypercharge and isospin. And this involves, again, some of the both unstrange and strange particles. So the delta particles seem to have 0 strangeness. That meant they weren't subject to that associated production. You didn't have to make a separate strange particle with them. So they seemed to not carry conserved strangeness charge. Here are the sigmas, the cascades, and so on. What he pointed out this time was that there seemed to be such a pristine geometrical pattern and yet a gap. There was a missing, an as yet unknown particle that Gell-Mann suggested maybe really is out there after all. To fill out this very clear pattern, he thought maybe there should be a particle with minus 2 units of hypercharge but 0 units of isospin, that red circle here, the kind of gap in his otherwise very clear mapping. Moreover, for this decuplet, he went further and found that there was a kind of pattern in the masses between these rows as you went down in hypercharge. So the delta particles all had a mass of roughly 1,200 million electron volts, whereas the sigma particles had around 1,380, a gap of roughly 150 MeV. The cascade particles were another 150 MeV heavier still. So Gell-Mann went even further, and in fact, very dramatically, in the middle of a conference with a number of experimental physicists in 1962, he basically stood up and challenged them. He said, I bet if you look hard enough, you'll find a particle with exactly these properties, with hypercharge minus 2, isospin 0. And he could then work out it should have 1 unit of minus electric charge. And he could even give a mass estimate. He said it would probably be about 150 MeV heavier than the cascades, and sure enough, about not quite two years later, experimentalists came back and announced that they had actually found evidence for exactly that particle, almost exactly with the properties that Gell-Mann had very dramatically predicted. The mass was extraordinarily close to Gell-Mann's kind of back-of-the-envelope estimate. It had indeed 1 unit of negative electric charge. It seemed to be consistent with a strangeness of minus 3 and so on, the so-called omega minus particle. That was a big, big deal. In fact, it was such a big deal that Gell-Mann was awarded the Nobel Prize only five years later for all of these efforts to bring a kind of classification order to what it seemed like an orderless, just this kind of sludge of new particles. This kind of development impressed many of his colleagues very, very quickly in real time. So let me pause there and ask if you have any questions on that classification approach. I really find it astonishing. I don't know if any fans of puzzles like Sudoku or other highly constrained, "you can go here but not there" kinds of pencil and paper puzzles like a crossword puzzle, but he's doing that now with 300 seemingly elementary nuclear particles and finding, I think, these very abstract or at least not the most obvious properties to focus on. But he just sort of rearranged things then began finding order where few had found it before, so it's pretty cool. I should say Gell-Mann did his PhD at MIT, so we should be proud of him for that, I guess, so a local fella. He had been in graduate school-- I think he finished in 1951, so this is some work he was doing pretty soon after his PhD. Any other questions on it? 
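Before going on, here is the back-of-the-envelope arithmetic just described, using the round numbers quoted in the lecture; the measured masses differ a little from these, but the roughly 150 MeV step and the predicted omega-minus mass come out the same way.

```python
# The equal-spacing argument for the decuplet, using the round numbers quoted
# above (masses in MeV).

delta_mass = 1200     # approximate delta mass quoted in the lecture
sigma_mass = 1380     # approximate sigma mass quoted in the lecture
step = 150            # roughly constant spacing per row of the decuplet

cascade_mass = sigma_mass + step      # prediction for the cascade particles
omega_mass = cascade_mass + step      # Gell-Mann's predicted omega-minus mass

print("predicted cascade mass ~", cascade_mass, "MeV")
print("predicted omega-minus mass ~", omega_mass, "MeV")
# The omega-minus announced in 1964 came in near 1670 MeV, close to this
# simple equal-spacing estimate.
```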
If not, I'm happy to press on. Oh, here's a question from Tiffany. Oh, yes, right. So that's true. Some of you might know Sabine Hossenfelder is a colleague of mine. I know Sabine pretty well. She wrote a really interesting popular book about a year or two ago, not too long ago, called Lost in Math, and she's become a really very trenchant and outspoken critic of developments like string theory in high-energy physics. And so her argument in brief is that sometimes, in more recent years, physicists have maybe become too enamored of these kinds of symmetry arguments-- that just because there's a pattern, nature must have taken advantage of what we consider a beautiful symmetrical pattern-- and could we have a little more input than that? And I should say, back in Gell-Mann's day, the early part of his career, he could make an announcement in 1962 with real confidence that his colleagues could actually try to find empirical evidence that was consistent with it within a short while, let alone 50 years later. And so the kind of symmetry argument was getting a lot of feedback, let's say, positive reinforcement in the '60s because these experiments were feasible, and many, many places could do it. And it actually wasn't based only on mathematical beauty but a kind of pattern that maybe one could then have a kind of empirical dialogue about. And I think Sabine's point, quite well taken, is that, more recently, the energy scales involved in the hypothetical string theories and so on are so far removed from earthbound experiments that we've lost that kind of back-and-forth kind of conversation, so to speak, between theory and experiment in a lot of these ideas, so that what seems to some people like beautiful mathematics seems to others like groundless speculation. And that seems to be a different situation than in the mid '60s. So she's a very good writer, and I think she's a great kind of popular science writer anyway, let alone that she has, I think, really interesting insights into some of the sociology of more recent high-energy theory. Yeah. Good. Any other questions or comments? If not, I'll jump into the next part. OK. Well, let's see what people do with all this crazy symmetry stuff. So I mentioned that in 1962, Gell-Mann made this kind of very dramatic prediction at a conference, saying, go look for the omega minus right where I told you to look for it, essentially. Not long after that, very early in calendar year 1964, Gell-Mann and then, again, independently, another much younger theorist, George Zweig, separately suggested, proposed a new hypothetical way to bring order to these kind of hypercharged isospin groupings that Gell-Mann by this point had been doing for quite a while. So Gell-Mann's article was received at the journal, you can see here, received on the 4th of January 1964. It was published three and a half weeks later. It was pretty quick to get through peer review. George Zweig's preprint from CERN-- he was at that point a very new postdoc at CERN. His preprint was dated the 17th of January 1964. So they were working separately, and not even two whole weeks elapsed between each of them committing this to paper. And they came up with a remarkably similar set of ideas-- that maybe actually the order that Gell-Mann and Yuval Ne'eman and others had been finding for a couple of years, eightfold way and the decuplets and so on-- maybe that was due to an even deeper underlying symmetry involving actually a very small number of truly elementary particles or constituents.
Gell-Mann very famously called them quarks. You see them even in quotation marks here on the first page of his-- it was only a two-page article. Zweig had called them aces. These were really very, very similar ideas, and it is that all the patterns on these hypercharged isospin plots among the then-known nuclear particles could be reproduced under a very simple-sounding assumption that there existed only three types of elementary constituents, quarks or aces. And if you were very careful in assigning them very specific values for hypercharge, isospin, strangeness, and all the rest that you could reproduce those group theory patterns of eightfold patterns and tenfold patterns. So Gell-Mann, as I mentioned earlier, loved to show off his wide-ranging knowledge of languages and literature and so on. So among this very short list-- this is the entire reference list, eight references only for this entire two-page article, number six of which is to James Joyce's very famous and famously obtuse novel Finnegans Wake, where this line many of you, by now, might recognize-- "Three quarks for Muster Mark!" If that doesn't make any sense to you, good. It's all nonsensical. The Irish novelist James Joyce loved basically making up his own words and not bothering to, for example, define them, and "quarks" was one of these kind of made-up names. He kind of liked the sound of it. Well, Gell-Mann liked the sound of that, so he borrowed it to label this new hypothetical constituent. Zweig stuck with "aces," which may be a little more familiar but still kind of a nonsensical word. So they could go through this exercise and assign baryon charge, strangeness charge, and therefore, those two together would give you hypercharge. Likewise, just kind of intuit or posit that the two of those quarks would be an isospin doublet. So one would be spin up. One would be isospin down. The third one would be what's called an isospin singlet, which means it has 0 isospin, and if you do that, then again, you can reproduce the electric charge assignments. So these became known as the up quark, the down quark, and the strange quark. And the idea was that any of those so-called strange nuclear particles, the kaons, the hyperons, the sigmas, and all that, must include at least one constituent strange quark. That's what gave it its strangeness charge. So it has strangeness charge minus 1, whereas particles that were made only from up and down quarks would have 0 strangeness charge like protons, neutrons, and pions and so on. So with those assignments, they could go through the known particles, even the more exotic or more recently found particles like the omega minus, and make these self-consistent assignments always on either triplets of these constituent quarks if they were baryons, like protons, neutrons, or the omega minus, or a quark-antiquark pair, bound states of only two quarks, if they were mesons, like pions or kaons or others. So it was, again, even more a kind of-- it went from 2D Sudoku to 3D Sudoku, basically, even more tightly constrained assignments for these three constituent hypothetical entities, and they could reproduce all the kind of grouping structure in these abstract spaces like hypercharge, isospin. And there seemed to be a unique assignment for every one of those particles. And so this looked like a remarkable kind of efficiency. Instead of having 100, let alone 300, seemingly elementary nuclear particles, you maybe only had to worry about three of them and then all these combinations. 
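As a minimal sketch of that bookkeeping, here is a check that the usual quark assignments -- charge +2/3 for the up quark, -1/3 for the down and strange quarks, and one unit of strangeness carried only by the strange quark -- reproduce the observed charges; the compositions and the small helper function are just for illustration.

```python
# A minimal check of the quark-content bookkeeping described above, using the
# standard assignments.  Antiquarks flip the signs; here a leading "~" marks
# an antiquark (e.g. "~d" for an anti-down quark).

charge = {"u": +2/3, "d": -1/3, "s": -1/3}
strangeness = {"u": 0, "d": 0, "s": -1}

def totals(quarks):
    """Sum electric charge and strangeness over a list of (anti)quarks."""
    q = sum(-charge[x[1]] if x.startswith("~") else charge[x] for x in quarks)
    s = sum(-strangeness[x[1]] if x.startswith("~") else strangeness[x] for x in quarks)
    return round(q, 2), s

print("proton  (uud):", totals(["u", "u", "d"]))    # charge +1, strangeness 0
print("neutron (udd):", totals(["u", "d", "d"]))    # charge 0, strangeness 0
print("pi+ (u anti-d):", totals(["u", "~d"]))       # charge +1, strangeness 0
print("omega- (sss):", totals(["s", "s", "s"]))     # charge -1, strangeness -3
```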
It looked more like the periodic table from chemistry where there are on the order of 100 chemical elements, but based on developments from the early 20th century, it looked like only a small number of elementary constituents whose combinations would yield 100 or more distinct chemical elements. It was that kind of move in the classify and simplify. So with these two papers written early in 1964, the idea was to put forward these very specific assignments for three constituent pieces within those nuclear particles. Now, this raised new questions, and it was not unfamiliar even to Gell-Mann and Zweig themselves. The first one and maybe the most obvious is over here in this orange column, which maybe gave some of you pause already. If not, I'll draw your attention to it. The suggestion seemed to be that these new particles of nature, the up, down, and strange quarks, would carry fractional electric charge. The up quark would have plus 2/3 unit of charge in the unit of, say, the electron. Both the down and the strange quarks would have minus 1/3 units of electric charge, and that rightly should give people pause. It certainly gave the authors of these ideas pause. There had been no compelling experimental evidence for half a century at that point for fractional electric charges. There have been early, kind of ambiguous evidence in the early years of the 1900s and 1910s. It triggered, actually, some famously bitter fights among physicists in the early 1910s about whether labs had found evidence for fractional electric charges or not. By the 1960s, that had seemed to be very well settled, and the answer was no. There was no compelling evidence, despite half a century of earnest experimental searching, no evidence for fractional charges, and now they were saying that all of nature, all these nuclear particles, every constituent inside even very humdrum atoms like hydrogen and helium, is somehow teeming with particles of fractional electric charge. That seems hard to square with the evidence. So they knew that. They wrote about that, as I'll mention more in a moment. There's a little more subtlety if you look at this great triumph of Gell-Mann's earlier symmetry-driven approach, the omega minus particle. That was consistent on this new scheme with being a bound state of three strange quarks, s, s, and s. But that now raises a little more subtle question for quantum theory based on the Pauli exclusion principle, which we looked at together in this class a few weeks ago. The idea is that if you have spin-1/2 particles-- and that these quarks were assumed to be spin 1/2 in the angular momentum kind of spin-- then Pauli's exclusion principle forbids having any two spin-1/2 particles in the identical quantum state at the same time. So if you're assigning all these internal quantum numbers to these particles, how could you possibly have three strange quarks bound together? That means at least two of them must have all these assignments plus also either be two of them spin up or two of them spin down. The last charge you have to assign would be the actual angular momentum spin. That could also either be only plus 1/2 or minus 1/2. If you have three of them bound together, two of them are going to overlap with spin. That seems to violate the exclusion principle. That's a bit more abstract than no evidence for fractional charge, but the exclusion principle had been awfully well tested by the 1960s. That seemed like a pretty significant conceptual challenge. 
And then third, as I mentioned earlier, in neither Gell-Mann's nor Zweig's scheme was there any discussion of forces of dynamics. Why do these objects interact with each other? Why do some have very large reaction rates, others have very small reaction rates, and so on? So there's still no idea of the kind of forces that might interact either between these constituent quarks or as a result of them. And so as a result of that, Gell-Mann, who was no dummy, he hedged. You might have noticed on the very title of his article that I showed in the previous set of slides was called "A schematic model of baryons and mesons." You may remember back to very early this term we talked about when Einstein introduced the idea of light quanta, he called it a heuristic suggestion, and that's very much like what Gell-Mann is doing here. It's a schematic model, he announces even in the title. He goes on to say-- near the very end, the last main paragraph of this brief article, Gell-Mann writes, "It's fun to speculate about the way quarks would behave if they were physical particles instead of purely mathematical entities." That's my italics. A search for stable quarks at the highest energy accelerators would help to reassure us that they don't exist at all, reassure us of the nonexistence of real quarks, precisely because of things like no one's ever found fractional electric charges, these subtleties about the exclusion principle. And so Gell-Mann, in his very first article from 1964, was not saying, eureka, I found them. He was saying, it's helpful to think about as if protons, neutrons, and omega minuses and everything else were made up of these constituent parts, but it's really just a kind of mathematical shell game. These are mathematical classifiers rather than parts of nature. That's, I think, a fair reading of what Gell-Mann is saying in this 1964 article. Meanwhile, George Zweig, a couple years younger than Gell-Mann, didn't even get the benefit of the doubt. So Gell-Mann's paper was rushed into print after three weeks. Gell-Mann and he wrote two papers within a few days of each other in January of 1964, both of which were rejected for publication. Neither of them even made it through peer review. He was a very young postdoc at that point, and they said, this doesn't make any sense at all, no fractional charge, exclusion principle, and so on. Plus, everyone seemed to have confidence by this point-- many people did-- that actually Geoffrey Chew's approach, the kind of single self-consistent bootstrap, was going to save the day. That had made enormous progress experimentally by the late '50s, and this seemed to go exactly counter to that single self-consistent bootstrap idea. This was reintroducing or seemed to be reintroducing the idea of a small set of special particles breaking that so-called nuclear democracy that had been so central to Geoffrey Chew's otherwise very successful program. So the young postdoc George Zweig can't even get his papers published. Gell-Mann is maybe a little wiser, a little bit more long in his career, and he knows to bracket this as kind of schematic hypothetical mathematical [INAUDIBLE]. So this is now in early 1964. A few years later, some very dramatic new experiments were conducted. Depending on who you ask, they're either called the SLAC-MIT experiments or the MIT-SLAC experiments. I'll let you guess which coast prefers which version. 
These were some of the first experiments conducted at what was then a brand new accelerator-- you can see it above ground here-- that now we simply call SLAC. That stands for the Stanford Linear Accelerator Laboratory. It is a linear accelerator. You see that straight line. It's 3.2 kilometers, basically 2 miles long, and it's a series of electrostatic voltage gaps along which you can accelerate electrons to very, very high energies, basically up to a significant fraction of the speed of light. You just keep shoving electrons with very clever electric fields down a 2-mile track and smash them into stationary targets. And so one of the first applications of this new device built for the same reasons we've talked about before as part of the Atomic Energy Commission effort to get lots and lots of physicists kind of trained and at the ready-- not that this would help make weapons for defense, but it would make people well trained who could be mobilized in a new Manhattan Project if needed. This is a huge version of that post-war policy of making very, very expensive machinery available for nonmilitary purposes to keep communities well trained, even though this device itself is strictly for kind of peacetime questions. So SLAC came online in 1966, and one of the very first sets of experiments to go in there was actually directed and designed by two members of the MIT physics department, one of whom is still with us. He's emeritus Professor Jerry Friedman. You might have met Professor Friedman. He comes to colloquia and so on-- and then Henry Kendall, who passed away some years ago. The two of them were partners in these experiments. The idea was to accelerate these electrons to very high energies and smash them into very proton-rich targets, so you could actually study very high-energy electron-proton scattering. In the detector bay at the end of that line, you have this unbelievable equipment-- you can see this is like train tracking, and here's a person for a sense of scale-- where you could actually change the angle at which you measure the detritus that comes out of these very high-energy scatterings. This is basically a redo of Rutherford scattering but now across miles as opposed to across a single desktop. You may remember back from in the early-- several weeks ago in our class, we were looking at Rutherford scattering from around 1909, 1911, firing alpha particles at very thin gold foil and then surrounding it with scintillating screen, so you could get information for the number of scatterings per angle as a function of scattering angle. And Rutherford was so shocked when he found a significant number of backscatter events, where the incoming projectile scattered almost all the way backwards, which was consistent only with a very, very small, very massive inner core to the atom. That became known as the atomic nucleus. Now they're doing the same thing but not with a thin metal foil but literally moving these enormous, enormous detectors around in a semicircle on these kind of inbuilt train tracks, so they could get things like scattering rates as a function of angle. I just find that astonishing, that in that case, well, roughly 50 years or 60 years, maybe, they'd gone from the same basic concept but now taking miles and train tracks as opposed to a grad student sitting in the dark. Anyway, what they found, the SLAC-MIT experiments, was again very similar pattern to what Rutherford and his team had found about atoms. 
When they were scattering off of protons, they found evidence for an internal structure within protons. Protons seem to have internal hard scattering sites just like atoms in that gold foil seem to have internal hard scattering sites that we now call the nucleus. The particle physicists by this point were using somewhat funny variables. It wasn't just the scattering angle, but this quantity q, which depends both on the energy and also on the scattering angle. It's a larger q here. Larger q squared corresponds to a larger scattering angle. And so if there were just kind of equally diffuse smush inside a proton, you'd expect a very rapid falloff with angle. You'd expect very few large backscatter events, but instead, they were finding on this logarithmic scale quite substantial numbers of large-angle scatters. So again, the argument was really identical conceptually to the Rutherford scattering. So how do you make sense of the quantitative details, not just the fact that there seems to be a structure inside these proton targets? But could you actually reproduce those curves with the real specific structure inside the proton? So a number of theorists who were following the SLAC-MIT experiments closely-- they jumped in, one of whom is Richard Feynman, some of whose work we already looked at together in this class. The other was, at the time, a much younger theoretical physicist named James Bjorken. He often just goes by BJ for the start of his last name. They were able to make sense of the scattering data quantitatively, not just kind of qualitatively-- there's a scattering site-- but they could actually reproduce these very specific curves, those scaling laws, by introducing what they called partons. This was Feynman's kind of joke. So he was absolutely clear not to call these quarks, partly because, as we'll see, they were conceptually quite different, partly because, by this point, both Feynman and Gell-Mann were together at Caltech in the same department of physics, and they were kind of friendly rivals. Feynman had a lot of these relationships throughout his life. So by this point, he had traded Julian Schwinger for Murray Gell-Mann as his main rival. Their offices were just down the hall. And so Feynman was as much tweaking his nose at Gell-Mann as being kind of maybe appropriately skeptical by very intentionally not calling his new theoretical particles quarks. They were called partons, meaning they're some parts inside of protons. I thought it was a cute name. These are parts inside protons. So the idea was, why would this help? How would you make theoretical sense of these experiments? And this, again, I think was kind of classic Feynman in his use of intuition. At low energies inside a nucleus, a proton would actually be a big mess. If it has internal structure, whether they're quarks or anything else, if they're parts within a proton, then they're going to be subject to very strong internal forces. After all, protons don't fall apart, so whatever internal parts it might have must be stuck together very, very tightly, very strong nuclear forces. So it would be some jumble of moving parts with a big dynamical mess. And people had no idea how to calculate that. Remember, perturbation theory failed and all that. So at low energies, a proton would basically be extremely difficult to characterize quantitatively. Luckily for Feynman and Bjorken and their colleagues, that was not the situation at hand. To make sense of the SLAC-MIT experiments, they were going to, in a sense, the opposite regime.
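For reference, the variable being described is the squared four-momentum transfer; in the usual convention for this kind of electron scattering (not spelled out in the lecture), with incoming energy E, outgoing energy E', and scattering angle theta, and neglecting the electron mass, it takes the form below, so larger angles and higher energies both push q squared up.

```latex
% Squared four-momentum transfer in electron-proton scattering, with the
% electron mass neglected: incoming energy E, outgoing energy E', angle theta
Q^{2} \;\approx\; 4\,E\,E'\,\sin^{2}\!\left(\frac{\theta}{2}\right)
```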
The protons were stationary targets, but what mattered was the view from the speeding electron. And the electron had just as much license as the person riding on Einstein's imaginary trains to pretend that the electron was fully at rest, and it saw the target racing toward it at very high speed. So if you imagine sitting on the electron, you could say you're sitting at rest while the proton target races toward you at a very high fraction of the speed of light. So from your point of view as the electron, you see the proton undergo length contraction, become a pancake along its direction of motion. And its internal clocks will be slowed. It will undergo time dilation, at least as far as you're concerned. So all of the time scales for these very big, complicated nuclear forces will be slowed down as if they were kind of frozen in time. What the electron would see in this target of the proton would basically be a kind of frozen set of free, non-interacting stationary targets. All the stuff that people had no possibility to calculate, the strong, fast changing nuclear forces become irrelevant in the limits of very high energies as relevant for these SLAC-MIT experiments. So you can go for these internal parts called partons and analyze all this very specific scattering data based on free partons because in the energy regimes of interest, the dynamical forces should basically be irrelevant or subdominant. So you might say, was this a victory for quarks? Well, not right away. As I've mentioned, Feynman went out of his way not to call these partons quarks. The two sets of ideas really were answering different kinds of questions. So Gell-Mann's or even more importantly George Zweig's notion of quarks from early 1964-- that was really a kind of constituent model. What are the little pieces that are bound tightly inside protons, neutrons, hyperons, and all the rest? It's about the kind of internal parts and eventually the forces that must keep them bound together so that we don't see things like free fractional charge. And Feynman's partons were really answering just an entirely separate set of questions, very high energy behavior. When you basically ignore the internal forces, you have effectively free scatterers as opposed to bound strongly interacting particles. And again, you don't just take my word for it that this wasn't seen in its day as direct evidence for quarks. Gell-Mann himself, three years later, three and four years later, mentioned at a very well-attended conference for high-energy physics that quarks were still fictitious, his word, and that his own favored theoretical approach was actually equivalent to the ongoing work from that Berkeley tradition on the bootstrap, which disavows a set of literally elementary particles and talks about the kind of self-consistent composites one can make from the nuclear particles. So Gell-Mann himself didn't leap on the SLAC-MIT results and say, now we found quarks, let alone Feynman. So this was, again, a kind of slow drift. So let me pause there and see if there are any questions on SLAC-MIT or any of that kind of stuff. I know I've said it before, but I just love that contrast from the Rutherford scattering-- and the whole apparatus was maybe a meter across-- to 2 miles with that train track to change scattering angle of your detectors. I just think that's pretty-- that right there is the story of high-energy physics in the 20th century, that transition. 
You're asking basically the same question, getting remarkably consistent results, but you had to do a lot of work in between and convince a lot more people to pay the money to let you do it. Any questions on that? OK, I will press on. If any questions come up, of course, please chime in. Let's go to that last part for today and talk about quantum chromodynamics. So right around the time when Gell-Mann stood up at the conference in Florida and said, quarks, as we all know, are fictitious, he was actually working on a whole new theoretical approach, sometimes with coauthors and others working independently. Another key architect of this was a younger theorist named Harald Fritzsch. They were developing what came to be called quantum chromodynamics, or QCD. The idea now was, in some sense, to finally ask this question about dynamics, about forces, which Gell-Mann himself had very successfully kind of bracketed or left aside through the '50s and '60s with his focus on classification. The idea now is to try to look more squarely at forces and try to find a quantitative way of making sense of these nuclear constituents. So they're working in very explicit analogy to QED, to quantum electrodynamics, the work that we saw in previous class sessions that really had kind of come together soon after the Second World War in the late 1940s, though the roots of it went back to the early days of quantum theory. So there really was a kind of step-by-step analogy. In the new work, in place of electrons and positrons, the focus would be on quarks. Those were the kind of elementary particles that would be subjecting each other to forces. So just like electrons could repel electrons, quarks could repel or attract other quarks, and the quarks would interact in the new scheme by exchanging a new kind of force-carrying particle called the gluon. This would be the glue, the nuclear glue, that would keep, say, up and down quarks bound within a proton or, in this case here, within a neutron. So the gluons-- which was maybe not the most creative name; they were meant to be nuclear glue-- would play the analogous role to the photons. Remember, we saw that electrons, on the idea from quantum field theory, would repel each other by firing force-carrying photons at each other back and forth, virtual photons, and now you have a similar structure, at least hypothetically, with quarks interacting by the exchange of gluons. So there are some new ideas, again, to try to make sense of all the newer classifications and symmetries and internal charges and stuff that had been introduced for the nuclear particles over the previous 20 years. The main idea, the main new ingredient conceptually, for this newer QCD was to introduce yet another internal quantum number, or charge, again, strictly hypothetical at this point, and they called it color. The idea was that each quark carries yet another internal quantum number. This one could have one of three values. So instead of spin or isospin, which were usually one of two values, spin up or spin down, the color charge could come in one of three values, and rather than call it minus 1, 0, or plus 1, they called them-- they gave them names like colors, the primary colors, red, green, and blue.
And they did that because they posited several features of this color charge, that the color charge overall is conserved-- you can neither create nor destroy, say, red quark charge-- that the free particles you could ever measure or experience in nature-- bound protons, neutrons, pions, or anything else-- have to have an exact balance among the color charges. So if you have one unit of red color charge, you must have exactly one unit of both green and blue, and vice versa. And that's why they appealed to the primary colors. If you mix red light, green light, and blue light in equal intensity, together, they will make white light, or all the color will vanish, basically. You'll have perfect color balance. So that's what they were positing among the color charges on these quarks, that baryon, a bound state of three quarks, must have exactly one of each color to balance out. The mesons, which were like a pion, which is one quark and a bound antiquark, would have to have one red and one anti-red quark, so together, the color charge would exactly balance, or one blue and anti-blue and so on. So that's the first two assumptions. Color charge is conserved, and you have to have an exact balance. And the last one is that this new force law has to be symmetric with respect to random permutations of the color charge, and that's, again, a lot like the symmetries of isospin, that when you consider a lot of these nuclear interactions about, say, proton-proton scattering or proton-neutron or neutron-neutron, it didn't matter. Any random permutation gave the same results, any random permutation of isospin. So the nuclear reactions were isospin symmetric, and the idea was that maybe that same kind of symmetry held among this as yet unobserved, maybe even unobservable charge on these elementary constituents, the quarks. And they jumbled up in any old way as long as the total balance remained. So you'd have no free color charge leaking out because you'd have an exact balance. So those were the assumptions. Then you could go back and resolve some of the puzzles like the omega minus. So now maybe the omega minus would not violate the Pauli exclusion principle because it was not three literally identical spin-1/2 particles in a balanced state. They differed on one more set of internal quantum numbers that had not yet been taken into account. The omega minus would have to be a bound state of one red, one green, and one blue strange quark, so no two of them would be in the same quantum state, even if they had the same values for all the other quantum numbers, including spin. So now you can have bound states like the omega minus. This also suggested at least why it might be possible to avoid having fractional electric charge. If you always have to have this exact balancing, maybe the electric charge always has to come in integer units and never see, say, a plus 2/3 charge that wasn't appropriately balanced by the color charge-preserving quarks that, as a consequence, would also get you back to only integer values for electric charge. That was at least a hypothesis. So as I mentioned even just in that brief discussion a moment ago, really central to the idea of quantum chromodynamics, or QCD, is the idea of symmetry, that some property of the system remains invariant or unchanged even when certain parts are changed to undergo what we call transformations. We saw how important this was in the study of relativity. 
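Before turning to the symmetry language, here is a minimal sketch of the balance rule just described; the color names and the little is_white helper are purely illustrative bookkeeping, not anything from the full theory, where color neutrality really means forming a singlet of the underlying symmetry group.

```python
# A toy version of the color-balance rule described above.  Following the rule
# in the lecture, a combination counts as color-balanced ("white") if its net
# amounts of red, green, and blue are all equal -- either one of each color
# (a baryon) or a color cancelled by its anti-color (a meson).

def net_color(constituents):
    """Count net color; an 'anti-' prefix subtracts instead of adds."""
    net = {"red": 0, "green": 0, "blue": 0}
    for c in constituents:
        if c.startswith("anti-"):
            net[c[len("anti-"):]] -= 1
        else:
            net[c] += 1
    return net

def is_white(constituents):
    """True if the net red, green, and blue amounts are all equal."""
    return len(set(net_color(constituents).values())) == 1

print(is_white(["red", "green", "blue"]))   # a baryon, e.g. the omega-minus: True
print(is_white(["red", "anti-red"]))        # a meson, e.g. a pion: True
print(is_white(["red"]))                    # a lone quark with bare color: False
```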
It plays an increasing role in quantum theory, and by this point, by the early and mid-1970s, the theorists like Gell-Mann and Fritzsch and their colleagues were elevating this to a kind of guiding principle to be driven by what has to remain unchanged even among symmetry transformations. So again, we're all familiar with symmetries. You can imagine, for example, a sphere. It will appear unchanged. Its appearance will remain invariant even if you rotate the sphere by an arbitrary angle around an arbitrary axis. That's an example of a continuous symmetry. The transformations can be by any angle, 7.1 degrees, 7.1003 degrees, any continuously variable transformation. Likewise, you could rotate the axis itself by any amount. We're also familiar with things like discrete symmetries. Imagine a square, a simple two-dimensional square. Its appearance will remain invariant if you rotate it by any integer number of times 90 degrees. Rotated by 90 degrees, you can't tell the difference by 180, by 270. If you rotate a square by 37 degrees, you can tell the difference. It no longer looks unchanged. So that's an example of an object that remains invariant under discrete symmetries, a more kind of constrained set of transformations that will nonetheless leave the appearance unchanged. So we're familiar with these kind of everyday examples from geometry. Then we're going to start applying this to these more abstract, internal mathematical spaces of these things like color charge. But what if there were symmetrical rotations, in this case discrete ones, because only three values of the color charge, red, green, or blue? So that's more like the square. You can rotate a red quark by kind of 1 unit in color space or 2 units or 3, but not by 1.2 units. That would be like the square being rotated by something other than 90 degrees. So it's a discrete symmetry, and not that you're literally picking up a quark and rotating it, but you're changing its description in this abstract mathematical space of color space. So you could perform rotations on your quantum field that represents a given quark. You could transform it by rotating it by some angle in color space. The angle has some constraints. Like I mentioned, it's a discrete symmetry. Moreover, in this case, you could be changing the angle of rotation at different times and places. This is what's called a local transformation. So it's not just that you take a red color quark and you rotate it once and you're done. The excitation of the quanta of your quark field could be rotated by 2 clicks of the turnover here but by minus 1 click over here, and you could change your mind over time. So these are called local transformations, and the idea, which was really just a hypothesis that was driving people like Gell-Mann and Fritzsch and others, was that what would it take to leave any observable features of such a theory invariant even under that set of quite wide-spanning transformations. So one thing you could do is make sure that the observable features of the system only depend on the kind of length of that quantum field. That would be like a radial symmetry, if you think back to the sphere. If you rotate your quantum field by some fancy phase here, if the observable only depend on the absolute square, then the rotations will cancel out. Any local transformation would fall out. But remember, they're now trying to handle dynamics. They're trying to explain change over time and not just a static configuration, and that's when it gets much trickier. 
And this is really where the real work had to come in. If you try to describe the behavior of that quantum quark field over time quantitatively, it obeys an equation kind of like the Schrodinger equation, a little different, but it has time dependence. It has spatial gradients. It has derivatives. Well, now if you perform these local transformations where your rotation angle itself can, in principle, depend on both space and time, you get in trouble with the kinetic energy, with the change of that quantum field. If the field itself undergoes a rotation-- so psi goes to psi prime-- now we're taking the time change of this slightly more complicated object, and we'll have a chain rule. There'll be a term with psi times the derivative of theta plus a term with the derivative of psi. It will not be such that the phase terms will just cancel out. If we go back to this term for the kinetic energy, if the derivative of the rotated quark field had just a phase pulled out front, then it would cancel. We'd have one term times its complex conjugate. It would be just like the radial symmetry here, but we don't have that anymore because of the chain rule. We have to involve a more complicated set of dynamics to ensure that overall symmetry is preserved. And so that's what then Gell-Mann, Fritzsch, and their colleagues do next. To maintain that symmetry under local rotations in that color space-- red, green, and blue quark color-- they add in these force-carrying particles with very specific properties-- that the gluons have to behave a certain way as well. In fact, the gluons, in some sense, are added for the sole purpose of enforcing that symmetry. So the quark field could undergo these very fancy rotations, all the permutations of color charge, and to leave all the observables, including the dynamics of forces over time-- to leave all those unchanged, these other kinds of matter, the gluon fields, have to have very specific properties. And this gets a little technical. Don't worry if this goes by too fast. There's no quiz. But what they do is they construct what's called a covariant derivative, which combines the change in space and time. This is just shorthand for taking the time derivative and the spatial gradient. This is just the derivative of the quark field in time and space. But instead of only worrying about its change that way, you say it's also coupled. It also interacts with these gluon fields. And if the gluon field itself has a very specific transformation-- so if you rotate a quark color, you also have to do a transformation of the gluon field at the same time. If you choose that transformation very cleverly, then your covariant derivative, this combination of changes in space and time plus the very particular way that the quark field interacts with the gluon field, that will leave this souped-up derivative, the covariant derivative, such that you actually get this overall phase factor out front, and now the kinetic energy remains unchanged. Now your e to the plus i theta is exactly canceled by its complex conjugate, so now your souped-up kinetic term remains totally invariant even under these more elaborate color charge symmetries. So you can have quarks that run around the world with different colors-- red, green, blue.
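A minimal sketch of the construction being described, in today's standard notation (the symbols g and A_mu below are the usual modern conventions, not necessarily what appeared on the lecture slides): under a local transformation \psi \to e^{i\theta(x,t)}\,\psi, an ordinary derivative picks up an extra piece by the chain rule, \partial_\mu \psi \to e^{i\theta}\left(\partial_\mu \psi + i(\partial_\mu \theta)\,\psi\right), so the phases no longer simply cancel in the kinetic term. The fix is the covariant derivative D_\mu \psi = (\partial_\mu - i g A_\mu)\,\psi, together with the rule that the gauge (gluon) field shifts as A_\mu \to A_\mu + \tfrac{1}{g}\,\partial_\mu \theta. Then D_\mu \psi \to e^{i\theta} D_\mu \psi, the phase comes out front as a whole, and quantities like |D_\mu \psi|^2 stay invariant. For the full three-color symmetry, \theta and A_\mu become matrix-valued, which is ultimately what forces the gluons to interact with each other.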
The color assignment to a given quark could change over time and across space, and as long as you have an additional kind of matter running around with them, these gluon fields, and if you specify very quantitatively how the gluons have to behave much like the quarks, together, the whole assembly will remain symmetric under that symmetry. That's, again, a bit more abstract, but the idea conceptually is, I think, really pretty astonishing. Basically, Gell-Mann and Fritzsch and their colleagues have to dream up, have to hypothesize a whole new kind of matter, the gluon fields, strictly to enforce a symmetry which was just of their own thinking in the first place. So because they're so enamored by the symmetry properties that made sense of all the kind of classifications of the nuclear particles-- hypercharge, isospin, and all those eightfold ways and tenfold ways and so on, they're really smitten with this idea of an underlying symmetry. But when they really, really want to describe the forces that would obey that symmetry, remain consistent with that symmetry, they can't do it with quarks alone. They have to have quarks interacting in a very specific way within a whole separate family of matter, the gluon field, and in this kind of coordinated dance, and then the whole collection, this kind of rickety collection would protect or enforce that symmetry. I find that pretty astonishing. So the symmetry of quantum chromodynamics is more complicated than that of other examples of these kinds of models, like quantum electrodynamics, and therefore, the properties of the gluons are actually more complicated than those of the photon. In particular, the color charge is a discrete symmetry, whereas the phase freedom for the electric charge is a continuous symmetry. So the gluons really have much more kind of articulated mathematical structure, and one consequence of that is that the gluons can actually interact with other gluons. Or at least, they're all still hypothetical at this stage, but in order for all that symmetry preservation stuff to work, for the math to work out, the gluons have to be able to scatter off of other gluons or attract other gluons, whereas photons don't do that. Photons don't have electric charge, so photons only scatter off objects with electric charge, at least according to QED. So photons will scatter off of electrons or positrons or protons. Photons won't scatter off of neutrons, at least not directly, and they certainly won't scatter off each other, whereas gluons can scatter off each other as well as off of the quarks. Another way to say that a bit more succinctly is that gluons also carry color charge much like quarks do, whereas photons do not carry electric charge. So the fact that gluons can attract each other actually changes the kind of force law between quarks. The force between quarks will actually grow with distance because the distance between them is filled with gluons, and the gluons themselves can interact with each other. So it starts looking a bit actually like a kind of stretched rubber band. There's a kind of elastic restoring force at least as predicted by quantum chromodynamics which has no analog in, say, the force between an electron and a positron. Remember, the attractive force between opposite electric charges falls off with the distance. That's a famous Coulomb's law. In fact, the force falls off like the square of the distance. 
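In symbols, Coulomb's law for the attraction between opposite charges reads F \propto 1/r^2, while the picture being sketched here for quarks is closer to a stretched elastic band: the energy stored between the quarks keeps growing roughly in proportion to their separation, so the pull does not die away as they are moved apart. (That linear-growth picture is the standard way the QCD behavior gets summarized; it is offered here as a sketch, not as anything derived in this lecture.)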
So they could still exert an attraction across space, but if you double the distance between them, the strength of the attraction falls by a factor of 4. With quarks, attracting each other by this highly symmetric, strong nuclear force, it seems to go the other way, that the force actually seems to get stronger with distance, and the explanation of why it's so different is because the gluons that are mediating that force can actually kind of glue on to each other, so to speak. They can attract each other. So if you imagine trying to stretch a bound state of a quark and an antiquark apart-- this could be, for example, a pion, some sort of meson, and you could put in external energy by pulling on these two particles to try to pry them apart by adding external energy to the system. At some point, you will have to add so much external energy, you'll be tugging so hard, adding so much external energy, that you'll actually create a whole new quark-antiquark pair. In a sense, these were already around. They were virtual quark-antiquark particles, but they can now borrow the energy you've given to that system to become real particles. They now can pay off their debt to the vacuum. If they were virtual particles, they now can pay off the energy they borrowed as soon as you give the system enough energy to cover their masses. So in fact, when you try to pull a quark and an antiquark apart within this scheme, at least hypothetically, you'll never get a free quark isolated. You'll never see a fractional charge. In fact, what you do is you create two bound states of quark-antiquark pairs where before you only had one. And so that, again, comes because the gluons can interact with gluons directly. So that was a prediction from QCD by the later 1970s that was worked out a few years after this symmetry-preserving scheme, and that, again, gave experimental colleagues something to look for, some specific kind of thing to look for in the high-energy scatterings of these nuclear particles. And the first evidence in concert with that idea was presented by an experimental team in Europe. I think this was a group in Germany in the early 1980s. These came to be called jets. The idea was you would have these jets streaming off where the original beam might have gone in one direction and the not-quite-torn-apart but rapidly reassembled pair of mesons would jet off in opposite directions or nearly opposite directions. So you can have a very specific pattern of the debris coming out of the very high-energy collisions of these particles because in the effort to crack apart a quark and an antiquark pair, for example, you wind up making more kinds of particles that will jet off away from each other. So these QCD or quark-gluon jets were actually a prediction of this new theory based, again, on the idea of preserving symmetry and having a certain way in which the force-carrying particles interact with the quarks, and that was at least consistent with some of the new experimental data within just a few years. It was at that point by the late '70s, early '80s, not in 1964 or 1967 or any other steps along the way-- it was really at that point that the community began to coalesce around the idea that quarks might actually be real particles in nature and not just a mathematical abstraction. So let me wrap up here.
One of the responses to this particle zoo after the end of the Second World War, as we saw, was to double down on dynamics, like Geoffrey Chew, to say all these particles are, in some deep sense, equally elementary because they're all composites of each other. But as we've been tracking in today's class, there's a kind of parallel complementary approach really kind of led by Murray Gell-Mann, though many other people were making contributions along the way, which was to bracket the question of forces or dynamics for a while, really, for 20 years, and just to try to figure out, are there patterns in the kind of detritus that comes out of those new accelerators to emphasize classification and internal abstract symmetries? So Gell-Mann and others-- but Gell-Mann, again, really took the lead at this-- began grouping these particles into families based on these kind of invented new quantities that did not have an obvious relation to measurable properties. So he invents hypercharge. He makes use of Heisenberg's older notion of isospin, and he starts grouping these particles in these very abstract, mathematical, highly symmetric spaces of hypercharge versus isospin. That enables him to find kind of gaps where he predicts very boldly that there should be actual particles out there. Famously, this works for the omega minus, and then he suggests very kind of schematically, heuristically early in 1964 that maybe those patterns were consistent with a small set of something more fundamental that might not actually be real bits of matter. So the quarks or what Zweig calls aces are very kind of tentatively suggested as a kind of way of accounting for symmetries rather than saying these things are out there part of the world, in part because there was such steep conceptual challenges to thinking of them as part of the world-- fractional electric charge, exclusion principle, and all the rest. The SLAC-MIT experiments or, I should say, the MIT-SLAC experiments from the later '60s produce suggestive compelling evidence that protons and similar nuclear particles had internal structure, but it was not at all clear to the community, even to people like Gell-Mann, let alone to friendly rivals like Richard Feynman-- it was not at all clear right away that those internal scattering sites were the same as these quarks. In fact, even years later, Gell-Mann would say that quarks are fictitious. There might be parton structure within protons, but that need not be the same as quarks. We saw that was in hindsight taken to be very, very compelling evidence that quarks exist. Jerry Friedman won the Nobel Prize for leading these experiments. And now we say this is in hindsight one of the really important touchstones for why we think quarks are physical particles, but again, it's important to see this didn't tie everything up with a bow right away. It was really the combination of that plus follow-on experiments like the jets from 15 years later plus a much more articulated theoretical structure which reintroduces quantum fields, doubles down on these internal symmetries that becomes known as quantum chromodynamics. This combination of theoretical and experimental ideas and contributions come together over a kind of 20-year period, by the end of which most physicists then would say, the world is made of quarks. They obey a very specific kind of force, and even though quarks themselves are physical particles with fractional charge, the particles we ever can measure in our experiments only have integer units of charge. 
So I'll stop there. We have time for some more questions or discussion if people would like. Stay well. I'll see you on Wednesday. Take care, everyone. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_8_Rethinking_Light.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: We're going to launch in today on the first part of our new kind of material, new set of material on quantum theory. So we talked a bit about the road toward relativity in the first few class sessions. We're going to pivot now and talk about this other amazing edifice of what becomes known as modern physics, quantum theory, as that was getting pieced together over the first quarter century or so of the 20th century. So for this material, we're going to spend this class session and the next one talking about what came to be known as old quantum theory. Of course, it only became known as old quantum theory once there was a new quantum theory that seemed to replace it. And so physicists themselves introduced the term old quantum theory within the time span that we'll be talking about. So in their own reckoning, by the mid to late 1920s, physicists began to refer by the term of old quantum theory to this collection of work that had unfolded really between around 1900 and 1924. That looked like the old period, once the new stuff had begun to coalesce in 1925, and '26, and so on. Now, here, I have an asterisk. I hope you can see that. These dates are approximate. One of the things we'll be really sitting with throughout these first few discussions about quantum theory is just how much in play these developments remained well after the nominal date at which they were put forward. Much like we saw with relativity, it wasn't that Einstein published his paper on the electrodynamics of moving bodies in 1905, and then the next day everyone woke up and was a devoted Einsteinian or a convinced relativist. Likewise, here with these developments on the early steps toward quantum mechanics, these ideas themselves were unfolding over time. They were subject to sometimes quite wide ranging debate and varying interpretation. So these dates are really meant to be approximate to help us understand the general flow of this body of work between roughly 1900 and 1924. So I find it helpful to divide up that first period of roughly 25 years or so into a set of developments in which physicists began rethinking the nature of light. And that's what we'll talk about today. So today is rethinking light. And really interwoven with those developments, as we'll see in our next class session, were a series of equally surprising developments in which physicists were rethinking the nature of matter. They didn't always make that division so clear at the time. But again, with a bit of hindsight, once the newer work began to coalesce, what we now call quantum mechanics, this division of the strands within old quantum theory made a bit more sense. And again, just a reminder for today, it's thoroughly optional, as always. But for those who might be interested and have a bit more time, I did post some additional lecture notes on the Canvas site that are going to go into a little more detail about parts 1 and 3 for today's material in particular, both blackbody radiation and Compton scattering. So if some ideas go by really quickly or you have no idea where that particular equation came from, there is a bit more that you can delve into on the course site. OK. So before we even talk about who was rethinking light where and when, it's really helpful and very important to step back and remind ourselves again of where this is happening and why it was happening at that time. 
So a lot of the work that we're focusing on today's class in particular-- not all of it, but a lot of it was happening within this newly unified country called Germany. We saw this a few times. There was no country of Germany until 1871. There were German-speaking territories. But a single, unified, national country called Germany emerged really as one outcome of the Franco-Prussian War, the Prussian state war against the country of France not too long before Einstein himself was born. One of the things that the new country of Germany began to do was to invest very aggressively in a program of rapid industrialization. Once it became a single unified country, the new leaders of the country looked around and were concerned that they were falling far behind other European neighbors in basic industrial capacity, especially Britain. To some degree they worried about France, though they had prevailed in the recent war. So the country begins investing quite a lot in industrialization. And that often meant investing in science and technology, what we would now call science and engineering. So a few years into this new country's life, its existence, the leaders put together a new kind of institute-- not just new to Germany, kind of new even across Europe-- a specially designed, government-funded institute called the Physikalisch-Technische Reichsanstalt, which is fun to say-- we can just call it by its initials of PTR-- that really stood for the imperial or the German National Physical Technical Institute, the PTR. The idea was that this new kind of space, this new institution, should foster several kinds of research and try to get them talking together to really help jumpstart what the country's leaders hoped would be this very rapid pace of industrialization. So the PTR was designed on purpose to foster research into basic science, into curiosity-driven research in physics, and chemistry, and related areas, but also support work in applied research and industrial development. So it was pretty similar to what would soon be developed in the United States. It was originally called the US Bureau of Standards. Now, you might know it by the name the National Institute of Standards and Technology, or NIST, which has several National Laboratory sites throughout the US. The PTR was in that mold. It was really forming that mold-- government-sponsored to try to encourage certain kinds of research, both in basic sciences and industrial applications. One of the earliest pressing priorities for this new institute was to evaluate competing proposals for public works projects, like large scale electric street lighting. Some of you may know, the incandescent light bulb, the electric light bulbs were actually quite new. People think about people like Thomas Edison and others in the United States, similar work going on elsewhere. And one of the earliest efforts was to put this new kind of technology, an incandescent light bulb, an electric light bulb, into use in public spaces. Imagine the difference for city life, for commerce, for communities in general, if cities were pitch dark just as soon as the sun went down. Now, of course, there were many, many competing ideas about how to do that-- how to do it efficiently, how to make the right kinds of bulbs that could give out a lot of light with hopefully not too much power usage and so on. And so one of the first questions that this new PTR had to wrestle with was how to compare these competing proposals for things like electric street lighting. 
What they were asking about is, how can you measure the amount of light that comes out from these very hot filaments in these electric bulbs? Now people had known for a long, long, long time, long before the PTR-- humans had known that when materials are heated up to a sufficiently high temperature, they will begin to glow. They'll give off some kind of radiation, some light. Think about very casual observations like embers glowing in a fireplace or charcoals on a grill or now, tragically, the photographs we see from these horrific forest fires out in California and up and down the West Coast. When you heat stuff up to a high enough temperature, it will give off light. It will glow. And in fact, the color of the emitted light shifts with the temperature to which the object has been heated. So the colors, the main frequencies of light that dominate that glow, will shift with temperature. And that was, again, something that was known casually long before there was a PTR. Well, that feeds into this very specific work at this new institute, the PTR, because one of the tasks of the new group was to figure out calibrations for these different kinds of electric light technologies, these proposals. And what the PTR researchers began to notice was it looked like there was a kind of universal or shared pattern in the kind of light, in the pattern of light that came out from various objects when they were heated to high temperatures. And they thought it was universal because it looked like the pattern of light, how much light came out at which particular colors, seemed to depend only on the temperature to which those materials had been heated, but not on the material, the chemical makeup of the materials themselves. Different kinds of filaments, for example, even if they were made up from entirely different atoms and molecules, would glow with the same kind of pattern once they were heated to a sufficiently high temperature. At least, it looked like that might be the case in these early tests to calibrate these different kinds of electric light fixtures and so on. So these researchers began to postulate that there was some ideal so-called blackbody. The idea was imagine an object that absorbed all the light that fell upon it and reflected none back. So it would appear to our eyes to be black. It would reflect no light at all. We'd see it as emitting or reflecting no light at all. So you want to remove any accidents of the kind of light that might have shone on it and concentrate on the light being emitted by this so-called blackbody when you heat that object up to a high temperature. And it shouldn't have mattered whether the blackbody was actually made of wood, or charcoal, or anything else, if it really is a universal glow that you can attribute to this idealized, otherwise unspecified kind of object, a blackbody. And therefore, when you heat this object, this mysterious or hypothetical object up to a sufficiently high temperature, the pattern of light that would be emitted should tell you something about universal properties, not the accidental features of this or that chemical material. So this universal blackbody spectrum, or at least the hypothesis that there might be this universal behavior, that seemed interesting for at least two reasons-- exactly the kinds of things that this new PTR had been set up to foster. This could be really useful for calibrations and standardization. Take any new electric light fixture or similar device.
You should now have a universal standard against which one could measure its own light output-- what pattern of light would come out from, say, this or that filament because now you have a universal standard with which to compare it. And it also suggested this might tell us something deeply fundamental and very basic about the interactions between light and matter. It shouldn't matter whether you're talking about this kind of material for a filament or that kind. This tells us perhaps something very universal about light and matter at their core. So these researchers at this newly generously funded Physikalisch-Technische Reichsanstalt, the PTR-- they began conducting more and more sensitive experiments on the pattern of light, what became known as the spectrum of light, that was emitted from these blackbodies over the course of the 1880s and 1890s, soon after the PTR itself had been founded. And they began finding curves that looked like this. Now, in the early data-- this is an example of real published data from the researchers, Lummer and Pringsheim-- they often would plot the spectrum, the amount of energy that came out as one varied the wavelength of light. Typically, these days, we tend to characterize it in terms of the frequency. But remember, that's an easy choice, a trade-off. We can always relate the frequency of the light that comes out, nu-- we'll use the Greek letter nu-- that's inversely proportional to the wavelength, like in these data here. And the constant of proportionality is just given by the speed at which those light waves are traveling, the speed of light. So let's look at this curve, U. U is a certain kind of quantity called a spectral energy density. That's kind of a mouthful. And again, I go through this in some more details in the optional lecture notes. What these curves are showing is the amount of energy per unit volume in some region of space per frequency-- so how much light gets shown out in the form of-- how much energy gets shown out in the form of radiation in some box of fixed size within unit volume as you vary the color of the light. And so this suggests you get a certain amount of energy radiated from one color. As you vary the frequency of the light, a different amount of energy comes out, in this case more energy, as you go to a higher frequency, different color, and so on. What the researchers kept finding was that the nature of this curve depended only on the temperature to which the objects had been heated, but not the material composition of those objects. This universal feature became more and more evident as they conducted more and more tests. And so this is a plot now showing the form of this spectral energy density for three different temperatures. It varies only with temperature. Let's consider a relatively low temperature here, some medium temperature, and some high temperature. Already, we see the trend becomes pretty clear. As you raise the temperature to which you've heated that blackbody, that object, the amount of energy that's radiated grows. So you get more and more energy coming out. The area under this curve, of the blue one, is much larger than the area under the other curves. Total energy output, or the intensity, increases with temperature. And the peak frequency, the frequency at which most energy gets emitted in the form of light, that also shifts to higher and higher frequency. So at a relatively low temperature, the peak frequency is a kind of reddish color. 
As you increase the temperature, it shifts through the spectrum to orange and ultimately to the blue end. So you have this pattern emerging. So as the empirical pattern, as these empirical measurements in the laboratories became more and more clear, how do you make sense of that pattern? That actually was very, very non-trivial. That was not at all clear to these early researchers. So one of the first to really delve into this to try to give a theoretical explanation for that pattern was Max Planck. So Planck had only recently moved to Berlin. He was one of the first people in all of Germany to have this full professor, this special Ordinarius professor chair in theoretical physics. Remember, we talked several weeks ago, several classes ago, that until the later years of the 19th century, especially in the German language universities, there would be one full professor, one Ordinarius, for all of physics. And that person was, by default, an experimental physicist. Only by the later decades of the 19th century were there sometimes two full professors of physics-- one to handle experiments and one for theory. Planck was one of these early Ordinarius professors of theoretical physics in Berlin, starting in the early 1890s. He was not at the PTR, but he was nearby. And he was in touch with his colleagues there. Planck was an expert, in particular, on the new work on statistical mechanics by people like James Clerk Maxwell and Ludwig Boltzmann and others. How do you make sense of large collections of objects like molecules in a gas? How do you assess their collective properties? That's what he focused on. Because he was now close to the PTR, he began paying more and more attention to this blackbody research as well. And he realized, as many others did, that it was really, really hard to make sense of this characteristic pattern, this seemingly universal shape of this blackbody spectrum, the amount of energy per unit volume per frequency. And in fact, if you used quite standard-- by that point, very familiar arguments-- from people like Boltzmann and Maxwell, the kind of people whose work Planck was an expert in studying, you should predict a very different behavior from first principles. In fact, it should look like this dashed purple curve, which doesn't look anything like the ultimate curve that was measured. They agree very well at very low frequencies. They both overlap here. But then you see a very significant difference between their patterns. And in fact, this purple one became known as the ultraviolet catastrophe. I love that term. It's a catastrophe. If you follow the then standard arguments-- and I go through these, again, in some more detail in those optional lecture notes-- then the theoretical model seemed to predict that you should get more and more energy per volume as you raise the frequency. And it should grow without bound. It should never stop. So it seemed like everything should be glowing infinitely all the time. We clearly don't see that. That argument came from two different pieces. Again, I go through that in a bit more detail in the notes.
But the high level summary is that, according to work by people like Maxwell and Boltzmann, very well established by the time Planck began to worry about this stuff, in thermal equilibrium, very generally it had been argued, each degree of freedom, each way that these either the light waves or little bits of matter in that blackbody-- each of these things should share a kind of average energy proportional to the temperature to which the object had been heated. And with a constant known as k, we now call that Boltzmann's constant. This was known as the equipartition theorem, that each way in which a system could wiggle or move, each so-called degree of freedom, should have an equal average energy. That's step 1. Step 2, using Maxwell's treatment of light, of electromagnetic radiation, the argument was that the number of radiation modes, degrees of freedom per unit volume per frequency should go like nu squared. Combine those together, you get this expectation that the spectral energy density, u, should grow quadratically, grow as a square of the frequency without bound. It should rise without limit. So everything should be glowing all the time, giving off an infinite energy. That clearly is not what we see. So Planck, as I say, had special access to his colleagues doing these experiments in real time. He knew better than anyone that the real curves looked nothing like this constantly growing curve of the ultraviolet catastrophe. Everyone knew that these curves must begin to fall with increasing frequency. Planck had an extra insight, or extra information, not just that the curve should fall, but how it should fall the, actual shape of that curve, like this part here-- actually, this part here in wavelength. How that curve begin to fall as you go to shorter and shorter wavelengths or higher and higher frequencies? So he began to tinker. And in fact, very famously, on a particular date December 14, 1900, he presented a paper to his colleagues at a Physics Society meeting, in which he presented this form for this spectral energy density, that it should rise not as nu squared, like the ultraviolet catastrophe. But in fact, it should have this a bit more complicated structure. And this was really interesting because it would match very well both the arguments from people like Boltzmann and Maxwell in one region of the graph-- sorry, for very low frequencies, very long wavelengths, so over here on this real data, or over here on this plot. It converges to the original expectation. But then it has this natural turnaround so that at higher frequencies, at shorter wavelengths, in fact, you should have this gently decaying tail, this exponential tail. Both of those features are contained in this one expression, as you take the appropriate limits. He also had to introduce this new constant, h, which had not been introduced before. We now call it Planck's constant. That was at least in part to make his units match. The quantity kT has units of energy. T is the temperature, k Boltzmann constant, together form units of energy. Nu here is the frequency, 1 over time, 1 over seconds, Hertz. So that doesn't have the same units as energy. So he added this extra kind of fudge called h to make sure the units would match and also to try to begin to match the actual quantitative shape of the data from his colleagues. 
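Putting those two pieces together in modern notation (a sketch using today's standard conventions; the 8\pi/c^3 prefactor comes from the mode counting just mentioned, and the details are in the optional notes): the classical equipartition argument gives a spectral energy density u_{classical}(\nu, T) = (8\pi\nu^2/c^3)\,kT, which grows without bound at high frequency-- the ultraviolet catastrophe-- while the expression Planck presented in December 1900 is usually written u(\nu, T) = (8\pi h\nu^3/c^3)\,\frac{1}{e^{h\nu/kT} - 1}. For h\nu much less than kT it reduces to the classical form, and for h\nu much greater than kT it falls off exponentially, which is exactly that gently decaying tail in the measured curves.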
So in this treatment, the very closing days of the year 1900, Planck introduced not only a new form for the spectral energy density-- and there's more on that in those lecture notes-- he also introduces this new universal constant we now call Planck's constant. It's this number, h, just a constant that really sets the scale for these departures based on the calculations we would make from Newton's physics or even Maxwell's equations. How much do these newer ideas of what becomes known as quantum theory-- how much should they depart? Or when should we expect a deviation? And Planck's value that he inferred from his colleagues' measurements at the PTR was actually remarkably close to the modern value we use to this day. Here's a modern value. Planck's inferred value was pretty close. In units that are familiar for macroscopic everyday human activities, the so-called CGS system, where we measure distances in centimeters, masses in grams, and times and seconds-- that's where the CGS comes from-- energy, as you may recall, is given this unit of erg, which really is just 1 gram centimeter squared per second squared. In those CGS units, h is incredibly small. It's exponentially tiny in those natural units for human-scaled affairs. So h is small. If we consider, for example, drop a single grape from a height of 5 centimeters, roughly 2 inches, and then ask what's the kinetic energy that grape will acquire over that very short journey, let the grape fall from rest a total of 5 centimeters. Multiply its acquired kinetic energy times the duration during which it was falling, roughly a second or so. And you get an enormous amount of energy times time, if we measure in this Planck-- in these Planck units. So just that tiny little grape falling for about 1 second from roughly 2 inches high has 10 to the 28 of Planck units of energy time. Some of you might know that this particular combination of units, or energy times a times unit, erg seconds, is also the unit in which we measure things like angular momentum. What's the momentum of circular motion? So again, let's take a household example. Instead of dropping a grape, imagine some really annoying housefly buzzing around your head at some radius of a couple centimeters from your head, super annoying. What's the angular momentum of that tiny little fly as it just buzzes lazily around your head? Again, it will have angular momentum exponentially large if we were to measure it in units of Planck's constant-- again, around 10 to the 28. So this is not a scale in which we expect to find strange quantum features in our everyday life-- grapes, houseflies, automobiles, and so on. And yet, Planck could only make sense of that PTR data if he introduced this new scale, this new constant, very different from human experience, and yet not 0, some finite value, even if it's small, in our human terms. Now, one of the readings you had for today was by the historian Thomas Kuhn. Kuhn wrote a really fascinating book. Here's the book cover. And the article version that you had draws on the same body of work. And I find this super fascinating. So we know that Planck published that formula, the one that I showed before, where the frequencies go like nu cubed over e to the h nu over kT minus 1. We now call that the Planck spectrum. We use it all the time. We know exactly when he wrote it down and published it. What we still don't really know, or still is controversial, is what did Planck think he was doing when he wrote down that expression. 
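(One quick aside on those everyday scales before turning to that question. Here is a rough back-of-the-envelope check in Python; the grape's mass and the exact timing below are illustrative assumptions, not numbers from the lecture, but the conclusion is the same either way: measured in units of h, ordinary amounts of energy times time are astronomically large.)

import math

h = 6.626e-27          # Planck's constant in erg*s (CGS units)
g = 980.0              # gravitational acceleration, cm/s^2

m_grape = 5.0          # grams -- an assumed, illustrative mass
drop_height = 5.0      # centimeters, as in the lecture example

kinetic_energy = m_grape * g * drop_height    # ergs gained over the fall
fall_time = math.sqrt(2.0 * drop_height / g)  # seconds, about 0.1 s

action = kinetic_energy * fall_time           # erg*seconds: energy times time
print(action / h)                             # roughly 4e29 with these numbers

With these assumed numbers the ratio comes out near 10^29 rather than exactly the 10^28 quoted in lecture, but either way the point stands: the result is some thirty orders of magnitude above a single unit of h, which is why no quantum strangeness shows up for grapes or houseflies.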
How did Planck interpret his own equation? Here we are, yet again, thrown into this really, I think, delicious question of how a single equation could be subject to many often competing interpretations. And Kuhn was among the first to try to argue, in particular, that Planck's own route to that now very famous equation looked nothing like how we interpret the result. So when we derive Planck's expression-- like, for example, in my own notes or any textbook-- you might look it up in a modern textbook-- we say we get Planck's result for that form for this spectral energy density by making a new conceptual leap by requiring that the energy exchanged between matter and radiation can't take any old value. It can't be just any old value-- 6, or 6.01, or 6.02 in appropriate units-- but has to come chunked. It has to come quantized in these units of a particular size, Planck's constant h times the frequency of the light involved, rather than treating the energy exchange between matter and radiation as continuous the way we would if we were describing light, for example, as Maxwellian waves. It shouldn't matter what the frequency is of that wave. We know how to calculate the energy associated with any process. And there should be no limit, no chunking or quantization. So Kuhn was asking very directly, did Max Planck think that's what he was doing in around 1900 when he introduced what we now call Planck's formula? And so just to give the highlight of Kuhn's article-- it's a complicated article. If it was confusing, that's OK. I want to talk through what I consider the headline news, the main takeaways from Kuhn's argument. It's a very complicated argument. Here's what I consider the biggest revelations. So Kuhn argues that, in Planck's original derivation, this now very famous canonical formula, Planck fixed the total energy of the system, all the imagined little resonators or make believe molecules in that blackbody-- he fixed the total energy of that system to be an integer number of these units h nu. But according to Kuhn, Planck did not fix each actual resonator, each degree of freedom of that system, to separately have this value h nu so that Planck was fixing the total energy for convenience as a kind of accounting trick, not fixing the unit of each moving part. Whereas today-- again, as I go through in some details in those notes-- we derive Planck's result by fixing the allowable energy of each individual part, each E sub i, so to speak. OK. In Planck's description, as Kuhn reconstructs it, the energies for each of these subsystems, each kind of moving part according to Planck's original derivation, were actually assumed to fall within some continuous range between E and E plus delta E, rather than fixed, snapping into place in these quantized units. So Kuhn goes on. It's a very complicated argument. The book is even more complicated. But according to Kuhn's analysis, Planck uses bins of size-- we might use the Greek letter epsilon-- bins of size h nu for his accounting-- but again, only to say how many of these resonators or oscillators-- again, roughly speaking, moving parts-- how many of them had energies between 0 and epsilon, within 1 unit, between 1 and 2 units, not that they had to have exactly 1 or 2 units, the way we'd say today. Moreover, Kuhn goes on again-- he does this in more detail in this longer book. Years later, six years later, Planck was giving lectures at the university about this topic. And he still spoke in his lecture notes of a kind of continuous rather than quantized energy exchange.
It wasn't just a momentary lapse in December of 1900, argues Thomas Kuhn. It was years and years during which Planck thought about his own equation quite differently than the way we would today, even though the equation itself hasn't changed. Now, that's Kuhn's argument. He has lots of evidence that I found very interesting. It's also really complicated. There's a fascinating much more recent article by the physicist and historian Michael Nauenberg you can look up that actually draws the opposite conclusion, that Planck had a much more similar notion to what we have today, even as early as 1900. It's really complicated. So I don't know which of these analysts is right. What I do know is it was really, really not clear, even to Planck's own contemporaries, what exactly to make of this new formula. And Planck himself was not shy about that. He wrote to a colleague decades later, 31 years after deriving his now famous equation-- he wrote, what I did back in 1900 can be described as simply an act of desperation. He was trying to match the updated data from the PTR. He knew that curve couldn't just keep rising forever. He was desperate. Introducing these bins of fixed quantized size, Planck continued, was purely a formal assumption. And I really did not give it much thought, which is interesting. So let me pause there. That's what I wanted to share about Planck and the blackbody spectrum. Any questions about that? Ah, Jade asked a good question in the chat. How did they measure the spectral energy density? Very good. So the short answer is this is the kind of thing that the experimentalists at the PTR were very, very good at. So they could use basically things like diffraction gratings-- and they might have even had access to fine prisms-- to measure very specific energy outputs in very specific wavelength bins. So that's the kind of thing they were getting very good at. I think they did mostly use diffraction gratings. I'm not positive. So they had a cavity, an evacuated chamber, that they could heat up in an oven with a little hole in the side. So mostly, they had this empty space that would heat up. And they would let a-- it was a cavity-- let a little bit of this light sneak out a little window. And that's what was taking the place of this blackbody. So no light was coming in onto that box. It was sealed up like a metal trap, a cavity. A little light escapes through a little porthole. And so what you should be measuring is only the light due to this thermal radiation because nothing's reflecting on that stuff. That's how they made the blackbody in real life, in the actual experiments. Then they could subject that light that came out to very fine measurements by splitting it up into its colors with things like diffraction gratings or prisms, and then measure-- let's see. They must have measured the intensity, the brightness. They must have had some kind of photometers. And I'm not sure how they did that. We can look it up. But they were good at measuring very finely detailed wavelengths. And in fact, they were getting better over the 1890s in various parts of the spectrum. They knew some of the early data points early on, the ones that matched this rising curve that looks like it would lead to this runaway energy, this so-called ultraviolet catastrophe. The earliest data were at long wavelengths, small frequencies, where it really did go like nu squared. And what was most important, as everyone knew, was to measure the other end of the spectrum. That couldn't keep rising forever.
And the researchers, like Lummer and Pringsheim, were getting more and more data at the other part of the spectrum, at shorter wavelengths, higher frequencies, and watching just how that curve began to fall, what we now would recognize as this exponential tail. So they were doing that, again, with more and more precision. Excellent question. Johan asks a very good question. How did they do that without a graphing calculator? He was no Cambridge Wrangler. But Planck was a pretty well trained mathematical physicist. It was-- and the other fact is, it was painstaking. And also, how did he come up with these particular forms of his equations? You can read Thomas Kuhn's 400-page monograph, which I have forced-- rather, encouraged-- some of the TAs to do directly. I will say encouraged. I reread it every few years. It's really complicated. I mean, what was Planck doing, and when was he doing it? What did he think he was doing? That is really complicated. And other really smart, dedicated researchers like Michael Nauenberg come back to some of the same materials, the same obscure lecture notes, the same publications. He says, no, look very carefully at equation 25b. He does something else there. It gets really, really hard. What is clear, as I'll say actually in the next section, is that some of Planck's own contemporaries, like a still very young Albert Einstein-- Einstein was convinced that Planck actually hadn't gone far enough. So it's not just historians who debate this the better part of a century after Planck. Even some contemporaries who were trying to make sense of Planck's own argument thought that Planck's reasoning was, at best, muddled and maybe different from what they themselves thought. And that's why I like that letter that Planck wrote in the early '30s saying, yeah, I didn't know what I was doing, is essentially how I read that quote. I was desperate. So I think that's really interesting. Any other questions on the blackbody spectrum or Planck's formula? If not, let's go on and see what Einstein begins to do with that in that same famous year of 1905. So let's go to that next part. So now, we're talking about Einstein and what becomes known as the photoelectric effect. It turns out Planck was concerned about the interaction between light and matter. He wasn't even, in his own writings, talking about the propagation of light on its own, for which he just took, right off the shelf, Maxwell's by then quite standard treatment that light is clearly a continuous wave spread out through space. And so in 1905, just a few years later, young Albert Einstein, still, as we know, a patent clerk third class, began to think about the nature of light on its own, even when it's not necessarily interacting with matter. This was actually the first of these four kind of amazing or surprising papers that Einstein wound up submitting to the Annalen der Physik in that year, 1905. The first of them that he sent in to the journal, back in March of that year, was the one on what he calls light. Now, Einstein was thinking not only about Planck's work, though he was thinking about that. He had other recent experiments or descriptions of light in mind as well, one of which-- the one that he was even more focused on-- was a series of, again, puzzling experimental results that had just been coming out through the years up through 1902 by the German researcher Philipp Lenard. If that name sounds familiar, it's because we just talked briefly about Lenard in the previous lecture.
Lenard went on, years later, to become one of the front men, so to speak, of that Deutsche Physik movement, one of the people who began denouncing Einstein and relativity starting as early as 1920. Lenard was conducting these experiments on what became known as the photoelectric effect in the early years of the 20th century. The sharpest, clearest experimental results were published in 1902. He was recognized very early on for that work. He won the Nobel Prize in 1905, the very year that Einstein begins trying to come up with a theoretical explanation for these experimental results. So what were Lenard's results? Here again, in of cartoon form, are the fundamentals of the experiments that Lenard was pursuing. In fact, others around the world were doing similar things. Lenard had, in some sense, the cleanest data that forced the most sharp showdown with how to make sense of these results. Others were doing similar things. He had a very simple kind of apparatus. So these two blue pieces here represent metallic conducting plates. And across, Lenard could apply a voltage of an amount he could vary. He had a tunable voltage between his conducting plates. He would then direct an ultraviolet light source-- sorry, an ultraviolet light source onto one of those plates. So he has some source of ultraviolet light shining it on one of those plates. And under certain conditions, the light source, when it irradiates that plate, would eject, would kick out, some electrons. The electrons would then travel toward the other conducting plate, completing a circuit. So when the light kicked out electrons, ejected electrons from this so-called cathode, from that metal conducting plate, you could complete a circuit because the electrons would now travel through the intervening space and hit this plate. And you'd know you had electric current flowing because he hooked up an ammeter, a measure of electric current. So he knew he had current flowing. He knew electrons had been kicked out of the metal when the ammeter measured an electric current. And then he could use this tunable voltage-- he could change the amount of voltage applied-- to basically ask, how much energy were those electrons kicked out with, by asking how strong a voltage he had to tune up to block their passage. So when he applied basically no voltage, the electrons would come across the plate and complete the circuit. So how much countervailing voltage would Lenard have to apply to block their passage, a kind of repulsive electric force to basically repel the electrons away from reaching this far plate? So when will the current stop? Then he knows what's called the stopping voltage. And that tells him exactly how much energy the electrons had because he had to counteract that much energy, that much electrostatic force, to turn off that current. So now, he could start comparing the stopping voltage, which for him is a measure of the energy with which these electrons are ejected. So he can compare the energy of the electrons with the frequency of this light source. He shines light of certain frequencies within the ultraviolet range onto this metallic plate. And he measures the amount of energy of the ejected electrons. And again, in cleaned up form, the data started looking like this. It had a very specific, very striking shape. So on a plot of the energy of the ejected electrons versus the frequency of the incident light, the energy electrons seemed to rise linearly, but only above some threshold frequency. 
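In symbols (just restating the procedure described above): if V_stop is the smallest countervailing voltage that shuts off the current, the maximum kinetic energy of the ejected electrons is E_max = e\,V_stop, with e the electron's charge. Lenard's data then show E_max stuck at zero for light below some threshold frequency and rising linearly with the frequency \nu above it, no matter how bright or dim the source is made.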
So as the light was tuned to lower and lower frequencies-- the light that's being shone from this source-- there would be no electrons ejected at all. There would be zero electrons ejected, no energy crossing that gap. After you reach some very specific threshold frequency of the light, you start kicking electrons out. As you increase the frequency of that light, the energy that the electrons carry rises. The energy of the electrons seems to rise only with-- seems to depend only on the frequency of that light, not on the intensity. You can make this a brighter or less bright light source, holding the frequency fixed. And that had no impact on the stopping voltage, on the amount of energy required to stop those electrons from traversing that gap. That seemed pretty strange. Why was that so strange? Why was it strange to have a threshold frequency? And why was it strange to have the energy of the electrons independent of the intensity of that light, depending only on the frequency of the light? Well, again, if one took on Maxwell's by then quite standard description of electromagnetic waves, including in the ultraviolet part of the spectrum, the energy carried by those light waves from Lenard's device should have been proportional to the wave's intensity. The energy, script E here, was proportional to the intensity of the waves. The intensity, in turn, went by the square of those field strengths, the electric and magnetic field strengths. So why shouldn't Lenard have been able to get very large amplitude fields, high intensity fields that happened to have a long wavelength, that were at a low frequency? Those should carry as much energy as a higher frequency wave of smaller amplitude. Meaning, why couldn't he tune the intensity up and still get electrons ejected all the way down here? If light really acted like Maxwellian waves, why should there be any threshold frequency at all? You could always tune up the intensity of lower frequency light, was at least the expectation. Likewise, according to Maxwell's work, turning the argument around, once you do kick electrons out above that threshold, why should the energy that they carry be independent of what people thought would be the energy imparted by the light? The energy imparted by the light, again, should vary like the intensity. And yet, the energy that these electrons get kicked out with seemed not to depend on the brightness, the intensity of the source, only on the frequency. OK. So that was becoming increasingly clear experimental data, so clear that Lenard received the Prize just a few years later, the Nobel Prize. But it was by no means clear how to make sense of that combination of results. So Einstein's first paper of this remarkable year of 1905 was trying to tackle this directly. In fact, in a letter to one of his friends from the Olympia Academy-- his friend, Conrad Habicht-- here, Einstein actually called this the most radical of all the papers he was working on that year. He thought this one was the most strange or most unexpected-- more so than relativity, more so than all the rest. So what he offers, he calls a heuristic explanation. You see in the title, [GERMAN], a heuristic kind of hypothetical or suggestive explanation. He's not saying, eureka, I found it. He's being maybe uncharacteristically a bit more modest here. He says, here's one way to think about these results. A heuristic idea would be to say, what if light itself were quantized?
What if light traveled through space not like a continuous spread out wave, like Maxwell's theory would suggest, but instead like a collection of localized quanta? Here's a quotation from this paper in 1905 translated to English. He writes that, it seems to me these observations about the production of cathode rays-- those ejected electrons off that cathode by ultraviolet light-- he's now talking about Lenard's experiments. It seems to me these observations are more readily understood if one assumes the energy of light is discontinuously distributed in space. The energy of a light ray spreading out from a point source is not continuously distributed over an increasing space like a wave would be, like a Maxwellian wave, or an ocean wave at that, but rather consists of a finite number of energy quanta, which are localized at points in space, which move without dividing-- they literally can't be further divided. They are quantized-- and which can only be produced and absorbed as complete units. These become known as light quanta, as little particles or corpuscles of light, rather than what was by then, going back to 1800 let alone the 1860s, the taken for granted assumption that light was a wave, going all the way back to, say, Thomas Young's work or the work of several French scholars. So Einstein is suggesting heuristically-- he's not saying this must be the case. He's saying it's suggestive and worth considering that experiments like Lenard's might make sense if we change everything we know about light. So why would that help? Let's go back to this cartoon version of Lenard's data. Einstein was, in fact, following Lenard's experiments, the publications quite closely-- much, much more closely than anything like the Michelson-Morley experiment. He really was looking up Lenard's papers and looking at the graphs and tables very carefully. Einstein recognized that it wasn't only a linear relationship between the energy of those ejected electrons and the frequency of the incident light, but that the slope, in particular, looked awfully close to this new value that had just been introduced in a totally different context by Max Planck. The slope of this plot looked very much like Planck's constant, h. That's something that Einstein begins to recognize by going carefully over these experimental results. So that leads Einstein to think in the following way. What if each of these discrete little bundles of light energy-- what if each individual light quantum carried a fixed amount of energy proportional to the frequency of light with the constant of proportionality being this new constant introduced by Max Planck? What if light quanta, these indivisible pellets of light, had to carry a fixed, quantized amount of energy, not some continuous range that could have been any old value? If that's the case, then you have on this photocathode-- on this conducting plate in Lenard's experiment, you have raining down on it pellets upon pellets, each carrying a fixed amount of energy, h nu. So then you can imagine an individual light quantum, not some spread out wave-- an individual particle basically smacking into an individual electron. Now, it looks like colliding billiard balls. So how will the energy of the electron change after it gets smacked by a discrete pellet of energy, a light quantum with energy h nu? The electron should acquire a predictable amount of energy. It should absorb that energy carried by the light quantum.
And yet, there will be some extra energy holding that electron bound to its atoms and molecules in the metal. That became known as the work function. That's kind of like the binding energy. These aren't free electrons in space. These are electrons bound to a piece of metal. There's some intermolecular or atomic forces. Whatever they are-- and they might vary by material-- there's some function that is often called by the Greek letter capital Phi-- again, a kind of binding energy. And so the electron won't be kicked out of that piece of metal unless the energy it absorbs from that discrete light quantum exceeds that binding energy. The energy holding it in place has to be overcome or exceeded by the light-- by the energy transferred by that light quantum. If that's the case, then, of course, there should be a threshold frequency. Only when each individual quantum of light, each little pellet, discrete carrier of energy in that incoming light carries enough energy individually to overcome that binding energy or the work function-- only then would any electrons be ejected from that piece of metal. So the threshold frequency should depend on that binding energy, that work function, divided by Planck's constant. Basically, what's the threshold that would set this energy exactly to 0, as opposed to it having a negative energy, which would say it's bound in place? So you just solve for when does the electron just cross that threshold as energy. Ah, it's when nu equals phi over h. That would give you the threshold. Any additional energy, any higher energy carried by light quanta, as you tune the frequency of the incident light higher and higher, each electron will absorb more energy than that threshold. And it will continue to grow linearly with slope h. So the idea that Einstein puts together-- this heuristic idea, this suggestive idea in this 1905 paper-- is that the light shining on this metal plate is not some extended continuous wave lapping up on shore like a Maxwellian wave, but actually a shower of discrete marbles, each of which could collide like two-body collisions, like billiard balls on a billiard table, or on a pool table. They would have two-body collisions, each of which could then impart certain energy to their colliding electrons. So I like to think of it this way. Imagine electrons are like bucket full of ping pong balls. So ordinarily, they're bound in place. That's like that work function. The ping pong balls aren't free to move any old place. They're stuck. They have a certain kind of binding energy. Think of that as like that work function phi. Then you start chucking marbles at it. You imagine flinging marbles into that bucket. If the marbles individually carry enough energy, they're going to collide with individual ping pong balls. And if the marble's incoming energy is more than this binding energy holding the ping pong balls in the bucket, you'll kick a few out. That's the picture that Einstein has in mind. In that same actually quite long article, this very same article in 1905, Einstein also revisits Planck's derivation of the blackbody formula, that function that we called the spectral energy density, little u. And he rederives Planck's formula assuming that the radiation being emitted from that cavity radiation could be treated like a gas of these individual particles, each of which has a quantized energy E equals h nu. This is the modern derivation that we inherit and use today that's much more like the derivation I give in those brief optional notes. 
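Putting that reasoning into one line--a summary gloss, not a quotation from the paper--the most energetic ejected electrons should come out with

E_{\text{electron}} = h\nu - \Phi, \qquad \text{so the threshold frequency is} \quad \nu_0 = \frac{\Phi}{h},

where \Phi is the work function, the binding energy holding the electrons in the metal. Below \nu_0 no electrons come out at all; above it, the electron energy grows linearly with the frequency, with slope h.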
So in the same article, Einstein both offers a heuristic explanation for Lenard's otherwise quite puzzling experimental results and revisits and reinterprets Planck's own by now rather well-known expression for blackbody radiation, all from the starting point that light is quantized and each individual light quantum carries a unit amount of energy set by h nu. So we might figure, OK, he solved everything. Everyone must have been convinced, right? No, no, no. We've learned from this course already, things hardly ever work that way. And in Einstein's case in particular-- remember, he was still a little-known patent clerk outside the main elite centers of research-- this was facing enormous skepticism, even years after Einstein himself became much more prominent. It was really more than 15 years until the majority of members of the physics community really took this idea at all seriously. And here's, again, one of my favorite examples of that. Here's, again, our old friend, Max Planck, writing this, a letter of recommendation. He's trying to say how great Einstein's work is by 1913. Now, Einstein is no longer the unknown patent clerk. Planck was trying to convince his colleagues to offer a very prestigious new job to Einstein at the Prussian Academy of Sciences, which, in fact, Einstein would soon be offered. He would move to Berlin. And in his letter saying how great Einstein's work is, Planck says the following, "In sum, one can say there is hardly one among the great problems in which modern physics is so rich to which Einstein has not made a remarkable contribution." That sounds nice. "That he may sometimes have missed the target in his speculations, as, for example, in his hypothesis of light quanta, cannot really be held too much against him," which is to say, a lot of the work this person has done is very interesting. He's clearly mistaken about light quanta. But let's let him into our new club anyway. I find that terrific. OK. Let me pause there again to ask for any questions on the light quantum idea. Fisher asks, in Einstein's paper, did it make any nods to Newton's original thought that light was corpuscles? Very good. Honestly, I don't remember. That's an excellent question. Let me back up. Others might know as well. But it's worth emphasizing. The big picture question-- does light consist of waves or particles-- that actually has a really long history. I guess I gave a little preview of that, although I didn't linger on it, even in one of our first class sessions. For this class, you might know it from other readings or other classes. The reason it was actually a big deal around 1800, going back to early in Michael Faraday's own career-- it was a big deal to think of light as waves because, since at least Newton's day-- another century and a quarter before that, going back to the later years of the 17th century, the 1660s and so on-- Newton and other leading scholars had convinced themselves that light actually consisted of a stream of particles. Newton called them corpuscles, exactly as Fisher says. So Newton was convinced that light was corpuscular. We might now say that light was made up of a stream of particles. That's in the 1660s, '70s, '80s. It would become super influential. He writes a whole book called Opticks, Newton does, first published in around 1703 or something like that. It was like, OK, Newton has spoken. Light's particles. Got it.
And so it was a pretty big deal when, about 100 years after that, the consensus shifts, both in Britain, and in France, and ultimately many, many other places that, hang on, light seems to have all these wavelike properties like interference, and diffraction, and refraction, and all these things that helped set in motion the ideas of a light-bearing ether. If light is a wave, what is it a wave in? So you have these huge century-long shifts between assumptions about just what's light made of in general, let alone these specific questions about how it interacts with matter. So it was certainly well-known well before Einstein's day that Newton had been, let's say, a corpuscularist. But it's a great question, Fisher. And I'd have to go back to my own copy in English translation by [INAUDIBLE] to see whether Einstein paused to reflect on that. I don't remember. Interesting question. Gary asks, having written as a heuristic, did Einstein believe it'd be quanta in 1905 or only later? Very good, Gary. So my best understanding-- and again, I haven't looked as directly into this. But from my colleagues who have really pored over Einstein's, at the time, unpublished correspondence, his notes, and so on-- again, we have a huge documentary base of how his thinking was evolving in those years. It seems that Einstein thought this really, really was true. But for once, he kind of hedged rhetorically. So it seems that he was offering a heuristic explanation and going through actually a whole number of episodes or scenarios where this suggestive hypothesis might be helpful. But it seems that he thought that this was not merely a fiction. He was, however, not sure how to square this. In 1905, he wasn't sure how to square that with a century's worth of evidence that light had wavelike behavior like reflection, refraction, diffraction, interference in general. So it's not that Einstein said there's nothing ever wavy about light, it's always only particles. But he was certainly-- and in his own more private musings, he seemed to think that one had to adopt this it's-really-like-particles position in some scenarios more than others. He was groping toward what we see actually in the next part of today's class and what will chase us through upcoming class sessions. He was beginning to grapple with what would come to be called wave particle duality. He certainly wasn't calling it that yet in 1905. But as his additional documents from the time seemed to suggest, he was already grappling with the fact that we really, really need to treat light like waves for certain scenarios-- interference fringes being an obvious one. And yet, in his own heart of hearts, he seemed pretty convinced that this corpuscular or quantum nature seemed more than just a coincidence in other kinds of scenarios. And that would become more and more of a paradox to some or a challenge, even beyond Einstein himself. And so the real question actually becomes, why was Einstein hedging his bets rhetorically rather than saying what he really believed? Because we saw, in the paper he submits just a few weeks later on the electrodynamics of moving bodies, that he's certainly not shy about dismissing the ether. He's making other rather bold and almost irresponsible statements, given the standard assumptions of his day. He wasn't shy, in 1905. With this particular one, I imagine it was more like he didn't yet know how to square the circle here of what comes to be known as wave particle duality. So that's a great question.
And again, we'll see an example of that even in the next part today. And we'll see echoes of that throughout the next few classes. Good. Any other questions on the photoelectric effect, on what was Lenard measuring, on Einstein's explanation? It's all pretty clear? OK. Right. Good questions. Let's look at this last part for today. Why did anyone begin to take light quanta a bit more seriously? One of the most compelling sets of new information, new inputs for this came yet again from a laboratory in the United States, which, again, is itself still pretty unusual. This was among the next set of really significant experimental work in physics that got even the experts, sometimes very snobby experts in Western Europe, to pay attention. This was now decades after Albert Michelson and the Michelson-Morley experiment-- we're now in the early 1920s, not the 1880s. But this was, again, an example held up almost immediately for real acclaim. A series of experiments performed by the physicist Arthur Compton at the University of Chicago in 1922-- here's Compton. He looks like a kind of movie star. You might recognize the name Compton. Arthur's brother was Karl Compton, also a trained physicist. Karl Compton then went on to a different kind of career. He became the president of MIT starting in 1930. And he served for nearly 20 years. So a number of enormously consequential reforms at MIT were put in place by Karl Compton just at the moment or around the same time that his brother Arthur Compton was experimenting on the nature of light. And again, as I say, this is one of the sets of new kinds of inputs coming out of US-based laboratories that really began to make even experts in Europe pay close attention. So Compton, Arthur Compton, had not set out to test Einstein's hypothesis. He, like everyone, practically everyone in the field, remained pretty skeptical, much like in that letter from Max Planck from 1913. They figured Einstein, by this point, was pretty smart, had some amazing successes to his career. But this light quantum thing was probably not one of them. Compton instead was really interested in the behavior of high energy x-rays and how these x-rays would interact with little bits of matter. So he was conducting experiments by basically beaming x-rays, very high energy electromagnetic light-- as far as he was concerned, Maxwellian waves of very short wavelength, very high frequency-- and bouncing them off of graphite, basically carbon molecules. So he wanted to bounce them off the electrons in carbon. What Compton was measuring in particular was the change in the wavelength of those electromagnetic waves, the x-rays, after scattering. So you might be familiar, of course-- for any of these Maxwellian waves, we can measure the wavelength, the distance, the physical distance between neighboring crests, neighboring peaks, of, say, the associated electric field. So what Compton was doing in his laboratory was measuring the wavelength prior to scattering and the shifted wavelength following scattering. So the prime here means after the scattering. He knew the wavelength ahead of time. He could control that with his apparatus. Very much like the PTR, he could use very finely spaced diffraction gratings and similar techniques to measure the shift in wavelength after the x-rays have basically scattered off these electrons. He found that there was a shift. The wavelength following collision was not equal to the wavelength prior to collision.
And there was a very specific angular dependence: the angle at which these x-rays bounced off, this angle theta, was somehow related to the degree to which their wavelength changed. That is to say, the shift in wavelength went like 1 minus cosine of the light wave's scattering angle. And that's what he could measure. You can see here it's mounted on a wheel. He could actually pivot his diffraction gratings essentially and measure the scattered light at different angles as it bounced off that graphite target. So Compton was convinced that this could be made sense of using totally standard, by that point, Maxwellian electromagnetic waves. These were high frequency Maxwell waves scattering off of a certain target. And yet try, and try, and try as he might, he could not come up with any sensible explanation or accounting for this empirical relationship. As he kept conducting the experiment, measuring more and more carefully the shift in wavelength with the scattering angle, he couldn't come up with any kind of theoretical explanation that would account for that shift-- until, in yet another act of desperation much like Max Planck's, he then grudgingly tries on-- takes on this Einstein hypothesis or heuristic suggestion. Only when Arthur Compton, again kind of grudgingly, adopts this suggestion from Albert Einstein to treat light-- in this case, the very high energy x-rays-- as little collections of discrete particles each with their own discrete packets of energy rather than as Maxwellian waves-- only then can he make sense of this empirical pattern in his laboratory. So here's the diagram from his actual published article in the Physical Review. It was submitted in the very closing days of calendar year 1922. It was published spring of '23. So here's the actual illustration from his article. And again, in the accompanying optional lecture notes, I go through the algebra a bit more slowly. So I'm going to show you where it gets us. Don't worry if some of these steps are hard to do in your head. They're hard for me to do in my head. Take your time, if you'd like, and go through the algebra in the notes. So what Compton winds up doing is treating the interaction between the incoming x-ray-- that's what's here. He now, by the published version, has decided to call this the incident light quantum. He's already now borrowing Einstein's ideas. Pardon me. So here's the incoming x-ray. And how would you characterize the energy or the momentum of that incoming source of light? That's what Compton starts with, before collision. So even by using Maxwell's theory, people knew they could calculate an associated momentum carried by a light wave as related to things like the Poynting vector, if you want to get super fancy. But basically, Maxwell's equations suggested the momentum carried by radiation, by light, is equal to the energy carried by that light wave divided by its speed. That was, again, a classical result, even from wave theory of light. And now, Compton starts adding in this new heuristic stuff from Einstein. What if each individual light quantum-- and now, I'll be a little anachronistic and just use the term photon. That's not the term that Compton used. It gets introduced soon after Compton's results. Within the 1920s, it had already become common to use the term photon, as we still do today, to describe these light quanta. So now, let's take on Einstein's suggestion that each individual photon of light carries a quantized amount of energy proportional to its frequency.
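In symbols, again as an editorial summary of what was just described: Compton's empirical relation was

\lambda' - \lambda \propto (1 - \cos\theta),

and the classical Maxwellian result he starts from is that light carrying energy \mathcal{E} also carries momentum p = \mathcal{E}/c, while Einstein's suggestion assigns each photon an energy E = h\nu.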
Then let's use the usual relation for any wave between frequency and wavelength with the constant of proportionality, the speed. So now, you can come back to this Maxwellian expression for the momentum carried by a classical light wave, add in this new Einstein-like quantum thing, and come up with an expression for the momentum carried by each individual photon, or light quantum. And that would be the energy divided by its speed, h over lambda. Remember, that's not where Compton starts. That's where he gets, out of desperation. So now, Compton has an expression for both the momentum and the energy of this light quantum prior to scattering. He's going to treat this, as I say, like a two-body problem, as if these were two billiard balls rather than like a wave scattering off of a little chunk of matter. So prior to the collision, he's going to say this electron-- over here, this target, this little dot here-- is just sitting there at rest. Its momentum is zero. It has no velocity. And it has some rest energy, mc squared. Remember, by 1922, Einstein's work on relativity was indeed much better known and seemed to have passed a number of tests. So Compton had no qualms about borrowing Einstein's work on relativity, including things like this rest energy, mc squared, even though he was not going into this convinced of the light quantum stuff. So now, we have the ingredients we need to describe the two scattering billiard balls, the two objects, prior to collision. We can characterize the energy and momentum of the incoming x-ray as a collection of discrete pellet-like light quanta. We have the energy and momentum before collision of the target, this electron just sitting still about to get smacked. After the collision, the light quantum gets scattered off at some angle, theta, compared to its incoming direction. That's the angle theta here. It now has, in general, some different wavelength. So the momentum of that photon-- Compton now taking on Einstein's work-- is going to be given by some universal constant. That hasn't changed. That's just Planck's constant. But the wavelength indeed might have changed. In fact, he was measuring exactly that change in wavelength. Likewise, the energy of the photon therefore will change because its wavelength will have changed. So he has now an expression for energy momentum of a light quantum following collision. And likewise, the electron has now been smacked by some very high energy pellet, this high energy light quantum of the x-rays. So now, it will recoil in some other direction, some angle-- let's call it phi-- after getting smacked by the incoming light quantum. So now, it has some non-zero momentum. In fact, if it's a high enough energy incoming x-ray-- x-rays are very energetic light waves in general-- then the electron might actually acquire a relativistic momentum. Its recoil speed could, in general, be comparable to the speed of light. It could get quite a jolt from that collision. So again, Compton uses by then the standard relativistic expressions for momentum and likewise for energy. And again, I have a set of lecture notes that were optional on the Canvas site from when we talked about Einstein and relativity to go over things like E equals mc squared, relativistic momentum, and all that. So you have a chance to go back to those notes, if that's not familiar. For what we need to know today, this stuff was, by that point, standard issue.
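Collecting that bookkeeping in modern notation--a condensed gloss of the quantities just listed, not Compton's own table: before the collision, the photon has energy E = hc/\lambda and momentum p = h/\lambda, while the electron sits at rest with momentum 0 and rest energy mc^2; after the collision, the photon has energy E' = hc/\lambda' and momentum p' = h/\lambda' at angle \theta, and the electron recoils at angle \phi with relativistic energy E_e = \gamma m c^2 and momentum p_e = \gamma m v.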
Compton was doing nothing controversial here in adopting the relativistic energy and momentum for the scattered electron. OK. Now, he does what anyone would do when you have two particles colliding-- not a wave smacking into the shore, but two discrete little bundles of energy that collide like billiard balls on a pool table. One thing we have to do in any two-body collision is conserve energy, just as we would do in ordinary Newtonian mechanics. So what's the total energy coming in before collision? The incoming energy of the photon, this one here; the incoming energy of the electron is just this rest mass energy. And that total energy has to balance the total energy of the system following collision. So he has his corresponding expressions for the energy of the photon and the electron following scattering. And we can now fill in from his table what those values should be. Likewise, we have to conserve momentum. As we all know, momentum is a vector quantity. So we have to conserve momentum both in the x-direction along the original direction of travel for the incoming light quantum, as well as in the perpendicular direction, in this case the y-direction. And so again, this is done in a bit more detail in the notes, if this is hard to parse in real time. But the upshot is that Compton's doing completely standard stuff for two-body scattering, once he's done the non-standard thing of treating this light as an incoming pellet or particle. So now, what's so great is Compton has three equations and three unknowns. I've just now rewritten those expressions we just had. By treating the problem like ordinary two-body scattering between discrete localized particles, Compton has three expressions-- conservation of energy, conservation of momentum in each of the two perpendicular directions-- and he has three unknowns. He doesn't know what the scattered wavelength is. And he doesn't know what the angles of scatter are-- either the angle theta for the light quantum or the angle phi for the recoiling electron. So now, it's just algebra. He has three equations for three unknowns. And as you'll see in the notes, now it's really just a short number of steps to relate the change in wavelength, the lambda-prime following scattering, compared to the incoming wavelength, which he can control. He controls the x-ray source. And it gets related in an automatic way to that scattering angle in exactly the form that Compton had been finding empirically. Not only does he get the angular dependence correct, or at least matches his empirical results, he actually even finds a quantitative shift, this coefficient. The amount by which the wavelength should shift is now fixed by these universal constants-- Planck's constant, the mass of the electron, that target, and the speed of light. And so Compton is treating light like particles. And he's measuring changes in wavelength. Just going back to Gary's point from a few moments ago, Compton is now finding further examples where half the time he has to pretend that light is just a wave, because he's measuring wavelike properties-- like the wavelength, which is necessarily a wavelike property; it's the distance between crests of a wave, after all. That's what he's controlling and measuring. That's his empirical input-- a wavelike quantity, a shift in wavelength. And yet, he's accounting for that by talking about a discrete particle scattering off another discrete particle.
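For reference, here is the standard modern form of those three conditions and where the algebra lands--a sketch of the usual textbook route, not a transcription of Compton's own notation:

\frac{hc}{\lambda} + mc^2 = \frac{hc}{\lambda'} + \gamma mc^2 \quad (\text{energy})

\frac{h}{\lambda} = \frac{h}{\lambda'}\cos\theta + \gamma m v \cos\phi \quad (\text{momentum along } x)

0 = \frac{h}{\lambda'}\sin\theta - \gamma m v \sin\phi \quad (\text{momentum along } y)

Eliminating \phi and v gives

\lambda' - \lambda = \frac{h}{mc}\,(1 - \cos\theta),

exactly the 1-minus-cosine dependence Compton had been measuring.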
And so this becomes known as the Compton wavelength, this combination of fundamental constants that tells us how much the wavelength of the scattered light should shift as one changes the angle-- what's the overall scale by which this wavelength should shift. And so this becomes a kind of yet another hybrid or blend of particle-like and wavelike ways of reasoning about light. As I mentioned I think in the lecture notes for today, this work, much like Philip Lenard's, was immediately greeted as very important. Compton received a Nobel Prize, I think, by 1927. The paper was published in spring of 1923. So much like with Lenard's work, just a handful of years passed between the very first publication of these experimental results and the work being honored with the Nobel Prize. So people were paying attention to Compton's work very closely, very carefully in real time. And it was this kind of work that began to convince even the remaining skeptics that Einstein's heuristic suggestion that light should be treated at least in some aspects like a collection of particles-- that finally begins to gel and achieve something like community agreement, whereas with Einstein's earlier expressions that had been lacking. OK. Let me sum up. Then we'll have time for some more questions. So these three moments in what became known as old quantum theory-- each moment is physicists grappling with the nature of light. And remember, this is not happening in a vacuum. To begin to understand why anyone cared at all, let alone invested with such priority in these particular experiments, that really does go back to the larger framing of a newly unified country of Germany, decisions by its leadership to invest in industrialization to make new kinds of places like the PTR and so on. It's within those spaces for specific industrial applications, like electric street lighting, that they were really committed to studying things like blackbody radiation and what it might reveal about universal properties of light and matter. Max Planck had just moved to Berlin as this full professor of theoretical physics, still a pretty rare job to hold within any German university. He was close by to the PTR. And he was getting updates from his colleagues, sometimes day by day, with their increasingly precise measurements of that spectrum, the pattern of radiation emitted when you heat up any material to a sufficiently high temperature. No matter how he got there, by the end of December, Planck had introduced this now-famous expression for the spectral energy density. To get there, he had introduced this new constant of nature. We now call it Planck's constant-- very, very tiny on human scales, and yet not zero. And then we saw that whatever Planck thought that meant, within a few years, Einstein takes that up and, in a sense, treats it more seriously than Planck himself had even done. So Einstein not only rederives Planck's result from a very specific conceptual starting point-- what if light consists of collections of discrete quanta, soon to be called photons? And that same new concept, new set of ideas helps Einstein make sense of experiments like Lenard's on the photoelectric effect. However, as I've emphasized many times, that seemed to be compelling to Einstein in 1905, but to very few others for years and years later. And it really took the better part of two decades until consensus began to converge around this idea of light quanta, or photons.
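As a small numerical illustration--this sketch is an editorial addition, not part of the lecture--one can plug in the standard values of the constants and check the scale of the effect:

```python
# Compute the Compton wavelength h/(m_e c) and the predicted shift
# lambda' - lambda at a couple of sample scattering angles.
import math

h   = 6.62607015e-34    # Planck's constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
c   = 2.99792458e8      # speed of light, m/s

compton_wavelength = h / (m_e * c)  # roughly 2.43e-12 m, i.e. a few picometers

def compton_shift(theta_degrees):
    """Wavelength shift of a photon scattered through the given angle."""
    theta = math.radians(theta_degrees)
    return compton_wavelength * (1.0 - math.cos(theta))

print(f"Compton wavelength:   {compton_wavelength:.3e} m")
print(f"Shift at 90 degrees:  {compton_shift(90):.3e} m")   # equals the Compton wavelength
print(f"Shift at 180 degrees: {compton_shift(180):.3e} m")  # twice the Compton wavelength
```

A shift of a few picometers is tiny, but it is a sizable fraction of an x-ray wavelength, which is why Compton could resolve it with his finely spaced gratings.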
A very important piece of that was this new set of experiments by Arthur Compton at Chicago, where he could only make sense of his own new results by doing this kind of dance, by talking about x-rays as waves with wavelengths, and then accounting for the shift of those wavelength properties in terms of the particulate, or quantum-like scattering of discrete bodies. So I'll stop there. We've got time for some more questions and discussion. So let's see. Stanley writes, is there a reason Compton was able to measure-- was able to assume energy was conserved? Oh, very good. How did he know energy was not lost? In a sense, that was actually an assumption, Stanley. It's an excellent question. And in fact, it's a very powerful question because, at around the same time, other leading figures, like Niels Bohr himself-- almost exactly the same time, by 1923 and '24, in fact. Niels Bohr, in responding to other curious features of recent results, suggested that maybe energy is not conserved at the scale of atoms and parts of atoms. Compton didn't go that far. And most of his colleagues never went that far. Even Bohr switched back and said, nope, energy should be conserved. So Compton was basically taking on board the assumption-- just an assumption-- that energy is conserved always even at the atomic scale. That was not proven. That was an assumption. What Compton was trying to do was basically make as few changes as needed to a straightforward way of analyzing the scattering data. How would any of us analyze, say, the collision of two balls on human scales, two billiard balls, a baseball and a baseball bat, or any of these things? We would assume energy is conserved. We would assume momentum is conserved in each of its relevant components. And that's what Compton is doing. And that was actually what most physicists continued to do. Although, you're quite right to point out, Stanley, that really was an assumption. It wasn't written in stone. It wasn't guaranteed. Nowadays, when we look back, we have tons and tons of reasons to think that energy really is conserved, even at the atomic scale, even when we think about light quanta. That was by no means hard and fast evidence at the time. Compton assumed it. And that helped him get the results that he otherwise was aiming to explain. Excellent question. Obi asks, it's interesting that Einstein accepted this duality because he rejected the ether before because it required conflicting explanations. Obi, I agree. That's an excellent point actually. You're totally right. So one might have expected him to say, the assumption of Maxwellian waves is merely superfluous, right? That was the same language he used about the ether. You've got me back to Gary's point from earlier. I wonder if Einstein didn't make that move here because he himself knew that there were things that he thought wavelike properties really could explain. I think Einstein had convinced himself, by 1905, that invoking the luminiferous ether didn't help people understand anything that it was meant to explain, that the ether was meant to explain. Thinking about light as waves still had explanatory power for things like interference, diffraction, refraction. So I think he was-- I'm assuming. We can go back and look more. And other scholars have looked more carefully at this. My assumption is that Einstein didn't want to give up on waves as never being helpful for thinking about light, because of this body of evidence for which he thought a wave explanation really was helpful.
Whereas he had convinced himself, partly from his reading of Ernst Mach and so on, that invoking an ether seemed never to be helpful. Wavelike behaviors-- there were all kinds of phenomena, even in casual daily experience, let alone in precision experiments, where things like interference fringes and so on were essential. That's my guess for why Einstein called this heuristic, why he didn't renounce the notion of thinking about light as waves altogether. And again, Einstein becomes one of the most, frankly fearless-- one of the most conceptually daring, I think is the word to say, in really trying to pursue this wave particle duality over the next two decades. He really sits with it. He doesn't say, oh, this hurts my head. It must be one or the other. He actually digs in, in a series of developments throughout what becomes known as quantum mechanics. So he really found this delicious and enticing. Maybe those are the wrong words to use. But he certainly found it worth sitting with, as opposed to it must be A or B. And we'll see some examples of that again in the coming class sessions. Excellent observation, though. Any other questions? Since I have three minutes, I'll share the following. One of the things that was done by not Arthur Compton but by Karl Compton soon after he became president in 1930, president of MIT-- and this is for all you physics majors. And my apologies to every other engineering major. You know who you are. Compton decided that MIT had become too closely aligned to both engineering and industry. He thought, in Compton's own words, that MIT had sold its soul to industrialists. And so it was time for MIT to redouble on the basic sciences. It was at this time, starting in 1930, when Compton, Karl Compton, put in place a reform that every single undergraduate at MIT had to take two whole years of physics. You little wimps-- kidding, just kidding. You wonderful students only have to take one year of physics because MIT relented in 1965. For 35 years, Karl Compton's vision held. And literally every MIT student, whether they were management majors at the Sloan School-- I'm looking at Gary-- or economics majors, or history, or mathematics had to take two years of physics. It wasn't quite the Cambridge Wranglers. But we had two years because Karl Compton, the physicist, was convinced that we had to have a whole new emphasis on the so-called basic sciences. It's at this time when the laboratory requirements start coming in for biology, and chemistry, and mathematics. And it was only 35 years later, in the mid-1960s, when some colleagues long after Karl Compton stepped down said, maybe that's too much physics for every single student. And honestly, when I started teaching at MIT in 2000, some of my by then rather senior colleagues still remembered 1965 and thought everything had gone downhill since then. Those people have since retired. But MIT has never been the same, in their point of view, since we gave up forcing/inviting every single undergraduate to take two whole years of physics. So that's one of the legacies of Karl Compton, not Arthur Compton. It's true. Alex puts in the chat-- weren't the problem sets originally questions from industry? Many of them were, in fact, if not problem sets, then certainly research projects. There was a kind of research-for-hire program, which never really ended, but was a very high focus at MIT starting soon after the First World War, in the years right after 1918. It was called the MIT Plan or the Tech Plan.
And roughly 12 to 15 years into that, there was a kind of course correction. And people like Karl Compton said, we have to rejigger that. Fisher says, I assume this requirement naturally only applied to physics and-- oh, no. That's right. Fisher's exactly right. So the requirement of two years of physics coursework applied only to physics for every single undergraduate. And grudgingly, if you had to take chemistry or biology, fine, that was required. You have to take laboratories. You had to take at least one year of mathematics. So we can recognize parts of our so-called GIRs to this day survive from Karl Compton's era, even though critically the one horrible moral lapse, according to some of my colleagues, was dropping the two years of physics in place of only one. I leave you with that thought. I invite you all to take more than one year of physics. I think many of you have been anyway because you're physics majors. That's Karl Compton. We'll actually talk more about Karl Compton at MIT in the coming weeks when we talk about MIT, and the Second World War, and radar. We'll hear Karl Compton's name again soon. In the meantime, we'll pause and sit with Arthur Compton's results for which, again, there's more of the algebra in those notes, if that went by too quickly. Any other questions about this material? Anyone wish that everyone had to take two years of physics? Or any other comments on today's class? If not, I'll leave it there. I'll wish you good luck wrapping up your paper 1 draft. Please don't forget, it's due this Friday. And we'll pick up the story of old quantum theory again with our next class. Thanks so much, gang. Stay well. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_20_A_Conservative_Revolution_QED_and_Renormalization.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: With today's class, we're now transitioning into the last main kind of unit thematically for the course. And so the last main section is trying to follow one set of threads, a particular set of ideas and developments weaving from the end of the Second World War toward more recent time periods. And I would joke and say we're going to follow what are objectively the single most important developments in the history of human thought. That might not be how everyone reads it. There are things that I find actually really cool and exciting, so that's why I chose them. I'll just be honest, this is my favorite. And so this last unit, we're going to be looking at some examples within-- trends within postwar physics, looking in particular at a merger that gets cobbled together between high-energy physics, nuclear and particle physics, and certain ideas about astrophysics and cosmology. So there are many, many other exciting developments then and now. Of course, I actually don't mean these are the only important ones, but these ideas are themselves, I think, still very exciting, and they also offer a way to trace through some of the further embedding of very particular, sometimes very esoteric-sounding ideas in concrete, changing institutions, social contexts, political features, and all the rest. So we can read these developments in high-energy theory and cosmology and astrophysics as taking place in shifting contexts by real people, much as we worked so hard to do with the first and the earlier sections of the class as well. So that's where we're going to launch in now for our last main unit. And again, if you have any questions, of course, please don't hesitate to reach out to me. So I think I'll go ahead and start sharing screen, and we'll launch into this last part of our adventure. So today, we're going to focus on some developments that would begin to coalesce very soon after the Second World War into what came to be called high-energy physics. That wasn't quite what people called it at the time, but we recognize it today as heading toward things like particle physics or high-energy phenomena. And we're going to see that this wasn't-- as you might expect, this work was not unfolding in a vacuum. In fact, it was tethered in some surprising ways to developments during the Second World War. So we really do see, again, a pivot from that middle unit of the course as we head toward the last unit. So our three main steps for today, we're going to actually turn the clock back a bit. We're going to go back to the early days of quantum theory, even before the Second World War, and we'll see that there was a lot of work being done in the 1920s and '30s. And then for the second and third parts of today's class, we're going to see how some of those efforts were spun in new directions, how they got a kind of unexpected series of inputs or even jolts from many physicists' experience during the Second World War. So that's our plan for today. So if you remember from not too, too long ago in our class, we were looking at what came together in rapid order around 1925 and '26 that came to be called quantum mechanics. We saw Heisenberg's work on what came to be known as matrix mechanics, and very soon after that, Erwin Schrodinger began working on wave mechanics.
And before too long, several people had shown that these two very different-looking approaches actually were mathematically relatable. So we had something called quantum mechanics in hand really by 1926, '27. Right on the heels of those developments, many of those same physicists began right away trying to apply these new ideas and the new formalism to radiation. Not just to say the structure of an atom, like the Bohr atom as it gets rethought en route to things like Schrodinger's approach to hydrogen, but actually trying to apply some of those same conceptual ideas even to Maxwell's treatment of light, to electromagnetic radiation. After all, as we did see a number of weeks ago, Einstein himself had suggested that light might consist of light quanta that, by this point, came to be called photons, Einstein suggested that, as we saw, all the way back in 1905. By 1927, by early 1927, Paul Dirac, the British physicist, some of whose work we looked at recently, he actually proposed a way to try to reconcile quantum theory with Maxwellian waves, with electromagnetic fields as early as 1927. This is where many historians now date the earliest stirrings of quantum field theory. In particular, Dirac's work came to be called Quantum Electrodynamics, or QED. It was a quantum treatment of Maxwellian electrodynamics. And you can see the first main article which really launches this was submitted already in February of 1927, quite early in the study of quantum theory more generally. So again, as we've seen many times, you would have seen this even before well before this course, physicists, including very mathematically adept ones like Paul Dirac, they'd known for a long, long time that when we have a arbitrary, very complicated-looking curve-- let's say some function that has some very strange-looking pattern to it, we can always decompose that arbitrary curve or that funny-looking curve thanks to Fourier analysis and treat it as the sum or the superposition of very regular waves, very purely periodic waves, of varying wavelength or frequency. And as long as we get the weights right-- 2 parts of this, 3 plus 4i parts of that, the weights could be complex numbers. But in general, if we weight these kind of basis vectors or waves of known properties and we tune the weights very carefully, then we can actually reproduce virtually any arbitrary-looking curve, and that's the foundation of Fourier analysis going back to the 18th century. That part is not new. What Dirac wondered was, could something like that be used, even in this quantum realm, for a funny-looking wave or a field extended in space like a Maxwellian electromagnetic field? Now his first thought, as early as 1927, was that the approach won't be merely or only Fourier analysis. After all, even Heisenberg had found, much to his initial confusion, that when trying to deal with these quantum mechanical ideas, certain quantities no longer obeyed ordinary rules, say, of multiplication, of these so-called commutators or commutation relations kept popping up where the product of two quantities would depend on the order in which they were multiplied. An x times p was not the same as p times x. And in fact, the difference between those two scaled with Planck's constant was an indication there was something inherently quantum mechanical going on. So Dirac realized-- or suggested-- that something like that must be going on with these quantum fields as well. And so it won't just be a Fourier analysis. 
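In symbols--a quick editorial reminder rather than anything from the slides--the Fourier idea just invoked is simply

f(x) = \sum_k c_k \, e^{ikx},

where the complex weights c_k (the "2 parts of this, 3 plus 4i parts of that") tell you how much of each purely periodic wave to mix in to rebuild the arbitrary curve f(x).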
You have to get some of this order of operations structure to come in, but he wanted to keep as much as possible of this quite familiar Fourier approach. And so he began introducing this notion of fields extended through space and time, as people had done going all the way back to Michael Faraday, which we looked at in some of the very earliest classes this term, but with a little tweak. It wouldn't just be a Faraday or a Maxwell field distributed through space changing over time, it would have a little structure-- so a little added structure to take into account this kind of new quantumness, let's say. The fact that these quantities would obey sometimes some funny commutation relations and all that. So he wanted to get as close as he could to Fourier analysis while recognizing that some things might need a little bit of extra mathematical structure. So if we look at this expression, the new parts that Dirac has added really can be summarized in this a and this thing that's called a dagger. These are clearly related to each other. They're complex conjugates of each other, or something very similar to that. And in Dirac's hands, these things weren't only numbers, like in an ordinary Fourier analysis-- these were actually quantum operators. They did stuff or represented specific kinds of changes to the physical system. So the one over here that is called a dagger, that could be interpreted, Dirac suggested, as creating one quantum, one photon in a very particular state, a state of given energy and momentum. And this a with a hat but with no dagger would be interpreted as taking away, as annihilating or absorbing one photon in a definite state of energy and momentum. And so-- and then you'd have different weights in front of each of those processes. So you have standard Fourier modes, just like in ordinary 18th century mathematics. These would be ways of representing, say, a photon in a state of definite momentum. You would have this kind of accounting to say, did you add or subtract a specific quantized excitation of that field? That's what the a and a dagger show. And then again, you have something more like Fourier weights. How many of those do you add in to get back your arbitrary field through space and time? So if we do this kind of accounting-- and it's clearly inspired by ordinary Fourier analysis-- then Dirac showed we could represent the state of this field-- let's say it's the Maxwellian electromagnetic field-- as a collection of these quanta in various well-defined states. The field was nothing other than a collection of quantum particles in states of definite energy momentum. And then if a field changed over space or over time, you could interpret that as changing your Fourier weights, as either emitting or absorbing quanta in these specific states. So it's a way of just doing accounting so far. Not too long after that, though-- really, just weeks after Dirac submitted this first paper, Heisenberg introduced his uncertainty principle, and not long after that, spring/summer of 1927, a number of people began trying to put these ideas together: to put together Dirac's accounting scheme, representing some field extended through space and time as a collection of quanta in particular states, but then to add in this very critical caveat from quantum physics, the uncertainty principle.
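In schematic modern notation--a gloss on the idea, not Dirac's exact 1927 expressions--the quantized field looks like a Fourier sum whose coefficients have been promoted to operators:

\hat{A}(x,t) \sim \sum_k \left[ \hat{a}_k \, e^{i(kx - \omega_k t)} + \hat{a}_k^\dagger \, e^{-i(kx - \omega_k t)} \right], \qquad [\hat{a}_k, \hat{a}_{k'}^\dagger] = \delta_{kk'},

with \hat{a}_k^\dagger creating and \hat{a}_k removing one quantum of momentum k, and the commutator carrying the same kind of order-of-multiplication-matters structure as Heisenberg's [\hat{x}, \hat{p}] = i\hbar.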
The uncertainty principle that we looked at together briefly in class a number of weeks ago concerned a trade-off between position and momentum along that direction in space, say delta x and delta p. It was very quickly shown by people like Heisenberg right around the same time. There were other pairs of quantities that had similar trade-offs that also obeyed an uncertainty principle relationship, one of the most important being the energy involved in a given physical process and the time over which that process unfolded. So if these two quantities could not be precisely specified at the same time to arbitrary precision, then much like position and momentum, there would be this inevitable trade-off. And in the context of Dirac's kind of Fourier-inspired accounting trick where the field could be decomposed as collections of quanta in various states, the way many people began to interpret the uncertainty principle in the light of Dirac's formalism was that certain quanta, certain excitations of that field could temporarily break the rules. They could temporarily violate the conservation of energy by borrowing some energy, delta E, as long as they paid it back sufficiently quickly so that there was a trade-off between the precision with which the energy of that state could be specified at a given moment in time and the time over which these conservation-violating processes could unfold. So again, if you go back to Dirac's accounting scheme, that suggested to many people, again, as early as 1927 and 1928, very early in this game, that the exact state of a quantum field at any given time could never be stipulated to arbitrary precision. There would always exist what came to be called quantum fluctuations. These were associated with something called virtual particles. Particles that were in some state of energy and momentum, but they had-- they were like-- they had broken the rules temporarily. They were virtual particles by exploiting this energy uncertainty relation to borrow energy to be in a particular state briefly and then pay it back correspondingly quickly. The first time I heard about this, I was actually in high school. I was really enamored of these popular books by the writer Isaac Asimov. I'm sure of you know Asimov's writings, including his famous fiction novels, science fiction novels. He wrote a number, I think, really very nice popular books-- nonfiction books as well. And I remember coming across this idea in this book here, part of his series called Understanding Physics. And he had, I think, a really lovely metaphor for this notion of virtual processes that could temporarily violate the conservation of energy, and he described them that they're really kind of like misbehaving schoolchildren. That when the teacher has her back, his or her back turned, children might break the rules of the classroom-- in this case, passing a note or something more extraordinary like standing on your desk and dropping your pants or whatever it might be. There could be all kinds of rulebreaking before the teacher turns back around. And given-- depending on the nature of the violation, the students would be wise to wrap it up correspondingly quickly. If you're only passing a note, that's not such a big deal. You might take your time with it. If you're doing something more outrageous compared to the rules of the classroom, you better get it done with a correspondingly shorter delta t. That always-- I found that actually very helpful. 
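To pin down the rule those schoolchildren are bending--an editorial aside in symbols: alongside \Delta x \, \Delta p \gtrsim \hbar/2 there is an energy-time version,

\Delta E \, \Delta t \gtrsim \frac{\hbar}{2},

read, roughly, as saying that a fluctuation borrowing an energy \Delta E must be paid back within a time of order \hbar / \Delta E.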
This notion that elementary particles might be like elementary school children temporarily violating what seemed to be the laws of nature. OK. So why would that matter in the context of Dirac's formulation of quantized fields? Again, people began putting these pieces together already by the late 1920s. So imagine trying to understand the behavior of a single electron that's traveling through space over time between locations, say, x1 and x2. And let's say it has a definite particular value of momentum. If these virtual processes could be happening all the time-- in fact, if they can't even be avoided because they arise from the uncertainty principle, then there's some chance that along the way during its journey, this electron would spit out a virtual photon, a virtual quantum of light, as long as it reabsorbed it some short time later. So it could be-- during its path from x1 to x2, there could be a so-called virtual process or a quantum fluctuation such that what we thought originally was just a single particle moving by itself might temporarily dissociate into the electron and some virtual quantum of the Maxwell field, a virtual photon, that borrows some energy and some momentum k and pays it back some short time later. What became so curious to these physicists early on is that this photon, according to this scheme, could borrow any amount of energy. The borrowed energy, the violation of the conservation of energy as dictated by the uncertainty principle could be any amount as long as it paid it back correspondingly quickly. So there was no-- there seemed to be no upper limit on how much energy or momentum could be borrowed, not quite stolen, by this virtual particle. So then the question became, could you calculate likely scenarios in the context of these temporarily rulebreaking virtual particles? So Dirac and others went on to show, based on the equations of how these fields seem to evolve, that the probability amplitude-- something like-- remember, like the square root of the probability, roughly speaking-- would go like 1 over k where k was the momentum to both emit and then reabsorb one of these virtual photons. And that makes some at least intuitive sense. That's like saying, it's less likely for a very, very high momentum state to be temporarily created and destroyed. It's more likely to have borrowed less-- to have broken the rules more softly, let's say. So that makes some sense. It was derived, of course, much more carefully than that, but you can make a little bit of sense of the nature of the relationship. Less likely for very high-energy virtual processes. But nonetheless, as long as the virtual photon paid it back correspondingly quickly, there seemed to be no upper limit. And so therefore, to calculate the likelihood for the electron to travel from x1 to x2, you have to sum up all of these possibilities. The virtual photon could have borrowed some value k or k prime or k double prime all the way, in principle, all the way up to infinity. So when you start summing up all of these possibilities, you actually get something called a logarithmic divergence, which is a fancy way of saying, the integral blows up. It goes like the log of k, but you're supposed to integrate k all the way up to infinity as if this photon had literally borrowed an infinite amount of energy, everything up-- every possibility up to infinity. So it looked like the likelihood for an electron to travel from point x1 to point x2 became infinite. That didn't seem to make much sense.
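A toy numerical sketch of that divergence--an editorial illustration, not the actual QED calculation: if summing the virtual contributions up to a momentum cutoff gives something like the integral of dk/k, then the total grows like the logarithm of the cutoff and never settles down as the cutoff is removed.

```python
# Toy illustration: the integral of dk/k from k_min up to a cutoff grows
# like log(cutoff), so letting the cutoff go to infinity gives a divergence.
import math

def toy_sum(cutoff, k_min=1.0, steps=100_000):
    """Approximate the integral of dk/k on a logarithmically spaced grid."""
    ratio = (cutoff / k_min) ** (1.0 / steps)
    total, k = 0.0, k_min
    for _ in range(steps):
        k_next = k * ratio
        total += (k_next - k) / (0.5 * (k + k_next))  # slice width over midpoint value
        k = k_next
    return total

for cutoff in (1e2, 1e4, 1e6):
    print(f"cutoff = {cutoff:.0e}   sum = {toy_sum(cutoff):7.3f}   log(cutoff) = {math.log(cutoff):7.3f}")
```

Each factor of 100 in the cutoff adds the same fixed amount (about 4.6, the natural log of 100) to the total, which is exactly the logarithmic growth being described.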
The way people began to describe it, again, by the late 1920s and early '30s was to say the electron's self-energy diverges. The energy due to its interaction with its own electromagnetic field blew up. If the self-energy is diverging, that was similar to saying its own mass was infinite, because as we saw many, many weeks ago, according to Einstein's relation, E equals mc squared, an energy and a mass are essentially interchangeable. So it was like saying that the quantum corrections to the mass of an electron became infinite and that did not make a lot of sense. So the very first idea, which a number of physicists began exploring, again, by the early '30s, was to simply cut off the integrals. Let's just say that these virtual photons could not actually borrow literally infinite momentum. Maybe there's some upper limit. If there's some upper-- some finite upper limit to the integral, then you should have a large, perhaps, but finite answer. But there are a couple problems with that, which, again, were identified very early in the 1930s. If you just put in by hand some upper momentum, up to which one integrates these expressions, then these equations would no longer respect the symmetries of special relativity. You would be inserting a preferred momentum such that if you just tried to describe the same process while riding very quickly on a train, you'd get totally different answers. Even though this was a series of equations that began with Maxwell's equations, which, of course, almost by definition, Einstein showed us, must obey the symmetries of special relativity. So how would you have a self-consistent description of elementary particles that could also be consistent with relativity if you just put in by hand some upper limit to these integrals? That didn't seem to work just mathematically. A little more deeply conceptually, what that suggested was that if you just put in a cut-off by hand at some highest but finite value of this borrowed momentum for these virtual particles, then any two observers would basically disagree on anything they tried to calculate. By losing these symmetries of special relativity, then there was no reason for any two calculations ever to agree for any two observers. That seemed like a pretty big challenge. Meanwhile, it wasn't only that an electron moving through space could temporarily emit and then reabsorb a virtual photon, the inverse process could happen as well, all within the scheme of Dirac's accounting system for quantum electrodynamics, or QED. So the inverse could happen in which a single photon is moving through space from, say, location x1 toward location x2-- sorry. But mid-course, it could actually temporarily, spontaneously emit and then later reabsorb a pair of virtual particles-- in this case, a virtual electron and a virtual positron, the antimatter cousin of the electron. This would conserve electric charge, it would violate conservation of energy only temporarily just like in that virtual photon case. So if the uncertainty principle combined with Dirac's accounting scheme was to hold, then this kind of virtual process should be happening all the time as well. So then one could ask about an electron's interaction again with itself, with its own electromagnetic field. After all, the electric and magnetic fields on this quantized view merely consisted of collections of photons. That was the whole point of Dirac's accounting scheme. How many photons in this state, how many photons in the other state?
That's what the Maxwell field consisted of on this new view. So if a single photon could spontaneously emit these electron-positron pairs, then that must be happening all the time in the vicinity of any charged particle because the electromagnetic field, that electric and magnetic field propagating outward from that charged particle could always be subject to these virtual processes, at least in principle. So the picture that emerges-- this one looks like a daisy, some sort of flower, we have the original electron you actually asking about, and then this spontaneous set of pairs of positrons and electrons that could be popping in and out of existence all the time because of the uncertainty principle. And this became known as vacuum polarization. Polarization, as you may remember, is just a fancy way of saying that there's a preferred direction in space. The idea was that all these virtual particles would become oriented in space because the positively-charged positrons would be attracted to the original electron, the negatively-charged virtual electrons would be repelled by the original electron. you'd have this kind of real structure in space, an orientation, or indeed, a polarization of space. So what seemed to be empty space itself, the vacuum, might not be so empty. After all. In fact, it might have a space-dependent structure to it, and that came to be called vacuum polarization. So one could, again, try to account for the size of this effect. What would happen if you tried to measure the charge of that electron? Well, you would no longer encounter just the electron, so this thinking went, but actually, the electron plus this cloud of these virtual charged particle pairs. So if you're measuring the electron from over here, you're really seeing the complete effect of the electron plus this screen of the virtual pairs around it. So you could then ask, well, what was the impact of the screening from those virtual-charged particles? You could try to do the same trick as the self-energy calculation for the electron. Count up all the ways that these virtual pairs could temporarily borrow some energy momentum, as long as they paid it back. They could, in principal, borrow an infinite amount of momentum. And so once again, the integral diverged. It diverged again logarithmically, it was soon clarified. So it was like saying that the electron had some charge of seven in appropriate units plus infinity, just like it had a mass of 5 plus infinity. It didn't seem to make any sense at all. And so the effect of these quantum fluctuations or virtual particles seemed, once again, to give an infinite change to the simplest level or approximate equations, and it didn't seem at all clear how that could be happening in real life. So this led to, as you might imagine, a lot of discussion. This was right on the heels of quantum theory, this is still in this period when the physicists, especially in Central Europe, were able to communicate all the time by letter, by conference, by visiting each other. They'd go to Bohr's Institute, they'd visit in Gottingen and so on. And so this became like the hot topic, one of the pressing concerns for this early kind of quantum theory generation. Many of the leading physicists-- again, mostly in Europe at this time-- who were confronting these new challenges of quantum processes that simply couldn't be calculated, couldn't yield any finite answers, they thought this pointed to a really big next conceptual revolution. 
Remember, the revolution, as they thought of it, a revolution of quantum theory, was like months old. They'd just gone through this seemingly huge, head-spinning rupture of the uncertainty principle and noncommuting matrices and all these things. In their own memories, in some of their own earlier years as younger physicists, they had tried to work through and incorporate the large conceptual shifts associated with relativity. They think, OK, we're due for another one. And this became a steady drumbeat interpretation by people like Niels Bohr and Werner Heisenberg. That it's OK. We've been stuck like this before. This shows that we're going to learn something really, really drastically different about how nature works to finally get around these virtual processes. So one example that Heisenberg floated by the mid-1930s, for example, was that maybe space is actually not continuous. That's a pretty big rupture. Maybe there's some shortest possible length, some fundamental shortest length in space, that space is not a continuous fabric as described by Einstein's relativity, but in fact, maybe there's a quantumness to space itself, a discreteness. If that were true, then there would have to exist some largest possible momentum that would go inversely with that length. And so there'd be a finite largest momentum. That means all those integrals would not actually go to infinity, they would go up to that physically allowed maximum momentum, and that would give a mathematical cure to these poorly-behaved integrals, but at an enormous conceptual cost. I mean, yet another pretty significant rethinking of space and time; the very fabric of a continuous space-time would no longer hold. So that seems like a pretty big challenge to relativity even if it did give a reason why these integrals might be finite. Other people said, well, maybe the problem isn't with the nature of space, maybe the problem is with our understanding of the Uncertainty Principle. This whole idea of these virtual particles, these quantum fluctuations in the state of an extended quantum field, that really seemed to come directly from the uncertainty principle, that delta E delta t being non-zero. So others said, well, maybe the fault is with quantum theory itself, and this seemingly bedrock feature of quantum mechanics, Heisenberg's Uncertainty Principle, maybe that isn't quite right and has to be either tossed out or somehow amended. And so the mathematical challenges of getting integrals to converge, which is a well-defined, though very hard, challenge, inspired in many of these folks, by the late '20s and throughout the 1930s, visions of some sort of grand rupture in how they would have to understand space, time, and matter at a very deep level. So I'm going to pause there. Some time for questions on that. I see something in the chat. Ah, good. So Sarah asks, what led to the idea that virtual particles existed? Thank you, Sarah. I should have clarified. This was a hypothesis. It seemed to be, at least to many of these folks, what looked like a straightforward corollary of the Uncertainty Principle, but you're absolutely right. One of the first questions they had was, did we misinterpret the Uncertainty Principle? Or is there some kind of theoretical out that will enable us to keep the Uncertainty Principle but not attribute this particular kind of process to nature? Another question was like, if they're virtual, if they're just stealing this energy and paying it back so quickly, how would we ever know?
It was the kind of-- if a tree falls in a forest kind of argument. So some people said scrap the Uncertainty Principle, others said this is kind of an unobservable theoretical story. And so if your equations blow up, then so what? That doesn't mean nature's doing this after all. And we'll come to see how that question-- are these virtual particles part of nature or only an artifact of the equations-- gets picked up again after the Second World War. So there were split decisions, split opinions throughout the 1920s and '30s on that. It's a good question. Fisher asks, in the case of a and a dagger used by Dirac, was that generalized to all systems, points in space, or just approximating them as a quantum harmonic oscillator? Good, Fisher. So the idea was indeed modeled on these harmonic oscillators, but the idea was that these would be, how we describe, extended objects through space and changing over time as these collections of these field oscillations, of these quanta. And it was actually Heisenberg-- I think it was Heisenberg who, a few years later, applied the same thing even to-- not just to light, like the electromagnetic field, but even to the field of an electron. So the idea that gets-- that starts coming together by the early mid-1930s, really building directly on this early work by Dirac and others, was that every kind of matter and radiation in nature-- that was the hypothesis, everything, electrons, positrons, protons, neutrons as soon as they were known about a few years later-- that they are all excitations, localized excitations, of some kind of underlying quantum fields. And that to characterize the state of the field, you could equivalently just add up how many quanta in this state, how many quanta in that state, how many-- much like Fourier analysis. And so you could change the state of the quantum electron field by emitting or absorbing electrons, you could change the state of the quantum Maxwell field by emitting or absorbing photons, and that would be like just representing a continuous function in Fourier space by manipulating the Fourier weights rather than dealing with the original function. That there should be a kind of equivalent accounting scheme. But that these were-- that the particles that people began measuring in cloud chambers, in cosmic rays, from cathode rays, that these were really localized excitations of a raw quantum field that somehow came first. The excitations were accidents of the state of the field, something like that. And it was, again, building on Dirac's early approach that others, prominently Heisenberg, said, this must happen-- what if this happens for every form of matter we've ever thought about? Electrons, positrons, and so on. So that was becoming more and more their common set of assumptions, and that's what led to things like virtual electron-positron pairs, but then the effects seemed to blow up. Virtual photons, but the effects seemed to blow up. So that's what led to this one step forward, an infinite number of steps back, which is not very good progress. Dealing with infinities is not usually very fun. Excellent questions. Any other questions on the virtual particles and quantum field idea more generally? That was, of course, a bit schematic, but I want to get at least a gist across. If not, that's OK, I'll jump back in. Because we're going to see how some people return to this question soon after the end of the Second World War. So the questions that are already being raised get seen in a new light.
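Before moving on, to make the oscillator picture behind Fisher's question a bit more concrete, here is a schematic, textbook-style mode expansion for a simple free scalar field-- my own illustration of the accounting scheme described above, not anything Dirac or Heisenberg wrote down in exactly this form:

```latex
% Schematic: a free field as a collection of harmonic-oscillator modes in a box of
% volume V. Each mode k has raising/lowering operators a_k^\dagger and a_k, and
% specifying the state of the field amounts to listing how many quanta sit in each mode.
\[
  \hat{\phi}(\mathbf{x}) \;=\; \sum_{\mathbf{k}}
  \sqrt{\frac{\hbar}{2\,\omega_{\mathbf{k}} V}}
  \left( \hat{a}_{\mathbf{k}}\, e^{\,i\mathbf{k}\cdot\mathbf{x}}
       + \hat{a}^{\dagger}_{\mathbf{k}}\, e^{-i\mathbf{k}\cdot\mathbf{x}} \right),
  \qquad
  \hat{H} \;=\; \sum_{\mathbf{k}} \hbar\,\omega_{\mathbf{k}}
  \left( \hat{a}^{\dagger}_{\mathbf{k}} \hat{a}_{\mathbf{k}} + \tfrac{1}{2} \right).
\]
```

Acting with a dagger adds one quantum to a given mode, acting with a removes one-- exactly the how-many-quanta-in-this-state, how-many-in-that-state bookkeeping just described.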
So as a reminder of some of the things we looked at more recently in class: during the Second World War, researchers all over the world, and certainly in many labs in the United States, got involved in all kinds of basically military-oriented projects-- in the United States most famously. That included the Radar Project and the Manhattan Project, but also dozens and dozens of other projects. And likewise for physicists, chemists, and many kinds of engineers in many, many other parts of the world. So at the MIT-based Rad Lab, at the Allied headquarters for radar, we saw that researchers-- physicists and many kinds of engineers together-- had really pushed the envelope in both producing and measuring electromagnetic radiation of a particular wavelength band. So the area where they made the most progress, where they focused most of their efforts, was in the microwave region of the spectrum, and in particular, at centimeter wavelengths, 1 to 3 centimeters, because you need to do things like pick out the periscope of a German U-boat, that kind of thing. So going to short wavelengths, as opposed to multiple meters' wavelength like in the radio band. The microwave spectrum was really operationally critical. We saw a bunch of examples of that during the Second World War. So after the war-- I still find this mind-boggling given how hard it is to do anything with the federal government these days as a researcher. After the end of the war, there was all this suddenly surplus equipment at places like the Rad Lab. And many, many researchers who had worked at the Rad Lab during the war were basically invited to bring along their pickup trucks and load up their trucks and just take away whatever equipment they wanted, almost that informally. It was only slightly better controlled than that. It was almost a free-for-all. There was an enormous amount of surplus equipment from places like the Rad Lab. In this case, extraordinarily sensitive electronics, finely tuned to be very accurate in both producing and measuring microwave-band electromagnetic radiation, things like filling this table in our own Building 4 back from the Rad Lab days. So it wasn't only researchers at MIT; this was researchers who were going back to their home universities throughout North America, many of them in the United States, to places like Columbia University, for example, and many other sites. So you had this very informal dispersal, I'd say very generous dispersal, of equipment that was once highly, highly coveted because it was so mission critical to the Radar Project, and it went to many, many labs across the country for postwar peacetime research in the microwave band. And so that's what helped to jumpstart two very significant sets of experiments that took place at Columbia very soon after the end of the war by Rad Lab alumni, by physicists who indeed had spent the wartime working intensively on radar. These two new experiments were called the Lamb shift, which we'll talk about first, and then something that became known as the anomalous magnetic moment. And I just want to underscore, both of these came from people using specific skills that they had honed during the wartime Rad Lab, and in each case, literally using equipment that was surplus from the wartime projects. Let's look at the Lamb shift first. It was named for the principal author, Willis Lamb. That's why it's called the Lamb shift.
So as you may remember, going back to when we looked at quantum theory a few weeks ago-- even according to the Bohr model of the atom from so-called old quantum theory, let alone the fancier and fancier treatments of the hydrogen atom from Schrodinger's equation by 1926, and, something we didn't talk about in this class, Paul Dirac's relativistic generalization of Schrodinger's equation, which he introduced around 1928-- according to each of these very highly quantitative treatments of the electron in a hydrogen atom, the energy of an electron in the 2S state and the 2P state should be identical. There might be a splitting if you put the atom in an external magnetic field, but just on its own, in the absence of an external field, the energy of the electron depended only on the so-called principal quantum number-- in this case, 2-- not on the angular momentum that would distinguish an S state from a P state. So according to every single iteration of quantum mechanics to date, say up to the 1940s, there should be literally zero difference in the energy levels of a 2S state and a 2P state for that lonesome electron in a hydrogen atom. And instead, very soon after the war, this first author, Willis Lamb, who was, by this point, a mid-career, accomplished professor who had spent the war working on radar, and his then-graduate student, Robert Retherford, were able to use some of the surplus equipment, and indeed the skills they'd really worked on throughout the war, to actually measure a very, very tiny difference in the energy levels of exactly those two states that should have had no difference at all. In fact, it was roughly one part in a million. It would have been literally unthinkable that such a tiny shift could even, in principle, be measured before these advances from the Rad Lab, with the new equipment and so on. If one didn't have this kind of exceptionally sensitive microwave-band electronics, then even if you thought there might be an energy difference of that order, no one would ever dream that it could be measured. That's what begins to change very dramatically after the war, when people like Lamb realized they had been measuring exquisitely sensitive shifts in the context of honing their radar devices all the time. Moreover, this energy difference would be right in the sweet spot for which their electronics had been designed during the war. And in fact-- I highlight this part from the very first page of this now-famous article-- they go on to say the great wartime advances in microwave techniques in the vicinity of 3 centimeters wavelength, by which they mean radar, made possible the use of new physical tools for the study of the n equals 2, principal quantum number 2, fine structure states of the hydrogen atom. He basically tells us, we can do this because of radar, and that was true. It was just amazing to see him write it. The energy difference corresponded to radiation with a wavelength just under 3 centimeters, exactly the sweet spot they'd been focusing so much effort on during the war. In particular, what they found when they did these experiments was that the energy of the 2S state, the spherically symmetric state for the electron, was just a little bit higher, by about one part in a million, compared to that of the 2P state. So the 2P energy state was consistent with these earlier 1920s calculations for the energy, and the 2S state seemed to be elevated just a little bit, by roughly one part in a million.
So just coming back to Sarah's question from the chat: this so-called Lamb shift, this tiny but now remarkably measurable difference in the energies between the 2S and the 2P states, began to get a bunch of theorists, after the war, to go back to this question of virtual particles. Could this energy shift, this now measured experimental result, be accounted for? Could it be attributed to the real physical impact of these previously hypothetical virtual particles? This was what looked like the first maybe direct evidence that could lead to a notion that virtual particles are part of the world and not only a pencil-and-paper hypothesis. So Willis Lamb presented these preliminary results at a meeting I'll talk more about later today, a meeting on Shelter Island, which is a tiny little island just off the north fork of Long Island. Many of you, I'm sure, have heard about Long Island near New York City. And there are some very much smaller islands nearby, one of which is called Shelter Island. And there was a meeting there, a private meeting of physicists, in June of 1947, right around the time that they submitted these results, and Lamb had been invited to present. So on the train ride back to Upstate New York from that meeting, the Cornell physicist Hans Bethe actually worked out a rough calculation of what came to be called the Lamb shift by trying to take literally, to take seriously, this notion of virtual particles. So we'll look at a sketch level of what Bethe's new argument was like. This was not the end of the story by any means, but it really helped reorient a number of physicists to think seriously about virtual particles in a new light. So his idea was that if these virtual particles exist, they're constantly popping in and out of existence. You couldn't not have them, because of the Uncertainty Principle. Then the original electron in that hydrogen atom would behave-- would be subject to something like Brownian motion. It would be constantly jostled by this bombardment by these virtual particles that are constantly borrowing and paying back this energy. It'll be nudged here, nudged there. So the effect of that on the behavior of the original electron in the hydrogen atom, Bethe suggested, would be a smearing out of the position of that electron, because it's constantly being bombarded. There'd be some quantum fluctuation in its position, delta r. So if its position is now smeared out within some finite range, then how would that impact its potential energy? After all, remember, in the hydrogen atom, you have a positively-charged proton exerting a Coulomb attraction, an electromagnetic attraction, on this negatively-charged electron. So if the position of the electron is being constantly jostled because of the real physical impact of virtual particles, then you'd have to Taylor-expand the potential energy. It would be what you originally thought it was, plus some correction due to the jostling, some delta r displacement. And just from the ordinary rules for Taylor expansion, that would then multiply the first derivative of the potential in the radial direction. There'd be a second-order correction proportional to the second derivative of the potential, and, in principle, on from there. Bethe next argues that this linear term, the one directly proportional to this smearing, will probably average out. That should cancel out, Bethe suggests, because the electron is as likely to get kicked downward toward the proton as to get kicked further away.
So on average, that displacement should vanish. Even though it's non-zero at any given moment, its average value would likely vanish. However, as is typical for any fluctuation phenomena, in classical physics as well as quantum mechanics, Bethe next argued that the square of that displacement will not vanish typically. That there'll be some non-vanishing average value for the square of that smearing, the delta r squared term. It might be tiny, but it won't actually average out to 0. If that's the case, then you really better look at into this term that it multiplies in your expansion, how do you incorporate the second derivative of the potential? Well again, if you use the particular potential of interest, the Coulomb potential, the second derivative will yield-- will have most impact really right at the origin, right at r equals 0. In fact, it's proportional to a delta function. And so in a picture form, what that suggested to Bethe was that because of this unavoidable jostling of the electron due to constantly being kicked and pushed around by these virtual particles, there will be a contribution to the effective potential that is a positive-- look at the sign here, it's a positive shift, some small amount but positive addition to the energy you otherwise would have calculated, and it'll only really matter very close to the origin. So whereas this function would, in principle, fall off all the way to minus infinity as you go toward the origin when r-- if r literally could become 0, this thing would diverge. Instead, you shift the shape of that function with this positive correction term that really would only matter very, very close to the origin, to near r equals 0. So then he goes on further, he's working this out on the train ride back up to Schenectady before he gets back to Ithaca. It's a pretty good train ride. Next, Bethe says, well, this really should preferentially affect electrons in the 2S state rather than the 2P state. Remember, Lamb had found not just that there was a difference, but it was the 2S state whose energy was ever so slightly raised compared to the quantum mechanical prediction. He said, oh, well that also might make sense. If the positive shift in the potential energy really only matters very near the origin, it's the electron in the 2S state-- that's this spherically symmetric shape that has any non-vanishing chance, every now and then to find itself near the origin. That's the fancy way to say that, is to say that the quantum wave function for the electron in the 2S state has a non-vanishing value at the origin, it has some probability to be found at the origin at least some of the time, so its energy will be affected by this modified shape of the potential. On the other hand, an electron in the 2P state has basically no chance ever to be at the origin. Its wave function literally vanishes at r equals 0, and so its energy is basically unaffected by the change. If you're only changing the shape of the potential in a region that this electron never goes near, then its energy should be unaffected, and yet there might be a small but positive shift in the S state. That's pretty cool. It was at least trying to say, here's a plausibility story for why there might be a shift and why it might be raising the 2S state-- the energy of the 2S state. Now keep in mind, Bethe still couldn't calculate this exactly, he still had all these integrals blowing up when he tried to really calculate the effect. In this case, it was like saying how would you calculate this part? 
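Just to put Bethe's plausibility argument into symbols before going on-- this is the standard schematic version of the smearing estimate described above that one finds in textbooks, not the detailed calculation Bethe actually published:

```latex
% Smear the electron's position, r -> r + delta r, and Taylor-expand the Coulomb
% potential energy. The linear term averages to zero; the quadratic term does not.
\[
  \langle V(\mathbf{r} + \delta\mathbf{r}) \rangle
  \;\approx\; V(\mathbf{r})
  \;+\; \underbrace{\langle \delta\mathbf{r} \rangle \cdot \nabla V}_{=\,0}
  \;+\; \tfrac{1}{6}\, \langle |\delta\mathbf{r}|^{2} \rangle \, \nabla^{2} V ,
  \qquad
  \nabla^{2}\!\left( -\frac{e^{2}}{r} \right) \;=\; 4\pi e^{2}\, \delta^{3}(\mathbf{r}) .
\]
% So the shift in a level's energy goes with the wave function's value at the origin:
\[
  \Delta E \;\approx\; \frac{2\pi e^{2}}{3}\, \langle |\delta\mathbf{r}|^{2} \rangle \, |\psi(0)|^{2} ,
\]
% which is positive and nonzero for the 2S state (psi(0) != 0) and essentially zero
% for the 2P state (psi(0) = 0).
```

That positive, origin-concentrated correction is exactly the picture just sketched: it nudges the 2S energy up a little and leaves the 2P state essentially untouched. The still-divergent piece is the mean-square smearing itself, which is where the next step comes in.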
That smearing term is the part that really comes from summing up all the borrowed energy from these quantum fluctuations. So he did something again pretty darn clever on that fateful train ride. He said, well, if this integral is blowing up like the logarithm, let me put in by hand a maximum value. So it breaks the symmetries of special relativity, it has all the illnesses that people had identified in the 1930s, but he said he had a new motivation to try, because it looked like maybe this is a measurable effect after all. So he chooses a perfectly reasonable maximum momentum that could have been borrowed, and he sets it equal to basically the equivalent of the rest energy of an electron. So he's going to integrate not to infinity, but to imagine that these virtual particles could have borrowed up to the energy of an actual electron at rest. And he said, if it's a little different than that, the answer will only vary logarithmically. So that's a pretty good order-of-magnitude, physically reasonable scale to try, even though mathematically, it's totally illegitimate. It breaks all the symmetries and all the rest. When he put in that value, he found very close agreement with Lamb's measured result. Not only was there a positive shift for the 2S state, he could now estimate semi-plausibly how big that shift should be and compare it to these highly-sensitive new results that Lamb had just presented. So that starts getting people to think pretty seriously that virtual particles maybe are actually part of the world and not just a kind of clever trick on paper. The other big result right around the same time that really cemented that new approach turned out to come from the same kinds of historical processes, just down the hall. So literally down the hall from where Willis Lamb and Robert Retherford were working at Columbia, another group of physicists was working at Columbia, also Rad Lab veterans, also using surplus equipment for microwave electronics. This was a group led by Isidor Rabi, who I think had either just won or was about to win the Nobel Prize, and then two of his younger colleagues, Nafe and Nelson. So they were also trying to perform these very, very sensitive experimental measurements on the behavior of hydrogen atoms, very similar to what Willis Lamb was doing, because, like Lamb, they now had this incredible experimental control and experience in the microwave band. So only a few weeks before Lamb and Retherford wrote up their results, this group, starting in mid-May of '47, had likewise performed very sensitive measurements on properties of an electron in a hydrogen atom, and again, they found a very tiny but now measurable deviation from their theoretical expectations. This time, it involved quantities related to the spin of the electron. Remember, a couple classes back, when we were talking about the last vestiges of the old quantum theory, people like Samuel Goudsmit and George Uhlenbeck and soon others introduced this hypothesis of quantum mechanical spin. Maybe the electron spins on its own axis, like the Earth spins to turn day to night, and not only has an orbital motion around the proton, say, in analogy to the Earth moving around the sun. That intrinsic spin or angular momentum would yield a magnetic moment. The behavior of such a spinning charged object in an external magnetic field would depend on this quantity called the magnetic moment.
In fact, by this time, this had been named the Bohr magneton, the actual value of the magnetic moment associated with the spin of an electron. It has an electric charge, it has some unit of angular momentum, h bar over 2. And so there would be an expectation for a magnitude: how large should the magnetic moment be for an electron? And what Rabi and his colleagues found was that if you really measure this extremely sensitively, in a way they could only now just begin to do, the value they kept measuring for the magnetic moment of the electron was actually a little bit larger-- in this case, by about one part in a thousand instead of one part in a million. But still, something that would have been barely measurable, if at all, before the Second World War was now within their toolkit. Now they could actually measure slight differences of a few parts in 10 to the 3. And again, it seemed to be shifted to a larger value than what the quantum theoretical prediction suggested. So again, like Willis Lamb, Rabi presented this experimental result at that same Shelter Island conference, which I'll say more about in a moment, in June of '47. So these mostly theoretical physicists who were attending the meeting at Shelter Island heard from a small number of experimentalists, including Willis Lamb and Isidor Rabi, and these were, again, like hot-off-the-presses, totally cutting-edge results, some of which hadn't even been published yet, so these folks at the meeting had a sneak preview of these very high-precision, wartime-enabled new experiments. So once again, theorists, hearing this kind of unexpected result from Isidor Rabi, wondered if this could also be a kind of measurable empirical effect of genuine virtual particles changing the behavior of that electron. To go back to this kind of flower-petal or daisy picture of the polarized vacuum, you have the original electron and the virtual electron-positron pairs springing up all around it due to the uncertainty principle. Well, each of those is also an object with some electric charge and with some intrinsic spin. So maybe the system, the electron plus its cloud of virtual particles-- maybe that whole system was contributing to the effective magnetic moment. That would be like having a bunch of compass needles all aligning in the Earth's external magnetic field. And so the total effect would be-- you'd have to add up the effect from all of those spinning compass needles, not just the one you began with. And maybe that would be enough to just slightly nudge the measured value of that magnetic moment to be a little bit higher; this contribution from the cloud might indeed make it a little bit higher than if you only consider the electron on its own. So I'll pause there. Any questions there? So Alex says, the greatest reuse pile ever. That is basically true. I just can't believe, again, how generous and relaxed the federal government was. The federal government might not often merit those two adjectives these days. It was generous and relaxed in letting people literally just truck back to their home labs tons and tons of surplus equipment. And Tiffany writes only to me-- I don't know if you mentioned it to everyone, Tiffany, but you say, [INAUDIBLE] benefit from this as well. Do you want to say more, Tiffany? Very directly, yeah. In fact, again, much like with the Columbia groups-- thank you, Tiffany. It wasn't just the equipment, it was young people like Ray Weiss, who, at the time, was quite young, as you know.
So Ray-- our very own Ray Weiss, and I will go on record and say our beloved Ray Weiss, I adore him, and also our Nobel Laureate-- he got his start at MIT as an undergraduate working in the then-brand-new Research Lab for Electronics, which, exactly as Tiffany says, was an intentional carryover of the wartime Rad Lab to a postwar footing. And Ray, basically the equivalent of a UROP student from very early on, began working literally with surplus equipment and learning how to use it-- how to conduct very precise microwave-band electronic measurements. That's a great example, Tiffany. Yeah, good. So thank goodness. Thank goodness the black holes still collided and we had government surplus stuff to measure it a little while later. Yeah. Good. Any other questions on the Lamb shift, the anomalous magnetic moment, the impact of the Rad Lab on tools and skills and personnel? I find that really-- I just love that-- that unexpected continuity. And we'll see another example of that in the last part of class today. If there are no other questions, again, I'll charge on, but please don't be shy about asking. Let's see, Fisher has a question. How did Lamb come to look for this? Good. Very good. I don't think that Lamb was intentionally trying to ask about virtual particles, per se. I think what Lamb realized was that there were all these-- it was actually called the fine structure-- all these very minute shifts in the energy states of electrons in hydrogen atoms. Remember, that had been recognized even in the 1890s, although obviously not at nearly the same level of precision, of course. But that's what gave rise to things like the Zeeman effect, the anomalous Zeeman splitting. If you put a gas of hydrogen atoms in an external electric field or an external magnetic field, the spectral lines that come out can show these very minute shifts, and that was read back as saying there's a shift in the energy states of the electrons. And so part of what was of interest for people like Willis Lamb was to ask, could there be measurable effects from, say, the spin of the nucleus? Could the fact that the nucleus has some moving parts affect the measured energy states of, say, the electrons, from the spectrum and so on? So those were things that were just never going to be measurable using the techniques from the 19-teens or '20s, but he thought-- again, the title of the article itself was about the fine structure-- he was interested in, what's the best, most quantitatively precise way with which we can characterize the energy states of electrons in hydrogen? And so he didn't go in asking about virtual particles, per se, but he said, I have a pretty cool new set of tools to measure in that energy band, and he and his student, Retherford, then relatively quickly were able to find this, indeed, unexpected energy shift. Good. OK. I'm going to go on to the last part for today, but again, of course, please keep the questions coming, I'd be delighted to talk more. OK, so that middle part for today's class was all about some kinds of experimental continuities, the fact that it mattered that people were asking these questions after the Second World War, probably, rather than before. And so for the last part today, we're going to ask a similar question, but now about some theoretical responses, which, again, we'll find-- I think it's quite amazing-- also seem to depend in a pretty critical way on the intervening experience of the Second World War.
So I mentioned a few times that Shelter Island meeting, and I'll just say a bit more about that meeting itself. It was actually none other than Robert Oppenheimer who organized it. In fact, as we'll see in the next class, he organized three in a row, three annual meetings of which this was the first. The June 1947 meeting about one week in duration was the first of a planned series of three. And Oppenheimer was very explicit. He really thought it was time to get the physicists of thinking again about open questions at the cutting edge of theoretical physics, including these open questions in quantum field theory which had really ground to a halt at the time that he himself was a student and a postdoc going back to the '20s and '30s. He thought, now that the wartime efforts were behind them, it was time to literally jumpstart the work. He also had very, let's say, patriotic-- we might say nationalistic-- goals in mind. That he wanted to really help further the US-based community in theoretical physics. So he invited several leading Europeans over for the meeting, some of whom were already in North America. But his real goal was to get the younger generation of US-based theorists in a room together-- in fact, in a lovely bed and breakfast called the Ram's Head Inn. They literally lived together for a week and did nothing but talk about quantum physics for a week being paid for by the National Academy of Sciences. So here's one of the famous photographs. This is actually Willis Lamb standing here. Lamb had actually been trained as a theorist, but had converted over to an experimentalist. Here's John Wheeler. I love this, he's like reading the funny pages, not even paying attention. I love that photo. Here is Abraham Pais who had actually fled the Nazis as a young person and relocated in Princeton. Here's Richard Feynman, MIT class of 1935 or so, 1934. Here's Herman Feshbach, who then made his career at MIT, and then here's Julian Schwinger. So here's a photo from the bed-and-breakfast on Shelter Island literally during that June '47 meeting. So nearly all the participants actually already knew each other even though some of them were still quite young. Feynman was 29 years old at the time, Schwinger was also about the same age. So it wasn't that these were very senior figures who'd been around each other for decades, they had been around each other on the wartime projects. They knew each other because of some of these very intense experiences from very recent times even though there were still-- many of them quite young in their careers. And as we'll see-- and this is work that I really owe to my own mentor, Sam Schweber, who wrote this amazing book on-- a little piece of which I'll describe now, that the reaction from some of these folks soon after the war to these very stubborn problems of infinities in quantum electrodynamics, their response was really quite different than what we saw from people like Niels Bohr and Werner Heisenberg. Very few of these physicists who came back to the question of virtual particles and infinities and all that after their wartime experiences, very few of them talked about a grand, sweeping conceptual revolution. Very few of them said, we're going to sweep away everything we know about relativity and quantum theory. We'll have even grander conceptual leaps. 
Instead, as Sam has shown really quite powerfully in this landmark study, most of these wartime project veterans said, well, we have a bunch of equations, we now have some measurements from people like Lamb and Rabi and soon others, what can we do to get the numbers out? That's the phrase that Sam associates with this generation. How do you find some clever workaround to get some numbers out to compare with real empirical measurements? So rather than getting stuck with fancy-sounding conceptual first principles, you have some equations, you have some reason for confidence in them, get to work and get some numbers out and compare with what the other groups are measuring. And really, I think an amazing example of that comes in from Julian Schwinger. Schwinger actually had spent the war at MIT. He was at the MIT Rad Lab. He then went all the way back to Harvard after the war was over. He was actually, for many years, the youngest tenured professor in any field at Harvard. He was tenured at age 29 and had spent the war, as I say, at the Rad Lab. He was one of these invited participants at Shelter Island. And as we saw some weeks ago, he had been immersed in these kinds of engineering effective styles of calculation during his work at the Rad Lab. And he himself years later credited the engineers with really changing his own thinking about some complicated physical systems. So as we saw, if you're working with these kinds of waveguides, rather than spheres and perfect cylinders, then the usual symmetry arguments that physicists love and that are often so powerful for us, those are simply not going to cut it. So rather than trying to calculate from first principles, if you're looking at some really, frankly, ugly piece of real-world equipment or some complicated, highly non-symmetric piece of real-world equipment, don't sit down and start using spherical symmetry or cylindrical symmetry. Learn from the engineers to work in terms of effective circuits, or really more generally, input-output. So as we saw last time, instead of worrying about the rules for how to resistors add if they're in parallel or series and all that, just, say, stick a lead on here, stick a lead on here, there's some effective resistance, and get on with your life. As they say, there's a war on, buddy. So don't get stuck on this first principles calculation. So when he then came to Shelter Island, really just two years after the end of the war-- not even two whole years-- after the end of the war, and he heard directly from people like Willis Lamb and Isidor Rabi about these new microwave-band measurements, he began thinking about virtual particles. By his own admission, he began trying to apply this Rad Lab engineering-style input-output calculation to what had previously seemed really incalculable. So let's take, for example, that question of the anomalous magnetic moment where this collection of the electron plus its cloud of virtual particles seem to have a different value of its ratio of charge and spin to its mass compared to what one would expect in the absence of those virtual particles. So Schwinger realized, we could never turn off the uncertainty principle. So you would never actually have just the electron its own. So why would you ever set up your equations as if you have a pristine, pure single electron and then try to fold in all the messiness of the world? That's not how we handle waveguides. You don't start with spherical symmetry and then cut back to add in more realism. 
Just never separate them in the first place. Treat it more like an input-output relationship. So his main idea was to never try to calculate the effect of this electron and then separately add in the effects of all these virtual ones. Just manipulate your equations from the start so it's always a combination that you're dealing with. Again, that would be like the effective resistance on that circuit. So you can rewrite your equations before you sit down and try to calculate particular things, so you're always dealing with the sum of the so-called bare mass or bare electric charge plus the corrections or the impact of those virtual quantum processes. So instead of trying to write this down and then calculate this from scratch, rewrite your equations, you always have the input-output, the actual quantity that would ever be subject to measurement. So if you do that, Schwinger then goes on to show, quite astonishingly, then you never need to calculate these things on their own because they never appear on their own. They always appear in combination with these other constants, the so-called bare mass or the bare charge. And as we'll see, I want to-- I just want to remind us or give a foreshadowing, it turns out, it wasn't only Julian Schwinger who pieced this all together, it just boggles my mind, but totally independently, and in fact, four years earlier, really, in the middle of this very horrible, bloody war, Sin-itiro Tomonaga, a very young Japanese theoretical physicist working in Tokyo, had come up with the exact same series of steps. And he also, it turned out, was immersed in the Japanese radar project. And he also, after the war, credited his switch in thinking to this input-output effective quantities approach to his work on radar. So these two independent approaches to a longstanding challenge, and it turns out, Tomonaga really actually got there first for many of these steps, so that was not known at the time in the United States because the nations were at war and there wasn't regular mail service to deliver journals and all the rest. So I'll come back to some of Tomonaga's work in the next class, but I do want to mention it. This was jaw-droppingly original when Schwinger did it in the late '40s to his US-based colleagues, and only later did they learn that it had been done actually even earlier under much more difficult conditions. OK. So what was the rest of Schwinger's argument, then? If you never separate the kind of corrections, so-called corrections from quantum theory from those bare quantities, then you can, all of a sudden, find finite sensible answers after all. So he goes on to show, that just as the experts had found in the 1920s and '30s, if you actually try to just calculate the effect of all these virtual particles on their own, you really do find these logarithmic divergent integrals. And this is-- you have a little more information about this in the reading for today by physicist Robert Mills, that so-called tutorial on the infinities in QED. This is the kind of thing that Mills writes about. So Schwinger confirmed that, yes, indeed, the form of the equations, we can write them kind of abstractly across many, many applications, whether it's the corrections due to the bare mass or the corrections to the charge, they take this form in general. And they are generally logarithmically divergent where y here is basically like the borrowed energy or momentum of those virtual processes. So yes, you get log of infinity, which is still infinity-- sorry. 
But instead, if you don't just calculate that on its own-- if you always do this grouping first, so you always deal with the effective quantities-- then, in fact, some remarkable cancellations can occur. So this is what he began working on in the weeks and months after hearing from Lamb and Rabi at Shelter Island. So Schwinger first rewrites all his equations in terms of these effective quantities-- and this could, again, be charge or mass, it works-- you have to do it for both. Let's say it's charge in this instance: you have some bare charge which is never separated from the screening, because you can never turn off the Uncertainty Principle. In this case, the screening comes with a minus sign, because you're actually screening off part of the original charge with charge of the opposite sign. The effective charge is a little weaker than the bare charge would have been because of this cloud, he reasons. So now, instead of calculating this part on its own, due to the virtual particles, calculate the combination together, because they never appear separated from each other. So all of a sudden, now, you have this as your integral to evaluate rather than this on its own. And now, it's just a few more steps of algebra, which even the youngest tenured professor at Harvard was able to do. So now you can just combine the denominators, now you have a new numerator, just a little more algebra. All of a sudden, what's happened-- what matters is you've increased the power of momentum in the denominator. Instead of integrating over one power of momentum-- that's what gives you that logarithm-- now you're integrating over two powers, and that's going to be a convergent integral. And in fact, you can do that integral, and it becomes a logarithm of a perfectly finite answer. So this became known as renormalization. That's the fancy word that was applied to this based on this early demonstration by people like Julian Schwinger. You first arrange to always have the so-called bare quantities and the quantum corrections together, because when you evaluate them together as one joined quantity, you get these exact cancellations. It turns out the bare charge of the electron really is infinite in this scheme. It's just that you can never interact with an electron on its own. So that infinite initial charge is partly screened by the infinite effect of those virtual particles-- and not arbitrary infinities; in fact, particular forms of infinity that you can show exactly cancel. As long as we can write them in this grouped form, the infinity here is exactly canceled by the infinity here, and it leaves a perfectly unambiguous finite result. So this avoids that problem of cutting off the integrals at some arbitrary level. This can now be made, as Schwinger showed very elegantly-- this approach can be made to respect all the symmetries of special relativity. Different observers, now, in principle, should agree. You extract finite results. Basically, the trick is to never try to calculate this apart from that. That's really the upshot of the maneuver that Schwinger introduces. Then he goes on in the next six months to show not only that this is feasible in principle-- that this type of logarithmically divergent integral could be cured if you never treat it on its own-- but he then applies it to the very specific measurement that Rabi and Nafe and Nelson had reported on at Shelter Island, that anomalous magnetic moment.
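Here is a toy version of that combine-before-you-integrate move-- just an illustration of the cancellation pattern described above, not the actual QED integrals from the Mills reading:

```latex
% Each piece on its own diverges logarithmically at the upper limit:
\[
  \int_{1}^{\infty} \frac{dy}{y} \;=\; \infty ,
  \qquad
  \int_{1}^{\infty} \frac{dy}{y+a} \;=\; \infty .
\]
% But the grouped combination falls off like 1/y^2 at large y and gives a finite answer:
\[
  \int_{1}^{\infty} \left( \frac{1}{y} - \frac{1}{y+a} \right) dy
  \;=\; \int_{1}^{\infty} \frac{a\, dy}{y\,(y+a)}
  \;=\; \ln(1+a) \qquad (a > 0).
\]
```

Two powers of y in the denominator instead of one is exactly the bookkeeping point made above: evaluate the grouped quantity and the would-be infinities cancel, leaving an unambiguous, finite remainder.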
And for the first time in the history of the universe-- he got here even quicker than Tomonaga on this part-- Schwinger is able not just to find a finite answer, but to find an answer that agrees to remarkable accuracy with that hot-off-the-presses experimental measurement that the Columbia group had just reported. So not only does he get an answer, but the correction from incorporating the virtual particles goes like a fairly simple-looking expression involving just constants of nature-- it's the electron charge squared divided by h bar and c, with this critical factor of 2 pi-- and if you evaluate that numerically, the shift by which the effective magnetic moment should have risen compared to the original quantum mechanical prediction is about one part in 10 to the 3. In fact, he could calculate out to multiple significant digits and compare that, and it's unbelievably spot-on, within the experimental error of that early measurement from the Columbia group. That finally convinces people, by December of 1947, that these virtual particles might really be part of the world after all, and that maybe not all hope is lost. If you focus, as a very pragmatic wartime researcher had had to do-- if you focus on getting the numbers out and focus on what relationships should appear on their own or only in combination, Schwinger shows-- and others were quickly able to generalize-- that you might be able to calculate the effects of virtual particles after all. So let me summarize, and I'll have time for some more questions. So beginning right after the introduction of quantum mechanics, several physicists, including the names we already know-- Dirac and Heisenberg and Jordan and Pauli and others-- began developing a quantum field theory, trying to treat extended objects through space and time as collections of field excitations obeying these special quantum mechanical rules. And so you could represent any field as a collection of quanta in particular states, with the very important caveat that the uncertainty principle at least suggested that these virtual processes could temporarily break the rules as long as they paid the energy back correspondingly quickly. However, that was a compelling cartoon story. It led to a total lack of ability to quantify those behaviors throughout the 1930s, and that led some people to claim we need another sweeping conceptual revolution, that the whole edifice will change yet again, because these quantum virtual processes yield infinite results. And yet after the Second World War, a new generation of both experimentalists and theorists returned to this now decades-long challenge, and they came at it with different skills, sometimes literally different equipment, different theoretical approaches, and a different kind of mindset for many of them, and were able to find a way to actually yield finite and, in fact, quite sensible numerical values from these very abstract-sounding quantum virtual processes. So I'll stop there. I'm happy to stay if people have questions on that. Any questions or comments or concerns? I should also say, that Robert Mills piece is a little more complicated than I would have liked. It's a pretty handy reading, but it still assumes a lot of formalism. So if it was a hard reading, don't worry about it. What I wanted you to get out of it is at the level that we talked about in class today. If people have questions, I'd be delighted to talk more about more details, but if that article was a little off-putting or seemed too hard, don't worry.
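As a numerical footnote to the magnetic-moment result described above-- this standard way of writing Schwinger's leading correction, and the rough numbers, are my own gloss rather than something quoted in the lecture:

```latex
% Schwinger's leading correction to the electron's magnetic moment, written with
% the fine-structure constant alpha = e^2 / (hbar c) in Gaussian units:
\[
  \frac{\Delta\mu}{\mu} \;=\; \frac{\alpha}{2\pi}
  \;=\; \frac{e^{2}}{2\pi\,\hbar c}
  \;\approx\; \frac{1/137}{2\pi}
  \;\approx\; 1.16 \times 10^{-3} ,
\]
% i.e. roughly one part in a thousand -- the size of the shift Rabi's group had measured.
```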
If you've got this basic lesson, that's what we want to get out of the Mills piece-- at least that much for now. OK. If there are no questions, I won't force you to stay, but I invite you to come to office hours, or drop me an email anytime. And we'll pick up this story on Wednesday, when we look at other folks who returned to the question of virtual particles and quantum fluctuations. So see you Wednesday. Stay well. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Optional_Discussion_Containment.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: So as I think you know, today, again, we're going to have just an informal optional discussion. You've all come. So I guess you've exercised your option. But we'll be talking today in 8225 SJS 042 about some of the themes raised by that second documentary film that hopefully you had a chance to watch, the film called Containment, the film made by Peter Galison and Robb Moss and their team, their whole team. And like last time, like our discussion after the film The Day After Trinity, so the floor is open. So I'm just curious what people make of it. Let me ask you a more specific question. Was there anything in the film that surprised you, that you really didn't expect, or that you hadn't seen, or you knew something about, but this looked at it from a different angle, or something like that? You might know, it might not have been so clear. I can't remember. Peter and Robb, the filmmakers, were working on this project before the Fukushima earthquake and reactor meltdown. They were already immersed in this project. And they were already actively filming. And then the earthquake in March 2011 occurred. So in some sense, the film took a turn, while they were already immersed in it. Considering how squarely the film begins and ends with a lot of material about very fairly recent developments in Japan, I was just curious if that shows in your viewing because the film originally-- of course, they didn't know that was coming. They were starting the film, were working on the film before that even had occurred. Did the film hold together? Did that seem like it was tacked on? I'm just curious what people make of this body of material. Thank you. Let's sit with that. Peter and Robb already were quite interested in areas that do show up a lot in the film, like the waste storage facilities in the continental United States. And as many of you might know, there've been long, long longstanding, decades-long debates, just within the US context, about how, where, or whether to develop a domestic storage facility. Should it be at Yucca Mountain? Should it be some other site, the so-called WIPP facility was already then well under-- My apologies. The power went out in my house temporarily. So here I am. It's clearly just a little blip. Anyway, sorry about that. So I have to say, I might be repeating things that I missed when I was offline for a moment. One of the parts of film that I do find most fascinating are these efforts to get a range of types of specialists to design some kind of symbol system or communication system that could plausibly and reliably be legible, be readable to people hundreds of thousands of years in the future. Whether it's archeologists, and linguists, or other kinds of anthropologists, or experts in SETI, the Search for Extraterrestrial Intelligence, or sculptors, or all these kinds of range people to say, how do we make something that will tell people still, still, still don't dig here because some of these highly poisonous to humans at least-- highly poisonous radioactive isotopes have half lives in the tens of thousands-- multiple tens of thousands of years. And so if you need activity levels to fall by many half lives worth, you're looking at 300,000 year time horizons into the future. And you think about we've had the printing press since approximately 1450, plus or minus, or 1350 anyway. 
We can measure that expanse in a couple thousand years, not a couple tens of thousands of years. The latinate alphabet dates to, again, on the order of maybe-- I don't know-- 5,000 years, something like that. These are a drop in the bucket time scale-wise compared to a 300,000 year projection into the future. And I found that part of the film really just fascinating. I think in part of the interview that was also on the readings for today, an interview that Peter Galison had done while he was working on the film, it's part science fiction. It's an interesting thought experiment just to imagine a communicative system for our vastly future selves. So it's partly science fiction and also really, deadly serious. This stuff will still be harmful to humans and even humanlike creatures, plausibly, in n generations forward. So anyway, I found that part of the film-- that's something I didn't know much about before learning about it from Peter when he's working on the film. That really stuck with me quite hauntingly. Any other thoughts on the material? It doesn't have to be limited to the film. Other things we talked about in class recently as well, it's totally fair game as well. Yes, I worked with a number of students and colleagues who thought very hard about SETI from an astronomical point of view, but also as historians or anthropologists of science studying these communities in present day. And it does feel like that work is often a projection of our assumptions about ourselves. What will the others look like? Well, we have a range of experience to base that on, which is what we know about. And so it does have a similar kind of ambition and yet maybe limitation from just the horizon of our own imaginations. And so I think it struck me as totally appropriate, once I learned of it from, say, Peter and Robb's film that these department of energy experts would call on SETI experts because the nature of the intellectual challenge does seem quite similar. And at the same time, it also feels exactly as, well, as big a remaining challenge as it is for anything else because the SETI folks have are also dealing with what they know about, what they've experienced, what they can even imagine, the horizons even of their imaginations. So in that sense, it does feel like a remarkably similar exercise. I mean, I wrote a little piece about this some years ago, as many of you might know, as a bit of an aside. But SETI, a large part of the SETI work has focused on something called the Drake equation, named for astronomer Frank Drake back in 1961. And there was an effort to try to quantify and estimate-- admittedly, very loose estimates-- how likely a signal might actually be expected to be from, say, an intelligent extraterrestrial civilization. And the last term in this equation is always the expected lifetime of the civilization that would have been sending the putative signals. And in the Cold War, in exactly the period we have our heads in the middle of in this class, that last term in the Drake equation, capital L, always was a stand in for-- to those thinking about at the time, for all out nuclear Holocaust. So it wasn't that there was a biological lifespan. They thought it was, did the civilization blow itself to bits the way it seemed to many people at the time that maybe people on Earth were about to do or were poised to do? So there's actually a high nuclear component even to SETI as well. 
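For reference, here is the standard textbook form of the Drake equation mentioned above-- the notation is the conventional one, not anything specific to this discussion:

```latex
% N = expected number of detectable civilizations in the galaxy.
\[
  N \;=\; R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L ,
\]
% R_* = rate of star formation; f_p = fraction of stars with planets; n_e = habitable
% planets per such system; f_l, f_i, f_c = fractions on which life, intelligence, and
% detectable communication develop; and L = the expected lifetime of a communicating
% civilization -- the capital-L term discussed above.
```

It is that final factor, L, that in the Cold War context described here kept getting read as a question about nuclear self-destruction.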
The community had already been thinking about a kind of knife edge of survival in a nuclear age, at least what they consider to be a knife edge. At the same time, there have been all these kind of technological spinoffs for certain kinds of multichannel analyzers and other technological fancy electronics and code that was developed first for SETI. How do you sift through millions of channels at once, millions of frequency bands, say? And some of those were brought back, so to speak, behind the fence into certain kinds of very sophisticated nuclear testing, especially after the ban on actual test blasts in this era of what's now called stockpile stewardship. So SETI seems like it's always about the faraway, the distant, the literally extragalactic or certainly extraterrestrial. And yet, its own history has been grounded and intertwined with the Cold War nuclear age from its own beginning. So maybe all the more appropriate then that some SETI experts, as well as linguists, and philosophers, and sculptors would be working with the Department of Energy to think about symbol systems for 300,000 years hence. Were there parts of the WIPP facility-- what was it called? I can't remember what WIPP stands for-- Waste Injection Pilot Project, something like that. That's a great question, Alex. I don't remember the specifics. Thank you for asking it. So I think, in general, it's smaller. I think the ideas for Yucca Mountain were ultimately much more grand. It would have had a much higher storage capacity, for example. At least, it never got past kind of design specs. But that was the ambition for Yucca Mountain, was it really was supposed to be like a one-stop-shop and we solved it. That's it. We have enough capacity for the indefinite future. And I don't think anyone at WIPP ever thought that. I think WIPP was always seen as a proof of principle center. And Ilona helpfully reminds us what WIPP stands for-- thank you, Ilona-- Waste Isolation Pilot Plant. OK, good. Thank you. So WIPP, I think, was built to be, as its name even implies, a kind of pilot project for something that could potentially later be scaled up. And then as it turns out, there are some real technical challenges even with the pilot design. That's why they build pilots, I guess. You find out what might not work the way you think it would just from the engineering specs. But I think, in general, the plans for Yucca Mountain were to be forever, like we've solved it, was the ambition or the hope. There would be the one place with enough capacity, with presumed geological stability, that can be estimated-- again, in the hundreds of thousands of years frame-- I don't know how one could ever have that confidence. But that was at least something people were thinking about, and that it would be the one site needed at least for the domestic United-- continental United States, maybe even for worldwide. And I think that certainly has not yet come to be, to put it mildly. And there's that, I think, really interesting footage in the film about some of the local town hall meetings in and around where the WIPP facility I think was ultimately made. Well, you can see there's a robust range of opinions. Is it good for the local economy in the short term? Does it make good, secure, well-paying jobs? Does it improve the local tax base? Those kinds of arguments and other concerns that are raised about is that a worthwhile trade-off. And that's just for, again, this pilot facility in one area. 
It is a fascinating question if you get to-- participatory democracy for the here and now is already really hard. I think, again, we're getting reminded of it every day. Now, imagine this when you're trying to make decisions that, I think as you're saying, will have implications for generations to come. Who has a say on behalf of them? How do you even incorporate that, let alone the next five year or 10 year plan? Will this be better or worse for the local community? Yeah. Yeah, and again, that's something much like with Fukushima. Obviously, the filmmakers were immersed in the project long before that had occurred. So the film was itself a kind of moving-- not a moving target, but the film evolved more over the course of its making than I think films otherwise often do. And again, it was a genuine surprise-- a surprise to the filmmakers, a surprise, of course, to many of the operators and the locals. And again, one could say, well, that's why you do small scale engineering pilot projects. You want to learn from these and hopefully get better and better. And maybe it'll scale. It raises some extraordinarily difficult questions. If, as you say, on a kind of decade-long time scale, let alone 10-- 10 to the power of 1-- 10 to the 1 years, as opposed to 10 to the 5 years. That's a big change. Can we get 10 to the 1 years done, or even a century, let alone 100,000 years? So yeah, I was surprised by that, too. Yeah. What about some of the scenes-- some of this was new for me as well, some of the scenes within active reactors, just the scenes of the kind of machinery of what it's like for humans to be interacting with these very hot materials, hot in many ways. I don't think I'd ever seen a moving picture of that myself, before seeing the film, just even in a reactor site that's operating perfectly appropriately, perfectly canonically. Just how do you deal with these spent fuel rods and all these kinds of things? Just getting a visual of that and the scale of it, what it's like with all the benefits of our own present day high technology, let alone when people were doing this back in the '40s and '50s, I found that really actually pretty riveting too, just visually, viscerally, what's it like to be working in a site like that when things are going right, when things are going perfectly as the engineers and scientists had hoped. I don't know, Alex. That's a great question. I don't know if anyone else on the conversation knows about that. I bet we could find out with some quick googling, but I don't know offhand. Yeah. As you may know, there was a time in the-- starting in the late 1940s, the early years after the Second World War, really accelerating throughout the '50s, when many, many very small scale research reactors were built on university campuses across the country. MIT was, in that sense-- MIT's is unusual for having survived so long, not for having been built when it was. And so I don't know what the longest-- what the larger original plan was for collection and containment of these unavoidable waste products. The Atomic Energy Commission, the successor to the wartime Manhattan Project, really had a very, very-- at the time, it seemed a very generous program to build these facilities, small scale facilities, I mean, just by the dozens and dozens across liberal arts campuses, big research universities, and everything in between. And so I don't know what the plans were for that larger collection of stuff, let alone what we've been doing very locally.
Lucas, it looked like you were about to jump in as well. I cut you off, I think. Yeah, especially-- I mean, by now, it fades into the background. We've all been used to-- the basic knowledge has been there seemingly forever, at least for generations by now. Many of the folks-- the GAs certainly won't remember this. One time when Cambridge residents did have a reason to recognize these things was not an accident at the MIT reactor thankfully, but what had been a particle accelerator closer to Harvard's campus, actually really contiguous with Harvard. It was, at the time, called the Cambridge Electron Accelerator, or the CEA. And also, one of the things-- MIT also had one. The Atomic Energy Commission had also subsidized the construction of dozens and dozens of research accelerators, so particle accelerators, not just reactors. And the one in Cambridge and closer to Harvard was one of that set. And in-- I think it was 1965 or '66, mid '60s, there was an explosion that actually killed a graduate student, did an enormous amount of damage, property damage, as well as a loss of life from the accelerator. So basically, at the time, a lot of the detectors required cryogenic cooling. You keep these things really cold. And that often required lots of volatile gases under pressure, these so-called bubble chambers. And I believe it was a case of-- others on the call, Tiffany, or Julie, or others might remember better-- Peter Galison had written about this in one of his older books. So I think what happened with the CEA explosion was I think it was part of the cryogenic system that blew. It blew the roof off this factory-sized building. I think at least one graduate student was killed in the explosion. I think only one loss-- one person killed. But that's a wake-up call to the neighborhood in a big way. Wait, what that little, quiet, sleepy, factory-looking, nondescript building practically contiguous with Harvard's campus, they're doing with what volatile materials? It could really do what? So there's certainly been moments in Cambridge's own-- in our own Cambridge History, going back not super long ago, going back 60, not quite 55 years or so, where things where accidents happen, right? All these things are built-- designed and built by humans. None of us, it turns out, is perfect. And when you build lots and lots of these things sometimes in pretty dense, quasi-urban settings, that can be a bad recipe. That was nothing like the enormous accelerators that were built with much more careful shielding, and site selection, and all that kind of stuff. It wasn't like it was Fermilab or the Stanford Accelerator, which was pushed far away from campus. This one was really right near highly populated parts of Cambridge. So things like that certainly have happened. Luckily, that did not-- I don't think there was a concern about radioactive waste being spread by that. I think that was a volatile set of chemicals that exploded. And the damage was immediate, but not showering immediate land with isotopes that would be here for another-- through the next Ice Age. So in that sense, it's not like a reactor meltdown. But still, dangerous stuff that really, every now and then unfortunately, doesn't operate as designed. Yes. And is that a different kind of-- is the threat different today in a Google Earth, everything's in principle under some kind of camera surveillance, anyone can map anything anywhere in principle? Is that different than when it took work to identify where these things even were? 
No one had automatic GPS coordinates downloadable. So I do think the nature of that challenge has changed in the last, let's say, 20 years as well. Tiffany, you mentioned in the chat about Lake Anna, Virginia. I actually don't know about that one. Do you want to tell us about it? Yeah, I didn't realize-- I've never been to the WIPP facility. A number of colleagues from MIT have been because, again, as you may know-- and maybe they mentioned it in the film-- that was a site for a bunch of underground fundamental physics experiments as well. You want to shield it from cosmic rays. And lots and lots of physics experiments-- and this is Julia's mind very directly. But for generations, lots of fundamental physics experiments have been in abandoned or no longer used mines because you want to get underground for various shielding reasons. So I have several colleagues who had been at WIPP regularly for neutrino experiments or other kinds of particle physics things. So they've been there. I've never been there, let alone driven past it going to or from Vegas. But Tiffany, your comment also reminds me of something I didn't have a chance to really linger on in the class material. But for many years, when nuclear weapons tests were still above ground, before they went underground with the limited test ban treaty in the early '60s, these things were often pegged as tourist attractions. So obviously not to get too close to them, but in Vegas in particular, it was within sight lines of one of the continental testing grounds. And so the casinos and the hotels would actually sell advertising. Come this weekend because you'll get to see an explosion off on the horizon. It won't be in your hotel room. But you could have a clear view of this temporary display out the window. It was seen as not just something to be tolerated or otherwise based on what seemed like Cold War realities, but really-- I don't know if we'd say celebrated, but seen as a tourist attraction, something that would help you sell hotel rooms and come to Vegas this weekend kind of thing. I just find that chilling. I mean, it's just astonishing, the way-- in this case, the weapons, not even reactors, weapons, were seen in popular culture. So again, coming back maybe to Lucas's point, it's not only were they not trying to keep secret when the blast would be-- you think, well, are you worried about any kind of bad actors messing around with these nukes? Not only did they not worry about keeping the date and time secret, they were actually advertising it as if they could almost sell tickets. Again, just our attitudes toward these things over the last 50 or 60 years, I just find that stunning. Yeah. I mean, maybe as you're saying, there is-- I don't know if we call it flourishing. But there is an actual nuclear tourism-- oh, maybe we can call it industry. One of my colleagues wrote a book, a very interesting book a few years ago called The Nuclear Family Vacation, which is a pretty fun pun. She and her husband went traveling. So it was like a nuclear family that also went to all these nuclear family vacation-- they went to visit nuclear sites actually around the world, including-- I can't remember if they went to the Chernobyl site. But they certainly went to many places in the former Soviet Union, as well as, I think, Eastern Europe and throughout the United States. She wrote-- I can't remember if they co-wrote it together. They're both journalists, excellent writers. Anyway, just saying, how many of these sites are accessible in general?
And what's the appeal to go to them? You want to go see, collect them all. I've seen this one, and this one, and this one, this one, like other families might try to go to all the national parks in some region of the country. They were going to tick off all of these kind of nuclear-related sites as a kind of tourism. Yeah, thank you. I agree, Deborah. I find that very hard to watch myself. I'm not an expert at all in the present day biomedical hazards and what precautions are actually being taken. So I don't know. But I think I mentioned briefly in a previous discussion, I know that there was a very cavalier attitude toward these things during the Second World War and in the early years afterwards. And often, the cavalierness was not telling workers enough to inform them of risks that actually were otherwise well-known by other experts, let alone the experts decided to take their own risks or to behave certain ways. And so I don't know what the current training is like and so on. But 3.6 ranking is not great, not terrible. I don't remember what kind of exposure would amount to 3.6 rankings. I don't remember the scales, what to expect. Do you remember? We can look it up. Right, thank you. I'd missed the reference. Very good, yeah. Right. I agree as well. While we're at it, let's hawk all of Kate's books. She's my colleague. Kate's most recent book-- oh, shoot, what's the title? And that just came out roughly a year ago. There was a collection of Chernobyl-related books that came out around the same time, one of which was Kate Brown's. As you may know, Kate's a member of the MIT faculty. She moved here roughly a year ago. She's a real expert on-- both on the Hanford site and the longer term historical development of nuclear projects in the United States. She's also trained as a Russian and Soviet historian, an environmental historian. She's done a lot of original work in Russia, Ukraine, and many, many non-English language sources, which I can't do. She does marvelously. So she's really a remarkable scholar of comparative views of a lot of these materials. Yeah, I don't know. And this does raise the other questions. I mean, this came up a lot during the debates over Yucca Mountain, when it was still considered a live prospect. Let's say you have sufficient geological stability that you can somehow estimate for an unbelievably long time scale. Let's say you have sufficient storage and containment. How do you get the stuff there? I mean, do you put this on commercial railway cars? Coming back to Lucas's point before, someone knows there's some bad stuff on that train, why don't you just knock out the train and have some horrible dirty bomb-type attack? So how do you move this stuff, let alone in what form is this stuff most stable and most easy to move? And there are plenty of experts who study this full-time. I'm absolutely not one of them. I don't know what the current best practices are. But I know these were the kinds of things that had to be thought through-- not just can you predict the geological stability and the groundwater flow over multiple millennia. I think the answer to that is not with sufficient confidence, right? That's already just incredibly hard, just as a narrow scientific question. And then you start getting to practical terms like, well, this stuff has to travel, you know, 2,000, 3,000 miles-- 2,000 miles. How do you get it there? What happens along the way?
And those were, let's just say-- again, in the physicist terminology-- nontrivial, which is code for I have no idea. It's really hard and prone to all kinds of additional worries and concerns. Because even for the best intentioned humans, we're still human, let alone any kind of nefarious things that could interrupt it. So I know those kinds of considerations have been live issues since the '70s or '80s, let alone in more recent times. But to Tiffany's more direct question, I actually have no idea how that set of decisions was made. Is this format more stable to handle than others? That is well beyond anything I know about directly. Some years ago, I mean, maybe 15 years ago by now-- I've lost track-- again, a little off-topic, but there was another historian of Cold War nuclear age stuff who put a petition together because-- speaking about Hanford, there was a move to-- let's see if I can get the story right. The local Department of Energy administrators were going to basically incinerate a bunch of, literally, garbage, mostly from the World War II era or early post-World War II times. And this other historian colleague of mine started a petition to say, please don't burn that stuff. Historians can learn so much about what it was like to live on the site from the crumpled up cigarette wrappers. It wasn't like the radioactive sludge. It was like signs of human encampments. It was like what did the workers there-- what was their life like? And you can learn a lot from going through someone else's garbage, right? So there's this effort to save the Hanford garbage, which sounds a little tongue in cheek. It was a robust effort to say, we need to learn about what it was like for all kinds of laborers and other staff on the site. So please don't burn their, at that point, nearly 50-year-old or 60-year-old garbage. And actually, I don't know whatever happened. I don't remember the follow-up, whether that succeeded in saving the historical garbage or not. But I found that an interesting effort. Think about the longest, earliest known, human stories that we still know anything about. And just to your point, they haven't been unchanging in interpreted meaning, imputed meaning, right? And that's over not even 10,000 years, maybe 5,000, on the order of 5,000. I mean, we don't teach the ancient Greek stories, say this was their meaning. We teach them to say, look at the plethora of meanings that people might then or might still make of them. And think about Talmudic commentary. Any tradition, they're treated as living interpretive traditions, often at least, precisely because a single, unambiguous meaning is not what people seem to reach for or identify. And now, multiply that by a factor of 30, right? Now, do that not for 10,000 years but for 300,000 years. I just find that just utterly mind-boggling. So anyway, I share your observation that it's interesting that that's what people-- I take that maybe as a sign of what we might call desperation, that they'll even go for that as opposed to any other, like Tiffany's saying, recognizably high tech, high modernist interventions. I can't open documents that I wrote in college because of technological creep. I was actually using Microsoft Word, whereas my teachers weren't. So there was actually a level of continuity with my machines and my software. And yet, I can't open documents from like the early '90s. Think about that level kind of just getting a message to persist and be readable over, again, tens of years, not hundreds or hundreds of thousands.
Right, exactly. Yes, no, I think I totally agree. If not literally invitations to dig, they're at least not very good at stopping people from digging. Maybe. Remember, our baseline since Chernobyl is approximately 30 years, 33 or 34. And again, I'm a little cautious of making the leap from 30 years to 300,000 years. So I guess I don't know. Right. Right, right. But also, I mean, I've seen some of these terracotta warriors. So clearly, part of the site has been not just investigated, but actually parts removed, moved around the world. So I don't remember the full details of it. It wasn't like the entire site was left untouched and pristine. Yeah, no. Again, I take that as a sign of the kind of desperation, the nature of the challenge, and the need to do out of the box thinking. But I'm not sure that they found compelling answers, even with these wild exercises. Yeah. And also, how many cults have survived for 300,000 years or 10,000 years, let alone-- just the mismatch in timescales I just find utterly mind-boggling. And that's a good example, where I think it's safe to say that at each moment in those 2,000 years, there hasn't been a single unified interpretation of what that body of practices and beliefs seems to mean. And that's a mere 2,000 years, let alone longer. Or how many of the interpretations even for people to take it seriously, how many of them line up with the original intentions or the hoped-for message back some generations ago? They're interpretive moving targets for people who make meanings. So that means the meanings will change. You think would be the Pyramids all over again, in other words, right? So build some kind of fairground-type marker. That could draw people in, as opposed to tell them to stay away. Yeah. That might buy you 1,000 years in round numbers-- Notre Dame, or the Durham-- there are, I'm sure, many structures in other parts of the world that I don't know about. 1,000 years, pyramids, 4,000 years. We're still missing some zeros on that. Yes. Unlike the Sphinx, of course, nearby, which lost a nose to overeager soldiers with machine guns. So Yeah. Any other parts? I mean, we're focusing a lot on this 300,000 years time scale because I keep harping on it because I find it so mind-boggling. Any other parts of the film, other themes or questions that are raised are brought to mind, things that were unexpected? So that's a really interesting question, Alex. I don't know what we mean by safest or what the metric would be. So I do know we have many colleagues at MIT who are very, very concerned about climate change. That's true broadly. And some subset of them are very concerned that if we're not trying to figure out safe and reliable nuclear power, then is there a longer term viability to address climate change to get off of such a dominance of fossil fuels? I'm agnostic. Just personally, to share, I just don't consider myself an expert enough on that to really understand the layers of trade-offs. But I know that there are plenty of our neighbors at MIT, let alone in the wider world, who would answer your question in the affirmative. And I don't know enough to give a thoroughly independent answer. I can understand at least where they're coming from. And I think the challenge then becomes, if that is the case-- and I say if, I don't know-- how do we learn from the past to not keep replicating the unintended consequences that have surrounded that body of work to date? And again, we're not talking about 1,000 years. 
We're talking about 50 to 70 years and the number of instances where safeguards weren't put in, where human error was nonetheless not sufficiently guarded against, where bad stuff happened even with the best of intentions. Then I think that makes others of my friends and colleagues concerned about saying, well, there's the answer. Let's just put all our eggs in that basket. Let's go charge right ahead. So I guess I sit astride a lot of these discussions. And I'm personally, genuinely just stuck, just really stuck because I can really appreciate the impetus, especially in terms of climate change and sustainability, and yet also really share the worries that people who knew better in the past didn't always do better. People didn't always know better. What don't we even know today to even worry about, let alone what do we do know and still haven't adequately really addressed in a systematic way? So I just feel paralyzed, I mean, personally, even though I know there are strongly held arguments on a whole range. It's not like it's one side or the other, but a whole spectrum. So I don't know. I mean, I don't know. If we had a flawless history for even a five year stretch, let alone 50, then maybe I'd have more reason for confidence. But personally, I see the flaws and the unintended consequences pretty sharply, just personally from where I sit, partly maybe because being a historian who focuses a lot on the Cold War, there are a lot of things that people did know better about back then and yet, nonetheless, citing national security, citing pressing demands and needs, would cut corners or not tell people the full story. It'd come out only later. So I guess I come to some of these things with a bit of a jaundiced eye. That's just me, personally. So that's where I get stuck. I will just simply say I get stuck. I mean, it's a great point, Lucas. And even stepping away from the nuclear side, I mean, again, you can find these very earnest debates about cost-benefit analyses and risk trade-offs about, let's say, ethanol or other efforts to focus on, say, fossil fuel use. Oh, well, you'd use bio-organic, this or that. And then you try to cost in all of the fossil fuel-based fertilizer that's needed or all the other things beyond the product narrowly construed. And then you find that, at the very least, it becomes murky. The answer becomes pretty unclear, at least to me. And I think that is all the more so amidst these uncertainties like, what do you do with all this nuclear waste that's not going to go away any time soon? Then we come back to the question about nuclear power. So that doesn't mean the answer can never ever be nuclear power. But I mean, the tendrils, the extent of this very extensive system, as opposed to the safety protocols of a given reactor design, I think the breadth of those-- of the range of questions makes me skeptical that we're going to find a clear answer, a clear metric that says, at the end of the day, yes, green light. The answer is plus 7. Go forward. That's where I get more skeptical. What are the competing incentive structures? And I'm not saying there's one clearly right answer. But we have to worry about what decisions are being made according, again, to what measures and what metrics. What are the incentives? Yeah. Why can't it still be used? Oh, that's interesting. Again, others on the call might have a better answer.
But I think the reactors that I'm familiar with are usually trying to do very specific kinds of reactions ultimately to extract excess energy. And not everything that's radioactive will contribute to that energy balance. So things become radioactive as a consequence of these reactions from which one can extract energy. But I think the most well understood, and as far as I know the most efficient, reactions that will produce the excess energy ultimately involve very specific kinds of ingredients, let's say, very specific kinds of target nuclei to start from. And so that's why I think it's called waste. The other things that become radioactive are not ones that one can nonetheless put back into the reactor and extract excess energy from in any scalable way. That was a very abstract answer. I'd have to learn more about the particular isotopes and particular reactions, which is not what I work on at all directly. But I think the basic point is that, for these fission reactions, not everything that's radioactive is highly fissionable. And yet, it still could be dangerous because it's still emitting junk that could hurt people, and plants, and the environment. I think that's the upshot. So only a small set of these isotopes will actually fission upon being struck by neutrons of a certain characteristic, and then release the excess energy. The others are shooting out alpha particles, and beta rays, and gammas, but not splitting a nucleus from which there'll be this large excess energy per splitting, per fission event. So these uranium-boosted armaments, these kinds of things? Yeah, yeah. I don't know Nixon's book. I remember reading at least some of the journalistic accounts about more recent concerns about uses of uranium-doped armaments, basically. Yeah, yeah. Good. Well, we can pause there. We don't have to run down the clock. We covered a lot of ground already. Thanks for joining the discussion. I thought that was really interesting. We'll go back to the regular format on Wednesday and thereafter, of course. And then we'll go from there. If you have any other questions, of course, please don't hesitate to email me or any of the TAs. I'll have my regular office hours Wednesday morning at 11:00 and all that good stuff. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_1_Introduction_to_Einstein_Oppenheimer_Feynman_Physics_in_the_20th_Century.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: We'll go ahead and get started. Hello, everyone. Welcome to 8225, which is also STS.042-- Physics in the 20th century. My name is David Kaiser. Today's class is going to be a little shorter than average. I just wanted to go over a kind of quick introduction for the course, an overview, some logistics, course structure and so on. And then we'll jump in with the first full class session next week. So I'm going to go ahead and share my screen. I have some slides just to help kind of orient us for today's session. So let's do that and that. And now, hopefully you can see my screen there. Some nods or thumbs up? So far, so good? Thank you. OK, great. So welcome to the new semester. This is a class I've taught many, many times. I've been teaching at MIT since 2000, which itself is now historical. It feels like a long time. And I've taught this course probably every other year, more or less, over that time period. So I've taught it at least 10 or 11 times or so. And I have to say, if I may say so myself, it's my favorite course that I teach. Hopefully, it'll be a positive experience for you as well. I think it's a really fun opportunity to sit with some material that might be familiar to many of you. We have many folks who are physics majors, many of whom are juniors and seniors. It's also hopefully an interesting kind of preview for students who either are not in the physics department, or are newer to MIT and so on. And I think it's just-- it's a kind of remarkable set of ideas and institutions and events and people that we get to sit with over the course of the semester. So it's a new adventure doing this remotely by Zoom, but I hope we get to really enjoy the kind of pretty wacky ideas and some unusual times and places that we get to immerse ourselves in as well. And I try to just gesture toward that with some of these opening images here on the first slide. So today, as I mentioned, it's not going to be a full-- I don't expect it to be a full class session. But I do want to give an overview of where we're going, and also, some of the kind of nuts and bolts and logistics. So anyway, the plan for today-- we'll talk about the course aims. We'll talk a bit about the instructors. Give the teaching assistants a chance to introduce themselves and say a quick hello as well. We'll talk about the course structure and the requirements of the course. And then I'll give a brief overview of the kinds of material that we'll be engaging with in more detail starting next week. So that's where we're heading. So here, I'm just taking the subject description right from the front page of the syllabus just to give you a sense of where we're heading over the term. The class is meant to explore the changing roles of physics and physicists over really not just the 20th century anymore. By now, we're going to be up to just about two centuries worth of material. For the first main class next week, I'll be starting with the 1820s, which it turns out is now 200 years ago. So really, it's an expansive view, not only the 20th century. We're going to quickly bring ourselves up into what we often call modern physics relativity, quantum theory, again, which might be familiar to many of you, might be new to others. 
And then we're going to zoom forward even beyond that early ferment in the early 20th century and look at much more recent developments in high-energy physics, in cosmology and astrophysics-- my own personal favorite topics. And we're going to be examining these intellectual developments, these conceptual and intellectual developments, at each moment trying to understand their kind of embedding in a very messy human world, in a changing human world. So part of our goal for this semester is both to learn or reiterate some pretty amazing ideas of modern physics-- might be new to some, might be familiar to others. We're going to sit with those ideas, but also do a lot more work in practice of embedding those ideas in an historical kind of development. And that means looking at institutions, at cultural features of how people have thought about physics and physicists, and how physicists themselves have been part of larger social worlds, political contexts, wartime and so on. And so over the course of this semester, we're going to be looking at not just a range of ideas, but a range of places in which those ideas have been pursued or cultivated or argued over or debated or rejected. So we'll start looking at Britain from-- in the 19th century. That's where we'll begin much of our investigation for this semester. We'll look at newly unified Germany in the 1870s, moving forward in time ultimately to the First World War, to the rise of Nazism, to the Second World War. Lots of big consequential worldly events which, as I hope we'll get to appreciate over the course of the term, had a really quite remarkable impact on the intellectual world of physics and physicists. And we'll bring the story up through the Second World War and into the Cold War and even beyond. So we have a long timeline really, as I say, the better part of two centuries by now to sample over. And I think a pretty nice broad range of ideas in physics as well. And so just pictorially, some of the early moments we'll be kind of thinking about are the turn of the 19th into the 20th century, when a lot of research was still tabletop scale. Here's a photograph of Pierre and Marie Curie in their laboratory in Paris from right around the year 1900, where the scale of apparatus, if not phenomenal, still felt kind of human-scaled. Also, a bit kind of removed from other elements of human experience. This was a laboratory pursuing esoteric questions in a specialized community. Before too long, as I'm sure many of you already know, many of those investigations began having much more dramatic worldly impact. And we'll look quite a bit in this course at the work on nuclear weapons, the use of nuclear weapons in the Second World War, a whole different arrangement between scientists, engineers, nation-states, and world affairs. So within not a long stretch of time, we go from photographs like the Curies over here to something like this where we have Robert Oppenheimer, with his famous pork pie hat, working side by side with General Leslie Groves of the US Army inspecting, in this case, the site of the very first test detonation of a nuclear weapon in July 1945-- just 75 years ago. Soon after that, as, again, many of you probably know, physics and physicists moved into a new kind of vantage culturally and politically. Here's a famous Time magazine cover of Einstein with his most famous equation, E equals mc squared, appearing eerily in the mushroom cloud of a nuclear explosion.
So the associations between physics and physicists were changing very quickly, especially after the Second World War. We're going to explore a lot of that over the course of the semester. And then zooming forward, we get to where the scale of apparatus no longer fits on a Parisian tabletop. But in fact, this is a photograph from what would have been the world's largest particle accelerator. Some of you might have heard of it-- the Superconducting Super Collider, or SSC. If you haven't heard about it, it's because it was never finished. This would have dwarfed even the Large Hadron Collider that is in operation in at CERN in Geneva. This was a machine-- the Superconducting Super Collider-- that was under construction in Texas not too far from Dallas. And it was actually halted in 1993 after the engineers had already dug a 54-mile circumference tunnel underground-- roughly three times longer than the Large Hadron Collider. So the scale of our apparatus is no longer fitting on a tabletop. And also, once you need $8 to $15 billion to do your experiment, now you have a different place in the world of politics and world affairs as well. And likewise, the story didn't end, thankfully, with the cancellation of the Super Collider. We have a shift to other kinds of still large-scale research. For example, to push to billion dollar satellites like the Planck satellite shown here in its study of the very earliest moments of the universe. So that's in kind of a snapshot, kind of picture form, the kinds of terrain that we're going to cover over the course of the term. And I'll say more about that soon, but I just wanted to give you a preview. OK. Let's talk some nuts and bolts for a little bit. This course has no prerequisites. As I'm sure you know, it's a Communications Intensive subject for physics majors in course eight. But there's no prior coursework required. It's not limited to students in the physics major. It's open to first-year undeclared students and so on. There's no prereqs. Our aim in the course is really to improve written communication skills rather than to worry too much about things like problem sets or problem solving, which are dearly important to me. But this course has a different kind of center of gravity, so to speak. I should say I've had-- in the years past, I've had high school students take the course and do very, very well. Really, really no prerequisites. I want to emphasize that. I want to emphasize it especially because some of the very first readings for the term that you'll have, even in advance of this coming next lecture, might look pretty technical or pretty off-putting. Some-- not all of them, but some of the readings will have lots of equations. They'll look like and often be taken from old textbooks-- might be on topics you haven't had a chance to have a different formal course in and so on. And I want to emphasize that that's OK. If this looks strange, partly it's designed to look strange, actually. So the strangeness is meant to trigger some questions about what did people think they were doing when they read texts like those? So if they look either very hard or unfamiliar or technical, that really is kind of part of the point. And part of the point for this particular class is not to worry about reproducing those calculations or understanding every single step of the mathematics. 
I like to joke that my very many friends in the physics department, it's their job to help you all learn how to calculate, and many of you, I'm sure, calculate very, very well already. And that's incredibly important. This class has a different set of goals and aims. And so I want to really emphasize not to be kind of frankly scared off if some of the readings look, at first blush at least, a little overwhelming. They should look strange. And if every step in the derivation is not so clear, that's OK in this class, especially. So we're going to be focusing on a range of ideas in modern physics and also their changing contexts-- as I mentioned, many kinds of contexts. And it's, of course, always, always OK to ask for clarifications. The formalism is dear to my heart. Hopefully, dear to many of yours. I'm not saying ignore the equations of mathematics. I'm just trying to emphasize not to worry about getting hung up if at first, especially in a first reading, things look very strange. For a bunch of the lectures-- not for everyone, but for a bunch of them I'm actually writing up separate little short lecture notes in addition to these lecture slides to go through some more intermediate steps of derivations and so on. Those are always going to be optional. It really is just to help as a kind of additional resource. And also, we'll have office hours. And you can always make an appointment to meet with me or your teaching assistants directly. So I just want to emphasize the aims of this course, we're going to work on written communication. And we'll get to do it with some, I think, really juicy amazing fun material. That's our overarching aim for this semester. I like to think of this class as a kind of roadmap. And it turns out, roadmaps work in at least two directions. It could be a map of where you're heading if you're new to MIT, or new to the physics department, or just curious and coming in from a different field. It can be a kind preview of some pretty awesome cool stuff that could be ahead of you. It could also be a kind of look in the rear view mirror for juniors or seniors in the physics department. You will have seen some of this material in other courses already. But this is a chance to begin kind of seeing perhaps how different pieces fit together to synthesize a map of it might have been kind of separated or disparate parts until now. So I like this idea of a roadmap either whether you're starting a journey or looking back how you got to where you are now. Hopefully, this course will help with some of that as well. OK. So let me pause and ask for any questions. If you do have questions, please try to raise your blue hand. I'm going to ask the teaching assistants who are co-hosts of this Zoom session to recognize you, and then they can ask you to unmute yourself. Any questions so far? I hear a resounding silence, so I'm going to charge ahead. But if you do have questions, again, please feel free to use the chat or the blue hand-- raise hand icon. Oop. OK. So for this part-- sorry-- I want to just take a few moments and introduce the whole team. I'm very excited to work with I think a really terrific team of instructors for this class. It's a big group. We have four teaching assistants in addition to myself. So I'll just say a little bit about my own research. I am a professor at MIT. I teach both in our Program in Science, Technology, and Society-- STS. 
In that department, I conduct research and work with students in the history of science, and particularly, the history of modern physics. I've written a few books. I've edited other books. A lot of the material that we'll cover in this semester-- this term comes, not surprisingly, from things that I have a particular interest in, I've written about, or very dear friends of mine have written about. So I'm sort of-- I like-- I'm very lucky to get to be immersed in the history of modern science when I wear my STS cap. I'm also a member of the Physics Department, a Professor of Physics. Up here, I have a picture of my research group. This is actually a somewhat old photo. You probably won't recognize the faces because even the UROP students in this photo have all graduated by now. I work very closely with Alan Guth in our Center for Theoretical Physics within the department. We work on many aspects of early universe cosmology. We'll actually come to some of those topics near the end of this semester because I couldn't resist. I love it so much. I've also been working for a number of years on foundations of quantum theory, including new experimental tests of topics like quantum entanglement. We'll talk about some of that this semester as well. So on the physics side, I am especially interested in theoretical early universe cosmology and astrophysics, and also, various aspects of quantum theory, and even some interesting recent work on quantum information science. So that's where I'm coming from. Now let's get to what are you going to do in this class? The one thing you're going to do is read-- not a ton. We've worked very carefully to make the reading list, we think, really kind of curated and not overwhelm students with a large page count. And so the best thing you can do is read the generally rather brief reading assignments in advance of the class with which they're associated. It'll help if you've been able to do the readings ahead of the class session. What are the actual subject requirements in addition to keeping up with the reading? As I mentioned, this is a CI course for physics majors. So our main emphasis is on written communication. It will not be on oral communication partly because there's 100 of us by Zoom. So it really will be on written communication, which had always been the overwhelming majority of the course structure even in previous years. So like nearly all CI courses, we have a number of pages of writing over the course of the semester. The way we do it in this term is to break it up into a number of assignments that kind of build up. They build both in length, but also in the nature of the assignment, in the type of writing that we'll be working on, and the nature of, say, the kinds of sources we'll be working with, or the nature of the argument we want to be able to articulate. And these really quite amazing teaching assistants are here to work very closely with you on each of these papers. So we're not going to expect you to do it every step on your own. So you can see now the due dates, the structure. The first-- of course, we'll hand out the assignments well in advance of each of the due dates-- the actual kind of prompt. So pretty soon, we'll finalize the prompt for paper one. We'll make sure it's easy for you to find on the Canvas site. That one will be four or five double-spaced pages-- really, a handful of paragraphs-- not a very long essay. That will be due electronically near the end of September. 
The second paper we do a couple of weeks after that-- a little bit longer-- six or seven double-spaced pages, and a little more complicated range of types of sources to make use of as you build your argument. Because it's a CI course, like usual, we'll have a mandatory rewrite of that paper two assignment. That'll be due-- handed in separately several weeks after the first iteration. And then finally, the final paper is due on the last day of classes, December 9. So please note, there is no midterm. There are no problem sets. There's no final exam. The student work you'll be handing in will be exclusively these essays-- this series of essays. And again, we'll be working with you along the way. You can see the kind of distribution here about how the different papers contribute to your overall final course grade. Obviously, if any questions come up along the way, of course, please don't hesitate to ask me about it or your teaching assistant. That's the stuff you'll be working on concretely over the course of the term. So again, I'll pause. Any questions on any of those items? So far, so good? And so good. So Sava asks a question. There's zero difference for the mechanics of the course-- grading, homework, or content-- between STS and course 8. It is literally one class. It's just jointly listed. And in fact, I don't even know-- I mean, I'd have-- it would take work for me to figure out which version a given student's actually registered in. We just-- we treat it as one class, as one set of materials, one set of course aims. So as far as we're concerned on this shared screen, it's one class. We're going to work on this cool stuff together. The tricky part is how the different kinds of requirements get counted for different kinds of students. But it's one class, one set of aims, one set of materials. Any other questions on this? All excellent questions. Yes. Definitely, flood Sean's inbox. I see someone helpfully put Sean's email address in there. OK. I'm going to move on. If you have more questions, again, please keep chiming in. But I think I'll go back to screen sharing. Those are excellent questions to get us going. OK. So hopefully, you can see my screen again. So last part. Again, I won't take too, too long. We'll finish up early today. I just want to say a little bit more about the kinds of material that we'll get to really kind of chew on, get to really explore over the coming weeks. So we'll be starting with a 19th-century legacy. And, in fact, starting fairly early in the 19th century, going back to the 1820s and '30s-- not for too long. We'll kind of gallop forward in time pretty quickly. But the main question we'll be asking in that unit is what did people think the world was made of? What did they think it was their job to study? And what counted as a compelling explanation? And equally interesting, I think, or equally important, who were the kinds of people who did that, and where did they do it? And what kinds of settings or institutions? How did they work together? What did they think their job was? So of all the many topics we could focus on within the study of the natural sciences or the physical sciences in the 19th century, we're going to look fairly narrowly at examples from electricity and magnetism and optics, partly because that becomes super important for things like relativity, which we do want to get to fairly quickly. And also, because I think they're just fascinating in their own right.
So we'll be talking about some names and some ideas that, again, are likely familiar to many of you. We'll start with some discussions of Michael Faraday and his ideas about lines of force and the luminiferous ether. We'll spend a long time talking about the ether. We'll talk about James Clerk Maxwell. You're going to read a short excerpt from his famous treatise on electricity and magnetism. We'll talk about William Thomson, later known as Lord Kelvin, and others. We'll also talk about how those folks got into their line of work, especially the generation of Maxwell, Thomson, and some of their own colleagues. So what was it like to become a young, say, mathematical physicist, or eventually theoretical physicist, during the second half of the 19th century? How did they do that? Where? Why? According to whose criteria? Another thing I want to really emphasize, and this one I find incredibly fun, is that many of you probably own Maxwell's equations on a T-shirt-- always in fashion. We still use Maxwell's equations. They're so-called his equations. But what I want to sit with for the first few classes is how remarkably differently Maxwell and his contemporaries interpreted those very same mathematical symbols. So we still put his equations on our T-shirts, so to speak. Yet, what we think they mean, or what we think they're good for, what we think they're talking about, what they tell us about the world is almost exactly opposite to how Maxwell himself imagined them, or how generations of Maxwellians treated them. So we have this kind of continuity in some instances in the equations we use, sometimes even the T-shirts we wear. And yet, what we think those equations or those ideas tell us about the world, those actually have been anything but static or constant over time. I find that just mind-boggling, to be honest. I love that. We're going to have a bunch of examples of that throughout this whole semester. And one of the first kind of juicy ones we get is with Maxwell's equations. So we'll sit with Maxwell and electromagnetism for the first couple of lectures-- the first few sessions. OK. Before too long, we'll get into what is often called modern physics-- the quite dramatic intellectual changes taking place, roughly speaking, in the first 25 years of the 20th century-- roughly, with some error bars on that estimate. So we'll talk quickly about work by Albert Einstein, and indeed, a circle of colleagues, what became known as the special theory of relativity, and even some general relativity-- very near to my heart and Tiffany's heart, maybe to many of your hearts. How did Einstein come to these ideas? With whom? To what end? What was he doing? Literally, what path was he walking each day as he began contemplating space, time, and matter? We'll then shift to early quantum theory, as the next main kind of set within this unit. And again, some remarkable familiarity, some equations we still use 100 to 120 years later. But what we think those equations are telling us, that actually has not been so constant. Where people were working or coming up with the ideas or with whom they were interacting is also not so constant. So we'll talk about some shifting institutions, like the Physikalisch-Technische Reichsanstalt. That reminds me to apologize to any people on the call who actually speak German. I speak incredibly bad German, but I love trying. We'll have-- especially this first unit-- a lot of German that I will likely mangle.
If you know zero German, then you're actually closer to me than you might think. But nonetheless, we'll have some German stuff. Don't worry if it literally sounds foreign or unfamiliar. The point is there were some new institutions, new priorities, new national scale investment in things like physics and related fields of engineering. What role did that play in the kind of heady years of the early onset of quantum theory and modern physics? And then where do those ideas lead some of the leading thinkers? We'll talk, of course, about things like Schrodinger's cat, or quantum entanglement, or Einstein-Podolsky-Rosen, and many, many really quite delicious and delightful elements of quantum theory that I still wrestle with and get to play with with my own students and colleagues to this day. So that's the first kind of main unit. We'll have a quick introduction with this 19th-century legacy, really to set the stage for what we might call the emergence of modern physics as was recognized even in its day. That's the first main unit and the second unit of class. Then we're going to shift again in remarkably rapid order. I mean, to an historian, the time delay was vanishingly short-- a matter of 10 to 20 years, depending on how one counts. Some of those previously very heady, very esoteric or abstract ideas about atoms, about parts of atoms, about nuclei and nuclear forces, they began having an import and a priority well beyond professional physicists and chemists or other engineers and so on, in ways that, again, I'm sure are, at least in part, familiar to you. We're going to sit quite a while with the development of nuclear weapons, nuclear physics more broadly, but especially the weapons projects during the Second World War. The unbelievable transformation of the kind of research and development infrastructure, which really starts setting a kind of blueprint for many features that we would recognize today, even though they were forged during a war that itself ended 75 years ago. So we see a change in what it means to try to do physics, for whom, to what ends, with what kind of support, and with what kind of institutional settings, leading to, among other things, the use of these very new types of weapons, again, just 75 years ago, against the Japanese cities of Hiroshima and Nagasaki in the summer of 1945. So that second main unit-- third unit of course-- the second kind of big juicy thick unit that we'll sit with for quite a while is the kind of new enrollment, the new engagement of physicists, chemists, various kinds of engineers with the nation-state in times of war. And what did that mean for the ideas of physics, for the population of physicists, for the institutions of physics and so on. That's a big-- there's a lot to try to really understand, including a lot of, again frankly, very fascinating cutting-edge science and engineering, but it was being done in very specific contexts toward very particular ends. So we'll spend some time around middle of the semester on that material. The last main unit is then expanding outward from the end of World War II. Here, I was a little more selective. I wanted to cover topics that frankly I like most. And that means the topics are very much biased towards particle physics, high-energy physics, and astrophysics and cosmology. Because, first of all, they're just objectively the most interesting. That was a joke. I just think they're actually awesome. 
Those are the areas that I am most familiar with, both historically and also in terms of more contemporary research. And so, again, it's an opportunity to take a sample, to explore some material that might be quite new to many of you. You might not have had a chance to have a full semester course on quantum field theory or particle theory or contemporary cosmology. Some of you might have. But there's, again, no need to have already seen this material. And we'll be able to sample some of these still changing ideas within, again, some still changing institutions. Who wanted to pursue those questions? Why? To what ends? And in what facilities? So now, we get to a place in time like the Large Hadron Collider at CERN. The device was completed in the first decade of the 21st century. You see an aerial shot here. What brought us from an era when particle accelerators were still about the size of a person, more or less, to when they actually crossed national borders, where the accelerated protons cross the Swiss-French border billions of times per second? So we'll end the course looking outward toward the cosmos that way. So here's the very last slide. I always like to say this course has one actual goal, which is to make sense of this quotation. I never tire of repeating this quotation. It was published in Harper's Magazine just months after the end of the Second World War, observing what had become a real sea change in the place of physics, and especially physicists, in broader society. So this kind of witty, kind of pundit columnist wrote in the spring of '46, that "physical scientists are in vogue these days. No dinner party is a success without at least one physicist." And really, I want to know: what happened? How can so many people have dinner now without inviting us? That's really why I put this whole course together. Together, we'll try to answer that question. That's only modestly tongue in cheek. Any questions about any of that material, course logistics, course overview, any of that kind of stuff? All righty. Well, thank you everyone. Good to see you, at least in small little squares. Stay well. Good luck with the rest of the start of the term. And I look forward to seeing you next week. Stay well, everyone. Take care. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_23_The_Birth_of_Particle_Cosmology.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: So today we're going to be talking about the kind of invention or the hybridization of a whole new subfield within physics that now is often called particle cosmology. It happens to be the field where I spend a lot of my time. I think it's just fantastic. But it's actually a fairly new invention. This whole field of study came together fairly recently in historical terms. And so it gives us an opportunity to ask both about the ideas that help drive this kind of scientific change, but also, as we've been doing really throughout the whole semester, the kinds of factors or ingredients beyond only cool ideas or new experiments that also can play a very substantial role. So we'll talk about institutions, and politics, and some broader shifts in the field, out of which came this new, now quite flourishing subfield. So that's our job for today. So particle cosmology today is, objectively speaking, just the coolest part of physics. That's obvious. I don't have to belabor that. It is actually very cool, and it's also flourishing by any measure. It studies the smallest units of matter, the fundamental forces and elementary constituents of matter. And it asks about what role they might have in shaping, really, the fate of the entire universe. So it's this melding of the very, very tiny elements of matter and the role they might play on literally cosmological scales, from the Big Bang to today. Here's an example. We'll talk more about images like this actually in the next class session, next Monday. This is a series of images of what's called the Cosmic Microwave Background radiation, or the CMB, and we'll talk more about that. It's one of the best examples we have of how we now use our tools from quantum theory and elementary particle physics to try to make sense of and, in turn, be constrained by measurements of characteristics of the entire observable universe, this fascinating interplay between the very large and the very small. The field is doing pretty well these days by other measures. Its annual budget just within the United States is on the order of $1 billion a year, roughly. That's if you combine spending in this subfield from the National Science Foundation, the Department of Energy, NASA, and other federal grants. I think a more telling measure of its kind of state of health these days is that there are on average two new physics papers, two new preprints posted to the central preprint server arXiv.org every hour of every day just in this subfield. So during this class session, there will be, on average, three new preprints posted just in this subfield. And that doesn't take a break for nights and weekends. That's averaging over 24 hours a day, seven days a week. So it is really a booming, booming subject of study. And I find that all the more astonishing since this field literally didn't even exist 45 years ago. So this is a $1 billion area of study with devoted colleagues all over the world, not just here in the United States. And yet, it's of fairly recent vintage in terms of the kinds of time scales we've been looking at this whole semester. So how did this come to be? 
There's a very compelling story-- the story that we will take some time to look at today-- that this new field really emerged in the mid-1970s because of changes in ideas, because of new research insights coming mostly from high-energy physics or particle physics, and that somehow this set of ideas that bubbled up in the mid-1970s, which we'll look at today, compelled researchers to change their whole field of study and in the process to invent this new subfield-- that it was somehow natural to start asking questions at the interface between particle physics and cosmology. And so there's a lot going for that explanation. It's not flagrantly false, but it's also, I think, really, really quite incomplete. And so if we start using the kinds of tools that we've been working on together this whole semester, I think we can dig in a bit more and try to uncover some of the additional factors that really were at play, especially in the early years of this field. So of the sort that we've been looking at throughout the term, we'll be talking about some changes in institutions, in training, in broader politics, and the support for science, and that these shifts were deeply consequential. And they weren't important to the exclusion of ideas from particle theory, but the ideas from particle theory, as cool and awesome as they are, I think, really were not the whole story. They were pretty insufficient to get us to where we are today. So we're going to, as usual, try to keep all these different kinds of facets in mind. So our three steps for today's class-- the first is we want to set up what were the kinds of questions that were occupying physicists in various branches of the field before this merger took place, before the emergence of what we would now recognize as particle cosmology? And as we'll see, one clever, one helpful way to think about that is to focus on the question of mass. How did physicists in different subfields in the 1950s and '60s think about this very common-seeming notion of mass? Why do objects, say, resist changes in their motion? The m that's in Newton's second law of F equals ma. Was there a deeper origin or way to make sense of this very common notion of mass? And as we'll see, two different communities or subcommunities were really focusing on that in quite different ways in the 1950s and '60s. And that helps us trace the precursors for this overlap field of particle cosmology. Then we'll take a look at some of the broader institutional shifts that were also coming to be very dramatic, and some of them quite unexpected, by the late '60s and early 1970s that also helped propel this new merger of fields. And then we'll zoom back in to see what were some examples of the kinds of research questions that now seemed obvious or natural to ask for members of this new hybrid area in the wake both of the intellectual changes that we'll look at here, as well as some of these broader institutional shifts. So that's our three steps for today. So as I mentioned, I find it helpful to think about the precursors for particle cosmology by asking about this very humdrum topic of mass. Why do objects have mass? Why do objects resist changes in their motion? And this actually became a very live question, a very challenging question, but in pretty distinct ways during the middle decades of the 20th century. 
So in one of these subfields, a pretty tiny subfield at the time-- experts in gravitation and cosmology-- the question about the origin or the impact of mass was often framed in terms of what was called Mach's principle. That's the name for that same Ernst Mach whose work we looked at way back in the early part of this semester, the 19th-century polymath, whose work actually helped very directly inspire the young Albert Einstein. It was actually Einstein who named this, as we'll talk about in a moment. It was Einstein who named it Mach's principle because Einstein was inspired by some of Mach's writings on this. So Mach himself didn't call it Mach's principle. But it was attributed to Mach by Albert Einstein as early as 1918-- so from the early days of the study of general relativity. So the idea was-- we could frame it as a question. Do the local effects that we attribute to mass-- do the local inertial effects, resistances to changes in motion for everyday objects-- do those local inertial effects actually arise from very distant gravitational interactions? Do we have to think about the distribution of matter throughout the whole universe in order to make sense of why this block slides down this inclined plane at a certain rate? Local inertial effects, are they somehow tied to cosmological distributions of all the other stuff? After all, gravity is a long-range force, both in Newton's gravity and even in Einstein's framework. And so the idea was should we be constantly paying attention to the global distribution of matter and energy when we try to make sense of local phenomena associated with mass? So that was a very challenging question, and we'll say more about it soon. But that was one of the ways that the problem of mass took form, was given a concrete form, for these specialists who were pursuing things like Einstein's general theory of relativity in the 1950s and '60s. There was quite a different set of conversations happening around the same time, but now among the community that was focused on nuclear and particle physics, what we'd soon come to call simply high-energy physics. This had nothing to do, at least on the surface, with Mach's principle or even with Einstein's general theory of relativity. These experts were focused on a very different set of puzzles or challenges, and this picks up more directly from the material we were talking about just at the very end of Monday's class. Already by the late 1950s and with greater acceleration throughout the 1960s, a number of high-energy theorists were trying to put together these highly symmetric models to account for things like the nuclear forces. We looked at one example of that at the end of class last time, Quantum Chromodynamics, or QCD, which was really coming together in the early to mid 1970s. There were other instances of that, cousin models, similar kinds of models that were getting a lot of attention throughout the 1960s for the other main nuclear force, for what's called the weak force rather than the strong force. That is to say, the force that causes things like radioactive decay. There were a lot of ideas and early suggestive experimental evidence already by the 1960s that the weak nuclear force was somehow mediated by-- arose from-- particles exchanging certain kinds of force-carrying particles-- again, analogies to the photon-- in this case, what we would now call the W and Z bosons. The point is there were new kinds of matter, at least hypothesized. 
And then when particles tossed these back and forth, that would give rise to things like radioactive decay. But the challenge became very clear very quickly. These nuclear forces are self-evidently of short range. Unlike gravitation or electromagnetism, which, in principle, can extend over arbitrarily long distances, the nuclear forces really exert themselves across nuclear dimensions, a very tiny fraction of the size even of a single atom, let alone macroscopic scales. And one of the most obvious ways to account for that at the time was to assume that the force-carrying particles that were responsible for those interactions were very massive. It would be unlikely for them to travel very far as virtual particles because, after all, they have to pay back the energy they've borrowed from the vacuum. So if they can only travel so far-- I should say, if they have a large mass, they have to borrow a substantial amount of energy from the vacuum as virtual particles. So they have to have a correspondingly short delta t, over which they pay it back. They can only travel so far. So the idea was you could have finite-range nuclear forces if these force-carrying particles had a very large mass. That would make it very unlikely for that force to be felt across a very large distance because of the whole set of ideas about virtual particles and the uncertainty principle. All well and good, but the problem was that with these new fancy, highly symmetric models of the nuclear forces-- the weak force, and also, as we saw, the strong force of quantum chromodynamics-- the symmetries these new particles are meant to enforce are violated if you give those particles a mass. So you can have one or the other. You could have a short-range nuclear force that does not have any of those fancy symmetries. So that broke half of the motivation for it. Or you could keep all those fancy symmetries, but give up this notion of a short-range force. So this was a pretty substantial challenge. It got lots of theorists very exercised over the 1950s and especially the 1960s: why do these particles have mass at all? Can we account for the mass of these elementary particles in a self-consistent way? Because these other sets of puzzles didn't seem to fit together. So the question of mass turns out to have been on many specialists' minds in the 1950s and '60s, but as embedded in quite different-sounding conversations. So let's look a bit more at some of the proposed solutions to this question of mass from within these two quite separated communities. So right around the same time, often in the same year, often, in fact, published in the same journals, members of these two quite different communities-- the gravitation and cosmology crowd on one side and the high-energy particle physics community on the other side-- they were proposing solutions, or at least hypothetical solutions, to these problems of mass-- but, again, within their own idiom. So even though the ideas were bubbling up around the same time and often published in the same journals, they still were embedded in quite different research traditions and conversations. So on the gravity side, one of the most significant elements of research in this topic was put forward in 1961 by, at the time, a very young Carl Brans. This is part of Carl's PhD thesis that he worked on with his PhD advisor, Robert Dicke. So this became known as the Brans-Dicke theory of gravity, named for Carl Brans and Robert Dicke. And they published it in the Physical Review in 1961. 
Their idea was actually to try to go back to this notion of Mach's principle and more thoroughly account for that within a quantitative theory of gravity. So they wanted to modify Einstein's general theory of relativity in a very specific way to try to address this question of mass as it had been articulated around Mach's principle. So their idea was to introduce a whole new kind of matter, a new kind of particle in nature. They labeled it with the Greek letter phi. And now the idea was that instead of having a single constant unit strength of gravity labeled by Newton's constant capital G-- that's the G that's in Newton's force law, F equals G M1 M2 over R squared-- that unit strength of gravity. You may remember that in Einstein's general theory of relativity, there's an exact same constant that gets carried over by design because Einstein wanted predictions from his theory to match smoothly with the Newtonian predictions in the appropriate regime. So even in Einstein's theory, the unit strength of gravity is set by some universal constant, the same constant G. The same here and on Andromeda, the same today as it was a billion years ago. So what Brans and Dicke were wondering was, what if that unit strength of gravity is actually not a constant? What if the strength of gravity could vary across time and space? So one way to represent that variation was to say that this unit strength of gravity, Newton's so-called constant, actually could vary because it was actually a dynamical field. It was some field extended across space that could vary over time. And whatever its value happens to be here and now is what we interpret as this local strength of gravity. So it was an attempt to modify Einstein's theory of general relativity actually in response to this challenge of Mach's principle. So what were they doing? If you look slightly more quantitatively, in Einstein's version, you can represent Einstein's general theory of relativity by an action, in a sense, by writing down a Lagrangian. And I won't go through all the details, of course, right now. I'd be glad to chat more. But the relevant term in Einstein's equations looks like this. This R is one of those geometrical objects that he had to learn about from his friend Marcel Grossmann because Einstein had cut too many of his math courses in school. We talked about that. So this is the geometer's tool of quantifying the warping of space and time. It's called the Ricci curvature scalar. And multiplying that is this constant, this unit strength of gravity in Einstein's theory. So what Brans and Dicke do is say, well, instead of this unit constant, let's replace G by 1 over phi. So now 1 over G becomes phi in the Lagrangian or in the action for gravitation, where in principle phi, the local strength of gravity, could change across time and space. So if this field phi could vary, if, in principle, it could be changing either over space or over time, then they also knew they had to take into account, within this total energy budget, the Lagrangian, the kinetic energy associated with that field changing-- either gradients in space or changes over time. So the second term they had to add in-- this part here in red-- is basically the kinetic energy associated with variations in that new dynamical field. And they put in very cleverly this extra dimensionless constant, a fudge factor, that they labeled by the Greek letter omega. 
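Just to put that on paper-- this is my own reconstruction of the standard textbook form of the Brans-Dicke action, not something copied off the lecture slide-- the gravitational part looks roughly like

S = \frac{1}{16\pi} \int d^4x \, \sqrt{-g} \left[ \phi R - \frac{\omega}{\phi}\, g^{\mu\nu} \partial_\mu \phi \, \partial_\nu \phi \right] + S_{\text{matter}}

Here the phi R term replaces the usual R over 16 pi G of Einstein's theory, so phi plays the role of 1 over G, and the second term is the kinetic energy for the field, weighted by that dimensionless parameter omega. 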
And this was basically to control how much their version would depart from the ordinary behavior of general relativity. The idea is that if omega is just a number, just some real number, a positive integer-- or not integer, positive number-- if omega is a very small number of order 1 or a fraction of 1, then, in some sense, it doesn't cost very much. The kinetic energy is multiplied by a small number. So the field varying across space and time wouldn't cost very much of the whole system's energy balance, so it would not, in principle, be difficult for the field to actually be really quite wobbly, that it could vary quite significantly across space and time. That would be like saying you can have quite dramatic differences in the local strength of gravity because this field phi could be wobbling all over the place because it wouldn't cost much from the energy balance. On the other hand, as you tune up that parameter, the simple number omega, as omega gets larger and larger, then it costs more and more to have the field vary at all. And so, in some sense, you're stiffening up the trampoline. So as omega becomes larger and larger, the field is much less likely to vary either over space or time. And in the limit that omega becomes arbitrarily large, then phi, in a sense, can't afford to vary at all. The kinetic energy cost is too high. And so, on average, the field doesn't change at all. If the field doesn't change at all, it acts like a constant. So you get back to the Einstein-like limit. So they had this very clever fudge factor, a coupling constant, so that, in principle, the local strength of gravity could be changing all the time. But the amount of that variation could be controlled by this one new parameter in the theory. And as omega becomes large, the behavior reduces to the kind of constant strength of gravity of the original Einstein theory. Now, how does that help with Mach's principle? The idea is that this new form of matter, this new field phi, is extended throughout all of the universe through every nook and cranny of space. All of matter interacts with phi. It's almost like a new ether, you might say. It's everywhere. And then the fact that phi is extended everywhere and is interacting universally is what they thought would take into account Mach's idea that the local inertial effects, the effects of the local strength of gravity right here, right now, really are attuned to or sensitive to the broader cosmic distribution of matter. So matter in the furthest away galaxies could be affecting the behavior of phi. And meanwhile, phi very locally affects local inertial effects. So this would be a way to incorporate Mach's principle going beyond even just Einstein's version. OK, so the idea was there's a problem of mass. Why do local objects behave with the mass that we observe? Why do we attribute certain inertial effects to local objects? And the suggestion here by, at the time, a very young Carl Brans and his advisor Robert Dicke was a new form of matter. They'll label it by the Greek letter phi. It extends to all of space, and everything else interacts with it. OK, that's hypothetical proposal number 1. Quite separately but right around the same time, there was a different conversation going on among people who were specialists in nuclear and particle physics and, to some degree, in solid-state physics. And that was this question about, why do certain elementary particles give rise to a short-range force if we can't accommodate a mass while keeping this symmetry? 
This is now back to this question of things like the nuclear forces. Could you have nuclear forces mediated by the exchange of particles? Could those particles themselves be very massive? So they don't go very far-- so a short-range force-- and yet still respect the very symmetries for which people had invented those particles in the first place. So here was, again, a very clever suggestion coming first actually from Jeffrey Goldstone. Some of you might know Professor Goldstone. He's an emeritus professor of physics here at MIT. He's still very active-- and then independently a few years later by people like Peter Higgs and, in fact, many other theorists along the way. All this work was bubbling up between 1961 and '64, again, often being published in that same journal as the Brans-Dicke work but embedded in quite a separate conversation. So their approach was not to tinker with the kinetic energy like Brans and Dicke were doing, but actually to study a new form of the potential energy. So they, again, introduce a new hypothetical form of matter. They, again, consider a scalar field-- let's say, a field with zero spin, zero intrinsic angular momentum. They again label it by the same Greek letter phi. But now they focus on the potential energy that might be stored in this hypothetical field. And as they showed-- in fact, Goldstone was really among the very first to show this-- if you adopt a certain kind of characteristic shape for that potential energy function, then you could accomplish what they came to call spontaneous symmetry breaking. So this picture is actually taken from Goldstone's very first article in 1961. It doesn't look so special. You've probably seen similar things yourselves. It looks like basically a double-well potential. So the idea was, at the level of the governing equations for these nuclear forces, you would continue to assume that these force-carrying particles really were genuinely massless. They had zero mass, just like the photon from electromagnetism. So that would preserve all the symmetries. The equations governing the nuclear forces could retain all the fancy symmetries that these new particles were introduced to enforce at each point in space and time. And you wouldn't put in any symmetry-breaking terms by hand. You'd leave those particles massless. However, you'd add in a new additional form of matter even beyond those force-carrying particles. It's what's now called the Higgs field. Though, really, many people were working on similar ideas. That separate field isn't responsible for the nuclear forces. It's responsible for giving everything else the masses that we measure, including those force-carrying particles. So the idea is this. Here's an example of that double-well potential. The equations governing the full system-- the nuclear forces plus this coupled scalar field phi-- retain all the symmetries that seem to be so important for the nuclear forces. In this case, as a toy model, it'd be like a left-right symmetry. So the governing equations don't care whether the field ultimately ends up in this local minimum of the potential or this local minimum. The equations are perfectly symmetrical, in this case, by a simple left-right symmetry. You can make that more fancy with more involved symmetries of the nuclear forces. So the equations respect the symmetries. However, this is a dynamical field. At some point, it will settle into a local minimum. The system will minimize its energy. 
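As a concrete toy version of that picture-- this is my own sketch of the standard textbook double-well for a single real scalar field, not the exact figure from Goldstone's paper-- you can write the potential energy as

V(\phi) = \frac{\lambda}{4}\left(\phi^2 - v^2\right)^2

The function is symmetric under phi going to minus phi, but its minima sit at phi equals plus or minus v rather than at phi equals zero, so whichever minimum the field settles into breaks that left-right symmetry even though the equation itself respects it. 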
That's how it will seek its equilibrium. And so the system will settle in either to this local minimum on the right-hand side or this equivalent local minimum on the left-hand side-- same energy, so neither one is intrinsically preferable. And there's a 50/50 chance. So the idea is that the solutions to this dynamical system will break the symmetries that govern the equations themselves. That's why this became called spontaneous symmetry breaking. The governing equations maintain the symmetry, but the symmetry is broken spontaneously when the system relaxes to some lowest energy state. Now, why would that help? Because they assumed, much like Brans and Dicke assumed, that this new hypothetical state of matter, this new field phi, stretches through all of space, through every nook and cranny of the universe. And all the rest of matter interacts with it. So now, once this scalar field-- this what we now call the Higgs field-- gets anchored or gets stuck in one of its local minima, it acts almost like molasses. It actually is stuck at a nonzero value of the field. Instead of being stuck at the origin, it's stuck at some nonzero value. If these other nuclear force-carrying particles, for example, interact with phi, then, all of a sudden, they have to drag this field around. They start acting as if they have a large mass. At the level of the equations alone, they have zero mass. They should be exactly as massless as the photon. But unlike the photon, these newer force-carrying particles interact with this new scalar field. When the scalar field gets anchored to some energy-minimizing value, that changes the effective dynamics of all the fields that couple to it. And now they lumber across space as if they have a very large mass. It's an induced mass coming from this spontaneous symmetry breaking. So now, again, you can have your symmetries and your short range. You can have your cake and eat it too-- very fascinating, a lovely idea being introduced right around the same time as Brans-Dicke, also as one way to try to get to this question of why do objects have mass. So these two communities saw very different things even though they used the same Greek letter phi. They were embedded in different kinds of conversations. To people like Brans and Dicke and their colleagues in gravitation and cosmology, this new field, the Brans-Dicke field phi, seemed very exciting. It got a lot of attention, as I'll say more about in a moment, because it offered the first really concrete quantitative alternative to Einstein's general theory of relativity in nearly half a century. And so that helped to spur high-precision tests. Now there was not just Einstein versus Newton, which had really been put to bed already earlier in the century, but is Einstein's the only possible relativistic generalization of Newton's gravity? Now it seemed very clear it was not the only possible one. And in fact, depending on the value of that parameter omega, there might be really measurable astrophysical differences between what Einstein said and this other seemingly self-consistent alternative. So this seemed to be really exciting, to learn more about the behavior of gravity on a fine scale experimentally. So lots of people paid attention to Brans-Dicke gravity very quickly. On the other side of these discussions, to theorists focusing on nuclear and particle physics, what we now call the Higgs field-- the separate field phi-- was tremendously exciting for the reasons I was emphasizing a moment ago. 
It finally seemed to offer a way forward to be able to maintain these very fancy, very abstract mathematical symmetries of the nuclear forces and keep them short-range, that the force-carrying particles could become massive due to their interactions with this all-pervasive field even though they had no intrinsic mass on their own. So that was answering a separate set of puzzles and quandaries. No one suggested, at least in print, that these two scalar fields with the same Greek letter might be similar or even worth considering side by side for nearly 20 years, until the later 1970s. The sets of ideas were published to very wide acclaim in their own separate fields as early as 1961. But it took more than 15 years-- nearly 20 years-- until people began to say, hey, these two scalar fields are meant to pervade all of nature. All other forms of matter interact with them. They give rise to what we measure as local inertial effects. What if they're actually similar to each other? That question simply wasn't asked for a long, long time. And as I mentioned a few times, it's not because these were obscure papers. So I mentioned some time ago that particle physicists, in particular, love to count citations. We love to count our own citations. We love to lord it over our colleagues when we have more citations than them. It's the way we bully each other or assess inherent worth, either of which is inappropriate. Anyway, the idea is we have these very strict categories for how we sort citations. And the highest category we could think of is called renowned. It's not quite Beyonce, but that's the best we can reach for. And so a paper is considered technically renowned if it accumulates at least 500 citations in the scientific literature. And that's the highest category we invented. So there's famous, very well-known, but renowned is the highest one. So each of these papers-- the Brans-Dicke paper and the Higgs papers-- became technically renowned within fewer than 20 years. These were getting a lot of attention very quickly. So on the right-hand-- excuse me-- on the left-hand side in red are the worldwide citations to the 1961 Brans-Dicke paper. That was the gravity-type paper. It crosses 500 accumulated citations in fewer than 20 years. And in blue are citations just to Peter Higgs's papers, not even counting the comparable number that went to Jeffrey Goldstone's early work. So in blue, I'm plotting just cumulative citations to Peter Higgs's work specifically on what we now call the Higgs mechanism. And again, they cross 500 within fewer than 20 years. These papers were setting their own fields on fire. These were very highly prominent contributions. And yet, there's almost no overlap. So if you do more than just count and start saying, well, who wrote those papers that are adding up to these kinds of dots on each of these histograms, you see it really is like oil and water. These two communities were quite separated during this 20-year span. So you can see more than 500 each. In fact, it's 1,083 distinct papers doing the citing if you add up all the ones between these two plots. And yet, only six of those-- so less than 1%-- cited both the Brans-Dicke paper and the Higgs papers in the same article during this whole 20-year period. So both articles are getting lots of attention, but within really quite separated communities. 
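Just to illustrate the sort of bookkeeping behind those numbers-- this is a toy sketch with invented reference lists, not the actual bibliometric dataset behind the plots-- you can count co-citing papers with a few lines of Python:

# Toy co-citation count: which citing papers reference BOTH target papers?
# The reference lists here are made-up placeholders, not real citation records.
citing_papers = {
    "paper_A": {"Brans-Dicke 1961", "Wheeler 1962"},
    "paper_B": {"Higgs 1964", "Goldstone 1961"},
    "paper_C": {"Brans-Dicke 1961", "Higgs 1964"},  # a rare co-citing paper
}
targets = {"Brans-Dicke 1961", "Higgs 1964"}
co_citing = [name for name, refs in citing_papers.items() if targets <= refs]
print(len(co_citing), co_citing)  # -> 1 ['paper_C']

Run over the real citation records, that kind of subset check is what produces the "only six out of 1,083" figure. 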
The earliest paper that co-cited-- the earliest article that cited both Brans-Dicke and Higgs in the same article-- came fully 11 years later, in 1972, after their original publications. And then most of the rest of them out of only these six came after 1975. So it was really in this late period when a very small handful-- still less than 1%-- started citing the papers together. And so that's by article reference lists. You can play the same game and look at authorship. So there are 990 distinct authors represented by all these papers doing all the citing, roughly equal numbers between the gravitation and the particle theory side. And yet, only 21 of them-- so a little over 2% of the author pool-- cited both Brans-Dicke and Higgs, usually in separate papers, but actually cited them in any of their work, again, over that first 20-year period. These are just ways to say that these two subfields really were not strongly interacting. You can have very attention-grabbing research, literally renowned research, bubbling up from the two separate communities at the same time, often in the same journals, asking similar kinds of questions, introducing similar kinds of responses and new fields labeled phi, and yet have almost no kind of crosstalk between them. That's how separate those fields were. So why was there such a sharp divide? Why was there no overlap? Were these fields just different from each other? Am I just asking a silly question? Why would anyone have ever thought that the Brans-Dicke field had anything to do with the Higgs field? Well, no. Actually, as we'll see in the third part for today, in 1979, two separate theorists working independently of each other actually suggested that the two fields might be literally the same, not just comparable or worth considering side by side. But they developed a very cool model which depended on these two fields being literally the same field-- only one new field of nature, not two. So it's not that they're somehow intrinsically totally separate or different from each other. So instead, their status, really, is historical. How people assess them or what they thought they were good for was changing over time. And so what had changed? That's what we'll pick up in the next part. Let me pause here and ask for any questions. I see something come up in the chat. So Alex asks, was it an accident that both parties chose phi? Not really. I make a lot of that because I just think it's visually so striking they literally chose the same Greek letter. It wasn't too surprising, to be honest. It was not a rule but a pretty widespread convention by that point that a field that had no spin-- so a scalar field-- would often be labeled by the Greek letter phi. The Greek letter psi was often reserved by this point for spin 1/2 particles. So an electron field would often be written with a psi then as now-- or eventually quark fields, spin 1/2 fields. And fields that have one whole unit of spin-- vector fields-- would often be labeled with a Latin letter, like capital A or capital B. So it wasn't super, super shocking that both groups reached for the same Greek letter. Nonetheless, it is amazing to flip through Physical Review and see the same thing popping up and yet being seemingly in mutual oblivion. So the notation isn't super surprising that it was so similar. 
But the rest of it-- that there is a new hypothetical state of matter, it pervades all of nature, everything else interacts with it, that's what gives rise to mass-- it was more than just the letter that they chose. There were a lot of what we might have considered similarities. And yet, the two sets of ideas really were treated so separately. This is a good question. Other questions on that? OK, I will charge on. But as usual, please don't be shy. Feel free to jump in with questions anytime. Let's look at what might have changed to make it possible, let alone feasible or likely, for those theorists in the later '70s to consider these two scalar fields in a new light. So we might wonder, well, was it changes in data? Did experiments force a new evaluation? No, not really. Let's look at the gravity side first. So I mentioned that part of what got the gravitation community so excited about Brans-Dicke gravity was that it now gave them something very tangible, very specific to try to test for-- to look for what could be subtle deviations from the predictions of Einstein's theory, not the larger-scale differences between Newton's universal gravity and Einstein's. And especially, in the so-called space age, by the later '60s and throughout the '70s, astrophysicists had all kinds of new tools with which to try to look for these effects, including things like human-built artificial satellites that can be sent throughout the solar system. Now you can do very precise monitoring of very weak or subtle gravitational effects, not just in the space of one laboratory, but literally on a solar system scale. So these things-- predictions both from Einstein's general theory of relativity and from Brans-Dicke gravity-- were subjected to some very, very clever and increasingly high-precision experimental tests starting, really, in the mid 1960s, some of them invented by Robert Dicke himself. Dicke's quite astonishing, I think. He was one of the, I think, maybe one of the last physicists of the 20th century who was really, really active and gifted both in theory and experiment. Enrico Fermi was like that. But as we've seen throughout this term, there had been a real professional division going back even to the late 19th century between theory and experiment. And Robert Dicke was one of these really quite nimble physicists who was quite accomplished in both. So he actually began testing his own theory with his group, his students, to do things like very sensitive measurements of the shape of the Sun, what was called solar oblateness. If the Sun is measurably different from an actual sphere, then that could actually have an impact on things like the orbit of the planet Mercury. If you have more of an oblate spheroid for the Sun, then some effects that had been attributed to Einstein's general relativity might actually be different than assumed if you don't assume the Sun is perfectly spherical and, in fact, might be more consistent with this modified gravity, Brans-Dicke. So Robert Dicke and his group started conducting high-precision visual measurements of the shape of the Sun, with the implication being that if that same mass had a different distribution, could you test between them? And again, I'd say, to Dicke's credit, even though he was testing his own beloved theory, his group found basically no evidence for a significant departure from sphericity. So this was not a reason in favor of Brans-Dicke. So Dicke himself found some compelling evidence against Brans-Dicke theory, again, I'd say, to his credit. 
Likewise, by the later '70s, there were these very cool efforts to do long-range distance measurements, not just to reflectors on the Moon, but even to moving objects like the Viking Mars spacecraft and so on. So these were being done throughout the '60s and '70s. By the end of the '70s, things did not look very good for Brans-Dicke gravity experimentally. That is, all the tests were easily consistent with the predictions from Einstein's theory within experimental errors. And yet, to make the Brans-Dicke, the modified gravity version, consistent, you had to crank up that one free parameter, that dimensionless number omega, to be very large. Remember, that's the parameter that lets-- that would tell you how easy or difficult it would be for that new field phi to vary. It's almost like its stiffness. You had to tune that new field to be very stiff, so it would behave practically like a constant, which is what brings you back to the Einstein-like behavior. So to match all of these increasingly precise new experimental measurements, Brans-Dicke theory was not highly favored. In fact, Einstein's looked really good. So it was not that somehow experiments were in favor of Brans-Dicke gravity, and that's why everyone else started paying attention. So that's on the gravity side. Meanwhile, as most of you probably know, there was literally zero experimental evidence, a big fat goose egg, nothing, in favor of the Higgs boson until July of 2012, or if you're extra generous, maybe December of 2011, the first hints experimentally-- well past the period we're talking about. Meaning it was not that experiments found conclusive evidence of the Higgs boson, and that's what got the gravitation experts to pay attention. So each of these sets of ideas were inspiring intense efforts to do new experiments, but it wasn't that new experimental results had changed people's minds on the time scale that we're talking about for people eventually to start taking these two sets of ideas and considering them side by side. So it wasn't a new experiment. The main story that's mostly given-- I alluded to this in the very beginning of today's class-- actually hearkens back to changes in ideas and, in particular, on the particle physics or particle theory side. And these are brilliant and beautiful ideas. These ideas are well worth appreciating. I just don't think they're the whole story. And two of these sets of ideas, in particular, are usually pointed to-- and they came in rapid fire in 1973 and 1974. The first of them is called asymptotic freedom. And actually, it's the reason why our friend and colleague here at MIT, Frank Wilczek, received the Nobel Prize. So he wasn't at MIT at the time, but he's now at MIT. So this was work introduced by Frank and his then advisor David Gross, and independently by a different very young grad student at the time, David Politzer. Grad students doing work for which they won Nobel Prizes-- amazing. And what they found was that the strength of the strong nuclear force, that QCD force that we talked about quite a bit at the end of last class session, the interaction between quarks and gluons-- that the strength of that force actually decreases with the energy scale. If you rev up quarks to more and more energy, they actually interact less strongly with each other rather than more strongly. 
And that was the opposite of how the other known forces behave-- both the strength of the electric force, if you think about the unit strength, the unit charge, of the electron, and also this other weak nuclear force, the radioactive decay force. For those forces, literally the coupling constants, the effective charges, get stronger as you go to higher and higher energies, which is like saying probing shorter and shorter distance scales. And yet, the strong nuclear force, the quark-gluon force, had the opposite behavior. So this is showing the interaction strength as you change the average energy of the particles involved in any given scattering situation. And so the flow of the effective charge for the strong force has the opposite sign to what was then known and was assumed to be universal. So this was called asymptotic freedom. As you go to asymptotically large energies, the quarks would become ultimately free. They would feel no force at all. The effective force would go to 0. That's why it was called asymptotic freedom. They would be free from these forces in the arbitrarily high-energy limit or arbitrarily short distance limit-- same limit. So that was introduced in 1973. That suggests that the behavior of quarks and gluons might be really different at very high energies compared to energies that might be probed even in particle accelerators on Earth. You can see the energy scale here to give you a sense of that. This is measured in units of billion electron volts, or GeV. To give you a sense of scale, the present-day experiments at the Large Hadron Collider are here. So the highest energy particle accelerator on the planet, the Large Hadron Collider, is at about 1,000 GeV, or maybe now getting close to 10 TeV. So maybe it's up here. I'll give them one more notch because they've tried hard. They're here. The effects that Frank Wilczek, and David Gross, and David Politzer were talking about would be noticeable at exponentially higher energies, right? Well beyond anything that can be achieved even today, let alone in the 1970s. SLAC was basically around here at the time. So it suggested that if one could ever get to really super crazy high energies, you might see some very qualitatively different behavior among nuclear particles-- really cool, a very interesting set of ideas. Right on the heels of that, there was the introduction of what were called GUTs, Grand Unified Theories. My friend Alan Guth likes to call these GUTHs, because theory is spelled with a TH-- coincidentally, that's his last name. Anyway, we won't give him that. He's not here right now. We'll call them GUTs. So grand unified theories sprang from this idea of asymptotic freedom. Going back to this chart here, if you look at the average strength of the three main forces of nature, setting gravity aside, this first one is basically the strength of electromagnetism, the kind of QED force, photon scattering off electrons. This one is basically the weak nuclear force, radioactive decay, the W boson stuff. And then this is the quark-gluon force. The force strengths look like they might converge. It's not just that these two get stronger with high energy, and this one gets weaker. They actually might overlap at a single value. 
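To give a rough feel for that convergence-- this is a back-of-the-envelope sketch using the standard one-loop running with my own approximate round numbers for the couplings at the Z mass, not the actual curves from the lecture slide-- you can extrapolate the three inverse couplings yourself:

import numpy as np

# One-loop running: d(1/alpha_i)/d(ln mu) = -b_i / (2*pi).
# Rough Standard Model values at the Z mass (~91 GeV); approximate numbers only.
alpha_inv_mz = np.array([59.0, 29.6, 8.5])   # U(1) (GUT-normalized), SU(2), SU(3)
b = np.array([41/10, -19/6, -7])             # one-loop beta coefficients
m_z = 91.2                                   # GeV

for mu in [1e3, 1e10, 1e14, 1e16]:           # energy scales in GeV
    t = np.log(mu / m_z)
    alpha_inv = alpha_inv_mz - b / (2 * np.pi) * t
    print(f"{mu:8.0e} GeV: 1/alpha =", np.round(alpha_inv, 1))

# The three values start out very different and drift toward one another,
# crossing pairwise somewhere in the ballpark of 10^13 to 10^17 GeV.

The point is not the precise crossing scale-- that depends on the details-- but that the three very different low-energy strengths head toward a common neighborhood at enormously high energies. 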
If you squint and take these error bars somewhat generously, it looks like at around 10 to the 16, give or take-- roughly 10 to the 16 billion electron volts-- if you could scatter particles with that average energy, maybe all these three separate forces would have the same unit strength. If they have the same unit strength, maybe they're the same force. So the idea was that maybe all of these highly symmetry-mediated forces that we see as very different at these low energies-- they have very different behaviors and characteristic strengths-- maybe they're actually all signs of a single force-- the grand unified theory, which would unite these three forces of nature into a single one modulated by a single force strength, a single effective charge. That was called GUTs. And you can see that only makes sense in the light of asymptotic freedom, to get this strength to come down so that it can meet the very gently rising strengths of the other two forces. Those are awesome ideas. They're very cool. You can probably get a sense for where this is heading. It starts to become a natural question to ask about conditions when particles could have interacted with literally astronomically high energies, cosmologically high energies. In particular-- and we'll talk more about this in the next class session-- if one takes the Big Bang model seriously, cosmologically, if the whole universe began in a very, very hot, dense state, an unbelievably high-energy state, then at very early moments in cosmic history, the average energy of anything in equilibrium would have been really, really high-- that you could maybe have gotten to a time in early cosmic history when the average energy of, say, quarks scattering off each other would have been more like 10 to the 16 GeV, rather than 10 to the 3 GeV. So this became a natural reason-- this is the main argument-- for particle theorists to ask about things like a very high-energy cosmology, a very high-energy early universe. And the phrase that was often used at the time, a gendered term, was that cosmology would provide the so-called poor man's accelerator, the poor person's accelerator-- that it's really expensive to build SLAC, and Fermilab, and the LHC. And yet, all they're doing is probing down here based on technological limitations. So instead of spending another billion plus dollars, turn to cosmology. Let's use astrophysics and cosmology to probe this energy regime of interest. And so this becomes the main reason that's usually given, the main kind of cause for why these two previously quite separated fields of study-- gravitation and particle theory-- were somehow merged into this new field of particle cosmology as an ideas-driven hybrid. And that has a lot going for it. It's not a dumb explanation. I just think it's a radically incomplete one. OK, so this is a rhetorical question. Is it the whole story? Of course, you know by now I think it's not the whole story. It's a large part of the ingredients, but it doesn't quite add up. If we go back to things that we can investigate empirically, it doesn't quite make sense of the time scales or some of the other phenomena we can ask about regarding how physics worked at that time. So here, I'm plotting just the publications on cosmology worldwide, as indexed by these kinds of worldwide physics literature indexes, Physics Abstracts, and that kind of thing. 
You can see that there was a very steep rise, a decided shift, an inflection in the publication rate well before 1973, well before asymptotic freedom, and GUTs, and this so-called reason, this whole theory-driven argument to ask about the particle physics of the early universe. And in fact, the rate goes from roughly six or six and one half papers per year worldwide to around 21 on average. And not only that, you can see that the inflection point really comes quite a few years earlier than this purported cause from asymptotic freedom. And GUTs-- I think even more important is that GUTs became really, really hot years later. So many particle theorists got very excited about that notion of a Grand Unified Theory, but not in 1974, more like in 1980. So if we want to account for changes in 1974-- even the particle theorists weren't paying much attention until half a decade later. Again, the timing doesn't quite make sense. The ideas were published in 1974. That's true. But as we've seen a few times in this course, just publishing an article doesn't guarantee that people will pay attention to it right away. And that was certainly the case with that work in particle theory. So even though it offered a very interesting reason to ask new questions, empirically speaking, most people weren't asking those questions at that time. That certainly doesn't seem to be the main driver. So what else was going on? And we've seen this plot a number of times. This is now the familiar plot of the number of PhDs in physics granted in the US over time. We spent quite a while looking at the reasons why it grew so rapidly, both after the Second World War and after the launch of Sputnik, but also this really quite precipitous crash. And a whole jumble of reasons combined, some of them economic, a bunch of them more about policy shifts and even geopolitics, that really lined up in time-- it was a perfect storm. The result of which was a very, very sudden collapse in the funding and job prospects and enrollments for physics, which fell, as we've seen, a few times faster than any other field in the Academy. And so physicists, starting in the late '60s and really accelerating in the early '70s, saw a really dramatic change in the kind of infrastructure for the discipline. It turns out, again, we can ask a bit more-- that's very coarse-grained. That's looking at the whole field at once. We can dig in a bit more and look at it subfield by subfield. And by these kinds of measures, the single hardest-hit subfield, the specialty within physics that was affected most dramatically even as all fields felt it, was particle physics. The field that had, so to speak, the most to lose or that lost the most during this reversal of fortune was actually high-energy particle physics. The US budget for that subfield fell in half in just four years. It wasn't a 20% reduction. It literally fell by 50% in four years. That's a very sudden drop. And that was combined with the drop in the job market demand and all the rest. So even as the whole field starts going through some pretty dramatic changes, particle physics feels the brunt of it in the most extreme way. So that leads to an interesting set of internal migrations within the discipline. This plot comes from a series of studies that were produced very soon after the crash by the physics community. This one, I think, was from a National Academy of Sciences study. And so what you see is this is now looking only at the years between 1968 and 1970. 
The plot was made around 1972 or '73 and is charting inflows and outflows among the recognized subfields of the discipline during the time when, really, the bottom falls out. So when particle physics is getting hit harder than any other field, you have twice as many people leaving that field as joining it. And that's just among the people who stay within physics at all-- you have a net outflow, by a factor of 2, of people fleeing particle physics, partly because its budget was cut most dramatically. And the job scene was most hard hit, and experiments looked like they'd be on hold and so on. And so some of these reports tried to make sense of the crash, like this 1972 report commissioned by the National Academy of Sciences. It was actually a huge three- or four-volume-thick government report-- 2,500 pages full of tables and charts, lots of fun stuff to geek out on. And yet, this blue-ribbon panel, with several leading MIT colleagues who served on it, plus members from all across the country, were trying to survey the US-based physics profession and what had gone wrong during this period of very rapid transition. And they single out particle theory, in particular, for a special kind of concern or critique. They say it's not coincidental that these young theoretical physicists in particle theory had the hardest time when the trouble came because, this report claimed, at least, they'd been poorly trained. It's not the young students' fault, per se. It's because the programs that were producing many, many, many highly trained particle theorists were training them much too narrowly. They were too narrowly specialized only in the esoterica of these symmetry arguments about nuclear forces. They weren't being exposed to or responsible to the many, many other fields of physics. And so when trouble came, they were least adaptable. Now, I don't know if that's really the case or not, but I think it's quite telling that that was the kind of explanation from this blue-ribbon panel of leading educators, including many leading particle physicists themselves-- that was their explanation for why that one subfield fared so especially poorly, even as the whole field had trouble. And so they make recommendations. Their job, after all, is to say, how do you avoid this in the future? It was a very official high-profile blue-ribbon committee. So they make all these recommendations, some of which actually get taken up pretty quickly by departments across the country. And one of the recommendations is to forcibly broaden the training of young physicists in particle theory. They have recommendations for other fields too, of course. But partly because they singled out particle theory, they say, we have to start changing how we train PhD students in this particular subfield. And so the idea is to actually formalize their exposure to more and more parts of the discipline, including explicitly more focus on gravitation and cosmology. So that means that more and more departments, including very elite trend-setting departments across the country, start rushing to offer new graduate courses in general relativity, many of which had offered zero courses in the field until then. Or it had been only an elective and now became a requirement, or it had only been offered in astronomy and now was also offered in physics. They're rushing to get more and more coursework on things like gravitation and cosmology. 
And likewise, questions on that field, really for the first time in the United States, start showing up regularly on the general exams for physicists across all fields, all specialties, not just those who wanted to study relativity, and gravitation, and cosmology. So now you have more and more students in their PhDs responsible to have learned something in neighboring subfields, whereas before then, the general exams had not emphasized or required that. And you see a market response as well. You see a flood of new graduate-level textbooks on general relativity, on gravitation and cosmology-- twice as many published in the 1970s versus the 1960s. And in fact, even of those 1970s books, the vast majority of those came really in the later '70s, in the wake of these pedagogical reforms. So remember that big report comes out in 1972. You start seeing curricular changes as early as '73, '74. And by '75, '76, '77, you start seeing, in some sense, the market respond with many textbooks being really rushed into print. Some of these textbooks were basically mimeographed lecture notes. There was such a rush, such a demand to get new pedagogical materials, because more and more places wanted to offer courses on this where they hadn't before, that you had some informal kind of lecture notes transmuted into textbooks. And now there are very, very fancy books published in a more typical way. You see really a rush to get more books on that subfield in particular. So let me pause there. Any questions on that? I see, again, some things in the chat. Yeah, so Alex says, quite rightly, if we can convince Alan Guth to clean his office, then we'll work on nomenclature change. Some of you might know that Alan-- I can't talk about Alan enough. He was my PhD advisor, so I've been teasing Alan for more than half of my life, to his great chagrin. So you might know that some years ago, before there was construction to build a new Center for Theoretical Physics, we all had to move out of our old offices. And it was around that time that The Boston Globe ran a contest for the entire Boston metro area for the area's messiest office-- not messiest academic's office-- the messiest office in the Boston metro area. And I'll give it away. Guess who won. I mean, he's won a number of awards, gold prizes and all that, a big award-winning fella. But the award that I take most pleasure in is the fact that he won Boston's Messiest Office. So yes, we'll work on that. Let's see. Fisher says, on the chart of interactions with the field strengths, are those groups? Yes. Fisher, thank you. That's right. So I was avoiding the nomenclature, but you're quite right. On that chart that's associated with asymptotic freedom, what I was referring to as the strength of electromagnetism-- it was labeled on the chart U1. And that is the fancy way of labeling the symmetry group that is associated with electromagnetism. It's a continuous unitary symmetry, which is like saying you could rotate the electron field by any continuous amount, and the equations remain unchanged. And so that calls for certain properties of the force-carrying field. The photon only has to mop up a relatively simple symmetry, the U1 gauge symmetry. Whereas SU2 was what I was pointing to when I was referring to the weak nuclear force. And you're right, that's a different symmetry group. It's a more complicated symmetry structure. And that's the symmetry group that these force-carrying particles-- the W and the Z particles-- are invented to enforce. 
So that has different implications for their properties, like we saw last time. And then SU3, that refers to three different color charges of quantum chromodynamics. You have a discrete three-way permutation. And so therefore, the gluons have still a different set of properties, so that's right. So that chart was labeling the symmetry groups associated with the different kinds of forces. And so what's really at the heart of GUTs, or GUTHs-- I'll throw him a bone-- is that maybe you can play that game one more time and say, each of those three symmetry groups are actually sub-- you think about it. You can represent any of them as matrices. Is there one single larger set of matrices that would include the U1, SU2, and SU3 as submatrices? And there is the smallest group that includes those three as subgroups is an SU5. So what if there were five-fold symmetry with still different kinds of requirements on its force carrying particles? So the first GUT was actually an SU5 model put forward by a then very young Howard Georgi and Sheldon Glashow. They were very young theorists at Harvard at the time. And it was to do exactly that, to try to build on that symmetry structure and find the next simplest, most minimalist symmetry structure-- basically a matrix representation-- into which you could fit those three known symmetry groups and just a little bit else. It's a little bit bigger. So that led to new phenomena that were predicted that turned out not to be measured, but that was the idea at least. And that maybe the photon, the W, Z, and the gluons actually are all instances of this one unified force carrier. And at very high energies, they would all be indistinguishable. But as the symmetry gets broken at lower energies, they take on different features. It's another example of spontaneous symmetry breaks. Their low-energy features would be different than their high-energy features. That's right. And that's super cool and fun and lots more to be said about that. But that is, indeed, where that nomenclature came from. Great. Any other questions on that? OK. Anyone else want to share stories of Alan's messy office? No, going once? OK. Then let me press on for the last-- the last part is pretty short, so the last little part, and we'll have time for more questions and discussion after that. OK, so I mentioned very briefly that in 1979, two separate theorists-- in fact, they were working independently at the time-- Anthony Zee and Lee Smolin separately introduced a whole new model where they didn't only cite Brans-Dicke and Higgs, they didn't only say these two Greek letter phis might be similar, they literally united them. They proposed-- it was a hypothesis. It was a new model in which these two fields were literally the same. They glued them together. And I found that really compelling to say it wasn't that the fields were in principle separable. Here were some clever folks coming along almost 20 years later to say, maybe they're actually the same. Maybe there's one field phi pervading all of nature, all of space, with which other matter interacts. So again, if you look at the key parts of the Lagrangian that they put forward-- you can think of it as like the energy balance-- they combine the features from Brans-Dicke for the gravity side. So there's a direct coupling between this new hypothetical field and this geometrical structure, the local curvature of space and time. And so this is really, again, playing the role of the varying unit strength of gravity. 
That's in place of Newton's constant G. If that field can vary, then you better include its kinetic energy with, again, some fudge factor. That's all the Brans-Dicke stuff. And give that field its own potential energy with that very specific shape, that symmetric double-well type shape from Goldstone, Higgs, and all the rest. Why would you do that? They wanted to ask why gravity appears to be so weak compared to all the other forces. By this point, they knew about things like the strong force. Quarks and gluons interact very strongly. Even the electromagnetic force is vastly stronger than the gravitational force. The force between an electron and a positron when they're close together is many orders of magnitude higher due to their Coulomb attraction compared to their gravitational attraction, even just classically. So why is there such a strange hierarchy? Why such a huge divide in the average strength of gravity compared to these other seemingly elementary forces? So the idea was that this local strength of gravity, Newton's constant, which goes as 1 over this phi squared, would get anchored to a very small value when phi gets stuck at a relatively large value. So when this dynamical field, this field phi-- they also used the Greek letter phi-- which skitters around the universe, eventually reaches some kind of lowest equilibrium state at the bottom of one of these symmetric minima of that potential, it gets anchored at some large nonzero value instead of being at the local unstable equilibrium point of phi roughly 0. So if phi becomes stuck at some large nonzero value, either plus or minus, then the square of that will be some large number, some large positive number. And so that could be setting the inverse gravitational field strength. So why is gravity so weak? They suggested maybe it's because it's arising from some broken symmetry. Much like the Higgs-Goldstone mechanism, the field is dynamical, but it's getting stuck. And only in the broken-symmetry phase do we experience the phenomena that we are used to. So gravity gets stuck being weak because its local strength is arising through the Brans-Dicke field getting anchored in a symmetry-breaking potential. It's lovely. It's a very cool idea. So how did these two individuals come to that? Here's a photo of Tony Zee a few years ago. He wandered into this field really accidentally. He finished his PhD at Harvard in 1970. He was a grad student in the late '60s. His work was squarely in particle physics, the fancy new symmetries and new nuclear forces. That's what he studied for his thesis. He finished his PhD in 1970. He then had a sabbatical early on in his faculty career, and he happened to go to Paris. And he swapped apartments with a Parisian physicist who was about to come to the United States. So they basically swapped apartments for several months. And as Tony recalled, the physicist with whom he again kind of accidentally wound up swapping apartments happened to have been immersed in gravitation and cosmology, more so than in Tony's main field. And as Tony recalls, he found these stacks of preprints all around the apartment that looked interesting. He was basically in a stranger's apartment, reading the mail, just reading papers on the coffee table. And these looked really interesting. And in fact, they sparked something. He had a little extra preparation. He'd been an undergraduate at Princeton, Tony had. And there, he studied a little bit of gravity with John Wheeler for his undergraduate thesis. 
So he knew a little bit about Einstein's theory of gravitation, had worked with a renowned expert in the field as an undergraduate, but really focused on quite different topics throughout his own PhD and post-PhD training. So he accidentally stumbles back into the topic of gravitation and cosmology. And that rekindles an interest. He actually gets back in touch with John Wheeler. He's not doing this on his own after his sabbatical is over. So by the mid to late '70s, he's now asking questions at this interface between his formal training in high-energy particle theory and this new hobby interest in gravitation and cosmology. And it's in the midst of that new set of studies that he writes his version of this broken-symmetric gravity, to suggest very creatively but tentatively that the Brans-Dicke field and the Higgs field might be identical. OK, that's his route in-- kind of accidental. For Lee Smolin, it was really not very accidental at all. He was the other person who independently introduced that broken-symmetric theory in 1979-- same year as Tony. So Lee actually was still in graduate school when he wrote that paper. He entered grad school in '75. So he entered roughly 10 years after Zee had. He was roughly 10 years younger. Both at Harvard, as it turns out, both at the same school, but 10 years apart, in the midst of which some pretty significant curricular changes had begun to take place there and elsewhere. So unlike Zee, who focused pretty exclusively on particle theory as a graduate student, Smolin was actually, from the start, combining the two fields, both in the courses he took and eventually with his advising team for his thesis and for his dissertation itself. So from his first semester, he was taking courses with experts in gravitation and cosmology, like Stanley Deser and Steven Weinberg, as well as experts in particle theory and nuclear forces, like Sidney Coleman; Howard Georgi, who introduced that GUT I mentioned; and a visiting professor at the time, Gerard 't Hooft. So he was really being schooled, formally and informally, from the start to work at that boundary. And his paper that introduced this unification of the two scalar fields-- also published in '79-- that was part of his formal graduate training. That was part of his thesis. And it was actually that second route in that becomes more and more common. So unlike this accident of trading apartments in Paris and reading a few preprints, more and more members of Lee Smolin's generation were going through a training more and more like his, partly by design, in the wake of the National Academies report and similar curricular reforms. So Lee was hardly alone in this generation. People like Michael Turner, Edward-- he goes by Rocky-- Kolb, or Paul Steinhardt, all of whom finished their PhDs around the same time as Smolin at different universities. They were trained in very similar ways: they likely had entered with an interest in particle theory. They learned a lot of particle theory, but both formally and informally-- coursework, general exams, and advising for theses-- they were working from the start with a combined set of ideas and advisors, quite different from what had been typical for, say, Tony Zee's generation or, for that matter, Alan Guth's. 
And each of them, not just Lee Smolin-- each of the folks here, like Turner, Kolb, Steinhardt, and many of their colleagues-- also took up this interesting idea and, in the years soon after 1979, through the early and mid '80s, began pursuing other studies of their own in which they physically united the Brans-Dicke field and the Higgs field in new kinds of models to try to understand exotic phenomena. So that becomes the norm instead of the exception. So I find this really interesting because there's a tradition for trying to think about how physics changes over time. We often think about whole-scale theories, and one theory replaces another. And I think that just misses a lot of what people do all the time. It certainly misses what people like Lee Smolin, or Tony Zee, or Mike Turner were doing in these instances. So very few people, very few physicists today, even including Carl Brans, think that the Brans-Dicke theory of gravity best describes our universe. It's highly constrained by solar system tests. Any deviations would be pretty modest compared to pure Einstein gravity. And yet, this theoretical object that theorists still think about, the Brans-Dicke scalar field, hardly died off with these experiments. In fact, arguably, interest in the field grew even as it was getting experimentally less and less favored. So this new generation of theorists-- people like Lee Smolin, and Rocky Kolb, and Mike Turner, and Paul Steinhardt, and others-- they were trained to work at this interface. And they found that this Brans-Dicke-like feature kept popping up all over the place. When you try to study the self-consistent quantum mechanics of these scalar fields in a warping spacetime, you are actually forced into these kinds of Brans-Dicke couplings. That wasn't Brans' and Dicke's motivation, but through this new set of questions, more particle-theory inspired, these terms come up unavoidably. They became more and more common in other efforts to unify the fundamental forces, like in string theory. They had all kinds of potential roles to play in new work that we'll look at together next time on inflation. So even though experimental evidence for or against Brans-Dicke gravity was tilting more and more against, at that same moment, people were finding more and more creative reasons to pay attention to the scalar field, even though the full-blown theory was falling out of favor. And you see that, again, in these counts. If you reset the clock to zero and start counting citations to the Brans-Dicke paper starting in 1981, it becomes renowned all over again. It gets another 500-- in fact, almost 700-- citations just in the 15 or 16 years after 1981, when it was already most disfavored by the astrophysical observations. So the idea that we're picking single theories and that they replace each other, I think, just misses this fine structure: the moment when Brans-Dicke, as a description of gravity in our local solar system, was under the most pressure was exactly when people were doing the most with the actual Brans-Dicke scalar field phi. I find that really curious, and it's still very much alive today. In fact, one of the leading contenders for our understanding of the very early universe does exactly the kinds of things that Smolin and Zee had been doing, trying to unify the Brans-Dicke and Higgs field. 
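Just to give that a rough shape in symbols-- and this is only schematic; the exact coefficients and sign conventions differ from paper to paper-- the kind of Lagrangian Zee and Smolin wrote down combines a direct coupling between the scalar field and the curvature, the field's own kinetic energy, and that symmetric double-well potential:

\[ \mathcal{L} \;\sim\; \epsilon\,\phi^{2} R \;-\; \tfrac{1}{2}\,(\partial\phi)^{2} \;-\; \lambda\,\bigl(\phi^{2}-v^{2}\bigr)^{2}, \]

so that the effective Newton's constant goes like \( G_{\rm eff} \sim 1/(\epsilon\,\phi^{2}) \). When phi settles into one of the minima at \( \phi = \pm v \), with \( v \) very large, \( G_{\rm eff} \) gets anchored at a very small value-- that's the broken-symmetry story for why gravity is weak.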
So I want to emphasize that neither kind of new experiments nor new theories alone are really going to help us account for either the specific instance of people doing creative things with these two scalar fields or the broader instance of working at this new hybrid or boundary area of particle theory. And in fact, there were a bunch of concrete changes-- some of them geopolitical, the ramping up of the Vietnam War, a worldwide economic crisis, huge changes in policy priorities within the United States, shifts within university departments-- all these things on a huge range of scales, from individual departments up to Cold War geopolitics, these things are helping to mold what's going to seem natural, or quote/unquote "natural," for younger people to consider doing because it all helps to shape what counts as their formal and even their informal training. And so the training of people like Lee Smolin and his generation just was importantly different from the very excellent training that people like Tony Zee had had even just a mere 10 years earlier. And in turn, these new folks, especially people like Mike Turner and Rocky Kolb, went on to become real institution builders in their own right. So in fact, they were accelerants. Not only had they been trained to think carefully at this new interface, they helped really accelerate the trend. Turner and Kolb became the directors of the very first institutional center devoted to particle cosmology. It was called the Center for Particle Astrophysics at Fermilab, when they were still relatively young in their careers. And then they wrote the very first textbook on the field, first published in 1990. I actually own three copies of that book. I love that book. And so they were actively working to perpetuate this new hybrid area, not only because of the force of lovely ideas like asymptotic freedom and GUTs. So I'm going to pause there. We have time for-- oh, OK. Sorry, last little bit here. This is really a setup, then, for the next class session. We'll look more squarely at inflation in cosmology. We'll have more opportunities to make fun of Alan Guth, which is what's in it for me. But we'll look at an instantiation of work at this new hybrid area, really of what becomes the poster example of particle cosmology. And we'll look squarely at that next time. And as I say, within that work, including my own students and many people now around the world, it's now just totally bizarre not to consider the Brans-Dicke field and the Higgs field as somehow relatable or maybe even identical. So it's gone from really, really just never done for nearly 20 years to now remarkable if people even question it. So it goes from who even thought of it to who wouldn't even try that? What counts as natural can shift on a pretty short time scale. And as we've been seeing throughout the whole term, including today, those shifts can be driven as much by things well outside of the physicists' control-- geopolitics, and national-scale budgets, and blue-ribbon committees-- as by the force of new ideas and experiments. So that's where I'll actually pause. And we have time for a few more questions or comments. Any other thoughts on that? With my last moment here, I'll say, before the construction for the new Center for Theoretical Physics, I had an office just down the hall from Alan's. And by a quirk of the old building 6, we had the same key. A single key would open the whole hallway. I guess we all trusted each other then. Couldn't get away with that now. 
So my key would open Alan's office. And one time, my parents were visiting. And I basically broke into Alan's office. They couldn't believe me when I described what it was like to try to work with this person. So I actually broke into his office to show them the safety violation, fire code violation, horror show that was the den of entropy known as Alan's office. So that's a true story. I no longer have a key to his office, so his mess is safe from me. Any questions about messy offices, or group theory, or geopolitics, or anything else? If not, I will say good luck with paper 3. Please, please get back to it. Don't leave it to last minute. Please get in touch with your teaching assistant. Think about the optional paper 1 rewrite if you have time and inclination for that. And stay well, and I look forward to seeing you on Monday. See you soon, everyone. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_12_Quantum_Weirdness_Schrödingers_Cat_EPR_and_Bells_Theorem.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: But if there are no immediately pressing questions on the assignments, then I will jump in. We can start talking some more about quantum weirdness. We can continue our discussion from yesterday, which was clearly insufficiently shocking for you. You were all very quantum complacent yesterday. Let's see if I can do a little better today. So we'll talk about a series of developments of-- that grew out of efforts to make sense of this new set of equations, this new mathematical formalism of quantum mechanics. Some of these developments came pretty quickly in time, very soon after Schrodinger had worked out this new form of the equation, and others, as we'll see, really came together some years later. So for today, we're going to talk about these three examples. This does not exhaust all the things one can think about under the heading of quantum weirdness, but these are some of my favorites, and I think they really show a range of the kinds of questions that this work began to open up. So we'll start by talking about superposition and Schrodinger's cat paradox. The Schrodinger's cat is likely familiar to you, at least in outline. We'll talk a bit about what were the conceptual stakes and what was the historical moment, what was the broader context, in which that work was being developed. Then for our second part today, we'll look at a very famous critique of quantum mechanics, now usually just referred to by the initials EPR. As we'll see, that stands for the initials of the three authors of this paper, Einstein, Podolsky, and Rosen. So we'll talk about that. And then for the last section, we'll talk about work by the physicist John Bell, now known as Bell's inequality. And it relates very directly to something called quantum entanglement. For this last one in particular, and really for throughout today's lecture, but in particular for part three, again, I'll have to skip over some of the intermediate steps. I might show a couple of equations that might look a little difficult to parse. But there's the optional lecture notes on the Canvas site. And you can dig into that a bit more if things are not so clear. And obviously, of course, please ask questions now or email or office hours as well. So that's where we're going today. OK. Let's quickly remind ourselves where we've gotten to so far. As we saw just in yesterday's class session, in 1926, starting in winter/spring of '26, Erwin Schrodinger developed a second, independent approach to what would be called quantum mechanics, independent from the work by Werner Heisenberg, which had been published only a few months earlier. So whereas Heisenberg had emphasized discreteness and wound up using matrices, though he didn't even know it at first, Schrodinger was developing a different approach, building much more on a continuity assumption rather than a discreteness assumption. And in this case, Schrodinger was building very directly on work by Louis de Broglie about matter waves. Somehow, seemingly solid matter like atoms and parts of atoms have associated with them in some way that wasn't so clear some kind of inherent waviness, that that was maybe the real secret to quantum theory. So Erwin Schrodinger introduced a whole new formalism building on this continuity notion of matter waves. And he began to quantify that. 
So he introduced this equation, which you see here in red. Some of you might even have that tattooed or on T-shirts. It's an excellent candidate for either of those things, the time-independent Schrodinger equation. And this governs the behavior of this new quantity that Schrodinger introduced that we now call the quantum wave function, usually given the Greek letter psi. As we saw, solutions to this equation obeys superposition, just like most typical solutions to wave equations do. So what that means is that if you have two different solutions to the equation, we could call them psi 1 and some other solutions psi 2, then in general, it will be the case that a sum of those two solutions will also be a solution. In fact, a weighted sum. We could weight them with different coefficients in front. And that would also still be, in general, a solution. So that's what yields all this juicy, wave-like behavior, things like interference. And so to remind you in a simple cartoon version, two waves that are each separately solutions of a particular wave equation. In areas where the crests, the peaks, nearly coincide, they'll add up to make a larger overall peak. Other places, one crest will nearly line up with a trough, and they'll almost cancel each other out. That's the kind of constructive and destructive interference, very familiar from classical waves. This was also a feature of these strange quantum wave functions that were associated with Schrodinger's equation. That still didn't answer what psi was a wave of. It was really not so clear. Schrodinger offered several tentative suggestions or hypotheses, and even he himself showed that most of those weren't self consistent. And it was really about six months later in the summer of 1926 when Max Born interpreted Schrodinger's own work, his own equation, and said, here's what your equation perhaps might mean. We've seen this many times. The equation didn't change, but the effort to make sense of it-- that was really not so settled. So Born suggests that the wave function is somehow related to the probability for various outcomes to be measured. So it wasn't a feature of a physical object extended in space. It wasn't a shape of some smeared-out electron through space. It was somehow, in some way it wasn't at all so clear, related to the likelihood to measure certain properties of that electron, for example. So that psi was not even the probability itself, but the probability amplitude. And the absolute square of this, in general, complex number, complex function-- the absolute square would give you the probability. And that's what leads to all these really quite delicious phenomena, like the double slit experiment that we talked a bunch about yesterday. So that's just a recap. So pretty quickly after that, a British physicist named Paul Dirac, who was almost exactly the same age as Heisenberg and Pauli, very young, in his early to mid 20s at this point, and therefore a generation younger than Schrodinger-- Dirac learned about this work. He was very mathematically gifted and well-trained. By this point, he was at Cambridge for his PhD. And he really began to formalize this new quantum mechanics. 
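Just so we have it on the page and not only on the slide-- and this is one common one-dimensional way of writing it, not necessarily the exact form shown in red-- the time-independent equation reads

\[ -\frac{\hbar^{2}}{2m}\,\frac{d^{2}\psi(x)}{dx^{2}} + V(x)\,\psi(x) = E\,\psi(x), \]

and because Schrodinger's equations are linear, a weighted sum of two allowed wave functions, \( A_{1}\psi_{1} + A_{2}\psi_{2} \) with complex coefficients \( A_{1} \) and \( A_{2} \), is again an allowed wave function. That linearity is the superposition property we keep invoking. OK, back to Dirac.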
He was not only one of the first to demonstrate a mathematical equivalence between these very different-looking formalisms from Heisenberg and Schrodinger; he then went further, not just to show that one could build a map from Heisenberg's matrices to Schrodinger's wave equation, but in fact to formalize the entire thing in this general, abstract, geometrical way. So it's really from people like Dirac, among the first, from whom we begin thinking about these quantum states as vectors, not in the three-dimensional space in which we move around, not in the space of x, y, and z, but in some abstract, mathematical space that we would now call a Hilbert space, so that there's a state associated with this wave function psi that would obey Schrodinger's equation. And that's really just a vector in some abstract space, and that the vector could represent, for example, the-- it's related to the likelihood that we would measure a particular value for a property of a quantum system. You see it's already gotten very abstracted from this physical picture of real matter moving through space. It's now the behavior of these abstract pointers in an abstract vector space. So the state vector, this vector psi, could itself be represented as some weighted sum of eigenstates. That is, states that correspond to a definite value for a specific property. So again, if this is brand new to you, please don't worry. For some of you, this might be familiar from other classes. But it's from Dirac that we start writing things like this, that there's some vector psi, some state, and that can in general be written as a sum, as a superposition, a weighted sum, of distinct basis vectors, in this case, the so-called eigenstates. Now, the eigenstates are states that have definite values for particular property. And I'll talk about an example to make that a little more concrete in a moment. But just to pause. If this expression is unexpected or something you haven't seen before, this really is just quantifying that notion of superposition. It's just saying that if these eigenstates, these basis vectors phi, are each solutions to Schrodinger's equation, then so is their sum and so is their weighted sum. So these coefficients A, in general, could be complex numbers, but they're just a number. They're just a weight. Take five parts of phi 1 and 6 plus 7 i parts of phi 2. We're just going to add up a sum because if each of these is a solution, then is so is their weighted sum. It's just a way of formalizing superposition. This is the kind of stuff that Dirac loved. He ate this stuff for breakfast. And he wrote this very, very influential textbook, first edition in 1930, very soon after this work was first published, trying to put this all together in this new vector space packaging. Now, that was very abstract, what I mentioned. Let's consider a relatively simple example, again, one that might be familiar to you. Let's consider basis states or eigenstates that correspond to the property of the spin of an electron if we choose to measure that spin along some definite orientation in space. We line up our magnets along the z-direction in space. We hold up to some gradient of some magnetic field pointing along the z-direction. We could really do that. It's a real thing we could measure. And then we can ask, if we send an electron through that region of space where that external magnetic field is, will its spin be pointing up, directly along the direction of the field, or directly opposite, antiparallel? Those are the two options. 
The spin could either be exactly along the specified direction of space or exactly opposite to it. There's two possible outcomes. That means there's only two of these basis states, these two eigenstates. States with a definite property, the spin is definitely up along that direction of space, or the spin is definitely down, with definite values. And again, the actual value of that angular momentum, as usual, scaled in terms of Planck's constant. Now, these two states are opposite to each other. They're literally orthogonal. It's spin up or spin down. And the way this gets formalized-- again, we learned this following Paul Dirac is that these vectors are orthogonal to each other. They have vanishing overlap. Again, there's a little bit of this in the associated optional lecture notes. This is just a fancy way of taking the dot product. So we have two vectors that are perpendicular in ordinary space, say one vector pointing exactly along the x-axis, a second vector pointing directly along the y-axis. Take the dot product, and it vanishes because they're perpendicular. The same kind of properties hold among these more abstract vectors in this abstract Hilbert space. And in this case, these opposite properties-- it's either spin up or its opposite, spin down. The corresponding vectors have a vanishing overlap, or if you like, a vanishing dot product. Nonetheless, even though the states are individually opposite to each other, we can construct a perfectly valid quantum state by using superposition. So we can build up some state psi with which to characterize this electron or a whole collection of electrons. And we could say that an individual electron would have some likelihood to be found in spin up upon performing the measurement, some perhaps different likelihood to be found in spin down. So even though the outcomes can't both be found, we'd find either one or the other, the quantum state could be this superposition, this mixture of these seemingly opposite sets of properties. That's the kind of manipulation that people began to learn how to do once the British physicist at Cambridge, Paul Dirac, began to formalize Schrodinger's notion, building very specifically on this notion of superposition. So again, let's stick with our little, simple example for a few more steps. If we were to perform a measurement of the spin of that particle along a particular direction in space, we'd have to orient our magnets, so we're going to choose a direction in space along which to measure spin, then we can calculate in advance the probability to get either of these answers. It will either be spin up or spin down. And we find that the probability, according to Max Born, by asking about the absolute square of the overlap between the general quantum state and that definite outcome. And as you can see here, using some of these inner product relations, that the probability to find spin up will just be the square of this coefficient, the absolute square. The probability to find spin down is just the absolute square of the other coefficient. So what's important to step back and recognize here before we get too lost in the mathematics is that every single time we measure spin along that same direction z, we always get a definite answer. We always find either spin up or spin down. We don't find spin at 72 degrees. We don't find some smeared-out answer or some fuzzy result. We get a definite answer. 
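So, just to collect that spin example in symbols-- this is restating what we just said, with notation I'm choosing here rather than exactly what's on the slides-- the two eigenstates are orthogonal and normalized,

\[ \langle \uparrow | \downarrow \rangle = 0, \qquad \langle \uparrow | \uparrow \rangle = \langle \downarrow | \downarrow \rangle = 1, \]

a general state is the weighted superposition

\[ |\psi\rangle = A_{\uparrow}\,|\uparrow\rangle + A_{\downarrow}\,|\downarrow\rangle, \]

and Born's rule gives the probabilities for the two definite outcomes,

\[ P(\uparrow) = |A_{\uparrow}|^{2}, \qquad P(\downarrow) = |A_{\downarrow}|^{2}, \qquad |A_{\uparrow}|^{2} + |A_{\downarrow}|^{2} = 1. \]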
What the equations of quantum mechanics allowed people to do starting right from the 1920s was calculate the likelihood the probability to get this definite answer versus that definite answer. And yet what the equation seems not to allow physicists to do, and this is something that people like Max Born and Paul Dirac and Niels Bohr and Werner Heisenberg and the folks we've been talking about-- what they began to grapple with and to really reconcile themselves to, is that quantum mechanics can only be used to calculate these probabilities, that we get definite answers each time the real measurement is performed, and the equations tell us the probability to get that definite answer versus the other. What the equations do not seem to allow the physicist to do, then or now, is get any knowledge ahead of time of what the actual value was for that property before the measurement was performed. So if we write down a valid quantum state like this, this tells us that the object has some likelihood to be measured with one property, some likelihood to be measured with some other property. But we don't know until the actual measurement is performed which property the particle really has on its own. Bohr went even further. It's not just that we don't know. He began to suggest that maybe the particle itself simply did not have a definite value prior to its measurement. So the one statement is that we don't know for sure. And that's a statement of our ignorance, the state of our knowledge. And therefore, we average over our ignorance by calculating probabilities. Bohr and others following his line of thinking reasoned maybe the probabilities are even more radical. Maybe the world itself, so to speak, didn't know. What if the electron simply had no definite value of spin, neither spin up nor spin down, prior to its measurement? And again, to bring that home, I like to think of human-scale examples. What would that be like? That's as if saying a person had no particular weight until she stepped on a bathroom scale. Not that she didn't know or she misremembered or it might have been one of many values. What if she literally had no weight at all, no definite weight at all, until she stepped-- until she measured it by stepping on a scale? That's a pretty radical notion, a very strong break from how we think about physics either with Newton or with Maxwell. And not everyone liked that development. So one of the first people to really begin to raise some cautions about that approach is Albert Einstein. So he was, of course, a major contributor to some of the early work. We saw many of his very important contributions within the so-called old quantum theory epoch, even more that we didn't have a chance to talk about in this class. He was all over this early quantum theory stuff. And he was following these more recent developments very closely. And he became more and more dissatisfied, kind of uncomfortable, with this direction that people like Niels Bohr and many of the younger guard were pursuing. So he wrote a series of really amazing letters to his very close friend Max Born, the same Born we've seen many times, who worked closely with Heisenberg who introduced this probability interpretation for Schrodinger's wavefunctions. They were very close friends. They were contemporaries, same age. They'd known each other for a long time. And when they weren't in the same city, they would write letters. And many years later, those letters were collected and published. It's a wonderful collection. 
So Einstein, we know, grew frustrated with this restriction to only calculating probabilities. And he would often vent his frustration to Born and actually try to convert Born, Born himself, who had actually argued for this probabilistic interpretation. Here's one of the most famous excerpts of quotations from the letters. There's a whole series of very juicy letters they exchange. But as you might well have heard this last part, comes from a private letter that Einstein had written to Born in December of 1926, five months after Born introduced the idea that psi relates to probabilities. So Einstein writes that "quantum mechanics is certainly imposing." I think by that he means impressive. "But an inner voice tells me it's not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the 'old one," by which he means something like God, kind of spiritual notion. He goes on, "I, at any rate, am not convinced that He--" sorry. "I, at any rate, am convinced that He is not playing at dice." And you've probably heard this paraphrased. Einstein claimed that God does not play dice with the universe. That's where that quotation comes from, that surely the world has definite properties, whether we happen to know them or not. That's really what he's arguing. It turns out Einstein didn't only raise this question with Max Born. He got very-- into very animated discussions with Erwin Schrodinger. And he had many opportunities, starting very soon after Schrodinger published this pathbreaking work. Schrodinger published his work while he was still a professor in Austria, but very quickly, he was then hired to the fancy professorship in Berlin starting in 1927. In fact, he took over the professorship when Max Planck retired from it. So now Schrodinger, starting in 1927, was the senior-- the ordinarius professor of theoretical physics in Berlin. And Einstein was in Berlin at the Prussian Academy. Now they were neighbors. They became even better friends. They would hang out all the time. We know this from their letters and from other documentation. They would enjoy these-- I love this, the [GERMAN] evenings. These are Viennese sausage parties, which we should bring back. They would also go sailing on a lake together. Einstein owned a summer home in Germany, and he loved sailing on the lake. They'd go on sailing trips together. And they would argue and talk about quantum theory on many of these visits. And in fact, Schrodinger began to feel very much like Einstein. They became-- both became suspicious or skeptical of the direction that quantum theory was going, even though they both, Einstein and Schrodinger each, had done so much to actually build this formalism of quantum mechanics. So one of their surviving letters-- Einstein asks Schrodinger to imagine a particular situation. He wanted to drive home why we shouldn't be content to stop only with calculating probabilities. So imagine that someone had hidden a ball, let's say a red ball, into one of two identical closed boxes. And we can't peek. We don't know where the assistant placed the ball. Prior to opening either box, we could calculate a probability that we would find the ball in either box one or box two. So before we check, before we perform the measurement by measuring where the ball actually is, our probability would say the likelihood to find the ball in box one is 50% and the likelihood in box two is also 50%. Einstein writes to Schrodinger, "Is this a complete description? NO." All capitals. "NO. 
A complete statement is, the ball is or is not in the first box." There should be a value to the property of the ball's location, whether or not we know what it is. We can calculate probabilities based on our ignorance, not based on how the world is. And Einstein thought people had been too quick to conflate one with the other. Now, these discussions were often very spirited and very friendly. They really mattered to each other very much. Those direct face-to-face interactions were pretty soon disrupted with the rise of the Nazis. And we'll talk more about the rise of the Nazis again, actually, starting in the next class session. But as I'm sure you know, Hitler was elected in an open election as chancellor of Germany in January of 1933. And very quickly, the Nazi party then took over more and more elements of the German federal government and local governments. And so by that spring, within just months of Hitler's first election, both Einstein and Schrodinger independently left Germany. Einstein-- we'll talk more about their reasons for leaving. The point is, they were now no longer in Berlin. In fact, they were no longer on the same continent. Einstein resettled to the United States. He moved to Princeton, New Jersey. Schrodinger moved at first to Oxford in England. He later settled for the war years in Dublin, in Ireland, but he got out of Germany, out of the continent. So now they were an ocean apart. And that was personally very disruptive. But for us historians, it was very lucky because now they could only talk to each other through letters. And most of these letters were saved and have survived. So we can see how their conversation continued, but now we have more of their conversation because it all had to be written down because they weren't sitting up all night eating their Vienna sausages or sailing on the lake. So here's one example. Again, I write about this in one of the essays in the reader for today, one of the readings. Here's a letter that Einstein wrote to Schrodinger now about two years or so after they had both fled Germany. He says, "Imagine a charge of gunpowder, of weaponry, that's intrinsically unstable, that has even odds, even likelihood, to explode over the course of one year." So now he's no longer talking about a kind of emotionally neutral situation, like a red ball hidden in one of two boxes, right. Suddenly, he's thinking about-- the world is racing toward war, at least certainly Europe is. Let's talk about worldly things like caches of weapons and what's going to happen with that stockpile over time. The nature of the discussion changes quite dramatically. So in Einstein's letter, his private letter, he says, "In principle, the situation can easily be represented quantum mechanically." It would be a superposition of these two opposite properties, but a sum of the two, just like with any superposition. After the course of a year, this is no longer the case. This might make sense as the state of the system at the start of one year, is what Einstein is suggesting, but after a year, this can no longer be an accurate description, at least argues Einstein. Rather, the wave function describes a blend of not yet and of already exploded systems. But how could that be, right? In reality, there's just no intermediary between exploded and not exploded. Think about the nature of his examples now, even though he's still talking about statistics and probability. Schrodinger writes back a few weeks later, and now there's a twist. 
This is where the so-called Schrodinger cat is first introduced, first written down. It actually first appears in a private letter from Schrodinger to Einstein responding to this exploding cache of gunpowder. Schrodinger sits with that. And he writes back this example, now. This is from his private letter in response. He says, "Confined in a steel chamber is a Geiger counter prepared with a tiny amount of radioactive uranium." That's over here in our cartoon, courtesy of Wikimedia. "So small, such a small amount of radioactivity, that in the next hour it is just as probable as not to expect one atomic decay." So it has a 50/50 chance to decay in one hour. Its half-life, let's say, is one hour. There's an amplified relay system, so if any radioactive decay is detected here, it will trigger this thing to drop a hammer to shatter a bottle. And the bottle contains what they called prussic acid, which is a certain kind of cyanide poison. And cruelly, a cat is also trapped in the steel chamber. So it is a windowless box. You put a cat inside, which is not very friendly to the cat, plus a radioactive source with a sensitive detector with a particular half-life. It's only 50/50 odds to detect even a single decay after one hour. If even one decay happens, it will trigger the thing and break the vial and kill the cat. If no decays happen to occur in that hour, the vial will not be broken, and the cat will live. After one hour, the living and dead cat are smeared out in equal measure according to quantum mechanics. Again, we can write this Schrodinger-like superposition of totally opposite properties. And yet quantum mechanics, they both argue, has no way to tell the difference or to tell us what the real state of the world is, only the likelihood. Einstein writes back. He says, "Your cat shows that we are in complete agreement." Now they're talking about life and death with poison and gunpowder, not about balls and urns. A wave function "that contains the living as well as the dead cat just cannot be taken as a description of the real state of affairs." And by the way, this is the version, almost word for word, that then Schrodinger publishes in this very famous article later that autumn. It comes out in November of 1935. And that's the first published version that comes to be known as Schrodinger's cat. It's almost exactly word for word what Schrodinger had first written in his private letter to Einstein. And there's an irony in all this, a kind of historical irony, that by the mid-1930s, by 1935, Schrodinger had become quite skeptical, actually quite critical, of some of his own contributions to quantum mechanics. It was Schrodinger who wrote down the Schrodinger wave equation. Schrodinger said, isn't this great? We know how waves behave mathematically. They obey things like superposition. And yet 10 years later, between 1926 and 1935, on the order of 10 years, Schrodinger, like Einstein, had become quite uncomfortable with the conceptual direction in which people were trying to make sense of his own equation. So he became actually an outspoken critic. And so the irony, at least, I think, for us today, is that he invented this now totally famous cat paradox-- there are internet memes up and down-- not to help teach people about his equations, but actually as a critique. He invented the paradox to say there's something rotten and wrong with quantum mechanics, not as a fun way to teach the theory. So I'll pause there. We can have a little discussion, some questions. 
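Just before the questions, to connect the cat back to the notation from before: schematically-- and I'm suppressing all the details of the counter and the relay here-- after one hour the quantum state of the whole box is something like

\[ |\Psi\rangle = \frac{1}{\sqrt{2}}\Bigl( |\text{no decay}\rangle \otimes |\text{cat alive}\rangle \;+\; |\text{decay}\rangle \otimes |\text{cat dead}\rangle \Bigr), \]

a superposition of those two totally opposite situations. That's exactly the kind of expression Einstein and Schrodinger thought could not be the whole story.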
So Gary asks, is it possible that even after measuring a definitive measure, like the weight on the scale, the electron or the person does not have at all definite values, but its elemental state probabilistic-- I'm not-- Gary, this is a little garbled. Let's see. I think the idea is that it would be one thing to say that the answer's-- the definite answer we'll get anytime we perform a measurement is an answer drawn from some distribution with some weighted probabilities. But it's a different thing to say there was actually no value of that property whatsoever. If we had never performed a measurement, the person would never have had any weight, not that we wouldn't know which weight it had from within, say, a Gaussian distribution or a bimodal. It's that-- the more radical interpretation that at least people like Niels Bohr began to articulate was that there was no definite value at all for that property. And that could be a property as basic as is the cat alive or dead. That's what Schrodinger was thinking about. Not just what's the value of its portfolio if the stock market closed in New York but is still open in Tokyo. There could be reasons why certain values might be subject to uncertainty. But this was like, let's take one value. Every time we perform a measurement, we get a definite answer. It's not that the question is poorly posed. It's that there was no answer before we asked the question. I think that's what got people like Einstein and Schrodinger even more riled up. [LAUGHS] Alex suggests that Schrodinger better hope the cat is dead, or else the cat's going to come for him. It's true. The prussic acid is a pretty cruel-- that's not so nice. And so coming back to Gary's point, that the idea was that what if nature simply doesn't have definite properties until we ask questions of it? And that might be perfectly comfortable for us today. It's not very comfortable for me. But that's the sharpened question that led people like Einstein and Schrodinger to begin to really question even their own work or the direction in which their work was being taken. And again, it's for properties that otherwise could be trivially measured. So it's not like it's a hard measurement to perform. Oh, I don't know what the answer is. There's a lot of statistical uncertainty. It might have been this or that. It's like, nope, the answer was this versus that. I can tell them clearly apart. And yet until I actually do the measurement, I have no definite way of knowing which it will be. And maybe neither does the electron. That was the weirder part, that last step. Maybe it's not about our lack of knowledge about this system. What if the world itself simply had no particular value for objects in the world? Stephen, thank you. That's a fantastic question, and a really hard one. So it's unfair to ask me. It's a good question. It's a great question. That's still a subject of research. That's still something actively worked on by experts in the field. So Bohr himself had an answer for that. It's not an answer that everyone accepts anymore. Bohr's answer was that there should be some irreversibly amplified detection event. So maybe-- and that's a mouthful. Everything Bohr said was a mouthful. He spoke very verbose. It wasn't clear in Danish. It wasn't clear in German. It wasn't clear in English. That's [INAUDIBLE]. But his idea was that maybe the electron-- some little electron interacts with some little equally microscopic thing. How would we know? 
To Bohr, the measurement was completed when some irreversible signal of that event, of that interaction, was amplified to macroscopic scales, irreversible being the key point. So that could be, like, there was-- the photographic film was developed and there was really a spot here, not a spot there. There was something that we can't-- that can't be washed away. Now, let me be clear, that's not at all the taken-for-granted answer today, and a real question remains. This even has its own name, the so-called measurement problem, which is: what counts as a measurement? And if the whole world is ultimately made of atoms and atoms obey quantum theory, then aren't the measuring apparatuses also subject to all these same quantum effects? So the question has to do with, as you say, what's the scale at which the quantum mechanical effects are no longer dominant? And that's, I think, what Bohr was gesturing toward in this notion of an irreversibly amplified classical signal. And as you can see from the rate at which I'm waving my hands, clearly I'm hand waving, because it is genuinely still actively debated, even among the experts today. That's exactly right. And it does go right back to what we spoke very briefly about yesterday, the so-called quantum-to-classical transition. We can get buckyballs to do double-slit interference. We can't get you or me to do that. So it's very directly related to that. So one line-- let me just go off script a little bit, jump ahead a few decades. One line of reasoning-- and Amanda hints at this already in the chat as well-- one line of reasoning is that this does apply even to you and me and to macroscopic things, to you and me on bicycles, to viruses. You might know where I'm going, and that both answers are real. They're just real in causally disjoint regions of the universe. This is what's now often called the many worlds interpretation, a very eye-raising-- eyebrow-raising-- title. The original title is the relative-state formulation. What if it really is quantum mechanics all the way down, and we just have consistent-- self-consistent branches? So the cat is both alive and dead, but any observer who sees it as alive, that version of that observer is now following one track in a kind of forked path-- coming to a fork in the road-- and anything else that observer's path will witness will be consistent with having found the cat either alive or dead. I forgot which I said. So one example is: this is all quantum mechanics, period. And just deal with it. And that is very much like what Amanda suggested to Gary. A different, or a distinct, approach-- sometimes they get combined-- is to say there's something that happens from the so-called environment, from having lots and lots of quantum mechanical stuff together and interacting together, that the essential quantumness can be smeared out. It can decohere, is the technical term, that the effect of the environment is in some sense to reduce the range of quantum possibilities to one definite outcome because there are additional physical interactions that we haven't taken into account. And that's more like how much stuff do you need before quantum behaves like classical. That's like a big box is different than a very, very tiny electron. So it's more like, do you have enough stuff, enough environment, that it's going to somehow smear out the range of possibilities and leave only one left? And sometimes people combine the two. Maybe it's actually that decoherence plus many worlds is the answer. 
So that's-- when we go to the cutting edge, the cognoscenti today, we'll worry about the measurement problem. These are the kinds of questions they ask. They're only able to ask them because of the conceptual work being done back in 1935. The inverse is not true, right? Schrodinger and Einstein didn't have these options to think about at the time. So these are all coming downstream from worrying about exactly this question about-- the electron could have been either way. Didn't have an answer. The cat doesn't make sense, so that it could've have been either. Maybe it could, right? So that's why that's still going on today. So let's see. Aiden asks, is it theoretically possible to accurately predict if the uranium will decay or not in short scenario, but we as humans with the technology available just don't know enough? Very good. So Aiden, that's the kind of thing-- we'll actually come to that very soon in the next part. That's the kind of thing that Schrodinger and Einstein said should be possible. They didn't have the final answer in hand. But that's what they wondered-- that's what they thought any reliable theory of physics should enable one to do. So they never said quantum mechanics is wrong in the sense that it violates measurements. It yields the correct probabilities for definite experimental outcomes. It's accurate as far as it goes. But they said it doesn't go far enough. So some ultimate physical theory, they argued, should enable you to say in advance the uranium nucleus really will or will not decay, not just it has a likelihood. It has a definite state in advance. So the electron spin really will spin up, even though we didn't know it in advance. And so that's more what Einstein and Schrodinger said should be possible. Let's keep aiming for that. Let's not stop short. They didn't have a worked out model that would allow them to do that. That was their aspiration. It's a great point, Yangwei. It's tricky because on the one hand, Schrodinger and Einstein both were being kind of wildly creative and original. I mean, Schrodinger gave us this wave function we didn't have-- we, collectively, didn't have before. And yet you're quite right that his approach was, in some sense to retain as much as possible of that look and feel. So it's not like it's just a boring repetition of what had come before. It's not-- there's real original work being done. But the character of it, some other kind of way of describing it-- it does feel different. And I think it felt different at the time to practitioners than the kind of frankly brash, young, very-- what can I say? Kind of alpha male aggressive work, at least in portrayal, by people like Heisenberg and Pauli. Pauli loved telling everyone else they were wrong. In fact, his favorite expression was not-- he would say, that's not right. That's not even wrong. It's not even-- that's so stupid, it's not even a candidate for being right or wrong. That's not a nice thing to say to people, right? Pauli was pretty-- he had an acid tongue, even as a 20-year-old, let alone as a 50-year-old. He was just mean, aggressive, mean, "I'm smarter than all of you." There was that kind of thing going on, to be sure. And so Schrodinger was-- and Einstein both were doing creative, original, new stuff well into their 40s and 50s. It gives me hope, frankly. But it did have a different kind of style, and in some sense, a different character than some of these folks in their mid-20s. 
Let me press on before we all debate the Hugh Everett many worlds interpretation because then we'll never leave that branch of the multiverse. But let me go on now to talk about the second big critique of quantum theory by Einstein and his younger colleagues, the so-called EPR thought experiment. So this was published in the very same year as Schrodinger's cat. It actually came out a bit earlier. And this is one of the things that Einstein and Schrodinger were writing about across the ocean. So soon after Einstein got settled at the Institute for Advanced Study in Princeton-- it's in the same town as Princeton University, though it is formally a separate institution-- he began working with these two younger colleagues, Boris Podolsky and Nathan Rosen. And they developed an even more elaborate and really very compelling, very, very tricky critique of quantum theory, what we now call the EPR argument, for their initials. And this was in the reader. So again, it's very short. It's deceptively short. It's only four pages, but it's pretty tricky, so I'm going to take a little while to walk through the argument, which might not have been so easy upon a first reading. And that's OK. It's a very tricky paper. So they're imagining a thought experiment like this, in a kind of cartoon form. There's some source, that's the capital S here, that will shoot out pairs of particles in opposite directions. We can call them particles A and B. And each particle will be measured some distance apart. There's some detector here that will perform some measurement on particle A, some separate detector some distance away that will eventually measure some property of particle B. And the physicist here can choose to perform one of two types of measurements. In the original EPR paper, the authors considered either measuring the position along the x-axis. They can perform a measurement of x. Or they could choose to perform a measurement of the momentum in that direction, p sub x. Now, according to quantum mechanics, prior to either measurement, the system, the system of particles A and B, would be in some superposition, not just a single cat that's either alive or dead, but basically, it's like two cats, either of which could be alive or dead. But here's the twist. The properties we will eventually measure of one are tangled up or entangled with the properties we will eventually measure of particle two. So each particle is in some as-yet indefinite state, and yet their two states are somehow connected to each other. And that's written quantitatively as this special kind of superposition. There's two different two-particle states that are then put in superposition. Either particle A will have one property and particle B will have some other, or particle A will have that second property and particle B will have the first. That's two-- that's a superposition of two-particle states. So it's even more elaborate than a single cat. And what got them so exercised, what got them so upset about this, Einstein, Podolsky, and Rosen, is that this quantum state, this version of psi, does not factorize. We can't write it, at least according to quantum mechanics, as some mathematical description of the behavior of particle A over here and then, separately, in some factorizable way, some way of describing the behavior of particle B. As soon as we say anything about the likely behavior of particle A, in the same breath, we have to say something about the behavior of particle B and vice versa. OK. Why was that so concerning for them?
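Before turning to why that worried them, here is a minimal sketch of what "does not factorize" means. It uses a discrete, spin-like two-particle state as a stand-in for the continuous position-and-momentum state of the original EPR paper, so the particular state written below is an illustrative assumption, not the one on the lecture slide:
\[
|\psi\rangle \;=\; \frac{1}{\sqrt{2}}\Big( |{+}\rangle_A\, |{-}\rangle_B \;+\; |{-}\rangle_A\, |{+}\rangle_B \Big),
\]
and there is no way to rewrite this as a product \( |\phi\rangle_A \otimes |\chi\rangle_B \) of a state for particle A alone times a state for particle B alone. A non-entangled pair would factor in exactly that way, and then anything said about A's statistics could be said without mentioning B at all.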
Well, let's suppose that physicist one over here at this detector happens to select a position measurement for particle A. And then, from the way the pair was prepared together at the source, she could immediately infer the position of particle B once she completes her measurement. On the other hand-- sorry, again, on the other hand, she could have chosen to perform the other kind of measurement. She could have chosen just as well to have measured the momentum along the x-direction and then, this time from the conservation of momentum, immediately inferred the momentum, some other property, of particle B. What if she waited until the very last possible moment to decide which measurement to perform? So now there's no time for even a light signal traveling at the very fast speed of light to get from here and catch up with the particle B in time to give it a kind of status update. Oh, when you get measured, make sure your properties are consistent with what was just measured of particle A over here. So if even a single light beam couldn't have updated particle B in time, then particle B must have had all its own relevant properties on its own, prior to its measurement. It wouldn't have time to wait for an update. So the only thing that made sense to Einstein, Podolsky, and Rosen was that particle B already had its answers at the ready. It already had definite values for those properties. It was not, like Bohr would have insisted, a person who has not yet stepped on a bathroom scale. Particle B would have had definite values, both for position and for momentum. And we didn't know what they were yet, but the particle had them. The problem with that, according to EPR, is that quantum mechanics has no way to describe quantitatively particle B's properties on its own, separate from what's going to happen to particle A. So they conclude in this famous paper that quantum mechanics must be incomplete. And so this assumption, the supposed incompleteness of quantum theory, that-- sorry, that conclusion rests on two assumptions. The first of them, they tell us right on the opening page. It became known as the reality criterion. This is on the first page of their paper. And quoting their own text, "If, without in any way disturbing a system, we can predict with certainty, that is, with probability equal to unity, the value of a physical quantity, then there exists an element of reality, something about the world, not just about our knowledge of the world, corresponding to that physical quantity." So that's an assumption, that quantum objects possess their own properties on their own. They carry them around as they move through space, independent of and prior to our efforts to measure them. The person has a value of her weight, and it can be revealed upon a measurement. The measurement doesn't create her value of the weight. That's them responding to Niels Bohr. A little later in the paper, they make explicit their second, equally critical assumption, that would later be called locality. It's now on page 779 of their short article. Here's their wording for that. "Since at the time of measurement of the two systems, particles A and B, they no longer interact, no real change can take place in the second system, particle B, in consequence of anything that might have been done to the first system." That's what we think of as locality, that local causes yield only local effects, or if we think about Einstein's work on relativity, that no force or information or influence of any kind can travel across space arbitrarily quickly.
Things could travel maybe as fast as light, but not infinitely fast. And that's the updating paradox that they had for particle B. So even a dozen years later, this was still upsetting Einstein, that quantum mechanics seemed to fail this test. And he wrote, again, in a very famous letter again to Max Born in the late 1940s, "I cannot seriously believe in quantum mechanics because a theory cannot be reconciled with the idea that physics should represent a reality in time and space free from spooky actions at a distance," a phrase that has followed this field since 1947, or I guess, since the letters were published in the early '70s, that he thought anything-- any physical theory, including quantum mechanics, that says that these two particles could remain coordinated across arbitrary distance-- that's a ghost story. That's spooky action at a distance. That's not a proper physical theory in which local causes yield local effects. So let me pause there. Again, it's a complicated article, the EPR paper. It's a classic, but it's very dense. So what I hope we would get out of it upon reading it was at least that-- those important highlights, that they draw a conclusion about the nature of explanation in quantum theory and that their conclusion rests squarely on those two separate, but equally important, assumptions. Objects have their own properties, like it or not. The cat's just dead. Get over it, right? Even though we didn't look yet, that kind of thing. That's the elements of reality. And stuff can't travel arbitrarily fast. And Tiffany says-- she's quite right. Tiffany, thank you so much for sharing that. There's a really lovely, lovely book. It's actually a kind of graphic novel I think you'd enjoy, very creative, by a pair of authors, Jeffrey and Tanya Bub. Those are father and daughter. It's a lovely pairing. Jeffrey Bub is a very accomplished physicist and philosopher of science, now very senior, and his daughter Tanya is a really terrific and very accomplished graphic artist. And they teamed up to write this very fun and quirky and creative graphic novel all about this notion of Einstein's critique and more recent thoughts. It's called Totally Random is what it's called, Totally Random. It's a great, great book. And what would Einstein say about current work on entanglement? I don't know, Gary. I can make predictions for what he'd say. I don't know what he'd say. But I think about that a lot, for reasons we'll talk about in the last part of today's class. It's a fair question. But there's a lot that's been done since his time with which we've been able to sharpen these questions. And so the questions that he set in motion have really energized generations of work ever since. That's pretty amazing. Isn't it more accurate to say spooky at a distance because no physical action happens? That's right. So Alex, what Einstein was worried about was it sounded like telepathy. It sounded like some occult connection, where somehow these two things at arbitrary distance-- it might not just be between Princeton, New Jersey and Dublin, where his friend Schrodinger was. It could be between Princeton, New Jersey and the Andromeda galaxy. And somehow instantaneously, at least according to quantum mechanics, instantaneously, these objects will behave in a rigidly coordinated way, even though there was no time to let one know what the other was going to do. So it really is this-- no physical mechanism. Nothing could have traveled through the intervening space to connect them. 
And that's what he thought was spooky. And spooky was not meant to be like, oh, cool, it's spooky. I'm intrigued. Like, that's garbage, right? Think back to what we saw in the previous class, Schrodinger and Heisenberg calling each other's work disgusting and repulsive and trash. It's in that spirit that Einstein is writing to his very dear friend Max Born, still trying to convince him in 1947, still trying to convince him in 1955, right up to the time that he died. He just couldn't let this go. Any other questions on the EPR thought experiment or the role of those two assumptions and all that? Yeah. Thank you, Stephen, so much. AUDIENCE: [INAUDIBLE] DAVID KAISER: So on the one hand, that's a great preview for the last part of today's lecture. So on the one hand, hang on. There's also a lot of optional stuff. I know it's, like, midterms. It's a busy, busy time of the term. You're all working on paper, too. Hint, hint. So you're busy. But there's optional stuff on the Canvas site for today's class session as well just for whenever you have time and curiosity. And there's plenty more. I'd be delighted to talk more about this individually as well. But one of the optional things for-- on today's group of readings, totally optional, is a chapter I wrote over the summer, still a draft, on the current status of experimental tests of quantum entanglement and ways to try to really check, have we closed down every possible objection, even far-flung but logically consistent objection, that someone like Einstein, or let's say, an Einsteinian, might raise to the existing battery of tests, and what would it take to close down every possible alternative explanation? And so there's a lot of work that's going on around the world to this day, some of it with my own group and, of course, many groups around the world. So that's on there. Not that I'm asking you to read another 40 pages during a busy time, but there's some resources. And I'd be glad to chat with Stephen, and really, frankly, anyone. I love this stuff. About these bigger questions. So let me give you the bumper sticker version. I think every-- well, I don't know if I'd say every. The vast majority of working physicists today agree that quantum entanglement is a fact of the world that particles seem to betray this very strong correlation across arbitrary distance. And I'll talk more about some experiments lately that have shown that more clearly. Is there a physical mechanism for that? Maybe there's some more exotic structure of space time. Maybe there's a wormhole that would give an Einstein description for a local physics that just was traveling a more complicated path through space and time than our naive picture. I don't think that's the best answer, but that's a legitimate answer. Maybe it's some other thing. So people are still trying to get a physical mechanism that might fill in the dots. But there seems to be little room for doubt anymore that the phenomenon of entanglement is robust and can be reproduced at will. The idea that maybe-- a separate possibility is the one that I think is not the leading-- not the best way to go, would be to say that everything comes down to initial conditions, that the Big Bang, everything was preordained. 
It's not just that everything's determined, that A leads to B, but everything was so-called superdetermined, that even my choice to teach this lecture-- I didn't have that much choice, but you know, my choice of what to do on my slides, the choice of the National Science Foundation to give my group our grant, the choice-- the fact that all these cascading, seemingly random decisions since the Big Bang itself, have all been preordained. That's a-- I think a not very compelling and yet logically self-consistent explanation for all these phenomena. But what's amazing to me in the light of experiments like my own, but many others, is that's what's left. You get to say, it's quantum mechanics. Like, just, it's quantum mechanics, right? Sorry, Einstein. Or it's something that sounds, frankly, even more bizarre and potentially not even really subject to experimental tests, right? So what we've been trying to do in the years since Einstein, Podolsky, and Rosen, and really since John Bell, whom I'll talk about in a moment, is to clear out all-- as many intervening logical possibilities as could have been left. Oh, it could have been this, or this, or this, or this, or this. OK, well, let's design updated experiments to clear out as many of those other possibilities as we can and see what's left. And Stephen, the short answer today is: what's left is quantum mechanics. There's some nonmechanical connection that leads to correlations that are measured reliably all around the world all the time. That's not because it's like two tin cans connected by a piece of twine. It's not a mechanical connection. We've ruled out all those kinds of models, or many, many of them. It's either that, or some, really, even more strange-sounding possibilities. Strangeness is, of course, a subjective judgment. But they're no more appealing from the standpoint of what Einstein would have been arguing for, either. The Einstein-like options we've really been able to pretty well put in a box and subject them to test. That's really what I'll talk about in this next part of the class. But it's a big, juicy question. I'd be delighted to talk more about it, now or whenever any of you has a chance to catch your breath, maybe after midterms and all that. Let me press on, then, to the last part. Because this now comes to some of the more recent developments in exactly this kind of discussion. So almost exactly 30 years after that EPR paper, late in 1964, the Irish physicist John Bell went back to that question. He really scrutinized the EPR paper. You can see he even titled his own paper "On the Einstein-Podolsky-Rosen Paradox." And he went back to those two critical assumptions that the authors had made. And just to remind you, the first one is what they had called the reality criterion, these elements of reality, the assumption that each particle has definite properties on its own, prior to and independent of any measurement. And to my mind, that's like saying that each particle carries with it a kind of logbook or an instruction manual, so that anywhere it goes through space, it has an answer to the question that might be asked of it. It has a definite value for its weight. It has a definite value for its spin along the z-direction. We might not have measured it yet, but it had a real value. That was their assumption one. And assumption two, remember, is this locality, that no force or information or influence can travel across space arbitrarily quickly. What Bell argued is that these are actually assumptions.
Any conclusion is only as good as the assumptions that went into its derivation. And this pair of assumptions became called local realism. The local part is this notion consistent with relativity. There's a speed limit before A could affect B. And the realism part refers to the object having real or definite properties on its own, independent of us and our measuring. What Bell suggested in this paper late in 1964 is that these are assumptions. Maybe we should try to test them instead of just concluding that quantum mechanics is or is not correct. So he introduced what we now call Bell tests. And you see these are very closely modeled on that original EPR framework. So we have some source of particles that are prepared in a special way, some source that shoots out pairs of particles A and B, two identical measuring devices at-- apart from each other in space. And these have-- these boxes are now kind of generalized. There's some choice of what measurement to perform; we call that the detector setting. I'll use lowercase letters, little a and b. That's like saying we could choose to measure the spin along the x-direction. We set the dial. That's how we orient our magnets. We could choose to measure the spin along the y-direction. We change the dial, rotate the magnets, choose to measure some intervening angle. So we choose the question to ask. We choose the measurement, the specific measurement, to be performed. And that's encoded in these detector settings, little a and b. We set the question. The particle encounters the device. The device gives us an answer. That's the measurement outcome. We'll call that capital A and B. Now, to be a little more quantitative, one of the things that Bell did was to consider certain kinds of properties, certain questions we could ask, for which the answer is always of the form either plus 1 or minus 1. If we measure position, the answer could be 1.07 meters, 1.0732 meters. It could be anywhere along a continuum. Likewise for momentum. So Bell clarified, let's stick with measurements for which the answer is discrete and one of two possible discrete outcomes, like spin or like polarization of a photon of light. It's either polarized exactly parallel to the polarizing filter or perpendicular, not in between. So the answers can always be put in the form of a kind of yes/no question, spin up, spin down, heads or tails, that kind of thing. And then we're going to introduce these correlation functions. This is a fancy way of just saying what's the product of those two outcomes. No matter what question we ask, no matter how we set the detector setting, little a and little b, the outcome, capital A, was either a plus 1 or a minus 1, no matter what question we asked. Spin along this direction, spin along this direction, spin along this direction. The answer for any of those questions will either be spin up, plus 1, or spin down, minus 1. So we can compare the answers, even as we ask different questions. And we'll average that, the brackets here, and we will take an average over many, many trials in which we ask a certain pair of questions, spin along this direction here, spin along some other direction here, and see how often the answers were exactly the same or opposite. OK. And then Bell derived this really innocuous-looking expression. This is a comparison of the correlations. So how do those outcomes of measurements behave as we vary the pairs of questions being asked? That's all this means. And again, I'm going to go a little quickly here.
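Written out, the quantities just described look like the following. The specific combination S below is the CHSH form of Bell's argument, which is the version most experiments actually use; treating the lecture's S as this combination is an assumption, since the slide itself is not reproduced here. The correlation function for a given pair of settings is
\[
E(a,b) \;=\; \big\langle A \cdot B \big\rangle_{a,b}, \qquad A, B \in \{+1, -1\},
\]
the average of the product of the two outcomes over many trials run with settings a and b, and the quantity being compared is
\[
S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b').
\]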
There are optional lecture notes that focus just on Bell's work in particular on the course Canvas site. So this quantity S is really just saying, as we do each of these comparisons with question a and b, question a prime and b, question a with b prime-- you get the idea. How do these answers line up? Are they giving the same answer, plus 1 with plus 1, minus 1 with minus 1, or are they giving opposite answers? Now, Bell goes on-- and again, a deceptively simple or a short paper. It's only six journal pages, very brief. And yet again, it's very dense. He goes on to show that any theory that would obey this local realism of the Einstein-Podolsky-Rosen framework, no matter what the specific model is, it should be put-- one should be able to put it in this general form. Bell argues there must be some properties of those particles that come from how they're prepared. So the lambda is some set of shared properties, coming from some way of preparing particles A and B together. There's some likelihood to get this set of properties versus another, so we're to average over all those. But then there's some probability to get a definite answer here when we subject particle A to a measurement. We choose to perform a particular measurement, and the outcome we get should depend on the question we asked and the shared properties of the particles. That's what this is encoding very abstractly. Likewise, we should be able to predict what answer we'll get at the other device. The answer capital B should depend on the question we ask there, our detector setting, the orientation of our magnets, say, and on these shared properties lambda. And then Bell shows that any framework like this, any model that is consistent with local realism, will have an upper limit. There'll be a limit to how correlated these measurements can be. So they can line up a bunch of the time. They can't line up arbitrarily often. This becomes known as Bell's inequality. This combination of comparisons, combination of correlation functions, should be bounded for any theory, he argues, any possible theory, that is consistent with the local realism assumptions of EPR. And let me just dig in here a little bit. And again, there's some more in the lecture notes. Why does it say that? Look at the form that he's chosen to write it in. He's encoded this locality, that the measurement outcome here, whether we get spin up or spin down, depends on the question we ask locally and on the shared properties that the particle could have carried with it. It cannot depend in this framework on either the question that's asked very far away or on the answer that's found very far away. That's this local realism. The particle must have had its own properties to answer, so to speak, its own question. And that's like saying the answer we get here can depend on local things, local to the box, things that-- properties that particle brought with it, the particular question we chose to ask. The answer here shouldn't depend on something that could be done arbitrarily late in the process arbitrarily far away, OK? And yet, again, the calculation is pretty short. I go through it with all the intervening steps just to make it very explicit in those lecture notes.
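In symbols, here is a sketch of what that local realist form amounts to, again in the CHSH notation, which is an assumption about how the slide is written. The requirement is that the correlations take the shape
\[
E(a,b) \;=\; \int d\lambda \,\rho(\lambda)\, \bar{A}(a,\lambda)\, \bar{B}(b,\lambda), \qquad |\bar{A}| \le 1, \;\; |\bar{B}| \le 1,
\]
where the (average) outcome on each side can depend only on its own setting and on the shared properties lambda. From that form alone it follows that
\[
|S| \;=\; \big| E(a,b) - E(a,b') + E(a',b) + E(a',b') \big| \;\le\; 2,
\]
which is the Bell, or CHSH, inequality just described. For comparison, and anticipating the next point: for a spin-singlet pair, quantum mechanics gives \( E(a,b) = -\cos(\theta_a - \theta_b) \), and with the settings \( \theta_a = 0^\circ \), \( \theta_{a'} = 90^\circ \), \( \theta_b = 45^\circ \), \( \theta_{b'} = 135^\circ \), each of the four correlations has magnitude \( 1/\sqrt{2} \) with the signs conspiring so that \( |S| = 4/\sqrt{2} = 2\sqrt{2} \approx 2.83 \).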
Quantum mechanics predicts really unambiguously that for certain kinds of systems, certain pairs of particles we might prepare in certain ways-- these measurements should be more strongly correlated than Bell's limit would allow, that the answers at these distant devices should line up even more often than any local realist theory could accommodate or could account for. And not by a little bit, by as much as 40%, that when you do this comparison of the correlations, this single number S could be as large as 2 times the square root of 2. That's like roughly 2.8, 2.81, 2.82, something like that, which is considerably larger than this upper limit from Bell's inequality of 2. So that still sounds pretty abstract. And Stephen, this might be the talk you were referring to. You might have heard this last spring. But for others who might not have heard it. I like to think of this entanglement, the conceptual stakes, in terms of the following parable. Again, I find it helpful to bring this up to human scale. So I imagine a pair of twins, Alice and Bob, who work very hard. They're both very bright and diligent and hard working. And when it comes time to go to college, Alice, who is, frankly, just more diligent and more smart, gets into MIT, and so she comes to Cambridge, Massachusetts. And her very affable but somewhat duller brother Bob goes to Cambridge, England. Alice-- this is the part that's now hard for us to imagine. Alice physically comes to campus. That's hard for some of us to imagine. She then goes to restaurants. What is this crazy story? So imagine prepandemic, Alice comes to MIT. She goes to her favorite local restaurant near campus. And after dinner, the waiter asks, what would you like for dessert? So he's going to pose a question to her, and she'll give an answer. He's posing a question to her about baked goods. He says, tonight, your choice is you can have either a brownie or a cookie. She has one of two possible choices. That's like the spin either being spin up or spin down. The waiter has specified the question. That's the little a. I'll measure spin along this direction. I'll ask about baked goods preferences. He's setting the question, and Alice can give one of two possible outcomes, brownie or cookie, plus 1 or minus 1. Some other nights-- she likes this place a lot. She goes back there very often for dinner. On other nights, the waiter asks her a different question. He now asks a different kind of question by rotating that dial. And he measures in some different measurement basis, like he's rotating the magnets for spin. This new question is actually about frozen dessert. She can either have on that night an ice cream sundae or frozen yogurt. Again, he poses the question. She has one of two possible answers that she can give, a plus 1 or a minus 1. The waiter selects which question to ask by some local process. Maybe he flips a coin back in the kitchen. Alice has no clear preference. She likes all these options. And we know that because of the tally of all her dessert orders as she keeps going back for more and more fatty desserts. I don't recommend this experiment too often. But she winds up choosing each option on average equally often. So when she's offered question one about baked goods, on average, half the time, she orders a brownie. Half the time, she orders the cookie, in an order that looks totally random.
It's not clear which night she'll order a cookie or brownie, but averaged over the whole set, she has equal numbers of those choices, and likewise for the frozen desserts. Unbeknownst to Alice, an ocean apart, her brother Bob is now going to a pub in Cambridge and being posed the same questions at the same time. The questions are drawn from the same pair. So in some visits, Bob's waiter will ask him about baked goods, and he has one of two possible choices, either the brownie or the cookie. On other evenings, the waiter asks Bob about frozen goods. He can choose one of two options. Again, the waiter chooses by some random process. Bob has no clear preference. He, again, is choosing each option with equal frequency, but in what looks like a random order. They come back for Thanksgiving, and the first thing they want to do after being away for the first semester of college is compare their 10,000 dessert orders. That's what I would talk about with my sibling. So they realize there are some pretty surprising correlations that get found when they compare these sets of answers. When they both happen to have been asked the first question about baked goods, they both gave the same answer. They both gave either the brownie answer or the cookie answer at the same time, even though they might have ordered the opposite choice the next time. So when they were both asked the same question, their answers lined up, even though they seem to be answering them independently. When they were asked different questions, there, again, was a correlation, that when Alice was asked about baked goods, she took the brownie at the time when Bob took the frozen yogurt, but not the ice cream when he was asked about frozen goods, and vice versa. There was a correlation in the cross question answers as well. And then when they're both asked question B about frozen goods, their answers once again align. When they both happen to be asked about frozen goods, they both ordered either the ice cream sundae or the fro-yo at the same time. Why should this be surprising? If we just look at how Bob is answering all alone in this British pub, his answer to the same question about frozen goods seems to change and seems to in some sense depend on, or at least be correlated with, something that's happening an ocean away, about which he's not supposed to have any information. That's what Einstein said shouldn't happen, that the world shouldn't have that kind of telepathy, that kind of spooky connection. And in particular, if Bob is answering his questions out of some kind of local logbook or instruction manual, how-- he doesn't have enough information at his location, at his British pub, to know how to get his answers to line up with those that Alice will give at the same moment thousands of miles away. So how could we subject that kind of human parable to experimental test? And there's many, many ways to do it. I'm going to talk about an effort that my own group did just about two years ago with this absolutely gorgeous mountaintop observatory in the Canary Islands just off the coast of Morocco at the Roque de Los Muchachos Observatory. We were using some of the largest optical telescopes on the planet. It was fantastic. OK. So here's-- we had three main parts to our experiment. I'll go through very quickly. We're almost at time. I'd be glad to chat more about it. We set up a temporary laboratory here that was actually a shipping crate that my colleagues from Vienna shipped over. And so that was our transmitter station.
That's our source of the entangled particles. That's where particles A and B are created and then emitted. And then quite a distance away, about half a kilometer in each direction, we had our receiving stations, our detectors. So let's talk about what's going on in this transmitter station. What we're doing there is we have a pump laser, which is just awesome. It's a laser that's actually tuned to emit light in the visible range. It actually looks very much like this color of purple. You could see it with your own eyes. It's in the visible portion. It's in the purple region of the visible spectrum. We shine that laser light into a very special crystal. It's only about a square inch. It's not very big. It's what's called a nonlinear crystal. It has the fantastic property of absorbing light of a very specific frequency that comes in. And when that particular frequency comes in, it absorbs that one photon of light and emits pairs of particles, pairs of photons. So they conserve momentum. They conserve energy. Each of these particles has less energy than the incoming particle. They conserve all the things they should. But basically, one comes in. Two come out. And that's a property of the crystal. It's not any old light you shine on it; only light of a very specific frequency. So we tuned our laser very carefully in conjunction with our crystal. So that's like creating our twins. That's how we create our twins, Alice and Bob, and then we beam them half a kilometer in each direction across the island. You can see now the guide lasers here to line up our optics across the island. So once the particles travel at the speed of light, half a kilometer in opposite directions, then we subject them to measurements. And that's like these dessert-- these waiters about to take their dessert orders. We're going to measure properties of those particles after they've been created together-- they have some common history and common properties-- but then shoot them off and ask random questions of them and see how often the answers line up. So one thing we were able to do is close down that locality explanation. As we'll see, the answers line up a lot, just like quantum mechanics says they do. And we wanted to be sure it couldn't be because they were somehow sharing information en route. And we ensured that by the space-time arrangement of the experiment. The time it took to create the particles, beam them across the island, and complete a measurement, not just have it arrive, but actually complete a measurement in this case of its polarization, took just over 2 microseconds, 2 millionths of a second, for-- in each direction. And yet the light travel time between the detectors was nearly 3 and 1/2 microseconds. So there was no time to get an update from, say, Alice's side to Bob's, either to have said my waiter just asked me about baked goods or to say I just ordered a brownie, right? Both the questions that were asked at that particular moment and the actual measurement outcomes, the answers given, were space-like separated from each other. There was no way that information traveling even as fast as light could have updated one side about what's happening on the other side. What else is happening at these receiver stations? Here's where these amazing instruments become so important. I can't believe they let us play with these. I didn't get to touch them. They knew better than me.
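One quick consistency check on the timing argument a few sentences back, taking the two receiving stations to be roughly a kilometer apart -- an assumed figure, based on the stated half-kilometer arms in each direction:
\[
t_{\text{light}} \;\approx\; \frac{d}{c} \;\approx\; \frac{1.0 \times 10^{3}\ \text{m}}{3.0 \times 10^{8}\ \text{m/s}} \;\approx\; 3.3\ \mu\text{s},
\]
which matches the "nearly 3 and 1/2 microseconds" quoted above and is comfortably longer than the roughly 2 microseconds it took to create, transmit, and measure each particle. That gap is what makes the setting choices and the outcomes space-like separated from one another.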
These are each 4-meter telescopes, some of the largest anywhere, the Galileo National Telescope on one side and the William Herschel Telescope on the other. So these telescopes were so big, 13-foot polished mirrors, polished really to perfection, that they could gather light in a fraction of a second, less than a microsecond, even from very, very dim, very distant objects, like very distant extragalactic objects like quasars, some of the most distant galaxies in the universe, distant from us. So what we were doing is performing real-time measurements of the color of the light from those quasars, one telescope pointing in one direction of sky, the other pointing in the opposite direction of sky, and taking new measurements of that light every-- roughly every microsecond, every fraction of a second, while the entangled particles were in flight. So you emit the particles first. They don't know what questions will be asked of them. They don't know, in a sense, what properties are about to be measured of them. And while they're traveling, we perform a real-time measurement of some of the oldest light in the universe. Sometimes in that very narrow window, the light will be more blue than average, sometimes more red than average. And so on the times when it's more blue, that activates local electronics to perform a measurement of the earthbound entangled particle in one basis. We measure the polarization in one orientation in space. That's like the waiter choosing to ask about baked goods. On those microsecond windows when the light happened-- the astronomical light happened to be more red than average, we perform the other measurement. Let's say we ask the other question. We did this now across the island. We did this with 20,000 pairs of particles. And we found that the measurements line up exactly as quantum mechanics says they would. We get almost exactly the maximum value of the correlation that quantum mechanics predicts. We violate Bell's inequality by more than nine standard deviations. The statistical likelihood that this was due to a fluke is 1 part in 10 to the 20. Take that, Gary Gensler. I'd like to see you guys do that with your finance stuff. Anyway, huge statistical significance, even though the decision of what question to ask came from events on opposite sides of the universe 8 and 12 billion years ago. So on the basis of experiments like this one and many others that I described in that optional reading, many, many kinds of experiments now, the world really does seem to be as spooky as Einstein feared or suspected, even when we close down the kinds of Einstein-like local realist explanations that otherwise might have been able to account for those correlations. So let me very, very rapidly sum up. I know we're basically at time, just to summarize today's material. Schrodinger's equation brings up this property of superposition. It is a wave equation, and therefore, solutions obey the wavelike property of superposition, that the sum of any two solutions is itself a solution. But that-- unlike with water waves, that led to some pretty jarring or unexpected questions when we apply that to the outcomes of events. If the wave functions can obey superposition and the wave functions are about likelihoods of events to happen, then what does that do to the nature of explanation? And then finally, what happens if multiple things are in superposition in this entangled way?
So the outcome over here is not only a heads or tails that we don't know yet, but is somehow bound up with something that might happen arbitrarily far apart. So let me stop sharing. My apologies for running a little bit long today. But we do have time for a few more questions now, or, of course, if you have questions, please feel free to email, come to office hours. There's lots more stuff I'm glad to send your way if you're curious. Gary is an identical twin, he tells us, and so even though he answers questions at a distance, you thought you had some free will. Exactly. So with the twins, it's not just that you would share DNA and a similar upbringing. It really is like, would you answer randomly posed questions when you don't know what question you'll be asked in advance? And even when you're asked different questions, have your answers line up? That's what-- people often say, oh, they're twins, they share so much. Of course they give the same answers. And that's what these Bell tests really force us to say. It's not just that they have similar proclivities, but really that they're answering questions they couldn't know in advance, even different questions, in ways that their answers still line up, even if one is in Cambridge, Massachusetts and the other is in Cambridge, England, or on Alpha Centauri. If that doesn't raise the hair on the top of one's head, for those of us who-- for those of you who have hair on your heads. That discounts both Gary and me. If that doesn't make you really stay up late at night, then I don't know. Then I'm done. I got nothing else, right? You guys weren't upset about the double slit. I did my best. Entanglement should just-- kind of ups the ante a little bit. Any questions on that? OK. I've stunned you into silence. It's also a busy time of the term, so don't worry about that. We'll meet again at our regular time on Monday. In the meantime, I encourage you, please, please do start thinking about paper 2. It's a chunkier assignment. You don't want to leave it to the end. And of course, as always, please don't hesitate to contact me or the teaching assistants for any questions in the meantime. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_3_Worldviews_Wranglers_and_the_Making_of_Theoretical_Physicists.txt | [SQUEAKING] [RUSTLING] [CLICKING] DAVID KAISER: Hi, everyone. Welcome back to 8.225, STS.042. Any questions on, again, any kind of logistical stuff? If not, then I think we'll jump right in. So I'm going to go ahead and share screen. I'm eager to get into this next set of material together. So hopefully, you now see my shared screen. Nods or thumbs up, just to confirm? Looking good. Thanks, everyone. OK, great. So today's lecture is still going to be situated pretty squarely in the late 19th century. We're going to pick up from where we were during the previous discussion. So as a reminder, the lecture today-- actually, unusual for me-- has four parts. Usually, my lectures have three parts. These will be four slightly shorter parts. And we'll break for some questions and discussion between each of these. Quick recap though of where we were last time-- so in the previous lecture, we looked at a few exemplars, pretty telling examples of the kinds of work that were coming together in the middle and late 19th century-- in that case, largely in Britain. We looked at people like Michael Faraday. Faraday is fascinating for his personal story, his biography, but also for what he indicates in terms of what were the pressing questions for natural researchers in his day and what were the kinds of institutional arrangements. Where did he think it would make sense to get his work done? So we saw that he had very little formal education. He was apprenticed at around age 13 to a bookbinder, and then later basically apprenticed in a new kind of position as a natural philosopher at this special kind of place called the Royal Institution, which, remember, was not a school or university. It did not have students enrolled. It was a place that fostered research and also a lot of what we might today call public outreach, public lectures, and so on. So in Faraday's own case, we know he had many reasons to dig into the topics that he wound up studying throughout his life, one of which seems to have been inspired, at least in part, by his unusual religious faith. Remember, he was a Protestant, as we talked about last time, but not an Anglican, not a member of the official Church of England. And so his Protestant sect was particularly fascinated by this interconversion, an underlying unity of nature. And for Faraday, that reinforced an interest for him in studying how physical forces could change one into the other or different kinds of effects could impact each other-- electric and magnetic, electrolysis and chemical interactions, and so on. Now, as far as Faraday and really his entire generation was concerned, all of these interconvertible effects, all of these physical forces, were made possible because they were all seated in this all-pervasive luminiferous ether. We looked at this word last time. It's a great word-- lumen- coming from the Latin meaning light and -ferous meaning to carry, the light-bearing or light-carrying ether. So in the course of his investigations of this all-pervasive, mechanical, elastic medium of the ether, Faraday introduced lines of force, and then developed from that this idea of fields, these fields that would characterize, as far as Faraday was concerned, the local state of the ether.
What were the stresses, or strains, or tensions in that elastic, bendable, flexible medium in this location at this time, in some other location at that time, at the first location at some later time? And so he introduced this notion of fields to help make sense of the underlying state of the ether. And so instead of having this action at a distance, he was really emphasizing local effects. Local causes lead to local effects. And this is what we would now consider the birth of field theory. We use field theory all the time across physics, more than ever. It's coming really from work by people like Faraday, in their mind, to understand this particular, physical, real substance that filled the whole universe, as far as they were concerned, the ether. OK. Then we looked last time at-- whoops, let's see. There we go. We looked at some of the folks who were coming downstream from Faraday, people like William Thomson, later Lord Kelvin, and James Clerk Maxwell. So they shared Faraday's fascination with electrical and magnetic effects. They shared Faraday's fascination with the all-pervasive ether. They began to approach those questions with a somewhat different toolkit. So unlike Faraday, who had very little formal schooling beyond something like early middle school or junior high, Thomson and Maxwell became early graduates of this really new, fast-changing system centered largely at Cambridge University, but quickly becoming common elsewhere. So by the middle decades of the 19th century, Thomson and Maxwell were approaching Faraday-type questions, but with a new set of tools-- highly mathematical, highly formal tools. And they were both fascinated by these mathematical analogies between electromagnetic phenomena and mechanics. And in fact, they articulated what we call the mechanical worldview. We'll see more examples of that today. It was in the context of exactly that kind of study, in part because of the importance of things like telegraphy to the emerging British empire-- it was within that context that Maxwell was studying the way in which an all-pervasive light-bearing ether would respond to kinds of disturbances. You tweak the ether here; how would that disturbance travel or spread throughout the ether? And he came to the conclusion, as we saw at the very end of the previous lecture, that light was actually not related to but was, in fact, nothing other than these transverse undulations, a certain kind of wave of disturbance within the ether. Light, in particular, was waves of electric and magnetic fields as they propagated in that kind of mechanical, elastic medium. That was just to get us back up to speed from where we were in the previous discussion. So now, I want to talk about Maxwell's equations. In some sense, we know Maxwell's equations really well. We still learn them in high school, and college, and graduate school. They're incredibly important. But the way we manipulate them, how we actually handle these equations is-- I would say thank goodness-- pretty different from how Maxwell or Maxwell's immediate circle handled them. So a lot of Maxwell's work that we remember and make most use of were results that he was working out in the 1850s and 1860s, pretty early in his career. As we saw briefly last time, he bundled all those results together in a very massive, two-volume book published in 1873, totaling about 900 pages between the two volumes, called the Treatise on Electricity and Magnetism.
And in there is where he lays out a first-principles description of electricity, magnetism, and even much of optics, given the fact that he had figured out or argued that light was nothing other than electromagnetic effects in the ether. So here's where he lays out, for his new readers, what we would now call Maxwell's equations. We had a little taste of that in the excerpt for the last class. They look like this, this orange box here. They look horrifying, right? Imagine if you had to manipulate Maxwell's equations always in this full component form. As many, maybe all of you know by now, most of the central quantities in electromagnetism are vector quantities. They have both magnitude and direction. So we have to keep track of what's their component in the x direction, in the y direction, in the z direction, and so on. And so they would write those out component by component. So just writing down things like what's the energy density stored in the field would take half a page. So it's only a later Maxwellian, in this case Oliver Heaviside, who actually invented vector notation precisely to make it easier to handle Maxwell's work. It wasn't to handle any old set of vectors. It was to handle Maxwell's work in this otherwise massive, back-breaking, 900-page treatise. So nowadays, at the height of fashion, we can each proudly wear a t-shirt that has all of Maxwell's equations, only four lines, because we can use Heaviside's really quite efficient vector notation both to represent vector quantities, like the electric field and the magnetic field, and also these differential operators, like the divergence, the gradient, the curl, and so on. These were literally invented to handle Maxwell's equations, so we wouldn't all have to do this horribleness for the rest of our lives. So if you proudly have a t-shirt with Maxwell's equations, you should thank Oliver Heaviside. It fits on one t-shirt because of his innovations. Now, what's really fascinating to me, what I want to spend a little time on for this first part of the lecture, is not just that we now have a more efficient notation to handle vectors, but that what Maxwell's equations seem to mean to Maxwell and his immediate peers is really quite different from what we take those same equations to mean. So for today's class, you had a reading, a portion of an article to read by the historian Jed Buchwald. That article was later incorporated into this really interesting book called From Maxwell to Microphysics. You see its cover shown right here. So I want to follow Jed's lead in walking through a few of these examples of just how differently, conceptually, Maxwell and his immediate circle thought his own equations applied to the world compared to how we use those same equations. We still buy the t-shirt. We think those equations mean something quite different today. I should say Jed Buchwald is a colleague of mine. He taught for many years here at MIT. He's now a professor at Caltech. But this is stuff that I think is really just amazing. OK. So to us, Maxwell's equations describe the behavior of fundamental or elementary charged particles, things like electrons, or ions, or even protons, and quarks, and things we'll get to later in this term. So to us, Maxwell's equations are talking about objects that have a fixed amount of electric charge stapled onto them; it doesn't change. Total electric charge is conserved. You can't change the overall amount of, say, plus or minus charge in the universe.
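For reference, here are the "four lines" on the t-shirt, written in the Heaviside-style vector notation just described. This is the standard modern rendering in SI units -- an assumption here, since the Treatise itself, and presumably the slide, use the older component form and somewhat different variables:
\[
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0,
\]
\[
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\]
In this modern reading, the charge density rho and the current density J already refer to collections of fundamental charge carriers, which is precisely the interpretive step, discussed next, that the Maxwellians themselves did not take.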
You have a fixed amount of electric charge attached, a constant amount attached to each microscopic charge carrier, like an electron. We would say the charge on an electron is 1 unit. It doesn't later become 1.3 units on one electron. It's a fixed amount per fundamental charge. And then electrical phenomena, like electric current, is nothing other than those little fundamental charge carriers moving around. So electric current, to us, is nothing but the motion of those elementary charge carriers or fundamental charge carriers. So we have very simple cartoons like this to make sense of things like charge and current. To Maxwell and even to most of his followers for generations, not one part of that was true. I just want to let that sink in. We use his equations. We put them on our t-shirts. We wear them proudly. I wear them proudly. And yet, what we think those literally exact same equations refer to is almost 100% reversed from how Maxwell himself thought about them or how his peers and students did. And if you pause for a moment, you say, well, it had to be that way. Some of you may know-- and we'll look at this in a few weeks if this is news to you-- the electron itself wasn't discovered-- or the experimental evidence that would come to be called the electron didn't even coalesce for about 20 years after Maxwell had published his treatise. He writes these 900 pages in 1873. It's only in the early and mid-1890s, two decades later, that physicists start to have any kind of empirical evidence that something like an electron is part of the world. So the Maxwellians were not thinking about fundamental or elementary charges that zip around with fixed charge per particle. They were thinking about a continuum, as we've seen many times now, this elastic continuous medium of the light-bearing ether. So Jed Buchwald calls the Maxwellians inverted atomists. So I think that's a pretty helpful term. They believed in things like chemical atoms. But they didn't believe that there were fundamental constituents that had unchanging properties. So as Jed nicely puts it, instead of building the world out of atoms the way we would imagine, the Maxwellians built atoms out of the world. And by the world, he really means this continuum, elastic medium of the ether. So things like atoms or charges were coagulations of this more primary stuff, the ether. And that has implications for how they make sense of other electric and magnetic phenomena. So here's an example I want to just take a few moments to walk through that Jed talks about in that article, in fact in that first few pages of the article that I'd assigned for today's reading. It's pretty abstract. So I hope a few pictures might help a little bit. And this is really just to give a flavor of how the Maxwellians made sense of these things. If every step of this discussion is hard to follow, that's OK. I just want to get across what they thought they were doing. It's more like how did they conceptualize things like charge and current. So to the Maxwellians, charge could drift in and out of existence because it wasn't a fundamental property attached to fundamental particles. Instead, it was a kind of reflection of the underlying state of the ether. It was a local manifestation of these stresses or strains in the ether. So to take one example, charge could arise, for example, as a surface effect-- not only this way, but this is one example and one that I think Jed treats pretty nicely, pretty helpfully in that article. 
So one way you might encounter electric charge building up, according to the Maxwellians, was if you had a change in two different kinds of materials near each other in space. It would be a kind of surface or boundary effect. Let's talk about what they meant by that. Imagine you had these two conducting plates, these thick black lines on either side. And you put a voltage difference across them. You charge one up with a positive voltage and the other with a negative. So between them, you'd have some net voltage change between the two conducting plates. So according to Maxwell and his circle, that meant that what you were really doing is putting this elastic ether under tension. The ether that would be in the space between those two plates would now be not at some resting or equilibrium state. You've put it under strain or tension. In fact, fun fact, I believe it's the case to this day that the French word for voltage is actually tension, like tension. So it really was a notion that you were putting the physical medium under tension when you apply this voltage, this voltage difference. So now, the potential energy stored in the ether-- I think about squeezing a spring, that mechanical ether. The rate at which that energy would be dissipated, would be relaxed, is controlled by the intervening stuff. So in this case, they imagined they had two different kinds of material filling this space between the two conducting plates that we'll just call the blue region and the green region. And because they're different kinds of materials, they have different physical properties. In particular, they could store up and/or dissipate this local strain at different rates. This is, again, how the Maxwellians would have talked about it. So in the blue medium, they would have one characteristic elastic capacity. It's what we would call the dielectric constant. We would still use the same Greek letter, epsilon. To them, it really was like a spring constant. How do you store up potential energy by putting that medium under a strain, like squeezing a spring or stretching a spring? So on the one hand, you have a how easy can you store the potential energy. And the other relevant constant, or characteristic of that material, they called the conducting capacity. We simply call it the conductivity. And we use, again, the same Greek letter, sigma. And the point was, in these two different kinds of materials, they would have different properties. So the blue material has one characteristic elastic capacity, some other quantitative value for its conducting capacity-- epsilon 1 and sigma 1. And this green region, made of some other kind of stuff, would have different values for those two basically like spring constants. So the rate at which the tension, the underlying tension of the ether, could be dissipated was a kind of competition between those effects. You store up tension and you dissipate it at different rates, given by, let's say, the ratio tau of that elastic capacity divided by its conducting capacity. So then what happens at the boundaries-- now, you have different rates of dissipating this underlying tension. So you can build up a surface charge as a boundary effect, as a surface effect. So sure, electric charge could be very important. But it wasn't like vital. To the Maxwellians, it was a kind of accidental effect of the real physics, which you have on the elastic medium of the ether. 
And it can locally discharge or dissipate that tension at different rates given the material properties of the stuff that fills the region-- the blue region or the green region. So the surface charge was not very important. It was ephemeral. As some function of time, it's going to float into or out of existence. And it really comes from more fundamental processes involving the ether. That's how they would have characterized it. So let me just say that we can see some similarities. Perhaps the most important similarity is that we're all using field theory, them and us. And we're using it, though the status of those fields is not quite the same. So it's not that quantum field theory is nothing other than a Maxwellian ether. It's certainly not that. But the notion that there's a way of characterizing how things are spread out through space and changing over time, the basic field notion, that is quite common to how we organize our thoughts today, except that-- with the difference being, even in our modern view-- and again, we'll have a chance to look at this in more detail in several weeks-- in our modern view the amount of, say, electric charge is fixed per particle associated with that field. So we would say, there is a quantum field associated with the electron. And again, we'll come to this. If this is news, that's OK. We'll have time to look at it more carefully. There's a field spread out through space associated with the electron. But the individual particles, the electrons themselves, have a fixed and unchanging amount of charge nailed onto them, attached to them as part of their fundamental properties. That part had no analogy for the Maxwellians. And I find that amazing. I find that really fascinating. So it's an excellent question. I'd be glad to chat more about it. But for now, let me pause and say there are some similarities in the sense of a field physics, but this notion of fundamental charge carriers, that's actually a pretty important disanalogy. And so in fact, that's part of what the slide is saying here, this updated slide, that we would describe these properties of these materials, like the blue material and the green material, by saying that they're made up of fundamental objects like electrons, and protons, or even quarks, and so on, and that we could move around or rearrange those fundamental charge carriers. But we can't change the amount of charge per elementary particle. We can change their arrangement. And we often call that polarizing them, changing their orientation with respect to each other. We can move around those fundamental charges. And these parameters, like epsilon and sigma that the Maxwellians considered like basic spring constants, to us, those are measures of how readily we can rearrange the fundamental charges. So they're characteristics of a material, but not a characteristic of the stresses in the underlying ether. So for us, charge comes first-- elementary, unchanging charge which is fixed to a particle. It never winks out of existence and so on. For the Maxwellians, charge was a secondary effect arising from the state of the medium, even though, as I keep saying, we use Maxwell's equations. We interpret his equations sometimes in really quite radically different ways. So I'm going to wrap up the Maxwell part. And we'll pause for more questions in a moment. So as I've now emphasized a few times, we still use Maxwell's equations. We really use Oliver Heaviside's very efficient version of Maxwell's equations. So now, it fits on one t-shirt.
But the way we make sense of those is really almost turned on its head compared to how Maxwell and, indeed, even Oliver Heaviside had interpreted them. And I find that just delicious. I think that's fantastic. We're going to come to that, by the way, many times this semester-- not only Maxwell versus us, Einstein, Heisenberg, you name it. We're going to see where we use the same equations in many instances. And yet, how we make sense of them, what we think those equations tell us about the world, that's not fixed, and here's an early example of what's going to be actually a common trend throughout the term. Moreover, for Maxwell much like for Faraday and for William Thomson, the ether came first. We saw a lot of that last time. Here's a few more examples even today. And they develop this mechanical worldview, that this is an elastic medium that we want to analyze-- basically, again, a set of stresses, strains, spring constant-type physics about putting this elastic medium under strain. It's all about continuity. All of physics came down to the behavior of this evenly spread out elastic medium, which supports contiguous, nearest-neighbor local actions. No point particles-- remember, there's no such thing as the electron, as far as they were concerned, no fundamental charges, just all these time-varying states of the ether. So I'm going to pause there. I'll stop sharing screen, see if anyone else has any questions. That was a great question we had already. Any other thoughts on Maxwell? Oh, and I see now in the chat Jade also posted an excellent point that, even in English let alone in French, we still call-- we still talk about high tension lines. That's a great example, Jade. I agree, good example. So our course 6 majors, our electrical engineers in particular, are probably not learning how to manipulate the ether. And yet, we're still using Maxwell's equations to do things like manage high tension lines. Great example. Any other questions on the Maxwell stuff? A momentary disturbance in the ether? There was a great disturbance in the force, we might say. So there was a local tension of the ether because we've acted on it. We've intervened on the ether, they would have said. Because we've set up this conducting plate, and that conducting plate, and probably some big fat chemical battery with big chunky wires, we've done stuff locally. And we haven't done stuff only to an evacuated region. We're not doing this in a region where we sucked all the other stuff out. We're doing it in a region where it's filled with one kind of maybe rubbery substance or what we now call a dielectric material, like an insulating material, let's say, and some other material. And so what happens at the boundaries between those materials? Because the ether is trying to dissipate that local tension, the ether is trying to get back to a kind of equilibrium state, you might say, and the rate at which it approaches equilibrium depends on the other stuff that's in that region, so to speak, on top of it. So how quickly it can dissipate that local disturbance, that local tension, depends on what they would have called the elastic properties of the medium. Take all the stuff that's in that region of space, the ether and any other junk we've thrown on top of it, how quickly can that tension be relaxed? And we have different rates of trying to return to a kind of equilibrium based on these local spring constants. I think that's right.
To be honest, it gets complicated, confusing for me as well because I was trained much more like you than like them. But my best effort, led by scholars like Jed, my effort to get into their head is much more like what you described. There's a local disturbance in the state of the ether. The ether can't instantly relax to its equilibrium configuration. The rate at which it does is this interplay between how efficiently you can store up the potential energy, and then a separate constant about how quickly you can release that stored tension. And it's that interplay that can be different in different regions of space because we've filled those regions with different kinds of materials. Those are excellent questions. OK. Let me press on. Let's look at the next part. Right. Oh good, thank you. So again, great questions. So it was really in this time period that something like electrical engineer, as a job title, was just coming into its own, in fact really in the later part of the 19th century. Maxwell, in fact, hints at that a little bit. If you remember, the preface to his massive treatise-- we had a little excerpt of that in the reading for the previous class. So even in 1873, he called them electricians. That was the word that was often used in English in, let's say, the 1870s and '80s. By the end of the century, the term electrical engineer itself would become more common even in English. But Maxwell talks about there's a demand for electricians. He really meant what we would now call electrical engineers, by and large. There's a lot of really interesting historical work on sometimes the tensions-- not just the voltage, but the actual tensions between the academic, mathematically trained natural philosophers, like Maxwell or Lord Kelvin, and the electricians who, then as now, sometimes were in institutions of higher learning, sometimes were in private practice, sometimes were in the employment of the government. And they didn't always agree on theoretical understandings. The electricians, or the engineers, often had, as far as they were concerned-- and they were usually right-- a much better understanding of how realistic materials behaved. When will there be electric breakdown? Some dielectric materials really can't be loaded up with too much charge before things go haywire. So there was actually a lot of give and take back and forth-- some of it quite friendly and constructive, some of it a bit of a rivalry. Now, people like Maxwell and Lord Kelvin in particular, some of these fancy university professor types, were actually working quite a lot on advisory boards for more practical engineering, especially around things like government-sponsored telegraphy-- not only that, but including that. The telegraph was a huge driver of this stuff because the British government really wanted to have efficient communication both with its colonies and even for worldwide commerce and so on. So the priority of telegraphy among other things-- soon it would be things like street lighting, we'll look at other examples-- these practical, real world needs for real electrical engineering would often bring together practical engineers, some of whom were trained in advanced mathematics, some of whom weren't, the university natural philosophers who had this mathematical training, and everything in between. So they had opportunities to interact and learn from each other. And sometimes, then as now, the physicists would say, we understand this and you're all dopes. Some of you might have heard that even today.
The engineers would say, we understand this. And you physicists are chasing ridiculous, unrealistic scenarios. That is also often said today and was said then, each of those statements often with good reason. So if people are interested, some of my friends have written a lot about this. The short answer is there was a growing body of people who were professional engineers of electrical stuff, whether they were called electricians or electric engineers. That was changing. There were reasons to incentivize getting more of those folks because the government wanted a lot of them. And there were opportunities to learn from each other, sometimes with great-- smoothly and sometimes with some real name calling. It's a good question. Any other questions on this part? Excellent questions. OK. Let's go to the next part of the lecture. I hope you'll find this next part fun. I find this fantastic, I have to say. This next part is like crazy. OK, let's talk a bit more about how people like Maxwell, and Thompson, and indeed the Maxwellians, how they were encountering this material. For this part, I rely on one of my favorite books ever committed to print-- I'll go on record. I love this book-- called Masters of Theory by a colleague of mine named Andy Warwick. We're going to have an excerpt from Andy's work a little later in the class. It's assigned a few weeks from now. But here's a preview of the kinds of things that Andy writes about in this really quite amazing, I think quite amazing, book. So we saw, of course, a reminder, Michael Faraday had this practical, very modest mathematical training, a little bit of geometry, but not much else. And then we saw, by the middle decades of the 19th century, a really pretty significant shift in the training of people who cared about things like the states of the ether centered at Cambridge. One of the first things that begins to happen in the 1820s-- so basically, a little bit before William Thompson shows up as an undergraduate-- is a shift to paper. I want to let that settle in. It was weird to go to college and have to take notes on paper. And that wasn't because they all had iPads, obviously, right? It was weird to use paper because until then, what you did was practiced oral disputation and not in English but, of course, obviously in Latin. What better way to prove your mastery of Euclidean geometry than to bring your own personal chalkboard, a slate, a little personal chalkboard a couple inches big with chalk, and a compass for drawing circles, and so on, and a ruler? That's what you brought with yourself to your lectures on, let's say, Euclidean geometry. And then to show your mastery of it, you would have a dispute, like a debate, orally in Latin. You would actually debate against usually an instructor or sometimes a fellow student. I miss those days. Anyway, so it was actually weird, starting in the 1820s and '30s, when they were then told to start showing up not just with their own personal chalkboard, but with paper. It was actually still relatively expensive. They would rewrite and rewrite on it. Paper eventually becomes less expensive and less of a foreign item for a college student's toolkit. So over the 1830s, '40s, '50s, Cambridge starts going through this massive transformation in how they assess their students away from oral disputations in Latin and toward written exams on paper. In fact, the entire undergraduate study for a Cambridge undergraduate would culminate in a three-day written examination called the Mathematical Tripos. 
And so instead of having this scene of people like a debate club about, say, geometry or any part of mathematics, it shifted to this. You sit quietly in a room. This part probably looks a little bit more familiar, like taking SATs or some of the exams today. You sit quietly in a room, working out on paper. Your written responses are then graded by the proctors. And that was how you showed your mastery or your accomplishment of the material. Now, I made it sound like that might be familiar to us. We use paper. We take written exams not infrequently. I don't want to make this sound too familiar. What happened was a remarkable pendulum swing. So by the middle of the 19th century, it's not just that Cambridge students took this exam on paper. This was literally how they graduated from college. To get your bachelor's degree in any subject that you were studying, whether you were a philosophy student, or a literature student, or history, or mathematics, or physics, you had to pass through this Mathematical Tripos exam. There wasn't a Literature Tripos exam. This was the only way to the exit door to get your degree from Cambridge University, whether you had actually studied mathematics or not. It was a three-day exam. And in fact, your score on this one test-- not all your grades along the way, your score on this three-day written exam determined basically your graduation rank. So you go to college for a couple of years. You study a whole bunch of classes. You establish your ranking among all your peers on a three-day written exam in silence taken in the Senate House. And if that's not bad enough, then they would publish the rank order in the national newspapers. So it's bad enough to have to worry about one high stakes test. It's like the definition of a high stakes test. But then everyone you ever knew and everyone your family ever knew would see just how well or, indeed, how poorly you had done. So the very top scorers were called Wranglers. And the very top of the Wranglers was called the Senior Wrangler. That would be like any of the Wranglers with high honors, let's say, something like a magna cum laude and above or something like that or maybe cum laude. But the very top scorer, like the valedictorian, was called the Senior Wrangler. And to show just how competitive this was, both William Thompson and James Clerk Maxwell were actually only Second Wranglers, which is still pretty good. They weren't even best in their class either of their graduating years. So as Andy charts in this really fascinating book, this is a massive overhaul in how Cambridge trained everyone at the undergraduate level. Everyone passed through this Mathematical Tripos. Now, how would you do that? Very quickly, you would learn to do that not only by attending lectures, but by hiring a coach, a personal private coach. Why would they have used the word coach? Again, it puts us back in the mindset of the early and middle 19th century. One of the newest and most fascinating technologies of the day was the railroad or the railway coach. This was just coming into common usage both in Britain, and in other places throughout Europe, and the United States, and elsewhere. In the 1820s and '30s, they were called coaches, these mechanized coaches. The idea of your mathematics coach to help you get through the Mathematical Tripos exam was that, much like the railroad, the coach would literally transport you, the student, from one place to another at an unnatural, breakneck speed. 
So much like the railroad could go racing down the rails at speeds up to 20 mph-- that wasn't very fast by our standards, but it seemed like lightning speed compared to horses. The railway coaches could transport people from one place to their destination really fast. Your mathematics coach would ferry you along at breakneck speed from your entrance into your studies toward your destination, which was the Tripos exam and your graduation. So they began calling their personal tutors coaches in recognition of the railroad. And here's one of the most successful of the later 19th century, Edward Routh, shown here with his-- I think that's Routh here, I think. They all look so similar-- with a bunch of the pupils he was tutoring. Routh was not a professor at Cambridge. Routh was a former student who did very well on the Tripos exam, and then basically was hired out. So students would pay him directly, in addition to and separate from their university fees. They would hire a personal private tutor. And he'd set them up to work in small groups or one on one. And they would work with Routh around the clock. Now, how would you maintain that kind of discipline and schedule? By a robust program of physical exercise and exertion, like famously rowing or punting on the Cam, if any of you have been to visit Cambridge. So there grows up not only a special kind of tutorial or coaching system, these coaches, but a whole associated kind of athletics program not for the joys of athletics, per se, but really to help you stay physically fit, so you can stay mentally fit, so you can spend all of your time preparing for the Mathematical Tripos exam. So then they start to call their athletic tutors coaches, right? So why do we have football coaches today? Not just because of the railway but because of the Mathematical Tripos in 19th century Britain. I love that. So the coaching was really new. And it was set up as a para-university way of shuttling these young students all the way to extraordinary mathematical skill in a short time. So here's an example. Again, I take it from Andy's book. He found some wonderful notes, and diaries, and correspondence from people who went through this system. Here's an example from someone named James Ward, who was a student preparing for the Tripos in the 1870s. This is a set of rules that he'd written out in his diary that he and his roommates had agreed to. And see which parts of this you want to take up with your roommates once we're all back on campus again. The rules were-- they all agreed to this. The entire rooming block agreed to this. They would each be out of bed by 7:35. Or they could sleep in a bit more on Sundays. They would do five hours of work-- and that meant mathematics work, mathematics problem sets, before lunch. So you get up early and do five hours of math before lunch. So far, I'm interested. You do at least one hour of athletic exercise after lunch, like rowing. That's to basically help clear your mind, make sure you're ready for more math. Three hours more of math problem sets after that. Finish by 11:00 PM. Be in bed sharply by 11:30 because, the next day, you've got more math to do. And you can stay up a little bit later on Saturday. Here's the best part. You start paying fines into a collective little pot of money within the rooming group if you break the rules.
So if you break these rules, you have to start paying 3-- I don't remember if that's 3 quid or what it is, but 3 of some pocket change for the first rule broken on any given day, a little bit more for every other rule broken on the same day. Now, it gets even more fun. You get paid out of that fund, that roommate kitty jar. You get paid out of that if you wake up your roommates extra early. Can you imagine your roommates being incentivized to wake you up at 6:30 instead of 7:30 so you could get more math done? You'd be so grateful to them that they would get money out of this common little bit of cash. Any work that you do before 8:00 AM would count toward these other ones. So if you get up extra early and you do work even before 8:00, you can work a little bit less after lunch, for example. You get a little credit for time spent in church society meetings. I love that. And these rules are binding until further notice. I just think that's great. So these are on the slides. You can download those. If you want to set up similar rules with your roommates, I encourage you to. Now, what do you do if you take all the smart, smart, smarty pants at Cambridge University and put them through this where the only way they can graduate, no matter what subject they're actually trying to major in or study-- the only way they can graduate is by passing through this single Mathematical Tripos exam? What happens if you do that for a few generations? And I want to just let this sink in for a moment as well. What they would do on this timed exam, a three-day written exam, is solve basically a lot of increasingly hard math problems, math and physics problems. So an example that would be a warm-up that they would learn to do very quickly with their private coaches would be how to solve for the motion, say, of a pendulum that's swinging, if you neglect air resistance and it swings with a small amplitude. Then as now, we would recognize that the pendulum, if it has a fixed length, capital L, has what we would call a natural frequency. What's its natural rate of swinging back and forth once we displace it from its equilibrium position and let it just gently swing back and forth? It has a frequency to execute its oscillations. And in fact, this is an example of a simple harmonic oscillator. There's a very simple equation we can write for the change of, let's say, the height of the pendulum bob above its resting position. We can characterize the height above its equilibrium spot as some amplitude phi. How will that amplitude of the height change over time? It depends on this natural frequency. And we all know on this call that we can solve this very quickly as a series of sines and cosines. It will oscillate periodically like sines and cosines. OK. What if the length of the pendulum is changing over time? So now, instead of a pendulum of fixed length, the length itself is rising and falling-- getting longer and shorter. So this distance changes over time. Well now, it's not actually quite so straightforward. It's no longer a simple harmonic oscillator. Let's assume that the length of the pendulum itself changes periodically, maybe with some distinct period from what would have been the average period of motion. So now, we're solving a different differential equation. If this is unfamiliar to you, good. That means you've led a healthy and productive life to date.
This is the thing that the Cambridge students would learn with their private coaches because they would face these problems on a timed exam that determined their graduation honors. If you have a pendulum of periodically varying length, then the same problem to solve-- how can we characterize the height at any given moment-- that becomes not the equation for a simple harmonic oscillator, but what's called the Mathieu equation, which miraculously can be solved analytically in closed form. It's kind of amazing. But it doesn't look just like sines and cosines. And in fact, thanks to modern day computers, we can solve this pretty quickly. And it has much more complicated behavior. Instead of only a simple kind of rocking back and forth like sines and cosines, it has these periodic functions where the period is actually determined by the period at which the length is changing, and then these exponential terms-- sorry, the exponential terms which can lead either to more complicated rocking back and forth or, in fact, could lead to instabilities. You could have runaway solutions and so on, if these factors, mu, in fact are real and not only imaginary. Complicated, complicated pendulum. We don't have to solve this problem on a timed exam. I emphasize this to show this is the kind of thing you would hire a coach to help you get really good at. Starting from when you entered as an 18-year-old to when you faced this exit exam, the Tripos, you would learn to solve problems like this in closed form, and then solve 200 more of them on a timed exam over three days. And if you didn't do well, everyone would see just how poorly you did because they'd print the results in the national newspapers. Now, this came home to me rather viscerally when I was doing my own physics dissertation some years ago. To help the coaches and the students get good at this stuff, there were books written by especially successful coaches that the other coaches would use. And you just practice. It was like compendia of worked examples to help you practice like a practice exam because you'd be facing the Tripos. And you can see, they would label which Tripos this exact problem had already appeared upon. So you want to make sure you're practicing on realistic Tripos problems. This book was first published in 1902, but drawing on examples from Tripos problems throughout the 1890s. These are all, in this case, manipulating something called the Weierstrass function, the Weierstrass polynomial, which is one of the things it turns up when you start solving these complicated pendulum problems. So you want to get good at solving them fast, learn the tricks to do a coordinate transformation or see, oh, I've seen that form before. I know what to do next. Well, before I knew about the Tripos and before I knew about these coaching books, I was also trying to solve problems involving coupled oscillators that changed their fundamental frequency over time. In this case, I was studying the early universe for my PhD dissertation. And I actually reinvented or rediscovered a lot of these quite amazing analytic techniques, where you can solve for what are essentially coupled pendula, where they can change their length that determines the rate at which one elementary particle can decay into another. And it starts looking just like these coupled oscillator equations. And you solve them with things like the Weierstrass polynomials. 
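To put the equations being gestured at here on the page, in modern notation rather than anything taken from the lecture slides, and with the parameterization chosen purely for illustration: a small-amplitude pendulum of fixed length L obeys

\[ \ddot{\phi} + \omega_0^2\,\phi = 0, \qquad \omega_0 = \sqrt{g/L}, \]

whose solutions are just sines and cosines, \( \phi(t) = A\cos(\omega_0 t) + B\sin(\omega_0 t) \). If the length, and hence the frequency, is itself modulated periodically, one common way to write the resulting equation is

\[ \ddot{\phi} + \omega_0^2\left[1 + h\cos(\Omega t)\right]\phi = 0, \]

which, after rescaling time, is the Mathieu equation. By Floquet's theorem its solutions take the form

\[ \phi(t) = e^{\mu t}\,\mathcal{P}(t) + e^{-\mu t}\,\mathcal{Q}(t), \]

with \( \mathcal{P} \) and \( \mathcal{Q} \) periodic at the modulation period. When the exponents \( \mu \) pick up a real part, the oscillations grow without bound, and those are the runaway, unstable solutions just described.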
I spent 6, or 7, or probably 8 months beating my head against this trying to solve things like the Mathieu equation and the Lamé equation in closed form. It was seen as sufficiently original to warrant publication in the Physical Review in the late 1990s. Then I met my pal, Andy Warwick. He said, oh, that was literally a homework problem for undergraduates in 1897. And so that's when it came home to me that this was basically child abuse. Let's just be clear. What Cambridge was doing was an extraordinary effort to train people in ways that look familiar to us-- timed exams, written paper problem sets-- but are actually not quite what we do either. So I learned myself the hard way-- months and months, not three days on a timed exam-- just what these undergraduates were capable of doing by the end of the 19th century. So this intense mathematical training, I want to emphasize, was not only for people who liked it. This was the only way to the exit door. Every single Cambridge student, throughout much of the 19th century, if they wanted to get their bachelor's degree in theology, in philosophy, in history, in mathematics, everyone took this Mathematical Tripos exam. Why would you ever do that to people? Again, it's an indication of the setting in which this was being done in this rapidly changing British empire. So the empire had a great need for what they considered disciplined thinkers, civil servants. And the thinking was, if you could master things like these crazy oscillator equations on a timed exam, if you could develop the mental discipline and the full-body discipline to keep yourself going for years at a time to conquer these very complicated analytic math problems, then they hoped you could conquer anything in economics, in government, in logistics, any kind of bureaucracy. This was seen as a particularly efficient way to train civil servants to handle the empire. And so as a side effect, not because this was the goal-- but as a side effect of doing this generation after generation for decades, a critical mass grew of people who went through this Tripos training and actually liked it, actually wanted to use those techniques to further their understanding of nature, like the Maxwellians, like Maxwell himself, and then indeed the generations who studied things like Maxwell's treatise. So the people who were experts in things like the divergence, and gradient, and curl, and all these great techniques for electromagnetism, they were coming out of this training not because Cambridge wanted to make more experts necessarily in mathematical physics, but because some small portion of folks who went through this increasingly common system wanted to use those skills for what we would call scientific research. And so the professionalization of this mathematically trained cohort was driven in part by demands of empire, both supplying research problems, like telegraphy or other problems in the practical impact of electricity, magnetism, and optics, and also prioritizing this very specific kind of elite educational infrastructure. So let me pause there. Let's get some questions on that. Let's see. I see one of the questions was, who wrote the exams? Good. So the exams were usually written by the faculty in mathematics and physics, most of whom, at this point, had themselves gone through Cambridge. But the people writing the manuals, the help manuals, were usually recent students who had done very well.
If you were a Wrangler, let alone a Senior Wrangler-- if you were in the Wrangler group, if you had high honors, then you could basically name your price. If you wanted to hang around Cambridge, you could literally make a career collecting direct student fees a dozen or so at a time and coaching these younger students through because you proved you could do it. And then you would sometimes write your own book and get royalties on the sale of your training manual. So the exams themselves were the province of the faculty. But all the associated pedagogical materials-- the training manuals, the homework sets, and all that-- those were developed often by these private coaches. Let's see. Another meme on the way. I want to see the meme. Another question is, prior to the Tripos, had university education always been evaluated by oral disputation? By and large, or at least that was the main pattern-- not only at Cambridge, but including at Cambridge. And that goes back really to the earliest universities in Europe. There's, again, some marvelous historical work by other colleagues going back to the Middle Ages. Some of you may know Oxford was the first university in England. But that wasn't even the first in Europe, let alone other parts of the world. Oxford was founded in, roughly speaking, 1100. Cambridge was founded about a century later in around 1209. So these predate James Clerk Maxwell by centuries, by half a millennium. And for most of that time, indeed, the way of showing your command of the material was like jousting. It was oral disputes. And think about what kind of student, what kind of skills that kind of accentuates. It's kind of like-- I mean, if you speak a little disparagingly, it's like finishing school. It's like charm school, right? You want to be very quick on your feet. And you want to be persuasive in oral discussion. Those aren't bad skills to have. They're quite different skills than getting up at 6:30 to do really hard math problem sets till lunch so that three years later you can solve the Mathieu equation among 200 questions on a timed exam. So what kinds of skills are deemed most important, those aren't static. They change. They change at MIT. We go through curriculum reform, at least to some small degree, all the time. And I just find that particular transition from the oral disputations in Latin to these paper-based problem sets-- I just find that fascinating. Let's see, other questions in here. Did it affect what people chose to study? Oh, that's interesting. I'm not sure. The question is-- you can probably see in the chat-- did the Tripos exam and the esteem that came with scoring well affect what people chose to study? I don't know actually how the system had to work. I'm not sure if then, in 1850 let's say, if an undergraduate had to declare a major the way we would have to do in our modern universities. And if so, if there's information on how many students declared majors in history, or law, or medicine, or philology, and so on. So I'm not sure what the impact was on the distribution of topics studied. It's really interesting. Andy would know. I should just ask Andy Warwick. What I find most striking is that even though there existed, then as now even at the time, many fields of study, or many kinds of fields to delve into more and more deeply, the only graduation requirement was the Math Tripos. I mean, that's just astonishing. Imagine if we tried that at MIT, right? I mean, sure, you can go major in political science or, heaven forbid, biology.
But you have to take the Math Tripos to get out of MIT. Can you imagine the pushback? So that I find really stunning. And I think that the idea was really-- this is demonstrating a disciplined mind, according to what they considered valuable for discipline in general, but in the context of the still expanding British empire. What happened afterwards? Good. So there was beginning, in this part, in the second part of the 19th century-- there was beginning of fellowships, where if you scored-- especially if you scored very well, if you were a Senior Wrangler, really among the top Wranglers, you would then qualify-- basically get paid to get a fellowship to stay in Cambridge and not only to do so by collecting your own student fees as a coach. So it wasn't quite a tenure track position. Sometimes, if you did well on your fellowship, then you'd get hired. There was not yet a formal program of PhDs. That's really getting introduced in the very, very close of the 19th century. And that gets more common by the 1890s and forward. So there was something like a master's degree. And that, in fact, goes back quite a ways. You can get a bachelor's degree, and then get a fellowship, and then have a career at a university. Sometimes, you can get a master's, and then get hired to stay on the faculty or do other kinds of jobs. And then the more formal graduate training leading to a terminal doctoral degree, that gets built up really over the 1880s, but accelerating over the 1890s. What did pre-university maths education look like? Oh, that's a good question. It was changing slowly. Again, Andy writes about this a bit in his book. A number of the graduates of Cambridge became secondary school teachers as well as tutors and so on. So you see a dispersion effect, where people who have imbibed this program-- gotten drunk on Math Tripos-- some of them do fan out and become very influential high school teachers. And they bring an appropriate level-- they're not doing Tripos problems, per se. But they bring a more formal schooling in mathematical analysis into other parts of Britain. And it's not only happening in Britain. But there's a cool story to be told there. So it does trickle out. Not overnight, but you do see a kind of spillover effect, again, measured over, say, a few decades, not quite in William Thompson's day-- not in the 1840s, but by the 1860s, 1870s. And what happens is you can start comparing Tripos exams from the 1840s to the 1890s. You see an extraordinary acceleration of what's expected of these students in still just only three years of undergraduate training. And that's what people like Andy have looked at in detail. What becomes like a really hard Tripos problem in 1850 becomes a kind of trivial one by 1880 because all the kids have come in and have already trained on those. And so the acceleration of what counts as a legitimate problem is mind-boggling. And again, I love showing that comparison to Weierstrass polynomials of like undergraduates who were doing pretty well in 1890 and me struggling my tail off for months, not for hours, to reproduce what they had learned to do just to get out of college. So you can imagine, if you build up a community that's used to that, that leads to a certain kind of question. One of the things that Andy does so nicely in the book is then show that how does that then impact on how that group framed their own research questions. When they wanted to ask more about the ether, they were starting from that toolkit. 
And that affected, as we'll see-- a little preview-- that affected how they made sense of other new work, like Einstein's work on relativity. That's the piece that I assigned from Andy for a few weeks from now. So when they encountered creative work from other scholars in other parts of the world, they would make sense of it from that toolkit. Oh, how can I bring my tools to that cool set of questions? So it shapes what even counts as a research question. Let me move on from here. These are great questions. And I'd be glad to chat more about them. But there's a little bit more I want to get through for today's lecture because other countries-- we're going to leave Britain, at least temporarily. And let's start looking at some contrasting developments around the same time, but now on the continent. So Maxwell's work became popular not only in Britain, not only in the Cambridge circle, but pretty quickly on the continent as well, especially in the later 1880s. Remember, his treatise was published in 1873. Roughly 15 years later, it was becoming much, much more common to be taught and talked about in places like the German states-- Austria, Switzerland, and France, and so on. One of the things that helped a lot was in 1888, 15 years later, a German natural researcher, Heinrich Hertz, actually succeeded in testing one of the key predictions from Maxwell's work, that these Maxwell waves, that we would now call electromagnetic radiation, could be disturbances in the ether with any kind of characteristic wavelength, not only within the visible spectrum to which our eyes are sensitive. So in modern terms, what Hertz did was generate and then detect radio waves. He was making Maxwell waves, a kind of disturbance in the ether, but with a much, much longer characteristic wavelength. And that was what Maxwell said should be possible. Hertz was among the first to demonstrate that empirically. And that really drove a lot of interest in this Maxwellian electrodynamics in the 1880s and beyond. What's really fascinating though, coming back to this theme that we've seen a few times and we'll see again, is that when these German scholars, mostly in Germany, began manipulating Maxwell's equations or their slight variations, they did so, again, by making sense of them in a different way. Even when they used the same equations, the way they interpreted them was not really always the same as Maxwell's. And here in particular, instead of following the characteristically British approach to thinking about this mechanical, elastic ether, like Lord Kelvin instructed his listeners-- stick your hand in a pot of jelly and shake it around-- instead of that, a growing number of the German electromagnetic experts began turning that on its head. They would use Maxwell's equations, but to argue that, in fact, the world is made of electromagnetism and mechanics is the after-effect. It was another wonderful inversion not just between how we use Maxwell's equations and them, a kind of distinction over time, but even at one moment in time a distinction across space. So I want to sit just a little bit with these characteristic German approaches-- again, to get the contrast in how the same equations could be interpreted somewhat differently. So here's a quick example about the origin of mass.
So instead of assuming that objects just have some mass based on their mechanical composition, a growing number of these scholars in Germany asked, what if mass itself-- this prototypical mechanical property, what if that was an effect of something even more primary, like electromagnetism? So let's start with a quick analogy. Imagine you're trying to drag a beach ball under water. So think about hydrodynamics. We could describe the kinetic energy of that system. We'll call the kinetic energy T. If the beach ball has a mass outside of the medium, some m0, we drag it with some speed, v, outside the medium, we could very easily write down the kinetic energy for that object-- 1/2 m0 v squared. We just say the ball has a mass. It's moving at some velocity whose magnitude or speed is v. Kinetic energy-- 1/2 mv squared. We can actually describe the kinetic energy of the combined system when the ball is within the medium, when it's underwater, for an incompressible fluid like water, by using an effective mass. We still have an expression that looks like 1/2 mv squared. But we adjust what we mean by m. We take into account the mass of the displaced fluid, in this case. But otherwise, the form of the equation is actually still quite tidy. It's still 1/2 mv squared. And we adjust the m to take into account that this ball is displacing some of the fluid. So its motion, its resistance to change in its motion, like its mass, is different from what it would be outside the medium. So a number of scholars who focused on electromagnetism in the 1880s and '90s in Germany said, well, what if that's a useful analogy for what happens with electric charges? So imagine the motion of a single object that has some charge Q, some electric charge. By virtue of having some electric charge, it necessarily is creating an electric field, some disturbance in the local ether. Moreover, if that charge is moving with some velocity, it is by necessity generating some magnetic effects. There is an induced magnetic field from a charge in motion. And there is an electric field associated with the electric charge, even if it's at rest. So electric and magnetic fields will affect the motion of an object with electric charge. So the question was to consider what we might now call the back-reaction or the self-field effects. How can we characterize the motion of an electric charge in its own electric and magnetic fields-- not external fields that we impose separately, but by virtue of the fact that an object with some electric charge by necessity creates an electric and a magnetic field if it's moving? And those should act back on its own motion. So you could consider something like the Lorentz force law. The force, therefore the changes in that charged object's motion, will be due to the electric field and its velocity cross-product into the magnetic field. Now, what if these are the self-fields of that charged object? You can do the same trick, they found quite cleverly. You can still characterize the kinetic energy of that system as 1/2 mv squared. Just like that beach ball underwater, we make an effective mass. We take into account the extra mass that comes from these field effects that will impact how quickly that charged object would change its motion due to a different imposed force. That's why the mass is a measure of its resistance to changes in its motion. It's just 1/2 mv squared. And there's a different expression for the effective mass that comes uniquely from this kind of back-reaction of its self-fields.
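Just to make the analogy concrete, here is a sketch of the equations in modern notation; the spherical-shell model and the Gaussian units at the end are assumptions for illustration, not something spelled out in the lecture. Outside the medium the kinetic energy is

\[ T = \tfrac{1}{2}\, m_0 v^2, \]

while for a sphere dragged through an ideal fluid one standard textbook result is that the same form survives with an effective mass, \( m_{\text{eff}} = m_0 + \tfrac{1}{2} m_{\text{displaced}} \), where \( m_{\text{displaced}} \) is the mass of the fluid the ball pushes aside. The Lorentz force law referred to above is

\[ \mathbf{F} = q\left(\mathbf{E} + \frac{\mathbf{v}}{c}\times\mathbf{B}\right), \]

and if E and B are the charge's own self-fields, the field momentum at low speeds again works out to be proportional to v. For the textbook case of a spherical shell of charge q and radius a, that proportionality constant is an electromagnetic mass of roughly

\[ m_{\text{em}} \simeq \frac{2}{3}\,\frac{q^2}{a c^2}, \]

so the kinetic energy once more looks like \( \tfrac{1}{2}(m_0 + m_{\text{em}})\,v^2 \), with a piece of the inertia coming entirely from the fields.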
And the calculation is a tour de force. You can calculate it based on the properties of that system. Unlike the beach ball underwater though-- this is the next step that many of them took-- the electric charge can never be taken outside of this medium. You can't turn off its own electric field if it still has some electric charge. If it's moving, you can't stop it from generating some magnetic field. So unlike the beach ball, you can never take this electric charge out of this medium of these self-fields. As a next step, they asked, what if there is no mechanical mass? What if this, what we might call a bare mass or mechanical mass, vanished? What if all mass always arose from these electromagnetic effects? That's what I mean by them turning it on its head. So all of mechanics, in their estimation, might arise from these complicated effects of electric and magnetic fields, including the self-fields from these objects skittering through space on their own. So one of the very influential proponents of this view, Wilhelm Wien, announced, in 1900, this presents the possibility of an electromagnetic foundation for mechanics. This, again, became known as an electromagnetic worldview. And I hope you can see that's a kind of inversion from the British mechanical worldview, where the British folks often were convinced all the electromagnetic effects arose from mechanical stresses and strains. Now, a number of these German scholars said, what if actually mechanics, like F equals ma, is actually coming from electromagnetism? So I'm going to pause there. Again, I'll stop sharing screen, just to give a quick taste for how, even in their own day, Maxwell's equations, again, could inspire other interpretations, even when they use the same equations. Any questions on that stuff? OK. If other questions come up, feel free to put them in the chat or raise your hand. But if not, I'll press on. The last main section for the lecture, almost done. For the last part, I want to get back to some of the institutional questions. Where were some of these German scholars doing their work and why? So we're going to have another contrast, Germany and Britain, not only conceptually, but now also in terms of the types of places and the types of priorities in which these folks were conducting their research. So the last main part for today is now this question about institutions. So what does it mean to have a job title theoretical physicist? When does that become something that people could aspire to, let alone get paid to do? It's not in the age of Galileo or Newton. So some of the work we would inherit, that we would consider part of the theoretical physics legacy, was done by people who were never in their own time called anything like theoretical physicist. In their own time, in the early and even through the late 1600s, these folks were either called mathematician-- that was a term that's recognized-- or philosopher, or more broadly natural philosopher, as we saw. So the Galileo, Newton, 17th century stuff, there just simply did not exist a person whose job title was theoretical physicist. OK. Let's move forward in time. What about in the 18th century? People like Lagrange, or Laplace, or Euler, or any of these other folks whose work we still use so much today? Again, they were doing work that we have folded into the activity of theoretical physics. We couldn't do our work without them, thinking about Lagrangians, for those of you who've seen them, or the Laplacian operator.
We use their work all the time in theoretical physics. They were, again, called mathematicians or often astronomers. Now, that term was by then in common usage. They were not called theoretical physicists. What about someone like James Clerk Maxwell now by the mid to late 19th century, during the period we've been focusing on in this class so far? By Maxwell's day, the term physicist had become much more common. That was a kind of job title, not theoretical physicist. And indeed, still natural philosopher was not uncommon either. So a lot of the work that we would recognize as core to the job of a theoretical physicist, to the toolkit, was done by people who did not have that job title and would not probably have even recognized it in their day. The term and the associated role in society of theoretical physicists comes out of a very particular moment in the middle decades of the 19th century, that same period of these reforms at Cambridge, and these cool things with radio waves, and all that. And here, it really comes out of the German system. So take just a few moments to think about this very unusual and, let's say, peculiar German university system. And here, I'm drawing especially on the work by a pair of historians. They wrote another two-volume book. This one is only about 600 pages altogether, not 900 pages, a quite fascinating book by Christa Jungnickel and Russell McCormmach. So one thing to keep in mind is especially after Germany became one unified country in 1871, there was a central education ministry. So within the entire country, there was one basically central government bureau that assigned all the senior professors in every field, in every university, a central bureau that doled out or made the assignments for who would be the full professor in the field of physics, or philosophy, or history, or theology, or chemistry, or anything. Moreover, in each of these universities, there was one full professor who was called the ordinarius professor in a given field. So it would be one full professor of physics at, say, Würzburg University. There'd be one full, or ordinarius, professor of physics in Berlin, and so on, assigned by the central bureau, one full professor per subject per university. In the case of physics, the ordinarius, or the single full professor, was in charge of all the apparatus in the department, all the experimental work. Here's an example of Wilhelm Röntgen's laboratory in the 1890s, for example. Röntgen was the ordinarius. This stuff was often treated as if it was the personal property of that one head professor. This was basically Röntgen's property until Röntgen retired or died. And then it would be bequeathed to the next ordinarius professor of physics. So the one head full professor was in charge of all the experimental equipment. What it meant to be a physics professor was to be an experimental physicist. And the way you demonstrated that was by being the ordinarius in charge of all the equipment. After Germany became one unified country, after its war with France, the country really accelerated a very rapid period of industrialization, very large scale industrialization. They thought they were behind both Britain and France. They began investing very heavily in what we might consider many areas of engineering. For that, they needed a large, large number of physics teachers-- not only for universities, but especially for things like the high schools. They needed many folks who could go into and help design and improve these massive public works projects.
They needed lots and lots of people who could teach physics at the high school level. So the classrooms grow very rapidly. So then you have to hire lots of people to teach all those would-be high school teachers. So there's a huge growth at the universities throughout Germany in extra faculty. They were called extraordinarius not because they were extraordinary. It's like a false cognate. These were extra, in addition to, the ordinarius. So these were lower rank. These were like junior faculty in our modern parlance. They were not tenured, many of them. They certainly were not the single full professor. You staff up lots of extra to the ordinarius, lots of junior faculty at universities to teach lots of physics because you need to get lots of high school teachers who could teach even more people basic physics, math, and even engineering. Now, these younger faculty, these extraordinarius faculty, only have access to pencil and paper. They're not in charge of the equipment. They're not the ordinarius. It's not their personal property. So all they can do is push pencils around on paper. Here's an example of one of them at one stage in his career, Einstein, a person we'll spend quite a bit of time talking about. Well before Einstein was scribbling in his Zurich notebook in 1912, really starting in earnest in the 1850s, '60s, and '70s, you have lots and lots of people who are now joining the faculty ranks at this lower level, this extraordinarius level. They don't have access, in general, to the experimental equipment, unless they're in the good graces of the full professor. They can't control it. So they spend most of their time in research doing pencil and paper work, mathematical work. Once you build up lots of folks who have done that for one or two generations, there becomes a kind of norm. There becomes a kind of community that's really good at mathematical or theoretical physics. It starts sounding like the Cambridge story. If you push lots of people through the Mathematical Tripos, some portion will get very good at mathematical analysis for physics. A similar thing for different reasons is happening around the same time, especially in the united German university systems now because this quirky university structure, where you have a very tightly controlled, full professor rank. You have extraordinary people who can't do as much experimental work as they might like. But they get really good at showing their own command of theoretical physics because they spend all their time with pencil and paper. So only by the end of that 19th century do some universities begin to recognize this is now a new thing. And there are finally ordinarius professorships, full professorships made for theoretical physics. So increasing numbers of universities within this German system would then have two full professors of physics-- the ordinarius in charge of the experimental work and now a second ordinarius in charge of theoretical physics. The very first to be assigned to in that role was Gustav Kirchhoff, whose work is still well-known to us from things like thermodynamics and statistical physics. A few years later, Max Planck became one of the earliest ordinarius professors of theoretical physics in Berlin. They start their own journals because there's now a kind of specialty community, their own editors of their specialist journals. So again, we start getting a job title of theoretical physics not because there's an aim to make let's make more theoretical physicists. 
There's an aim, in the German case, to have a very rapid acceleration of nationwide industrialization. And partly how that plays out, in this very unusual university system, is to bloat the early or middle ranks of faculty. And that diverts them, more often than not, to one kind of research rather than another. Do that after a while, and there coalesces a recognized set of standards and even a kind of job title for the strange thing called a theoretical physicist. Now, the last few slides to wrap up, then we'll have time for some more questions. So in place of this British empire building as one of the main roots to building up this community of mathematically trained mathematical or theoretical physicists-- we think about that Tripos exam-- in the newly unified country of Germany, the field of theoretical physics takes root in a pretty different context. The timing is similar. But the drivers are actually a bit different. There's this very centrally controlled university system, very tight limit on the highest ranking positions. And for the field of physics in particular, the ordinarius, or full professors, was coextensive with experimental physics. That's what it meant to be a physics professor. Changing industrialization changes the priorities. You need lots of people who can do at least rudimentary physics and mathematics to help with their engineering. You need lots of people who can teach them in high schools. Therefore, you need lots of people who can teach the teachers at the universities, and so on. So within a generation or two, you create this spillover effect, where you have ordinary professors, and journals, and specialists in this new thing called theoretical physics. Why do I emphasize all that? Because it's in that very particular institutional context, within Germany in the later decades of the 1800s and early 20th century, that's where a lot of the stuff that we're sitting with for the next few weeks of our class, the emergence of what we come to call modern physics, like relativity and quantum theory-- a lot of that is being crafted in that particular institutional context. We want to understand why there was suddenly an ordinarius chair in theoretical physics in Berlin and so on. So I just want to give us an indication that theoretical physics intellectually has a many centuries lineage. We can trace back why we still use the Galilean transformations or Newton's laws. And yet, the job title, or the description, of the theoretical physicist is actually much more recent vintage. So I'll stop my screen sharing there. We have a couple of minutes for some questions, if people have other questions on that. Great question. Thank you, Fisher. So you're very much correct. There was a lot of tension factors. Of course, we now know what's on the horizon-- the First World War. And of course, there were many, many quite bloody skirmishes and wars even before that. So on the one hand, there was a lot of nation-based rivalry. The Germans looked over their shoulders, the German bureaucrats at least, German government officials, and were scared out of their wits that they beat the French in 1870, '71, but they'll lose the next time. Or the British Navy is second to none. So they're always comparing themselves in terms of industrial output and military strength, mostly in this period with Britain and France, also increasingly Russia and other neighbors. So there's a lot of the rivalry. That spills over not infrequently among the scientists. 
This is also, however, the first period of international scientific conferences. The natural researchers, the professional researchers, starting in the 1870s, would begin to meet once a year, or twice, or three times a year for conferences in what we would now consider quite familiar. That was new, mostly in the 1870s. They would work out things like nomenclature. That's why so many things were actually still named in Latin, even though no one was speaking Latin. They wanted to find some way to share ideas and terms for things and work it out. So it was sometimes a kind of friendly rivalry. Sometimes, it wasn't so friendly. So you see elements of both. And you're quite right, Fisher, this was becoming not uncommon. And we'll see this really literally explode in ways that matter even to us, when we think about the history of physics, narrowly, let alone world history. It really becomes a tinderbox right around 1900 thereafter. That's a great question. Any other questions? I'm running a little bit late. I know some of you might have to run off to a 2:30 class. So I'll stop there. Great questions and discussion. And I'll see you on Wednesday. And stay tuned. We'll post the paper 1 assignment real soon. Stay well, everyone. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_5_Einstein_and_Experiment.txt | DAVID KAISER: So today, at long, long last, we meet this fella named Albert Einstein. We're going to talk a bit more about how he was approaching some of these questions in the early years of the 20th century. So as usual, today's class has three main parts. We're going to look at Einstein leading up to this year of 1905, what were his early educational experiences like, how did he come to this question of the electrodynamics of moving bodies, and what did he think was the most kind of compelling way to start addressing some of these questions? For the middle part of today, we'll talk about this paper, in particular. This excerpt of the paper that you had in the reader on the electrodynamics of moving bodies. What were some of the key ways in which Einstein was talking about, what we would now call the special theory of relativity. And I have a little asterisk here to remind you for this section, for this class session, I wrote up a little short set of extra notes, not nearly so detailed as the previous set. And those are really just as a supplement for something I'm going to probably not have much time to linger on in the class session itself today. And that's a consequence of Einstein's work that he published in a separate follow-up article in which he derived this now-rather famous relation that E equals mc squared. So I'm going to go through, quickly today, the nature of Einstein's argument that led him to that conclusion, and then you can have a couple of pages-- I think it's only about five pages or so-- to read a little bit more detail about some of the ways that Einstein came about that conclusion. So a little extra supplemental, strictly optional as always. But if you are curious about it, there's some more materials there. And then, we'll pan out again. We'll kind of zoom out and try to make sense of, why did Einstein approach these questions in the ways that he did? How can we think about what else Einstein was thinking about, was immersed in, what was going on in his life, in his early professional career, drawing, in this case, very heavily on that article by Peter Galison on the reader to try to understand why Einstein approached, as we'll see, a very conventional question in rather unconventional ways. That's our plan for today. So let's jump in. So remember what we're responding to. What's Einstein inheriting by the time he comes on the scene? So we saw, in the previous few lectures from last week, the questions about the behavior or the propagation of light waves in the ether in that all-pervasive, elastic medium called the light bearing or luminiferous ether. Those questions were really at the forefront. That was mainstream physics by the end of the 19th century. And especially when we get to people like Hendrik Lorentz or Michelson and Morley, the topic that was often described was the electrodynamics of moving bodies, the effort to generalize from James Clerk Maxwell's work from the middle decades of the 19th century in which Maxwell had worked out this really quite beautiful combination of electricity, magnetism, and optics. But he really was treating light waves for the special case in which both the emitter and the receiver of those light waves were both at rest with respect to this light-bearing ether. So the question then becomes, what about moving bodies? 
What if either the sender or the receiver of light or both were in motion with respect to each other and with respect to the ether? That became known by this catch phrase, "the electrodynamics of moving bodies." One of the key moments in that investigation that we saw from the 1880s was performed by Albert Michelson and Edward Morley in what's now Case Western Reserve University in Cleveland in the United States. So they used this enormous, specially built interferometer of Michelson's own design. By the 1880s, late '80s, they were using an interferometer where each leg of this L-shaped instrument was 11 meters long. That's about 34 feet on a side. Huge, huge, huge instrument that they designed and built expressly to try to measure changes in the behavior of light waves, in particular in the relative speed with which waves would move through the ether, depending on whether our own laboratory here on Earth was moving directly into the ether or whether the light waves were moving perpendicular to the Earth's motion. Think of it as an ether headwind: if you're pedaling on your bicycle on an otherwise still day, you'll feel a breeze on your face in the direction of your own motion. And as we saw last time, Michelson had reasoned that there should be a similar effect for light waves because our whole laboratory on Earth, they were quite confident, was assuredly moving through the ether, so there should be some relative motion between the behavior of light waves as we measure them on Earth on an instrument like the interferometer and the all-pervasive ether. And as we saw and even read in their brief paper, despite some really exquisite experimental design and sensitivity, they never detected any clear evidence that would have been associated with the Earth's motion through the ether. That was what became known as a null result. The results were consistent with no change in relative speed, no matter what direction those light beams were traveling. And as we saw, even though Michelson won the Nobel Prize-- he was the first physicist based in the United States to win the Nobel Prize for physics, in 1907-- he died more than 20 years later still convinced that he had just messed up. So that's sad for him, and maybe we all suffer the same fate. Maybe we all win the Nobel Prize, even if we're not satisfied with our work. OK, meanwhile, some very leading, very elite physicists back in Western Europe, like Hendrik Lorentz, were following Michelson's and Morley's work quite carefully. Lorentz was a mathematical physicist closer in spirit to James Clerk Maxwell. He wasn't building his own experiments. But he was very attuned to what results Michelson-Morley and other experiments were finding. And Lorentz finally introduces a way to accommodate this null result, to explain why experiments like Michelson and Morley's might indeed measure no offset, no change in the relative speed. And Lorentz argued it was because of dynamics. Because the previous analyses had dropped, had missed, one of the relevant forces. In this case, Lorentz argued that if the entire apparatus, like everything else, is immersed in this physical, elastic medium of the ether, the ether should exert a force on the material making up the arm of that instrument, one of those Ls, one edge of the L. The edge that is moving directly into the ether wind should be compressed or contracted because of a physical force, a kind of resistive force on the interferometer itself. 
The Irish mathematical physicist George FitzGerald came up with a very similar argument right around the same time. This was, for some years, known as "the Lorentz-FitzGerald contraction," or sometimes just as "Lorentz contraction." And the idea was that if there's a contraction by just such an amount, depending on the relative speed of the object through the ether, V, compared to the speed of light, C, if you cooked up the amount of contraction just right by that factor gamma that we encountered a few times, then the expected offset in that race between the light waves traveling in the two different sides of the interferometer should exactly cancel. The race should be a tie after all. One should actually expect a null result. And all this came down from treating the electrodynamics of moving bodies by starting with the question of dynamics-- are all of the relevant forces at play?-- rather than kinematics, which is the study of the motion of objects through space and time. So young Albert Einstein then inherits that set of questions and moves into that discussion a little bit later. He was born in 1879. Here's a rather adorable photo of young Einstein, kind of elementary-school aged. So he was born just before Michelson and Morley actually began that series of experiments. So he was born after Maxwell's Treatise and before some of Hendrik Lorentz's main work. He was born in the town of Ulm in what was then quite rural Germany, not near any of the main city centers. Einstein's main ambition when he was a young child was actually to join the business that his father and his uncle had created. They were some of the first professional electrical engineers in these later decades of the 19th century, just like the kind of folks about whom Maxwell had written in England in the preface to his treatise, these so-called electricians that would soon be called "electrical engineers." So that's what Einstein's own father and his uncle did; they formed a company together. They were much better at engineering than at business. The company went bankrupt multiple times. They had to actually keep moving. But they were working in this exciting new area during the waves of things like wide-scale electrification, the introduction of electric street lighting in more and more municipalities, more and more ways in which electrification and therefore electrical gadgets were entering into quite common experience. That's the world into which Einstein was literally born. And that's what he wanted to work on. It turns out he dropped out of high school, so he never got a high school diploma. He developed a real allergy. He got very frustrated with what he considered an overly militaristic approach, a very discipline-based approach in these Prussian high schools. So he ignored his teachers, and got into trouble, and finally just dropped out. By all accounts, his principal was just as happy to see him leave. No one liked each other. But he set his mind on entering this elite Federal Institute of Technology known as the ETH. That stands for the Eidgenössische Technische Hochschule, which is a phrase I never ever tire of trying to pronounce. We can just call it the ETH. This was basically like the MIT of Zurich. Although, that's a bit unfair because the ETH actually predated the founding of our own MIT. But it was a place a lot like MIT. It was a polytechnic institute with a real focus on science and engineering, including these new areas like electrical engineering. 
His main goal, Einstein's goal, was this: even though he had no high school diploma and had dropped out, he would take the special entrance exam for this one special place he most wanted to attend, the ETH, or Eidgenössische Technische Hochschule, in Zurich, Switzerland. And then, he could go in. Well, he did take the exam. He failed it because he blew off the subjects that he didn't care about. But the examiner for the physics and math sections, a professor named Weber, saw enough promise in Einstein's portions of the entrance exam-- not in humanities, not in languages, not in history, those he was quite terrible at, at this time. But for the physics and math portions, he showed enough promise that he pulled young Einstein aside and said, you failed. You cannot come to the ETH right now. But if you go to a local school, effectively a local community college, a cantonal regional kind of community college nearby in Switzerland, go there for a year, work with some new teachers, take our entrance exam again. And that's what he did. So he was admitted to his dream school the second time around and began studying in Zurich in the very late 1800s. Once he worked so hard to get in, including this extra year of just trying to take their entrance exam, he then proceeded to cut classes and read on his own. This guy was-- I mean, he was no Einstein. He really just kept doing these stupid, stupid things. So even after working so hard to get into his dream university, he basically cut his classes, or at least many of them, any of the ones that he thought were not worth his time or for which he didn't really like the professors. So he would read on his own the latest stuff, which was still pretty new stuff, like Maxwell's treatise on electricity and magnetism, like some of the very exciting work on statistical mechanics and the behavior of gases by people like Ludwig Boltzmann and others, he would read on his own. Now for some reason, after cutting classes and letting his professors know how little he respected them-- he was not shy, much like when he was in high school-- for some reason, he couldn't get any good letters of recommendation upon graduating. This class is really full of good lessons for today. So don't do the following. Don't annoy all of your professors so much that they all refuse to write you a strong recommendation. So basically, Einstein had a middling, fairly mediocre grade point average because he kept cutting classes and ignoring stuff that wasn't interesting to him. He was rude to the majority of his professors and, for some reason, couldn't get a job upon graduating. He really couldn't figure this out. The guy was kind of dumb for an Einstein in these young years. So what happened was, really in desperation, with help from the father of one of his very close college friends, the father of Marcel Grossmann, he was able to land an entry-level civil service job in Bern, Switzerland, working in the local patent office. He was hired as a patent officer, third class. There was no fourth class. That was really the entry-level position since he had a degree from a leading technical institute but was not otherwise highly recommended. So even for young Einstein, it wasn't what he knew but who he knew. He had to depend on his friends for help on the job market. We also know quite well what happened next. He fell in with some friends who were all around the same age, all recent university graduates, who were all living in Bern. And they all had an interest in science. 
This included a young Maurice Solovine, Conrad Habicht, and then recent graduate Albert Einstein. They gave themselves the name the Olympia Academy because they were having fun. That sounds like a triumphant, very prestigious academy, like Mount Olympus like the Greek legends. It was actually three recent college graduates meeting literally in the pub to drink a lot of beer and talk about philosophy. But they figured if they gave themselves an exalted name, it would sound much more serious. So the Olympia Academy was mostly those three folks reading on their own after hours after work. Now we even know quite a bit of what they were reading. They wrote lots of letters to each other in between. Many of those letters have survived, so we can reconstruct the kinds of questions and the kinds of materials of particular books that the Olympia Academy found really exciting. One of the authors that they were especially excited about in the early years around 1900 was the Austrian polymath, Ernst Mach. Here's an image of the English translation of one of his very influential books. Mach was originally Austrian, wrote his own books in German. Einstein and his friends really just devoured these. Mach was trained in mathematical physics, in experimental physics. We still talk about Mach numbers, like for the speed of sound. Mach 1, Mach 2 for jet planes and so on. That's the same Ernst Mach. He later held a professorship in surgery and, then finally, in the history and philosophy of science. Mach had quite a remarkable career. So what he did in this series of books, including the one which in English translations known as a science of mechanics, which we know Einstein and his friends were really reading carefully, Mach proposed this scientific philosophy called positivism. His argument was that only quantities that could be, in his phrase, become objects of positive experience, things we could literally touch, and measure, and handle empirically and quantitatively, only these objects of positive experience belonged in any proper scientific theory. Anything else, Mach argued, would lead to just confusion and empty metaphysics, which was his word for bad, bad stuff. You'll just get yourself confused arguing about the equivalent of how many angels could dance on the head of a pin if you're not able to actually subject those questions to measurement or any kind of empirical input. That became known as positivism. And a lot of what Mach did in this particular book, very thick book, The Science Of Mechanics, was basically take Isaac Newton's work to task. So Mach laid out this entire program to basically say that Newtonian physics, as successful as it was in celestial mechanics and all kinds of applications, it was riddled, charged Ernst Mach, with these empty notions that could never be subject to careful empirical measurement, things like absolute space and absolute time. So Mach says no matter the successes of Newton's physics, it's riddled with these problems. We should get rid of all these empty metaphysical notions like absolute space and absolute time. The Olympia Academy just ate this stuff up. So Einstein, we know from contemporaneous letters and notes and so on, was very directly inspired by Ernst Mach, and he pursued this critical reevaluation, deciding that we should begin with kinematics before even worrying about dynamics. Remember, kinematics is the study of bodies in motion through space and time, things that, at least in principle, could be subject to observation and measurement. 
So we better get that straight with this new critical edge granted by Ernst Mach before we even worry about things like the interplay of forces or the study of dynamics. So while he's working at the patent office, he's still a patent officer third class meeting after hours with the Olympia Academy, increasing neglecting his first wife and his young child, basically blowing them off to go drink with his friends and talk about philosophy. In the space of that time, he submitted several papers to the leading journal of physics, really, in certainly in Europe or North America at the time, the Annalen der Physik, at this point, it was edited by Max Planck, someone whose work we'll come to pretty soon in this class. In 1905, Einstein had what is often called his miracle year, or his annus mirabilis in Latin. It turns out it actually wasn't even a whole year. It was like a miracle two seasons. He needed less than six months to introduce this entire body of work. So here, I'm showing just the dates at which these separate articles were received at the journal, separate submissions that were clocked in at the Annalen der Physik. And for now, what we're going to do is focus on this third in his series, which again, today, we refer to about by calling it the special theory of relativity. The actual article, the first page of which is shown here, is entitled, "On the Electrodynamics of Moving Bodies," a title, a phrase that, by now, sounded conventional to us thanks to the last few lectures, but especially to Einstein's readers back in the day, that was exactly the topic which leading figures like Hendrik Lorentz, and George FitzGerald, and Michelson and Morley, and many, many of their colleagues, that's what they cared about and what they agreed was front and center for cutting-edge physics. So Einstein's paper on the electrodynamics of moving bodies has a conventional or recognizable title. And yet, as we'll talk about through much of today's class, the rest of his approach to that conventional topic was anything but conventional. It's quite unusual. It didn't help his colleagues that the only real indication of where these unusual ideas came from was in the acknowledgments, where he says, in conclusion, let me note that my friend and colleague Michelangelo Besso steadfastly stood by me in my work on the problem discussed here. I'm indebted to him for several valuable suggestions. That didn't help clarify because Besso was equally unknown. He was another, basically, college friend of Einstein's, another occasional member of that Olympia Academy. So one totally unknown person is thanking another basically unknown person, and otherwise his paper appears like it's from Mars. Let me pause there, ask if there are any questions on that first part. Any questions so far? Yeah, so in the paper itself, what I showed you just now on screen is the entirety of the acknowledgments. He only thanked Besso by name. And for some of the specific points in that paper, it does seem, from other letters that have survived and so on, that he discussed these things especially closely with Besso. He did not thank his then-wife Mileva Maric. We can talk about that. They had talked about some of the ideas some years earlier. It's a continuing point of historical interest how much Mileva had talked about these specific ideas closer to 1905. 
It looks like not so much, partly because she was working in her new motherhood role in a very-- what was by then a very traditional gendered role and not able to really keep up with her own physics and math studies. So he was not thanking Mileva in the paper. He did not thank by name, either, Conrad Habicht or Maurice Solovine, although he was writing with them. In fact, he was talking with them in the letters more about some of those other papers that year, at least from the letters that have survived. In the actual printed paper, the only person he thanks is this, again, largely unknown figure, Michele Besso. He also, as you've seen even in the excerpt that is in the reader for today, cites almost no other literature. So we do know from some of the reading notes that have survived that they were actually quite interested in work by people like Maxwell but also more recent work by people like Hendrik Lorentz. But he doesn't even cite Lorentz's work in the paper-- or only very, very sparingly, anyway. So he's being very, very, we could say, selective, which is a nice way of saying he's being kind of a jerk, frankly-- being incomplete, at least, in not citing his sources. If he submitted that paper for our class, he would not get a good grade. You should cite your sources. That's a little reminder for everyone. So in the actual published version, he was not sharing the love very broadly. It was a very, very limited acknowledgment of where this was coming from. Any other questions? Let's continue on. Let's see what else we can do now with Einstein once he actually does then move into the electrodynamics of moving bodies. You can see the screen again, I hope? It's back on for everyone? OK. So as you've seen, the very opening paragraph begins with this puzzling pronouncement, puzzling at least to readers of the most prestigious physics journal in, at least, Western Europe and North America at the time. He starts out by saying not that there is an error in Maxwell's equations. He doesn't say, I found a missing factor of 2 pi or a missing minus sign. He doesn't say, I've conducted new high-precision experiments and I found a glitch or some empirical anomaly compared to previous experiments. He starts out by saying, there's "an asymmetry in the explanation" that is not present in the phenomena. That's a pretty astonishing opening. What is he referring to? Not some very complicated, hard thing to calculate; he's referring to what turns out to have been something like a school exercise, very similar to what he'd encountered when he was a university student. What if we take a simple bar magnet, just a simple north and south pole, and a simple conducting coil that's connected to an ammeter, so we can measure if any electric current is flowing through that conducting coil? What if the bar magnet and the coil are in some relative motion with respect to each other? Then, it's a rather simple demonstration to find-- it's actually quite similar to what Ørsted had done in the 1820s and then Faraday soon after-- that when the magnet and the coil are in relative motion, a measurable electric current will be induced. So the phenomenon is, these two things moving with respect to each other, current begins to flow, where otherwise none had flowed before. So physicists, Einstein insists, had given two completely different explanations for what he argued was only one single physical effect. The physicists' explanations, to date, had depended entirely on which item was really moving. 
So if you imagine there's a state of absolute rest, then perhaps the bar magnet is really moving, and the ammeter and the coil are really at rest. And then, you could analyze the system one way or the inverse. This is the asymmetry in the explanation that Einstein says is not present in the phenomenon. Let's take the first case. Let's imagine that the magnet is what's moving, the bar magnet is moving back and forth. This is the Faraday induction case from 1820-21. We keep the electric coil perfectly at rest. Then, using Maxwell's equations, which we can read off from our t-shirt, and which Einstein knew well by this point, there's going to be a changing magnetic field in this region of space. We're shaking, we're moving that magnet back and forth so, at any given location nearby, the strength of the magnetic field and even its direction is a changing quantity over time. So the variation in time is nonzero, because we're actively moving the magnet. From Maxwell's equations, that will necessarily induce some electric field in that same region of space. Then, we know that there's an electric field present, say, around this area. The electric field will exert a force on any little charged object that's within, for example, this conducting coil. It will push the charges along the coil. It will generate a current. Electric current was nothing but the motion of electric charges. So the argument here would have been, we're moving the magnet, and that makes a time-varying magnetic field. From Maxwell's equations-- this particular one of Maxwell's equations, the third one-- that induces an electric field, which exerts a force that pushes charges along. We measure current. That's case one. Einstein goes on to say, what about the inverse? This is the one that actually Ørsted had first found by accident. What if the coil is moving but the magnet is at rest? Well, then, we'd say there's a static magnetic field that nonetheless varies through space. Remember those curving field lines, the Faraday field lines around a bar magnet? So there's some inhomogeneity in space, even though it's static. It's stuck in time. So this spatial curl is nonzero. By Maxwell's fourth equation, that will induce some electric current nearby. And then, you have this static magnetic field with charges moving around in it. So they have some velocity. And the cross-product of their velocity times the magnetic field will induce a force. That will push the charges along, will induce a current. So Einstein argues that there's actually only one phenomenon when the magnet and coil are in relative motion. You should measure a current. And it can't matter, he argues, it shouldn't matter which one you say is really at rest or not. Although not citing it, he's borrowing that from Ernst Mach's critique of Newton. So you have one phenomenon that had been given two completely separate explanations, and he says that's the asymmetry in the explanation. There should only be one explanation. There's some relative motion. Current is induced. So that's the opening paragraph of this somewhat strange-looking article. A little while later, still on those first opening two pages, he then introduces two postulates, two axioms or conjectures. He doesn't defend them. He doesn't derive them. He doesn't say, I've tested them in a laboratory, and they hold up to within experimental error. He says, as you can see in the quotation here on the screen, we're going to raise this conjecture to a postulate and, basically, see what follows from it. 
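Before the postulates themselves, the two textbook accounts just described can be put side by side in compact symbols (a sketch in standard notation, not wording from Einstein's paper):

\[
\text{Magnet moving, coil at rest:}\quad \frac{\partial \mathbf{B}}{\partial t} \neq 0 \;\Longrightarrow\; \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \;\Longrightarrow\; \text{force } \mathbf{F} = q\,\mathbf{E} \text{ on the charges in the coil.}
\]
\[
\text{Coil moving, magnet at rest:}\quad \mathbf{B} \text{ static but nonuniform} \;\Longrightarrow\; \text{force } \mathbf{F} = q\,\mathbf{v} \times \mathbf{B} \text{ on the charges carried along with the coil.}
\]

Either way, the measurable outcome is the same induced current; the two postulates that follow are Einstein's way past this double bookkeeping.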
The first one is that the laws of physics, all the laws of physics, must remain valid in any frame, any state of motion, that's moving at a constant speed. These would later be called inertial frames of reference, meaning not accelerating. It doesn't have to be at rest. It can be moving at some constant speed, as long as it doesn't speed up or slow down, so no acceleration. Now that's just elevating the well-known relativity from Galileo, going back to the early 1600s. What had been called Galilean relativity applied to the laws of mechanics. Galileo himself had argued, and Newton agreed, that the laws of mechanics should remain valid in any inertial frame of reference. And all Einstein is doing here, as he says, is raising it to a postulate and extending it to all the laws of physics. Not just mechanics, as you see in this first part, but electromagnetism, optics, even thermodynamics. That's just an extension of the Galilean relativity. The second postulate, which he says is only apparently irreconcilable with the first, is that the speed of light will be constant, independent of the motion of the source. So any observer, no matter what their state of motion, as long as they're in some inertial frame of reference, will always measure light to be moving at the same speed C. Even if they're barreling down the road in the same direction as that light wave, they'll measure that light wave moving away from them at the same speed C as if they were sitting at rest. And he says, it's only apparently irreconcilable with the first. We'll see that in a moment. He says, if we adopt these two postulates-- again, he hasn't proven them. He hasn't really told us necessarily yet why we should buy any of this. But if we adopt those two postulates to start our argument, then he says, as you see later in that paragraph, "the luminiferous ether will prove to be superfluous." It's amazing. Überflüssig is the word in German. He doesn't say he'll disprove the ether. He says it's just irrelevant. In one sentence on page two of an article by an unknown patent clerk, he's basically not just said that people have misunderstood Maxwell's equations. People for 100 years have been chasing an object that, he says, is merely irrelevant. It is literally a red herring, he argues; it would be merely superfluous. That is quite a strong opening to this paper. Now, why did Einstein introduce that second postulate, the one about the constancy of the speed of light for any inertial observer? Again, he doesn't tell us why in this paper itself. In the intervening more than 100 years, other scholars have been able to fill in some of those pieces based on notes unpublished at the time, and correspondence, and reminiscences, and so on. So we now have a pretty good understanding of what led Einstein to that strange-sounding second postulate. And, in fact, it goes back to a thought experiment he'd had as a high school dropout at age 16. One of the things that literally kept him up at night was, what would happen if you could catch up to a light wave? So imagine being like a surfer here, riding a long wave on the ocean. If you're moving with that wave, if you're surfing at the same speed as that wave, and you chance to look up and see this wave crest menacingly over your head, at that moment, the wave will appear frozen in space. You're moving at the same speed as the ocean wave, so you and the wave move together toward the shore. And it looks like you have a static wave configuration. 
The wave has a crest here, a trough there, and they're moving-- they're not changing as you move together toward the shore. Now, Einstein argued that should never be able to happen for a light wave. He first just kind of intuited. Later, he learned more at university when he studied a little bit of Maxwell's equations. He could fill this in a bit more quantitative detail that, with the power of Maxwell's equations, if you're in a region of space away from charged matter with no electric currents flowing-- so if rho and J both vanish, rho and J, then there's no way to use these equations, Maxwell's equations, to construct a static field configuration, where you have a fixed frozen crest here and trough there. The electric and magnetic fields just don't behave that way if you're in a region that has no immediate sources. So then, if you could never see this frozen, static field configuration with a crest that just stays fixed like that, then you'd better make sure no one could ever see that. You better make sure no one could ever do like the surfer does and ride at the same speed as that light wave. So he's really trying to avoid a reductio ad absurdum. He's trying to avoid a contradiction. How do you make sure no one ever would encounter this frozen field configuration, which he argues, is not at all consistent with the linear form of Maxwell's equations? Make sure no one could ever catch up with a light wave. If we could never surf at the same speed as a light wave, we would never see this frozen field configuration with a height here and a trough there but otherwise static in time. If we could never catch up to the light wave, we'll always see it traveling past us at the speed of light. That was, at least, the intuition that he nursed over the course of 10 years between the age of 16 and 26. Thank you, Lucas by way of Julia. That's a great question. So I didn't want to spoil this because your first assignment, as I'm sure you've seen, is to imagine you're a referee for the journal reporting to the main editor, Max Planck. That is still your assignment. You still have to do that. However, the real answer as to why this paper was most likely published, it was almost certainly never sent out for refereeing. Our modern idea that every single scientific paper should be subjected to peer review is quite recent. In fact, it only gets put in place with real regularity or standardization in the second half of the 20th century, well after this time. What was happening at many journals, including the Annalen der Physik, including Physical Review here in the United States and many other leading journals at the time was that the first one or two submissions to a journal from an early author, a new author, those would often be very carefully scrutinized and reviewed. They would be subject to peer review. Peer review had existed. It just wasn't applied uniformly. Einstein, it turns out, had published half a dozen or more articles in this journal before this time on, frankly, minor topics, minor even by Einstein's own reckoning. He had published fairly conventional, perfectly legitimate scientific articles in this journal so a few years downstream from that, like most other authors at the time, his later communications could be published at the discretion of the editor, without going through this rigorous refereeing process. 
And in all likelihood, Max Planck thought this was interesting enough-- it was on a timely topic because people were clearly concerned about the electrodynamics of moving bodies-- that it got into the journal without being subjected to the kind of scrutiny that Einstein's own first articles would have been or that any new authors would have been. You should ignore all that when you write your paper one. Pretend that this paper really is being subjected to careful and thorough peer review. The short answer, historically, is that this wasn't a special favor to Einstein. The general policy was usually to basically let later articles by an established contributor get into the journal, with much more discretion to the editor. Very good question. Any other questions on that? Yeah, I mean, so he was known enough by people like Max Planck to say, let's let this young guy's next article come in. This wasn't his first work ever. On the other hand, the other work that he published was kind of, we might say, workaday. It was perfectly competent, neither interesting nor innovative by the standards of the day, even by Einstein's own-- he was very harsh about even his own earlier work. So he had a small early paper trail. He, in the meantime, had, as I mentioned before, annoyed almost all of his superiors, both at the patent office and when he was a university student. And he was, by this point, out of the mainstream. He was not working in any of the prestigious scientific research institutes. He was not being invited to attend the usual scientific conferences. He was, by this point, not well known. He was a person who contributed the occasional article to an otherwise very busy journal. And so he had the bare credentials-- he had not yet, by this point, even finished his PhD in these early years. He was a competent recent graduate from a highly respected technical institute but was otherwise not considered any great shakes. It's a good question, Diego. We'll come back to that actually in the next lecture on Wednesday, more on the reception of Einstein the person, his career, and also this work. But the short answer was that it was not standing out. So I see Jade asked a good question. How large was the scientific community at this time? Very good. So around 1900-- a number of my colleagues did this famous census some years ago. In round numbers, there were around 1,000 professional physicists in the world, not just in Western Europe but including what could be reconstructed from Japan, from parts of China, from many, many parts of the world-- from India. There were basically 1,000 people whose day job, whose actual profession, was something like working physicist, 1,000. So Einstein, whether you count him in that 1,000 or not, wasn't really being paid as a working physicist. He was one of on the order of 1,000 or a few thousand who might be contributing, at any given time, to these leading journals like the Annalen. So it's not hundreds of thousands, not a million people. But it's not like there's only a dozen. So it really was easy to have a young person get a few competent articles in a journal but otherwise not stand out, not really be noticed. And that really seems to be where Einstein was at this point in his career. Excellent question. Let me press on. I want to get on to what Einstein does in this paper. We'll walk through these first few sections of his paper from '05. 
So again, he doesn't tell us he's inspired by Ernst Mach, though it's quite clear from some of his other unpublished materials at the time. He wants to begin by rethinking kinematics, the motion of objects through space and time, not start with dynamics. What can we observe about how these things move? And so he has comments like this early on, again, on either page two or page three, quite early in the article. "If we wish to describe the motion of a material point, we give the values of its coordinates as functions of the time." This is like-- we expect most high school students would have encountered how you use coordinates to describe the motion of objects. He goes on. "Now we must bear carefully in mind that a mathematical description of this kind has no physical meaning--" that's very mocking-- "unless we're quite clear as to what we will understand by time. We have to take into account that all our judgments in which time plays a role are always judgments of simultaneous events. If, for instance, I say 'That train arrives here at 7 o'clock,' I mean something like this. The pointing of a small hand on my watch to 7:00 and the arrival of the train are simultaneous events." This is on page 2 or 3 of one of the world's leading physics journals, and he's trying to explain how to use coordinates to describe things like the passage of time. It's unbelievable. He was, as you probably noticed, enamored of trains. Trains come up over and over again in his examples in this era. OK, so if there's no absolute time, if we're going to be forced to think about what he comes to call the "relativity of simultaneity," how can we compare the times associated with different events? He says, well, we can send light signals, because at least, by that second postulate, every inertial observer will agree on the speed at which light travels. Then, we can use light signals as a universal tool. So imagine, again, this train scenario. We have us standing here. We're at position M on the train platform or the embankment, watching this train go by at some constant speed. As far as we're concerned, when we stand here watching the train go by, there are two assistants of ours, two partners standing at locations A and B, and they both have lanterns in their hands. And the question is, did they turn on their lanterns simultaneously? How could we tell? If we know by prior measurement that we are an equal distance from both A and B-- we're standing here, we've measured out maybe ahead of time the exact distance between each of our colleagues, our partners, we're in the middle-- and if we receive those light signals at the same time, if they reach our nose at the same time as the small hand of our watch clicks to 7:00-- so like that-- then we must conclude that the emission of that light happened simultaneously. The event of A and B turning on their lanterns must have occurred simultaneously. Because the light waves traveled the same distance. The light waves couldn't have sped up or slowed down. So if they arrived at the same time, they must have left at the same time. The emissions must have been simultaneous. What about our colleague who's riding at the midpoint of the train as the entire train rides past the train platform or the embankment at some constant speed? The person's in the middle of the train, so she knows she's equidistant from the people who are at the front and the back of the train at A and B. She sees them both turn their lanterns on, but she receives the light signal from person B first. 
Now she concludes that A and B did not turn their lanterns on simultaneously. How could they have? She's clearly equidistant between them. She clearly receives a light from event B before the light from event A. So clearly, A and B could not have been emitted simultaneously. Now, who's correct? We might stomp our feet and say no, no, no. You're moving. It doesn't work for you. And she will just as stubbornly stomp her own feet and say, according to Einstein's own postulate number one, the laws of physics are exactly as legitimate for her as they are for us. She's moving in an inertial frame of reference. She's moving in a constant speed. All the laws of physics work for her. She can draw all the same conclusions that we can. She clearly was at the midpoint of A and B. She clearly received light flash from B first. This is what Einstein means by saying, as he writes in the paper, we can attribute no absolute meaning to the concept of simultaneity but rather that two events, which examined from one coordinate system are simultaneous, like us on the train platform, can no longer be interpreted as simultaneous events when examined from a system which is in motion relative to that system. Remember, nothing about forces, not about her being knocked off course. She's in a perfectly legitimate physical laboratory, moving at a constant speed. Her conclusion is exactly as valid, Einstein says, as ours is. Therefore, simultaneity is not shared across reference frames. OK, what do you do with that information? This is what begins to lead to some new phenomena, new at least for Einstein. How do we measure the length of an object? Well, as any good Machian philosopher would say, tell me exactly the empirical procedure to be followed. Well, here's a good procedure. At the same time, measure the locations of the front and back of the object and then subtract. Take the difference at the same time. So here's the train at rest in the station platform. We're on the train-- on the embankment. We measure the front of the train, the back of the train, take the difference. We measure the length of the train to be length L. What if we on the platform measure the length of the moving train as the train speeds past us at a constant speed V? Well, at one moment in time, simultaneously, we measure the positions of the front and back of the train, and we measure the train to be short. We measure some length L prime, which is shorter than L. In fact, we'll see shorter by how much in a moment. Now, the passenger on the train, our colleague who's sitting at that midpoint m prime says, no, you messed up. You got the wrong length because you messed up on simultaneity. She accuses us, we first measured the location of the front of the train, waited for the train to slide by, and later measured the position of the back of the train. So, of course, our measurement was shorter than it should have been. How does she know that we did them at different times? Because she asked her friends at the front and back of the train to release their lanterns. She clearly got the light signal from B first, so she knows we measured the front of the train at one moment. And at some later time, not simultaneously, we measure the back of the train. So, of course, we got the wrong answer because we messed up the measurement. With only a little bit more work, Einstein shows we can find out by how much these two measurements will disagree. 
They disagree by exactly that factor, gamma, that we've now seen many, many times: 1 over the square root of 1 minus the quantity V over C squared. Just by taking into account kinematics-- how do we account for the motion of objects through space and time?-- nothing about forces, nothing about an ether, Einstein arrives at exactly the same quantitative form as the Lorentz contraction, because of our relativity of simultaneity, not because of forces from the ether. So just to beat this dead horse, length contraction has nothing to do with forces or an ether. It wasn't about dynamics. It was simply a consequence of kinematics once we take the time to understand how objects move through space and time and how we assign coordinates to that force-free motion. He goes on. The next step that he discusses is actually what comes to be known as time dilation. How can we measure the duration between two events, the time duration, how much time passes between the events? Well, we can imagine building something called a light clock, a thought experiment. Take two mirrors, two reflecting mirrors, and fix their relative height. Hold them in position so they have some height, H, between them. And we'll tell the tick of a clock by bouncing a single light wave between them. That's what sets our reference time between ticks. And so the time between ticks, when we're standing at rest with this clock, is just the height between those mirrors divided by the speed with which the light will travel. So we have a fixed clock rate when we hold this clock at rest with respect to us. It's given by the height and the speed of light. What if our partner has an identical clock as she races past us on that moving train? She's now moving on the train at a constant speed, V. How do we on the train platform watch the rate of her moving clock? Well, we see this light wave travel this sawtooth pattern, right? It's emitted from the bottom mirror, and the entire assembly drifts to the right as the whole train moves by. So by the time that light wave reaches the top mirror, the entire assembly has moved. It's moved some distance B. Likewise, from the top to the bottom, it's moved another distance B, and so on. So we're watching their light beam in the moving light clock trace out this sawtooth pattern as opposed to moving in what, for us, would look like straight up and down. So once again, we can use this great invention of the Pythagorean theorem. We have a beautiful right triangle. We can take the distance traveled, as seen by us, of the light beam in the moving clock-- that's this distance, capital D-- and relate the square of that distance to the sum of the squares of the two sides of this right triangle. While that light wave is in motion, the entire clock drifts a distance B. Square that length, plus the square of the height, the height between mirrors, OK? Well, now, we know, much like when we talked about the swimmers in the race last time with the Michelson-Morley experiment, we can fill in a bit more information. We know that this distance, capital D, is the speed at which that light wave traveled-- that's this one and only constant C-- times how long it was in motion. It was in transit from the bottom mirror to the top mirror, some time, delta T prime. That's our measurement of the duration of ticks of the moving clock. That's our measurement of their clock rate. So the distance it traveled is its speed times the duration it's in flight, OK? 
Likewise, its drift distance is the speed at which the entire assembly is moving, that's the capital V, times that same duration, how long between when the light wave is at the bottom mirror and when it's at the top mirror. And then, we still have h squared on the end. Sorry, gang. Well now, we can fill in some other information. We know what h squared is. That's just related to our clock rate in our own reference frame. H, remember, we can relate to the speed at which light is traveling times our tick rate on our clock. And now, it just takes that same amount of algebra, just like in the Michelson-Morley experiment, just like therefore in the lecture notes from last week-- it takes very little work to derive a relationship between our measurement of the moving clock's tick rate and our measurement of our own clock's tick rate. And in fact, remember that gamma is always larger than 1 if there's any relative speed. We measure their clock to be running slowly. The time between ticks on the moving clock will be longer than the time between ticks on our own clock that's at rest with respect to us, not because there's a force, not because of any resistance from the ether, but simply because of kinematics. This last part here will be especially sketchy, which is why I invite you to look it up-- optional, but you're invited to look at the brief lecture notes on the Canvas site. He submits this article, the one we've been talking about, on the electrodynamics of moving bodies, to the Annalen der Physik in early June of 1905. A few weeks later, he writes up a three-page follow-up that's submitted to the journal in September of 1905. So he basically takes the summer and derives what is probably his best-known equation ever, which is E equals mc squared. The nature of his original derivation goes like this. And again, the notes fill in some details of an alternate derivation. Einstein imagines a box, a sealed box that's moving at some constant speed, v, just moving otherwise in isolation at some constant speed. In the middle of its journey, two bursts of radiation of equal amounts but moving in opposite directions are emitted by that box. So the box emits a total amount of energy in the form of light waves and radiation, total energy E, half the energy carried off in one direction, half in the other. Now, because the momentum carried by that radiation is equal and opposite, he argues it has no impact on the speed with which the box is moving, right? The recoil from the emission on one side is exactly balanced by the recoil from the emission on the other side. So the box's motion is unchanged-- its velocity is unchanged, even as it emits a total energy, E, in the form of radiation, and then it just keeps speeding along its way. Well, Einstein argues-- again, it only takes three pages of the journal-- that the kinetic energy of the box must have changed. We have to conserve total energy. It has given up an energy E in the form of this radiation, but its speed hasn't changed. So what must have changed? Its mass. If we imagine starting with the kinetic energy of the box, one half mv squared, he's convinced himself that v hasn't changed. Yet, the kinetic energy of the box must have been reduced by some amount. The only thing that could have changed was the mass of the box. So he argues-- he actually has a brief calculation-- that some amount of mass of the box has been converted into the energy carried off by the radiation. How much mass? E divided by C squared. 
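To collect the two calculations just described in one place--a compact sketch in standard notation, using the quantities named in the lecture rather than Einstein's own symbols--the light-clock geometry gives

\[
D^2 = B^2 + h^2, \qquad D = c\,\Delta t', \quad B = v\,\Delta t', \quad h = c\,\Delta t,
\]
\[
(c\,\Delta t')^2 = (v\,\Delta t')^2 + (c\,\Delta t)^2 \;\Longrightarrow\; \Delta t' = \frac{\Delta t}{\sqrt{1 - v^2/c^2}} = \gamma\,\Delta t, \qquad \gamma = \frac{1}{\sqrt{1-(v/c)^2}} \ge 1.
\]

The same factor governs the length measurement from the platform, \(L' = L/\gamma\). And for the September follow-up, a box that radiates away a total energy \(E\) while its velocity stays unchanged must have lost a mass \(\Delta m = E/c^2\).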
And in fact, if you do a little more work for arbitrary speeds v, not just speeds small compared to the speed of light, the expression is gamma mc squared. And again, I'll go through that in a little bit more detail in the lecture notes. The point is, this is a kind of three-page afterthought that he submits to the Annalen der Physik in September of 1905 in the context of his work on the electrodynamics of moving bodies that he'd submitted in June. Let me pause there. Any questions on how Einstein is approaching this quite familiar question of the electrodynamics of moving bodies? I see, DA asks in the chat, did Einstein's earlier papers have improper citations? Well, they had-- that's a good question. I haven't reread his early papers in quite a long time. He was never overly generous in these years. He wasn't over-citing. So that's true, even for his articles from the very early 1900s. I don't know that they were quite as bereft of relevant citations as this 1905 paper was. And his earlier ones, frankly, were probably a little more conscientious because it was his first set of papers. They were going to be-- he would have assumed they would have been subjected to more careful scrutiny and review. So it wouldn't surprise me if he was a bit more conscientious in citing work. But it's a good question. We can look 'em up. His papers are now easily available, thanks to an international team of scholars in something called the Collected Papers of Albert Einstein. And so I don't know. That's a good question. I'd have to go back. How would you describe the advantage of his theory over the leading theory of the time, Seedler asks. That's a great question, Seedler. So I'm going to actually pause on that because we're going to look a lot at that exact question in the next lecture. So it's one thing for us to ask today, do we think it's better or not as good, and we can vote and so on, and have our ideas about that. I think it's a separate question, but I think it's more like what you're-- maybe what you're after, how did experts at the time evaluate this work? What did they think was novel about it? Did they care? Did they consider this an important improvement? And we'll look at that early reception, really, for the entire class next round. Jade asks, why was the mass-energy equivalence included in the paper on electrodynamics of moving bodies? Oh, good. That's a good question, Jade. So Einstein came at that because he was still thinking about this motion of radiation, the way that different observers would handle the question of the radiation of light. This was light being emitted from a moving source. This is us sitting at rest watching a box move past us at some constant speed, emitting light. In the rubric of the day, that was an example of the electrodynamics of moving bodies, the emission of electromagnetic radiation, as seen from a reference frame different from our own. So in that sense, it fit within the rubric. Einstein then made use of some particular relations he had derived in his longer paper on the electrodynamics of moving bodies. But for Einstein, at first, it really came from thinking about how different observers, observers in different states of motion, characterized this radiation from a source that was either moving with respect to them or not. He later comes to argue this is actually quite general. It's not restricted to light waves or radiation at all, or really even about electrodynamics. He comes to see this as a very, very general relation. 
But he comes to it, originally, very directly from his work on electrodynamics of moving bodies. Good question. Any other questions on that? Now, we can turn to the last part for today. So let's talk about Einstein, and experiment, and something like engineering practice. For a long, long time, really from early in the 20th century once Einstein's work did become known, and celebrated, and then scrutinized-- for a long time, there was a tradition by scientists, by philosophers, by historians, by many kinds of scholars, a tradition of trying to read Einstein's paper from 1905 as some sort of direct response to the null result of the Michelson-Morley experiment. That is to say, the fact that Michelson-Morley could not find any measurable effects of our motion through the ether, the argument had gone, must have convinced Einstein to then get rid of the ether because it had been somehow empirically disproven. And that was really the conventional way of trying to make sense of Einstein's own work on the electrodynamics of moving bodies. There's a really quite amazing critical review of that line of thought in this book by Gerald Holton. Professor Holton is a retired professor at Harvard. He was a member of both the Physics department and the History of Science department. And he wrote this really iconic, quite lovely collection of essays that was first published-- well, the essays were published throughout the 1950s and '60s. The book was pulled together in the early '70s. Still in print, it's a lovely book. And Holton argues, I think really quite persuasively, that this reading of Einstein as somehow responding consciously and directly to this Michelson-Morley null result has almost no basis in fact. In fact, it's not clear whether Einstein even knew about the Michelson-Morley results at the time. If he did, he seemed pretty unconcerned about them. They did not seem to play any major role in his thinking. And that's an important distinction from people like Lorentz, like George FitzGerald, or like others whom we've mentioned. Plenty of people in Europe at the time were very concerned about the Michelson-Morley results. Einstein seems not to have been one of them. He might literally have not even heard about it yet. Or if he did, he didn't pay it much attention. We get some textual evidence just by looking carefully at Einstein's paper itself. This is, again, in the excerpt that you had for today. Right in the early, early part of the paper, he talks about what he calls unsuccessful attempts to discover any motion of the Earth relative to the light medium, the ether. They've all been unsuccessful. And he's talking about quantities of the first order. That is to say, quantities proportional to-- excuse me-- the ratio v over c to the first power. One of the things that was so exciting to people about the Michelson-Morley experiment was that it was actually sensitive to quantities of the second order. So Einstein is not only not citing a source or telling us which particular experiments he had in mind, which were these unsuccessful attempts to measure any motion of the Earth relative to the ether, he's dismissing this entire class of first-order experiments, of which there were many, Fresnel aberration and so on. He's not even talking about second-order experiments, let alone mentioning more specifically the Michelson-Morley experiment.
So Holton, I think, makes a pretty compelling argument that whatever Einstein was doing, it was probably not responding in any kind of self-conscious, careful, direct way to the Michelson-Morley null result. Now, that was sometimes taken by other people downstream, even from Professor Holton's work, to suggest that Einstein was somehow uninterested in experiments altogether. And that seems not correct, either. That's the kind of overinterpretation of this question that's more specifically about Michelson and Morley. So here's where this article by Peter Galison comes in; it's really drawn from a really fascinating book. That's why I think Galison's work can help put us back into Einstein's very specific world, to his context, to the things he really was concerned about and sometimes even quite obsessed about in the years, and months, and days, and weeks leading up to his work on the electrodynamics of moving bodies. And a lot of it has to do with things that we've already talked about, things like train travel, which he really, really was enamored with. So until the late 19th century, during Einstein's own childhood, in fact, there were no coordinated time zones. We now know about time zones very well. We have to navigate them carefully with these-- our asynchronous Zoom sessions and all the rest. But even into Einstein's own childhood, there were no coordinated time zones. Each town kept its own local time based, usually, on a clock in its town square, basically an easily visible clock that everyone could see and agree on. That would set time, and you set your watch based on the highest clock tower in your town. And there were these clock towers all over the place, including, of course, at places like train stations, or churches, or other tall buildings. And that wasn't true only in Europe, that was also true in North America. So, for example, in this time period, passengers riding the train between Boston and New York City had to change their watches, on average, by about 37 minutes after the trip because there was no long-distance coordination of clocks, of time. So what we would have called 12 noon in Boston was off by, on average, more than half an hour compared to what would have been called noon in New York City and vice versa. Now, that was true already, starting from the earliest days of rail travel. This became an especially pressing concern, one that now had both commercial and, soon, military stakes for people like General von Moltke in the newly unified Germany. You may remember, I mentioned briefly before, Germany was actually a collection of separate-- there was no country of Germany. It was a bunch of separate, German-speaking lands until unified into a single country of Germany in 1871, right on the heels of its war with France in 1870. And one of the leading military leaders from that war, who was elevated to a count-- he was knighted, essentially-- Count von Moltke, is recounting why it was so important in this new system of a single unified Germany to pay closer attention to how we coordinate time. And this is now a quotation that I learned from Peter Galison's articles in your reading. He says that "unity of time is indispensable for the satisfactory operating of railways"-- that's now universally recognized and not disputed. "But, my gentlemen, meine Herren, we have, in Germany, five different units of time. We have, in Germany, five zones with all the drawbacks and disadvantages which result." There's no nationwide coordination.
"These we have in our own fatherland besides those we dread to meet at the French and Russian borders." We dread because those were sites of recent war. "This is, I must say, a ruin, which has remained standing out of the one splintered condition of" a Germany when it wasn't yet a unified country. "But which since we have become an empire, it is proper should be done away with." We need to coordinate clocks at a distance. And as I say, this was especially relevant after the recent war with France that led to the unification of a single country of Germany. This is all happening, literally, in Einstein's childhood. Remember, Einstein was born in 1879. The country into which he was born itself was actually quite new. So one of the main ideas that a lot of engineers and scientists were working on throughout Central Europe, not just in Germany, was to install what became known as mother clocks, and to install them in central train stations connected to other clocks by way of electromagnetic signals, either telegraph or increasingly radio waves. This is what Peter Galison wrote about in his really quite lovely book, "Einstein's Clocks, Poincaré's Maps." And, of course, the reading we had today, Peter's article kind of draws on that larger body of work. And so some of you may know the Eiffel Tower in Paris was under construction around 1910. One of the earliest uses for that was not only to draw American tourists. That wasn't such a bad outcome, either, for the French. But it was actually to serve as a transmitting tower for radio waves, in part, to transmit clock coordination signals. It was one of the standardizing beacons to send out standard time. "It is now 12:00 noon" and beam that out by electromagnetic signal. Other municipalities were laying telegraph lines or stringing above ground telegraph cable to do the same thing, to connect especially main hubs on this now intercity rail system so they could coordinate clock signals, even across the entire expanse of the continent. And so every single part of these new systems were subject to patent claims. This is exactly what Einstein was immersed in. I was talking with some students before the class really began this afternoon. It's really quite extraordinary. Peter writes about this a bit in his book. There was a rule at the barren patent office, where Einstein was a patent officer third class, that any records, any unpublished records regarding the review of patent applications had to be destroyed after some passage of time, a couple of decades. And so they wound up destroying even the stuff that Einstein had served as a principal examiner on. So we have very few clues as to the very specific patents on which Einstein would have served as the lead examiner. What we do know, however, and what Peter makes very good use of in his study, are the patents that were issued from that patent office in various years. We also know that Einstein was working on the so-called electrotechnical desk in particular. So it's all but unavoidable to conclude that Einstein was examining at least some of these gadgets along the way, which exact ones we don't know. We do know from the volume of patents being applied for and the kinds of work being done that this was very much a major concern in the Bern patent office as well as other parts of Europe. 
Peter goes on, this is one of my favorite examples, he actually was able to reconstruct from Einstein's correspondence the walking path that Einstein would have taken between his apartment in Bern and his office at the patent office. And here, you see this map of all of the newly coordinated clocks literally dotting the line of the path along which Einstein would have walked, clocks that would have just recently been connected by these telegraph and/or radio signal methods, to say that when you receive this electromagnetic signal, that means that it's 12 noon at the mother clock. Implement an appropriate offset to take into account your distance from that mother clock and the speed of light. So we know that Einstein was literally immersed in these kinds of things even on his walk to work, let alone at his patent desk. So we come back to this very strange-looking paper submitted to the Annalen der Physik in June of 1905 on the electrodynamics of moving bodies. Again, as I've mentioned many times, the title sounds quite conventional. The approach seems anything but conventional in terms of ordinary physics articles of the day, of their own day, let alone of our day today. And yet, if we think of this as the work of a patent clerk who's obsessed, who's enamored with this new technological stuff, maybe it begins to make a little more sense. The lack of references might not only have been Einstein being lazy or uncharitable. It also could be a way to emphasize priority and downplay precedence, much like one would have to do to get a patent. I mean, this is actually novel compared to what was done before. There's a very close focus on what we might call the operational details. How would we really measure distances of space and time? How would we actually coordinate our clocks if we're not right next to each other? Or, if we have some intervening state of motion, how would you step by step establish things like clock coordination, taking advantage of the exchange of electromagnetic signals? It begins to look a bit more like a patent application and less like an exercise in mathematical physics of the sort that Hendrik Lorentz or others had done. So that's the main takeaway, that I keep in mind at least, from Peter Galison's article and book. And we can pause there. And again, I'll ask, are there any questions? So Babu-Abel says, I noticed the paper began by citing the Michelson-Morley experiment as a prerequisite. That's certainly not by Einstein. So it depends on which translation you're looking at. It's true that later editors, including Arnold Sommerfeld, who actually made one of the first translations, or one of the first-- actually, one of the first annotations-- many later editors have put stuff onto Einstein's paper, and they're usually marked carefully with footnotes. In some of the translations, it gets a little cluttered as to who put that footnote in. So you have to be a little careful. So Babu-Abel, I'm quite confident that whatever footnote you have in mind was one of these later additions from a well-meaning editor, someone who thought they were helping to clarify, but in fact was falling into this, what becomes now, a kind of historical trap.
So if we go back to the Annalen der Physik itself or some of the more bare translations, for example from the Einstein papers project, the Collected Papers of Albert Einstein, now the kind of industry-standard version, we can see very clearly what was exactly in Einstein's first published version and what was added by later-- well-meaning, but later editors. So that's a good point. Thank you for raising it. Stanley asks, can we explain again why a stationary observer measures the length of a moving train to be smaller? Yeah, very good. Because of different notions of simultaneity. It's a great question, Stanley. It's worth sitting with this a little bit longer. Let me preview by saying, we're going to come back to that in the next class session as well to consider not how Einstein himself made sense of that, but how some later people wrestling with Einstein's work made sense of things like length contraction. So this is not the last time we'll see it, even in this class. But what did Einstein argue was the origin of length contraction? Remember, he very clearly says the ether will be merely superfluous. He's going to not talk about a physical medium that's going to exert forces the way Lorentz did with that squeezed beach ball. For Einstein, it's all about how we perform the measurement, that kind of Mach-like operation for how do you perform a measurement. So he says, here's how you perform a measurement of length or of distance. Measure the front of the train and the back of the train at the same time and subtract. Take the difference. So if we're at rest with respect to the object, that's not very challenging. The train is at rest in the station, we can leisurely measure the front, the position of the front, mark it off on the train platform, leisurely mark the back, we can then take the difference and say, OK, the train is 25 meters long or whatever the answer might be. What happens when we try to measure the length of the moving train? When we watch the moving train, we should measure the front of the train and the rear of the train, mark those positions at the same time, and subtract. The problem is, based on this argument about the exchanged light signals, we no longer agree with the observer on the moving train about what counts as "at the same time." So the observer on the moving train watches us do this and watches us do things at different times because of our relative states of motion. So we think we've been very careful measuring, at the same time, the locations front and back and taking the difference. Our partner on the train says, you measured the front of the train first. And then, you waited. Some time went by during which the train is sliding past. And then, you mark the position of the rear. So it's an unfair measurement. They watch us mark the front of the train, wait a while, and then mark the rear of the train after the train has moved. So along the direction of motion, of course, we measure a shorter distance because they argue we messed up on the timing. Now we could say, no, no, you're wrong. We did it right because of this and that. And they'll say, no, no, I'm right because I have these light signals that were exchanged from my partners A and B, who were stationed at the front and rear of the train, and I got the light signal from the person at the front of the train before I got the light signal from the person at the back of the train. So you can't tell me you were simultaneous. You messed up. You were lazy. You were too slow. You messed up.
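For reference, the quantitative upshot of the argument being described here, in standard modern symbols (L_0 for the length measured at rest, L for our measurement of the moving train, not labels used in the lecture):

\[ L = \frac{L_0}{\gamma} = L_0\sqrt{1 - v^2/c^2} \;\leq\; L_0, \]

with the contraction arising purely from the disagreement over simultaneity, not from any force or ether resistance.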
And we can't say, I'm right; you're wrong. We can, but she can say it with equal confidence and equal fervor. That's what Einstein wanted to get at by pairing those two starting postulates, that all the laws of physics, including all the laws of mechanics, electricity, magnetism, optics, thermodynamics, all the laws of physics hold just as valid for our partner on the moving train, if she's not speeding up or slowing down, if it's an inertial motion, they hold just as valid for her as they do for us. So she can appeal to things like the behavior of light signals, the behavior of her clocks on board and so on. And she very clearly unambiguously receives a light signal from the front of the train first and later from the rear of the train. So therefore, if these light signals were emitted at the moment people tried to measure the front and back of the train, she said, well, you messed up. You didn't get the timing right because you measured the front of the train, and then some time later you measured the rear of the train. And in the intervening moments, the train slid in that direction of motion. So you measured the wrong location of the rear because you measured the rear of the train at the wrong time. So, again, if that's still not super clear, go over the slides. The picture might help a little bit. I'd be glad to chat more during office hours and so on. But we'll also come to that again on Wednesday's class, seeing it from still a different perspective. So if it still doesn't quite sit right, it's OK. We'll get another crack at it on Wednesday. Great, [INAUDIBLE], thank you. Very good observation. So I think the way Einstein would have reconciled those two, he's saying, we can always establish a coordinate system. We can always say, this is my reference frame with respect to which I measure the passage of time. I assign some coordinate t. He even tells us how to measure the time of an event when the train pulls in, and my watch says 7:00, right? So we can always-- in fact, we have to always establish our own coordinate system that's especially convenient for us, let's say at rest with respect to our own motion. We can't appeal to any "real," any true coordinate system or any absolute coordinate system because, he argues, there is no such thing. But that doesn't stop us from laying out our own coordinates. We have to do that. In fact, we have no choice but to set up our own coordinates because there's no absolute system to which we can appeal. So he's arguing, in the case of the clock coordination, that all these locations down the line, the Bern train station wasn't picking up and moving with respect to the Zurich train station. We should be able to apply one set of coordinates to all those things across space. But how do we get our time coordinates to line up? We have to do a little more work. Because they can't appeal to absolute time any more than we can, right? So we have to therefore find a mechanism, find an operation according to which we can line up our own coordinate system, we can get our coordinate systems to be in agreement or in alignment. Because there is no absolute standard to which we can separately appeal. How can we do that? Well, we're all going to agree on the speed of light. Whether we're at rest with respect to each other or moving, according to his second postulate, light becomes this absolute meter stick or measuring tool because we'll all agree on it. 
That means that if we're using, say, radio waves from the Eiffel Tower to coordinate our clocks at a distance, we know exactly how much time must have elapsed in our own coordinate system between when the clock struck 12 noon in Paris at the Eiffel Tower and that electromagnetic wave carried that signal, it's 12 noon in Paris, it's 12 noon in Paris, it's 12 noon in Paris. And then, we receive that signal some time later, we know exactly how long in our time units, how long it's taken that signal to arrive because the light wave could only have traveled at the speed of light. So we can use this as a universalizing tool to coordinate our coordinate systems at a distance, and then we can make sure that our clock is in the same reference frame as the Paris clock, not by saying, I got the signal from Paris. Now, I'll make my clock say 12 noon. That wouldn't work. By saying, I got the signal from Paris, I now know it's 12 noon plus the propagation time. And I can calculate that without worry because the propagation of that signal is some universal constant, the propagation rate. So if I have separately measured the distance between my house and my train station in Paris or whatever the mother clock is we're referring to-- the Germans would never have used Paris, by the way. They would have hated that. But whatever the mother clock is, whatever standard we're using, you can separately at your leisure measure your distance between them. And then, you'll know how long it will take for an electromagnetic wave to get from that mother clock signal to you. It can't speed up or slow down because it's just the speed of light. And so now, you say, I got the 12 o'clock signal from my relevant so-called mother clock. I now set my clock automatically to be 12 plus appropriate offset. And now, I know my clocks are synchronized. So if I had a partner standing between us, midway between say Zurich and Paris or Zurich and Bern or whatever and we did the lantern trick, we could confirm that our coordinates are now aligned. How could we agree on the time coordinates at distant events? Trade light signals. If I have a partner who's midway between me and the distant mother clock, we can make sure we're still coordinated. We can do something like a calibration by doing another trick with light signals. Did the person in the middle receive the two light signals at the same time if she was equidistant? Yes? OK, then our clocks, indeed, are synchronized. So the idea is to take advantage of this universality of electromagnetic waves, even if Bern is not literally flying past Berlin. They're not in relative motion that way, but they're still using a universal feature of electromagnetic waves and doing local coordination because that's all we can ever do because Einstein says there's no absolute standard to which we could otherwise appeal. Does that make a little sense? Great, thank you. OK, Babu-Abel says, does anyone know why [INAUDIBLE]?? Is it just a postulate? That's a really deep and hard question, Babu. So Einstein made no effort to prove this or even justify it. He didn't even tell us about his surfer idea, which was a pretty cool idea. I mean, that thought experiment is pretty awesome. It's awesome for a 16-year-old. It's awesome for a 49-year-old. It's just an awesome thought experiment. He didn't even tell his readers that, the dummy. He could have at least put that in his paper. 
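Gathering the synchronization recipe described a moment ago into one line (d and the times here are my own labels): if the mother clock emits its noon signal and you sit a separately measured distance d away, then on receiving the signal you set your clock to

\[ t_{\text{yours}} = 12{:}00 + \frac{d}{c}, \]

so, for instance, a station roughly 100 kilometers from the mother clock would add an offset of about a third of a millisecond, since the signal can only ever have traveled at the speed of light.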
He gives us no reason, no compelling argument in this article to take on board this very strange-sounding postulate about the constancy of the speed of light. He has that very strange, abbreviated paragraph I read out to you near the end of the class today about the failure of efforts to measure relative speed with respect to our motion in the ether. He doesn't tell us which experiments. He almost certainly doesn't mean Michelson-Morley. So there's some modest, weak empirical evidence that seems to argue against that. He's not giving us a hard proof. In the years since then, there have been more and more ideas built up, going well beyond the original framework of Einstein's own work, to which we can appeal-- and we'll get a glimmer of some of those first kinds of arguments in Wednesday's class-- other kinds of symmetry arguments, which were absolutely not the kind to which Einstein appealed at the time, that guarantee certain kinds of invariances, certain kinds of geometrical properties that we'll look at quite squarely in Wednesday's class. But in Einstein's own day, it was, frankly, an article of faith. That's why he calls it a postulate. And Stanley asks a follow-up about measuring the length of a moving train. Slicing and measuring the locations front and back at the same time, take the difference, but they did-- at the same time is not well defined. That's right. So Stanley, that's exactly right. So the idea is if our operation to measure that length is to measure front and back at the same time, and we disagree about what counts as at the same time, then everything else flows from that, argues Einstein, which is why it's all about kinematics, not dynamics for him. It's all about, how do we lay down coordinates with which to reckon the motion of objects through space and time. So Stanley asks, do we need to make sure the measurements taken are simultaneous? Yes, that's right. Or rather, it's to say, our measurement of length will be relative to our reference frame. It'll be totally correct in our own reference frame, according to our coordinates. It just will no longer agree with someone's equally correct set of measurements in their own equally valid but moving reference frame. That's what is pretty hard to get our heads around. Julian says, do you even need the second postulate if you use the argument of the thought experiment? Good question, Julian. Einstein himself returned to this in later writings. If we really, really-- we'll actually come to this a bit later in Wednesday's class, too. If we really take on board that Maxwell's equations should apply equally in every frame of reference, then once you know the answer, once you're a post-Einsteinian, you can go back and say, oh, Maxwell's equations are invariant with respect to the speed of light. They predict it should be the same for all observers, so maybe you just need postulate one. That's convincing if you already know the answer. Einstein, in 1905, didn't get there that way. He wrestled with whether it was an auxiliary hypothesis later in his career. That's a great question. I'm going to pause there. I know some folks will probably have to jump off for other classes. Great, great questions. Feel free to email me if you have other questions. I'll have office hours on Wednesday this week at 11:00 AM Eastern. Feel free to pop in for that. And we'll pick up the story then on Wednesday. Stay well, everyone. See you soon. |
MIT_STS042J_Einstein_Oppenheimer_Feynman_Physics_In_The_20th_Century_Fall_2020 | Lecture_11_Waves_and_Probabilities.txt | [SQUEAKING, RUSTLING, CLICKING] DAVID KAISER: If there are no immediate questions, that's good. And if so, then I'll go ahead and share screen. And we'll launch into our continuing adventure with quantum theory, which is, frankly, one of my favorite adventures ever, speaking just for myself. OK, so you will hopefully remember on our previous class session, we looked at Werner Heisenberg's introduction of what came to be called matrix mechanics in spring-summer 1925, and what brought him to it, what was he reacting against, and a little taste of what people thought it might all add up to. And today, we're going to look actually at a kind of complementary set of developments that were unfolding very close in time, a very rapid period of development. But now looking at what became known as wave mechanics and some of its features that people began to really puzzle about. So for today, we'll look at some of Erwin Schrodinger's most famous work. Again, some of this might already be familiar to some of you, but probably not to all. And I think it's fun to just go back through and zoom out a bit and say, what did Schrodinger think he was doing and why? Let alone, what's the form of the equation that he put together? So that's part one. Then, we're going to look in part two, and very briefly, a shorter part three, at some of the implications of Schrodinger's wave mechanics that physicists began to grapple with really pretty quickly. And for both part two and part three, as often happens for a class session, I'll try to cover some of the main highlights, some of the most-- what I think, most interesting or salient points. And then, some of the details, some of the extra derivations, and so on are in those optional lecture notes. Strictly optional. They are available on Canvas. But if something goes by too quickly or I just had to skim over some of the intervening steps as sometimes happens, then hopefully those lecture notes will help fill that in for those of you who want to dig in a bit more. So that's our plan for today. So just to remind ourselves again-- looking back to the previous class session-- starting in the spring of 1925, after that bout of severe hay fever that chased Werner Heisenberg temporarily out of Copenhagen and to this really quite lovely-looking island of Heligoland in the North Sea, during that span of spring and summer '25 Heisenberg introduced a whole new way of trying to think about quantum theory. This was now something like 25 years into various physicists grappling with something having to do with quantum theory, going back to Max Planck's work that was first published in 1900. We saw through our several moments of sitting with what was called "the old quantum theory" that it wasn't quite so obvious that that collection of ideas was adding up to any one new thing or that they'd even fit together. And that was a concern that people like Heisenberg and some of his young colleagues really sought to tackle.
They wanted to have a first-principles approach to quantum theory-- a quantum mechanics-- rather than what looked already to them, as well as to many of us today, this grab bag which had characterized the so-called "old quantum theory," where people would begin with quite traditional expressions for the energy of this or momentum of that, take either Newtonian or Maxwellian expressions, and then just kind of staple on at the end some new, usually not very well explained, quantum conditions, a kind of ad hoc new rule. And Heisenberg and his very good friend from grad school days, Wolfgang Pauli, and a lot of these younger folks were-- by the mid-1920s-- looking to do something different. They wanted to have a first principles theory all its own that might account for phenomena at the atomic scale or smaller. And we saw last time Heisenberg, in particular, was guided by a kind of approach that sounded very much like what Albert Einstein had announced back in 1905, in thinking about the electrodynamics of moving bodies. Heisenberg starts sounding a lot like an acolyte of the physicist and philosopher, Ernst Mach. So we saw in the opening pages of Heisenberg's article-- though we had it in the English translation in our reader-- but even in the opening paragraph he says, we're going to keep fooling ourselves or chasing our own tails, basically, if we keep trying to talk about things that are not even, in principle, observable. Examples of which he mentioned would be things like the orbit of an electron in an atom. We don't watch the electrons zooming around like a planet in the solar system. Let's stop trying to calculate those features and focus instead on things that really are objects of positive experience like the emission lines, these spectral lines that come out when certain gases of atoms are excited. So Heisenberg began looking at things like the observable features of these spectral lines. OK, so he reasoned then, we saw in particular, that the frequencies of these emission lines, especially for very simple spectra like for a hydrogen atom, obey this law of addition-- that one line in the spectrum, the actual numerical frequency, the number of cycles per second, could be written in terms of these kinds of relationships, where there are two other distinct emission lines the sum of whose frequencies would add up to this third one, and it could be extended beyond just pairs, to 3's, 4's, 5's. That there were these addition relationships between the frequencies of light that really genuinely was emitted by these excited hydrogen atoms. And then Heisenberg reasoned that when we characterize light, Maxwellian or otherwise, the frequency appears in the exponent, which suggested to Heisenberg that the amplitudes should multiply. If the frequencies are adding, according to this law of addition, and the frequencies are one way of characterizing the light that comes out, then shouldn't the amplitudes of those associated light waves multiply? And that's what he found, to his great surprise, alone on the island of Heligoland, that these amplitudes didn't behave the way he expected them to. In fact, the outcome of multiplying two of those things together depended on the order in which they were multiplied. And so he returned to, or started very soon afterwards, a position in Gottingen. And his senior advisor there, Max Born, clarified: these are matrices.
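To put the combination rule and the surprise about ordering in symbols (modern shorthand, not Heisenberg's 1925 notation): the observed frequencies of, say, hydrogen obey

\[ \nu(n,m) = \nu(n,k) + \nu(k,m), \]

and since a wave amplitude carries its frequency in an exponent, roughly \( A(n,m)\,e^{2\pi i\,\nu(n,m)t} \), adding the frequencies corresponds to multiplying the amplitudes. Born's point about matrices is easy to check with any small example, for instance

\[ \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \]

so the product genuinely depends on the order.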
The fact that the outcome of the multiplication depends on the order is a generic feature of matrices, where, as mathematicians would say, matrices do not commute. And so soon after that, his work became known as matrix mechanics. And then Heisenberg continued working on that, sharpening the formalism, working directly with Max Born and some other colleagues. But also trying to figure out conceptually, what might it mean if the world is characterized, the atomic world is characterized, by these strange arrays of numbers whose mathematics was, at least to him, rather unexpected and foreign? And where it seemed to be like the outcome would depend on the order in which you did stuff. Could it be that our operations on the quantum realm could have an impact on what we could ever see or measure? And in thinking along those lines, about what would it take to perform certain kinds of measurements of things like an electron scattering high-energy light off of it and so on, that Heisenberg began to articulate his uncertainty principle, that there seemed to be a consequence of the fact that these matrices don't commute with each other. There was an irreducible uncertainty in the sharpness, the accuracy, or precision with which we could ever specify at the same moment the position of, say, an electron and its simultaneous momentum along the x direction, that there was a trade off whose scale, like so many of these things, was set by Planck's constant. So that was what we mostly looked at last time. And as I gave a kind of foreshadowing for today, right around that same time, just a remarkably close span of weeks and months, there was a really quite different approach to some of those very same questions being worked out by Werner Heisenberg. And we had an excerpt from this really quite wonderful biography by Walter Moore. A part of today's readings was a little section of Moore's book, talking through how did Schrodinger think about what became known as wave mechanics? I want to talk a little bit about that. So this was happening not in the spring and summer of '25 but in winter/spring '26, really just a few months later. Erwin Schrodinger began introducing a different first-principles approach to quantum theory, much like Heisenberg and Pauli and these other kind of young researchers. Schrodinger also thought there should be a kind of way of building a new theory from the ground up to treat the behavior of atoms and parts of atoms. But he followed a different conceptual path. He was working quite independently of Heisenberg in these early steps. So Heisenberg had built upon these ideas about discreteness. He was influenced by Bohr's work and ultimately tried to abstract away from Bohr's work. But Schrodinger actually went a different way. He was most directly inspired, in his work, not by the kind of Bohr atom or this notion of very rigid, discrete quantum jumps but rather by Louis de Broglie's very suggestive idea from 1924 about matter waves. And here's this, again, kind of cartoon version at the bottom here, trying to show de Broglie's suggestion for these very discrete orbits that Niels Bohr had identified for, say, the behavior of an electron in a hydrogen atom. Why might those special orbits be picked out as stable? Maybe there was an inherent waviness of the electron and you had to have a constructive interference for a stable orbit. Anything else would suffer destructive interference.
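de Broglie's condition in one line (standard textbook form, not the lecture's notation): an electron wave with wavelength \(\lambda = h/p\) closes smoothly on itself around an orbit of radius r only if a whole number of wavelengths fits,

\[ n\lambda = 2\pi r \quad\Longrightarrow\quad p\,r = n\,\frac{h}{2\pi} = n\hbar, \]

which reproduces Bohr's quantized angular momentum; any non-integer fit interferes destructively with itself.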
Those are the kinds of things that really caught Schrodinger's imagination very soon after de Broglie himself had introduced them. So Schrodinger was coming from a bit of a different community. He was originally from Vienna. He was, at the time, teaching in Zurich. He wasn't from Copenhagen. He wasn't working in Gottingen. It was a little bit of a different network. Perhaps more important, he was also a generation older than Heisenberg and Pauli. He was basically very nearly the same age as Albert Einstein-- they became very close friends-- and closer in age to Niels Bohr. He'd been working as a professional theoretical physicist for decades by the time this stuff came up, unlike Heisenberg and Pauli, who were working very soon after their own PhDs. Schrodinger was an established researcher. And his approach to this question, I think, shows that. We could characterize his approach to the new quantum mechanics, the new quantum realm, as an effort to retain as much of the kind of look and feel of the toolkit of time-tested physics as he could. And I'm just going to pause briefly and mention-- some of you might know-- so for the tool kits, Heisenberg and Schrodinger look very distinct, with Schrodinger seeming to be a little bit more, almost, conservative, in the sense of let's retain as much as possible. And Heisenberg saying, it's a new world. We need a conceptual revolution. We'll use new techniques. And yet socially and personally, they were almost inverted. So Schrodinger, even in his day, was known as remarkably Bohemian, leading a Bohemian lifestyle, not the kind of traditional family values. He had open relationships with multiple women at a time. Children that he had with one person, he raised with his wife. And it was a really unusual arrangement that, actually, his own contemporaries would talk about. It stood out. And Heisenberg, as we know, was pursuing, actually, a very fairly traditional, stereotypical personal family life. So who's conservative and who's radical actually differs depending on what aspects of their life and work we're trying to characterize. And, again, Moore's biography, I think, is really quite astonishing for this rounded view of Schrodinger, not only how he got to a few equations. So I recommend that-- more of the book you might enjoy beyond just the excerpt that I gave. OK, what did Schrodinger do? So we know from his own private notebooks that have survived, this work was happening in December 1925, January 1926, really just roughly half a year after Heisenberg's first publications. Schrodinger began with the usual expression for energy, just the Newtonian expression for the kinetic energy and the potential energy. But then he immediately, immediately tried to pick up on what de Broglie had suggested about these matter waves. And so Schrodinger, who was, again, a very senior, very seasoned mathematical physicist by this point, reasoned that if something about matter waves is going to be so important moving forward, let's use what we know about how to quantitatively characterize the behavior of waves. So he did. He went in a kind of wranglerish way and said, well, let's consider a wave equation. How would we characterize, for example, the behavior of a standing wave? And we would write down this very simple-looking expression-- simple, certainly, to someone like Schrodinger, very familiar. And as many of you probably know-- and we saw briefly in some earlier lectures-- this parameter K here is called the wave number.
It's inversely proportional to the wavelength. So we can characterize the behavior of any kind of wave in terms of a wavelength and ask, how is the amplitude of that wave changing across space? How is the wave behaving as we move from one position to the other? And that's what this expression is quantifying. But now de Broglie had said that there was something very specific about a wavelength for matter, the de Broglie wavelength. It should be proportional, de Broglie suggested, to Planck's constant and inversely proportional to the momentum of that particle, of that quantum object. So now if the wave number is related to a wavelength and de Broglie's wavelength is related to a momentum, it was actually not a very difficult series of steps for Schrodinger to say, maybe this k, this wave number that appears in any old wave equation for classical waves of a guitar or ocean waves in the water, maybe that k is actually deeply related to the momentum in a way that actually is new, that comes straight from de Broglie's suggestion. And if so, if we can relate K to the momentum as usual, scaled by Planck's constant here, then we go back to this quite standard equation for a standing wave of any kind. And now, if we want to characterize something specifically quantum theoretic, then we can substitute in the momentum for the k, keeping track of our h bar. And now it looks like we actually have a whole new way of thinking about the momentum of any quantum object. Maybe the momentum is just a measure of the kind of spatial gradients of some associated matter wave. So maybe there's a whole new operator. Maybe there's a new object that relates the momentum of some object to the gradient of its associated matter wave with, as usual, the scaling, the kind of sense of scale or a sense of proportion, provided by Planck's constant. So what Schrodinger was doing here was trying to build up an equation where Planck's constant was kind of built in from the start, not appended at the end. Not, let's solve for some solutions and then smack on some new constraint, but trying to build a first principles equation that would describe how matter behaves where Planck's constant is immediately setting the scale from the start. And so, with that series of reasonings, Schrodinger arrived at what we would now call the time-independent Schrodinger equation. And it really was. You can see where it comes from. It's starting from the typical relation between kinetic and potential energy but trying to plug in and quantify these insights from de Broglie right from the start. OK, so many of you have probably seen this before, that we still call it the Schrodinger wave equation. And let's just talk for a few minutes about the nature of that expression. First of all, it's all continuum. This wave function PSI seems to behave just like any other wave quantity. At least there's no reason to think it wouldn't from this expression. It should be varying continuously across space, gently having a slightly different value here than there. There's nothing of a kind of inherent radical discreteness that faces us when we look at Schrodinger's equation. It looks like, other than the presence of Planck's constant, the form of the equation, the mathematical form, looks quite familiar. It's a differential equation for how some quantity is going to change over space.
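A minimal version of the substitution being described, in modern notation and in one dimension (my own condensed sketch, not the form on the lecture slides): a standing wave obeys

\[ \frac{d^2\psi}{dx^2} = -k^2\,\psi, \qquad k = \frac{2\pi}{\lambda}, \]

and de Broglie's \(\lambda = h/p\) means \(k = p/\hbar\), so \(p^2\psi = -\hbar^2\,d^2\psi/dx^2\). Plugging that into the usual energy relation \(p^2/2m + V = E\) gives the time-independent equation

\[ -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\,\psi = E\,\psi, \]

with Planck's constant built in from the start rather than appended at the end.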
And later, as many of you know, he introduced a time-varying version where the same quantity PSI could be continuously, smoothly varying over time, as well as across space. So it's, again, retaining the look and feel of continuum-based differential equations. It doesn't have these ad hoc discreteness features that seem to be put in by hand by someone like Bohr in the Bohr model. So one of his first tasks-- and, again, we know this from his own private notebook-- he was having a skiing vacation with, not his wife, but with two very much younger twin women. And in the midst of this he would take these breaks and try to calculate. It's quite astonishing. And so one of the first tests he knew he had to apply this formula to was the now time-honored hydrogen atom. So he said, what if the potential energy is not any old form but is the specific form he'd need to characterize the mutual attraction between an electron and a proton in a hydrogen atom-- put in the Coulomb potential? And then he found, again, just on his own in this kind of ski chalet right around the holiday-- right around New Year's-- that solutions to this equation will correspond to very specific values for the energy. You notice here, the first in this whole series of articles that he wrote was called "Quantization as an Eigenvalue Problem." And, again, that's a hint. He wants to work with familiar differential equations. What he's done here is he has some differential operators, some combination of terms that quantify the rates of change of some continuous property. That's this side. And there should be some eigenvalues of that differential operator, some allowable numerical kind of coefficients, that will characterize the changing across space of that quantity. And so he finds just by putting in the Coulomb potential for the hydrogen atom that this very kind of generic looking or familiar kind of format of a differential equation will yield exactly the same energy spectrum, exactly the same allowable values for the eigenvalues of the energy, the same coefficients here that will satisfy the equation, exactly the same series as what Bohr had found from very different starting points. So therefore Schrodinger's equation, as well, would reproduce the successes like the Balmer spectrum. It should predict precisely the same frequencies of those emission lines from excited hydrogen atoms. So Schrodinger, again, had a sense that he was maybe on to something important here. How does that work? And, again, many of you have probably done this calculation in problem sets or seen it done in a lecture class. For others of you, it's an exciting thing to look forward to. So I'm not going to go through the whole calculation. But I just want to make sure we understand the kinds of steps that Schrodinger was following. How does this continuum-based equation, this differential equation for some smoothly varying quantity PSI, how would that ever give us a discrete energy spectrum? What would this integer n that Bohr had argued came from some fundamental discreteness in the behavior of electrons, how does that come from Schrodinger's equation? So let's step back and think about a simpler case involving standing waves. Let's just consider motion in one direction, so the x-axis. And let's imagine we have a guitar string or a violin string where the endpoints are anchored.
So whatever the legitimate solutions are for this wave equation, PSI being sort of the amplitude of the wave in this case, the wave has to vanish at the origin and at the length L, where the other end of the rope or the string is anchored. So how do we combine that with the general solutions for this differential equation? Again, here's our differential operator, so to speak. Well, if we ignore the boundary conditions, if we ignore the fact that this string is anchored at both ends, then we can very quickly write down the most general series of solutions of this wave equation. It will oscillate across space like sines and cosines, with some characteristic wave number and different coefficients. But now, if we apply the boundary conditions, then not all of these solutions are viable anymore. Some of them no longer solve the relevant equation. And in fact, if we apply these particular boundary conditions to this very general differential equation, then the allowable solutions have a discrete set of eigenvalues. The wave number k, in this case, becomes not any old continuous value but, in fact, kind of snaps into place, with only an integer-labeled series of allowable wave numbers. So the wave number could be 1 times some basic reference wave number, or 2 times that, but not 1.8 times that. We get a quantum discrete spectrum by solving a continuum-based differential equation and applying the appropriate boundary conditions. And so for the hydrogen atom, Schrodinger reasons similarly. It wasn't that the electron was somehow glued in space and could only be here versus there, like Bohr had suggested in this kind of ad hoc way. But, rather, Schrodinger said, well, we're going to solve this differential equation for some general smoothly varying quantity PSI. And let's apply some reasonable boundary conditions, that the amplitude should vanish at the origin. The electron should not be found in the nucleus. So the wave function somehow describing the electron's motion will vanish at the origin. And the electron shouldn't be found infinitely far away from its host nucleus. So the amplitude for whatever's waving, whatever kind of de Broglie-like matter wave is associated with the electron, that should have vanishing amplitude when you're arbitrarily far away from the rest of the atom. If you impose those boundary conditions-- it's a little more complicated. But, conceptually, it's like this kind of standing wave on a string with fixed endpoints. That's what forces a discrete spectrum for the allowable energies, not restricting the kind of motion of the electron per se. So Schrodinger now has retained the look and feel of manipulating differential equations familiar for a long time by that point. He has a way of reproducing this kind of inherent discreteness for things like the emission of these spectral lines but by building up this kind of waviness of nature from the start, rather than imposing a kind of ad hoc quantum condition. That's what Schrodinger was working toward in that winter of '25-'26. He also then quickly realizes that solutions to his equation are wave functions. They should obey something called superposition, which we've seen briefly in previous class sessions, even this term. If there was one solution to that equation and a separate solution, then the sum of them will also be a solution, for that matter, the difference. That you can take legitimate solutions to the wave equation and add them and the resulting sum will also be a solution.
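In symbols, with L and n just the labels from the string example (and the hydrogen result quoted for comparison, in its standard modern form): demanding \(\psi(0) = \psi(L) = 0\) for solutions \(\psi \propto \sin(kx)\) forces

\[ k_n = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots, \]

so it is the boundary conditions, not any glued-in discreteness, that pick out the spectrum; the same logic applied with the Coulomb potential yields energies \(E_n \propto -1/n^2\), exactly Bohr's series. And superposition just says that if \(\psi_1\) and \(\psi_2\) each solve the equation, so does any combination \(a\,\psi_1 + b\,\psi_2\).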
And that means that these wave functions could do all the things that waves do. They could undergo interference, for example. And, again, here's just a quick example. If you have two waves of the same amplitude but different wavelength, then in places where the amplitudes happen to line up, where crest is near crest and you add those waves together, we'll get constructive interference. The resulting wave will have a height nearly twice as tall as either wave alone. On the other hand, in other locations where a crest is nearly lined up with a trough, they'll just about cancel each other out. And the resulting amplitude will be much smaller. You can have destructive interference. All those things that are familiar from water waves, from sound waves, from light waves in a Maxwellian context, all those things seem to carry over to solutions of Schrodinger's wave equation as well, even though it's seemingly describing something like an electron. So that really forced the question, if this wave function PSI has definite wave-like properties, what was it? What was it a wave of or in? It starts to sound like the questions about light that we began this class with. So what is doing the waving if PSI behaves like a wave? And then it became even more bizarre. Here's when it began to break from just the familiar physics of classical waves. And, again, Schrodinger-- to his credit-- was very quick to notice this himself and really point it out and begin to try to puzzle it through. If you consider a system with more than one electron, let's say as simple as just two electrons, not just one, then the equation suddenly depends on many, many coordinates. The interaction potential, that key portion within the Schrodinger equation, will now, in general, depend on the location of both electron one and electron two. They're each existing in three dimensional space, x, y, and z. They could be at different locations. So the potential energy of that two-body system will now, in general, depend on six spatial coordinates, not just three. So now solutions PSI, solutions to Schrodinger's wave equation, will themselves depend on six dimensions, six coordinates of space, not just three. That's not like a water wave, or a sound wave, or a light wave. So even though the mathematics was, in many ways, very familiar and Schrodinger used that to great effect to find solutions, to figure out why there was a seemingly quantized or discrete spectrum of eigenvalues and all that, it wasn't actually just classical waves. And there were even, mathematically, some unexpected features that Schrodinger himself began to elucidate, that PSI, whatever it was, was not just like a water wave. In fact, it seemed to live in some higher dimensional abstract mathematical space that would quickly become known as configuration space rather than the space in which we seem to move around every day. So that was what Schrodinger began publishing as early as January 1926. He wrote a series of papers very, very quickly, one after the other in a four-part series throughout that winter and spring of 1926. And so before long, both Schrodinger and Heisenberg themselves, and soon many members of this kind of tight-knit community, were beginning to do a kind of compare and contrast exercise. There were now two quite different looking first approaches to a new quantum theory on offer. Heisenberg's matrix mechanics, Schrodinger's wave mechanics, they looked nothing alike. They seemed to make very different starting assumptions. 
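Spelled out for the two-electron case (coordinates labeled in the obvious way; this is my notation, not the lecture slide's): the wave function becomes a single function

\[ \psi = \psi(x_1, y_1, z_1,\; x_2, y_2, z_2), \]

one amplitude defined over six spatial coordinates at once, so whatever is doing the waving lives in configuration space rather than in ordinary three-dimensional space.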
And they had different kinds of strange features. They were strange in different ways. So Schrodinger eventually caught up with some of Heisenberg's papers. And he added a footnote into one of Schrodinger's own follow-up articles in that first series. I think this was not in his very first article but some time later that spring in a footnote. He writes, as you see here, my theory was inspired by Louis de Broglie and by short, but incomplete, remarks by Albert Einstein. No genetic relation whatever with Heisenberg is known to me. I knew of his theory, of course, but felt discouraged, not to say repelled by the methods of transcendental algebra, what we now call matrices, which appeared very difficult to me, and by the lack of visualizability. So that's about as rude as you can get in a footnote in a physics journal, at least at that time. I was discouraged and repelled by my colleagues' work. Heisenberg was a little less restrained. He wrote just in a private letter to his friend, Wolfgang Pauli, around that same time, just a little correspondence exchange. Heisenberg said, the more I reflect on the physical portion of Schrodinger's theory, the more disgusting I find it. Where Schrodinger writes on the visual ability of his theory, I consider it trash. After all, these waves live in six or more dimensional space. That seems no more visualizable than these abstract matrices. So this is really a remarkable clash between two people working on very similar topics at the same time and finding each other's work clearly lacking, not just, say, repulsive, disgusting. And so it was all the more surprising and really unexpected when several other physicists, actually Schrodinger himself and then two others, Pascual Jordan, who was another young physicist about the same age as Heisenberg and Pauli. Jordan was also a young physicist in Gottingen. So he began working directly with Heisenberg to elaborate matrix mechanics. And then separately still, Paul Dirac, a young British physicist who at this point was on a fellowship in Cambridge, England. All independently within a very short span of time, all three of those physicists demonstrated by the summer of 1926 that these two quite different-looking approaches, Heisenberg's matrix mechanics, Schrodinger's wave mechanics, were actually mathematically equivalent. You could actually build a mathematical kind of one-to-one mapping to any expression you might try to formulate in matrix mechanics and put that in wave mechanics or vice versa. So it looks disgusting, and repulsive, and un visualizable, one to the other, between Schrodinger and Heisenberg. It looked like these were really just remarkably different ways of trying to carve up the world. And yet, again, within just a matter of a few months, people had found at least a mathematical bridge between them. And it was that last work that really made a kind of a map between matrix and wave mechanics that really convinced people already by 1926 to begin referring to a new single thing, a new thing called quantum mechanics, not just matrix mechanics, not just wave mechanics, but now one thing called quantum mechanics and to then start using the phrase, "old quantum theory" to refer to that kind of grab bag of techniques that we looked at that had come before 1925-1926. So the new quantum mechanics was heralded, really, by that name almost in real time, just remarkable kind of convergence by summer and fall of 1926. So I'm going to pause there. I see a couple of questions come up in the chat. 
Time for some other discussion. Any questions on that stuff? So Alex asks, is this Heligoland the same island the British tried to blow up after World War II? I have no idea. Well, that's news to me. It might be. Wikipedia at least could inform us. I don't know. Did configuration space ever become generalized to Hilbert space, Fisher asks. Yes-- I'm not sure I'd say generalized-- but separately, and actually pretty soon afterwards, people like Paul Dirac began trying to formulate this new quantum mechanics in a more mathematically first principles way, thinking about vectors and vector spaces and so on. And that comes not only from Dirac but people like Dirac and John von Neumann, among others. So they weren't replacing configuration space per se. But they were certainly trying to think more carefully about the kinds of spaces, the kinds of mathematical spaces in which something like a quantum state might reside. So if you're thinking about a wave function, that's in configuration space. If you're thinking about a quantum state as a kind of eigenvector in some abstract Hilbert space, then that's the direction that Dirac especially starts to formalize. He writes an extraordinarily influential textbook. In fact, I showed its cover on the previous slide. First edition 1930, that's pretty quick to have a textbook on stuff that was really just a few years old. And then with the Hilbert spaces, people began to realize these could be of almost arbitrary dimension. So Hilbert spaces could be two dimensions. They could be 10,000 dimensions. They could be infinite dimensional. And so, in that sense, they're similar to configuration space in not being limited to only x, y, and z. But they're not exactly the same kind of thing. It's really people like Paul Dirac who started pushing very heavily in that direction. Silu asked, did their opinions on each other's work ever change? That's a good question. I think they stopped calling each other's work disgusting in print. So that's a plus. And I think Schrodinger himself-- since he was one of the people who did find this mathematical bridge, a kind of equivalence between the two approaches-- Schrodinger, I think, after that began to realize there are kind of conveniences of either approach. So it's not one versus the other, since they could be mathematically mapped. But as each of you have probably learned just from your problem sets, for certain kinds of questions we might choose to ask, for certain kinds of calculations, one starting approach, one coordinate system, one way of characterizing the balance of forces and all that, that's often more convenient than others, even if, in principle, we could have mapped it into a different format. That's how people begin to talk more and more about the compare-and-contrast exercise between wave mechanics and matrix mechanics. If they are fundamentally mappable, then use whichever makes the most sense for a given problem, as opposed to say, choose one versus the other. So Lucas helpfully tells us that Heligoland was the site of one of the largest non-nuclear explosions ever. I had no idea. My goodness, thank you. I didn't know that. Thank you both to Alex and to Lucas. I've never been to Heligoland, obviously hard to travel now, to put it mildly. That would be an awesome field trip that we-- we should at least take a virtual field trip. We can look into that. Good, so my understanding is Heligoland, it has been contested territory. Again, Wikipedia can tell us who.
It's a territory of what nation has been an unstable question, Germany, Denmark, probably others. It seems to sit in the North Sea pretty close to the coasts of both Germany and Denmark. So I knew it's been contested. And I thought it was also a lovely kind of vacation spot for much of the 19th century. I didn't realize it was subject to such a bombardment. So that's a preview. As you may have guessed, we're going to be coming to the Second World War in this class pretty soon, actually. So we'll be getting to similar kinds of themes, actually, within about two classes. So keep that theme in mind, I guess. Any other questions about Schrodinger, the wave equation, superposition, discrete eigenvalue spectra? They're fun. If not, then I think I'll press on. We can look at the next part of class. Of course, obviously please keep the questions coming, if more questions come up. So let's look now at the second part for today. And again here, I just remind you there is the optional lecture notes that go into this a little bit more explicitly, as well. So let's talk about something called the double-slit experiment which, again, might be something that at least some of you have heard about before. Some of you might have even been able to do a version in junior lab, for that matter. I'm not sure. This is just a remarkably fruitful experiment. It's the pedagogical gift that keeps on giving. It's not just me who thinks so. Look at this remarkable series of people who have come back time and time again to the double slit. So Heisenberg lectured on it as early as 1929, perhaps earlier. But that's the first case we know about. Schrodinger was lecturing on this as early as 1936 and maybe earlier. But we have summer school lecture notes from as early as 1936, where Schrodinger featured it, as well. Niels Bohr featured this very famously in his very well known discussion of his long debate with Albert Einstein over quantum theory. That was another one of the readings for today, was an excerpt, a kind of shortened version of Bohr's very famous characterization of his long debate with Einstein. Coming closer to our own time in the 1960s, Richard Feynman declared that the double slit has in it the very heart of quantum mechanics. He said this is basically all you need to need to know, practically, about quantum theory. And even really much more recently, readers of Physics World magazine, which as you may know, is the kind of physics today of the UK physics society, they voted this double-slit experiment the single most beautiful experiment in the history of all of physics-- not the history of 20th century physics-- since the Earth cooled. And humans have done experiments to learn about nature. Readers of Physics World magazine concluded the single most beautiful effort of any kind ever was a double-slit experiment. So let's spend at least a few minutes giving proper, well-deserved attention to a double slit. So here's my very bad cartoon version of the double slit. I'm going to really follow what Richard Feynman's approach in what became in the canonical way of introducing the double slit. I've really lifted much of this from Feynman's very famous version of this in the Feynman lectures from the early '60s. He would have had better pictures. But the ideas are pretty similar. So let's imagine, first, the behavior of classical particles. These could be baseballs, or basketballs, or bullets fired from a gun, or any kind of projectile. So let's imagine we're firing bullets one at a time toward a bulletproof wall. 
So this wall here, this thick wall, is meant to be bulletproof, sorry. And yet there are two narrow slits in the wall. We'll just label those slits, this space here is slit A, this slit, B. The slits are a distance D apart. And then we have some backstop here where the bullets will lodge. And then, when it's safe to, and the gun's put away, we can go out and count up the number of bullets that landed here versus here versus here. So we're to ultimately try to add up. We're going to make a kind of histogram and plot the number of bullets per location as we march up in this direction of the backstop. So we want to then figure out, in a sense, the probability distribution where our bullet's most likely to wind up on this backdrop when, either, one slot is open, the other slot is open, or both are open. That's our exercise here with classical objects like baseballs, or in this case bullets fired from a gun. So let's first imagine that slit A is open but slit B is closed with some firm bullet-proof kind of shielding. So then our number of bullets will look like this. It's been smoothed out. Let's say if we shot 10,000 bullets, then it would actually be a pretty smooth curve. It would be basically a Gaussian. It would look like a kind of bell-shaped curve centered around the open slit. Most bullets would be found directly behind the open slit. But there'd be some modest scatter, some width to the distribution, so to speak. Likewise, if we then sealed up slit A with some bullet-proof shielding, and opened only slit B, and performed and shot another 10,000 bullets at it one at a time, and then did the same counting exercise, we could make a separate histogram. It looks remarkably similar, not surprising. It's a symmetrical problem. So if only slit B is open, then the vast majority of bullets are found directly behind that open slit. And there'll be some kind of bell-shaped curve, basically kind of Gaussian distribution. On either side there'll be some modest scatter. And then, if we then open both slits, fire another 10,000 bullets at the bulletproof wall with both slits A and B open, do the same exercise and make our histogram, the resulting distribution is just the sum of the previous two. So now there are a fair number of bullets that wind up in the middle because some are coming from the tail of this distribution. Some are coming from the tail of this distribution. And so some of them wind up in the middle. It's the sum of those two. But, nonetheless, the distribution is clearly still peaked in the positions directly behind the two open slits. So we have the resulting probability distribution. The histogram, when we add up when both slits are open, really is just the sum of those two separate ones. And so for classical particles, these probability distributions are of independent events. And therefore the probabilities in this case simply add. . We can get the probability distribution for the combined series, slits A and B open, simply by adding two independent possibilities. Each bullet either went through slit A, in which case it followed this distribution. Or each bullet, or an individual bullet, went through slit B. Add up those two options. Here's our resulting distribution, not too surprising. Now let's try a second version with waves. And this could be macroscopic waves, for example, ocean waves. In my little write up, I talk about doing this near the coast of Australia with waves approaching the Great Barrier Reef. But it could be even in an artificial wave pool with some barrier. 
So now we have some reef or some structure that can block waves that are coming from further away. They're approaching. Here's the shore. And we have some barrier that will block waves from passing through here. But the barrier has these two narrow slits which, again, we can label A and B. And they're a distance D apart. So now what we want to do is measure the intensity of the waves that lap up on shore. Clearly some water gets through slits A and B. So what's the pattern of waves that we find on the shoreline? And how does the intensity of that wave vary as we march along in location? And, again, we'll project the positions of slits A and B on the shore and ask about the varying intensity of the resulting wave as a function of distance or location. So once again, let's start with only one of those slits open. So we're going to close slit B with some kind of dam. We're going to block one of those slits so water can only get through that one narrow slit, A. And therefore we'll have a clearly peaked distribution. The wave that results, that we find on shore, is virtually zero far away from that location of slit A, the projection onto the shore of the open slit. It's very clearly peaked, right directly behind the open slit. In fact, we can characterize the intensity of the wave as the square of some amplitude. It's a wave. It has some amplitude and some phase. And the intensity goes clearly as the square of this height, of this amplitude. And it basically is very clearly dramatically peaked behind the open slit. Now we place the dam in front of slit A, open slit B, same kind of story. But as many of you probably know, once we open both slits A and B, we get a totally different story. Now we get one of these characteristic interference patterns on the shore, not merely the sum of these two distributions together. And if we do a little more work, we can realize we can still characterize the resulting intensity by summing something together, not by summing the two intensities, but actually by summing the two amplitudes first and then squaring that. And that's what allows for these interference patterns, much like we saw with the standing waves earlier. In regions where two crests happen to align, we'll get sharp constructive interference, a heightened peak; in regions where there might have been one large peak from one slit but a very small trough from the other, we'll get destructive interference. So in fact, when we add the amplitudes and then square, the resulting intensity looks nothing like the sum of the two intensities, because that would overlook, that would discount, all these ways that the waves can actually interfere with each other. And one indication of that is this characteristic feature of the interference pattern. The point on the shore with the largest intensity is a point between the two slits, whereas when either slit is open alone, that's a point of actually virtually vanishing intensity. So the greatest intensity is at a spot where neither scenario on its own had much power at all. And so, again, this is a characteristic of classical wave behavior, that the intensity goes as the square of the sum, not the sum of the squares. And since the amplitudes can be both positive and negative-- they can even be complex numbers-- there can be all kinds of situations in which the square of the sum is quite different than the sum of the squares and, in particular, has this characteristic wave-like interference pattern. Again, probably pretty familiar.
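Before moving on to the quantum case, here is a minimal numerical sketch of the contrast just described: bullet logic adds intensities, wave logic adds amplitudes and then squares. The slit separation, wavenumber, and slit-to-shore distance in it are my own illustrative numbers, not values from the lecture.

```python
import numpy as np

# Positions along the shoreline / backstop (arbitrary units)
x = np.linspace(-10, 10, 2001)

# Illustrative amplitudes for the two slits: equal strength, with a position-dependent
# phase set by the path length from each slit to the point x (schematic, not to scale).
d = 2.0    # slit separation (assumed)
k = 3.0    # wavenumber of the incoming wave (assumed)
L = 5.0    # distance from the barrier to the shore (assumed)
A1 = np.exp(1j * k * np.sqrt((x - d / 2) ** 2 + L ** 2))   # wave arriving from slit A
A2 = np.exp(1j * k * np.sqrt((x + d / 2) ** 2 + L ** 2))   # wave arriving from slit B

I_sum_of_squares = np.abs(A1) ** 2 + np.abs(A2) ** 2   # "bullet" logic: add the two intensities
I_square_of_sum = np.abs(A1 + A2) ** 2                 # wave logic: add amplitudes, then square

print(I_sum_of_squares.min(), I_sum_of_squares.max())  # flat, ~2.0 everywhere
print(I_square_of_sum.min(), I_square_of_sum.max())    # oscillates between ~0 and ~4
```

The flat curve is the bullet-style answer; the oscillating one is the interference pattern, with its biggest peak exactly midway between the two slits.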
So now what happens when we now move to the quantum realm? Now let's imagine some cathode ray like JJ Thomson had, or of a sort that was by this point quite familiar, even by the 1920s. Things like what Germer and Davisson used in 1927 to think about de Broglie waves. This was already literally a feasible experiment as early as the mid 1920s. And people tried to do it. So let's imagine shooting out single electrons one at a time from some source, some cathode ray, let's say, some electron gun. We're going to send one electron at a time. We're not going to have a continuous spray of them. We'll send one, shut down the device. Let it cool. Go out for a coffee. Come back an hour later, send out one more electron. We can really take our time. So at any given moment, there's only one electron in play in the device. We're going to aim those one-at-a time electrons at a device. This is, by the way, taken-- this illustration is taken from Bohr's essay about his debates with Einstein. You hopefully recognize it from your reader. So we're going to send these electrons one at a time toward a device that has two slits. The electrons will be blocked if they hit anywhere here. But they can get through either this slit or that slit. And we have the opportunity to open or close each slit independently. And then behind the wall with two slits, we have an array of very closely-spaced detectors, electron detectors. And so we can measure with very good resolution where electrons are detected as a function of position along this kind of backstop, as array of detectors. We also can arrange things So that the distance between the two slits, the distance little D, is 10,000 times larger than the characteristic waviness scale, the de Broglie wavelength of those electrons. So the electron, if it's approaching, if it's anywhere near this slit, it should be 10,000 electron distances away from that slit and vice versa. It should have a localized behavior as it encounters the slits. Moreover, each of the detectors is going to detect the location of electron with a very, very small region of space. So we're not going to expect to find a splayed out wave of electrons. Each one is detected as a point-like discrete particulate detection because we've these very high resolution detectors in the back. So we're going to release one electron at a time. Wait one hour between and do that 10,000 times. Which, as I like to say, is why we actually have graduate students. Who's going to have patience for that, with many thanks to our teaching systems. So this is going to take months. But we care about the nature of the universe. And so we're going to just go plow through and do this. Now we're going to do the same kind of accounting exercise as we've done with both the classical particles and with the classical waves. We're going to make a kind of histogram of the number of electrons that are found in each of these bins, each of these detector bins, as a function of location. So once again, we'll start by keeping one slit closed so the electrons can only pass through the other slit. So if only slit A is open, we get this very strongly peaked distribution directly behind the open slit, just like with the bullets. If slit A is closed but only slit B is open, again, a very strongly peaked distribution. And so it looks a lot like the classical particles. In fact, if you've taken junior lab, you probably would learn to say those little wiggles are clearly instrumental error, that statistical insignificance. 
They never rise above about 1% of the central peak. It really looks like electrons are behaving like bullets. They're each emitted like a tiny little discrete particle. They're encountering one slit or the other because we've arranged the distance to be so extreme. They're all being detected as one little blip on that detector screen. They look like little baseballs or bullets. And yet, when we then look at the pattern when both slits are open but only one electron went through at a time, we get right back to this very familiar wave pattern. How this can happen has kept people up at night for, now, very nearly 100 years. And I hope we're not too blase about it even today. It looks like particles when only one slit's open. It looks like waves when both slits are open. And yet we've been super careful to only send one electron through at a time. Unlike a water wave, which is clearly an extended continuous object-- clearly that kind of wave can interfere with itself; it is literally an extended object which has many moving parts-- these electrons are tiny little pellets. We emit them as little discrete bundles. We only have one in play at a time. It's not like two electrons could have influenced each other by repelling each other and made some kind of characteristic pattern because they were interacting with each other. We sent only one through at a time and waited a whole hour in between. We've done that 10,000 individual times. And yet the pattern that builds up, one individual point-like detection at a time, is this undeniable, very familiar wave pattern where, much as we would expect now from the ocean waves, the resulting probability distribution goes like the square of the sum and definitely not like the sum of the squares. It is not the sum of this distribution plus that one. It is absolutely emphatically not like 10,000 baseballs thrown at a wall with two holes, or bullets shot at some barrier. It looks just like the water wave, even though this built up over time from 10,000 independent events. Now, if I were to unmute you all, I should hear you screaming with either joy and ecstasy or real deep despair. Either way this should not be an emotionally neutral statement. So what comes from all this is Max Born, the Gottingen theorist, a contemporary of Einstein and Schrodinger, Heisenberg's mentor who taught him that these were matrices-- Max Born then suggests from exactly this kind of reasoning, as early as the summer of 1926, that this wave PSI, the thing that is a solution to Schrodinger's wave equation, seems to be not a physical bit of the electron. It's not some electron density that's spread out in space. It seems to be related to the likely behavior of the associated particle. It's not a physical bit of the particle. It's a description of the way that particle is likely to behave. In particular, it's what comes to be called a probability amplitude, not the probability, not the peak that has this value on our histogram, but actually the object whose absolute square gives rise to this thick black-line pattern. So the probability, Born says-- we now call this Born's interpretation-- the probability to find an electron here versus here versus here goes like the absolute square of this wave-like quantity, Schrodinger's wave function. That's what gets worked out within six months of Schrodinger's first publication on this by thinking about what is actually observable in experiments involving these wave-like features. So, again, that was a thought experiment in 1926.
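Born's suggestion can be written compactly. With psi_A and psi_B standing for the wave-function contributions associated with the two slits (my own labels, not notation from the lecture), the detection probability goes as the absolute square of their sum:

```latex
P(x) = \left|\psi_A(x) + \psi_B(x)\right|^2
     = |\psi_A(x)|^2 + |\psi_B(x)|^2 + 2\,\mathrm{Re}\!\left(\psi_A^{*}(x)\,\psi_B(x)\right)
% The cross term is what builds up the fringes, one point-like detection at a time,
% even though each electron is emitted and detected as a single discrete event.
```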
By 1920-- literally by 1927, there were efforts to try to do this experimentally. Nowadays we have very fancy equipment with high-precision sources and detectors. So here's a series of images courtesy of my colleagues, Robert Austin and Lyman Page at Princeton; this is not something we can do as an in-class demo or as an undergraduate laboratory. So this is real data. This isn't simulated. These are snapshots of the kind of fluorescent screen of an actual double-slit experiment done at different moments in time as the pattern builds up. So individual photons were fired out at a screen with two slits, literally one at a time. And now we can do single photon experiments, which is quite extraordinary. And then you can detect where each individual photon was detected. So these are little pinpricks of light where an individual photon interacted with the fluorescent photo detector. So you have a mostly blackened screen because we've only sent a couple of photons through, one at a time so far, very localized individual detections here and here, but not here. That's after the first 1/30 of a second, shooting rapidly one photon at a time and only one at a time. After a whole second has gone by, you have an accumulating pattern. Now you have several hundred photons individually detected. Wait 100 seconds and you see very clearly this wave-like interference pattern, exactly the pattern described by that Born interpretation, even though this is built from thousands of otherwise independent, localized-in-time-and-space detections of individual quanta. So we shoot out individual quanta, particulate-like. We detect individual quanta, particulate-like. And they can't interfere with each other because we make sure only one's sort of in the device at a time. And yet, over time, they somehow build up this very characteristic wave-like interference pattern. That is what's called the double-slit experiment. So let me pause there. How do we go from one electron every hour to 100 every second? Oh, good. So, Alex, good. So the data I just showed you is actually with photons, the Princeton group. One can do it with electrons, as well. They were using, let's say, a photon source that emits single photons rapidly. So they really were shooting out only one photon at a time. But they can produce many photons per second. So the timing when we're dealing with that particular source of individual quanta-- just what's called the duty cycle-- is shorter. We also have very rapid electronics now, unlike 1927. So the electronics can actually resolve individual detections in a tiny fraction of a second. So, conceptually, it's as if we had gone back to my cartoon, turned on our machine, shot out one particle, turned it off, went away. It's just that now we can do that with the kind of high-duty-cycle rapid electronics. But, conceptually, to my mind it is quite similar. Vittorio asks, as you do the experiment with progressively heavier particles-- oh, good question-- is there a certain point where the resulting pattern begins to look like what you'd expect from two bullets? Oh, thank you, Vittorio. What a wonderful question. That is a question that is still occupying, literally, many of my friends, let alone colleagues around the world. The short answer is yes. The longer answer is why and how. And Jade's quite right. Jade puts in the chat, this experiment has been done, a comparable experiment, with objects as large as buckyballs, C60, molecules of 60 carbon atoms in one bound state, not just individual electrons.
So for an object as large as a buckyball, which is, I think, not a billion but several tens of millions of times larger than an individual electron, we still find this characteristic quantum interference very clearly. Again, it's well within statistical significance. And, in fact, that's an experiment that one of my colleagues, Anton Zeilinger, and his group had done quite some time ago. So the question is, where does that stop? I had a very beloved physics professor of my own, when I was an undergraduate. He was very senior. He's actually just retired. He was still teaching just for fun. And he used to joke that when he was in graduate school, he and his buddies would wonder, could they get themselves to interfere if they rode a bicycle through two standing posts sufficiently quickly-- and why they don't interfere with themselves. An individual human on a bicycle doesn't undergo, at least not in any obvious way, this kind of interference. It doesn't seem to happen with things we can shrink down and look at microscopically at the scale of, say, amoebas, which people have tried. But it does happen with enormous molecules, enormous compared to the size of parts of atoms. So where is that line? And what causes the transition in behavior? If we're all made up of atoms and electrons, and electrons and atoms behave in this fundamentally quantum mechanical way, why don't we? Why didn't my professor on his bicycle get diffracted when he biked through between two standing posts? So that's what's often now called the quantum to classical transition. If you have nothing better to do with yourself for the next 17 years, I suggest you start googling that and read every paper on it. It's enormously complicated and still very much on the forefront of the research frontier for many people. And we just genuinely, we collectively, the community genuinely doesn't know. We're getting better at characterizing where the break is, where the line is, but not why, and not what would cause the behavior to transition from one very specific kind of behavior to the other. So other colleagues are trying to do this with living objects. So it turns out viruses, some kinds of viruses, might be just small enough for this to work and yet much, much bigger than buckyballs. So people are trying to get living things, seemingly living things like viruses, to quantum interfere. That's not been done yet. But that's one thing people are trying. So this question, how big could it get and still show this inherent quantumness? That is still very much a live issue. And as I said, there's two facets to it. What's the scale? But even more subtly, what's the reason why that stops working as we continue considering larger objects? So thank you for that question. Great question. Any other questions about the double slit, about this, about what should have been the scream? I mean, all of you should have just been unmuted and just howling at the moon. At least that's how I feel. And I still think about this. Maybe that's my problem. I just find that just astonishing. It's not just a thought experiment. We can do that now with individual quanta with super careful control on our source, on our detectors, have really extraordinarily high statistical significance on all the measurements, and fire what looks, for all intents and purposes, like little tiny baseballs. And somehow they know. Somehow their behavior is guided in a way that baseballs and bullets just aren't, or basketballs. If that's just OK for all of you then, I mean, come on.
That's just like, that's COVID speaking. That should just-- OK, you get the point. Any other questions on either my mental state, or emotional state, or on the double-slit experiment? If not we'll go into one last little part, one kind of CODA for double slit. And then we'll have time for questions, as well. OK, so I'll get it together. I'll pull myself together. I'll talk now about this last part, which also started being thought about very early, even in the late 1920s, as further ways to try to probe what's going on when we think about this quantum level. So, again, this I'm going to do, especially schematically in the class session. There is a bit more explicit discussion in the optional lecture notes on the course site. Let's go back to that same setup, the one we just ended on. Let's say we're sending one electron at a time, waiting ample time in between, firing them toward this system where the distance between slits A and B is much, much larger than the kind of characteristic size of the particles themselves. But now we want to say these particles clearly must have gone through either slit A or slit B. That seems like a reasonable question to ponder. Which slit do they pass through on any individual experimental run? The distance between the slits is so much larger than the characteristic size that each particle must have had to have chosen to go through either slit A or slit B. So why don't we try to find out? And this is a very clever idea. Heisenberg's lectures on this, even in those 1929 lectures-- and others have developed it since then. Let's put a bunch of test particles behind only one of the slits so now it'll be like an alarm system. If an electron happens to go through slit A, it'll smack into these test particles. We'll have collisions. We'll have scattered particles zooming out of this region of the experiment that we could then detect and measure. And if no test particles are scattered, then we know the electron must have gone through the other slit. So we're going to bunch together, get kind of close in space, some targets behind only one of the two open slits. And that way, we have what's called a slit detector. We have some mechanism of determining event by event through which slit the particle actually passed. And so now what happens when both slits are open, but we measure through these slit detectors through which slit each electron passed, then the distribution, even for these quantum mechanical electrons, the distribution reverts back to this thoroughly classical sum of two independent probabilities. So we do our usual experiment. We do this 10,000 times. Sometimes we see the scattered particles from behind slit A. Sometimes we see no scattered particles. It must have gone through slit B. And then separately from that, we count up where all the electrons actually wind up at that backstop with our closely-spaced detectors. And we get, basically, the classical distribution. It looks like the sum of two independent classical particle-like distributions. When we ask through which slit each individual particle passed, the particles give us a particle-like answer. They pass through either slit A or slit B and had correspondingly no interference pattern between those two options. On the other hand, when we don't even try to ask through which slit did those individual particles pass, we remove that slit detector altogether. Then we get back to the results that we'd already found, that when both slits are open, we get this characteristic wave-like interference pattern. 
And we only get that when we've given up even the opportunity to ask this particle-like question about through which slit had individual particles passed. So we are allowed to ask the question. We actually can ask a question. People have done it with a slightly more sophisticated version than just that cartoon. And yet, somehow, the nature of the results changes very deeply and fundamentally, based on the nature of the question we asked. So, again, you can see why that happens. I'm being pretty schematic and kind of waving my hands. It's only a couple lines of algebra. And you might have already guessed it. If I go back to the cartoon, just to say quickly what's going to happen here, in order for this to be a reliable slit detector, this bundle of particles has to really be behind A and not behind B. That means that the spread in space can't be arbitrarily large. If we call this vertical direction the y direction, like my arrow here shows, then the delta y over this distribution can't be too big. In fact, it has to be much smaller than the distance D. If the distribution were too big, then we could see that test particles were scattered into-- but they would no longer tell us whether the particle had gone through slit A or slit B. We would lose the resolution in space if the bundle of particles was too far spread out in the y direction. So we have to contain or compress the distribution of test particles within a small region delta y. But that means they must have a correspondingly large uncertainty in the momentum because of the uncertainty principle. And, therefore, there'll be a correspondingly large uncertainty in the recoil momentum of the electron after scattering of exactly the right amount-- this part is kind of amazing-- of exactly the right amount needed to perfectly wash out those very clear interference fringes to get back to two bell-shaped curves, so that very regular, very narrow peaks of the interference fringes. So it's literally the uncertainty principle that precludes being able to both measure through which slit an individual particle passed and also retain these very clear features of the interference pattern. I think it's pretty amazing and very beautiful. And it's only a couple lines of algebra to show. So if even this most recent hand-waving effort of mine was too quick, I encourage you to look at the lecture notes. But that's the upshot. So what we're really finding conceptually-- and this is what people began to converge around, even by the late 1920s-- was that if we ask a particle-like question such as, what was your path? Through which slit did an individual particle pass? We will get a definite answer. We will get a particle like answer. Each particle either went through slit A or went through slit B with corresponding particle-like statistics. It's all consistent. We'll get a clear answer. And the whole rest of the problem will be consistent with particle-like questions and answers. On the other hand, if we ask a wave-like question, how does this wave function evolve in the region between the slits and the detector, for example, then we'll get a wave-like answer, the way the probability amplitude, the Schrodinger wave function, will spread out in space, in some regions they'll be constructive interference and the likelihood to find electron will be higher. In other regions of space you'll get destructive interference, corresponding to lower likelihood for an electron to be found. But we only get wave-like answers when we ask wave-like questions. 
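For completeness, the "couple lines of algebra" gestured at a moment ago go roughly like this. This is a schematic sketch with order-one factors dropped, in my own notation (d for the slit spacing, L for the slit-to-screen distance, lambda for the electron's de Broglie wavelength); it is not a formula quoted in the lecture.

```latex
% To tell the slits apart, the test particles must be localized to much better than the slit spacing:
\Delta y \ll d
% The uncertainty principle then forces a momentum spread on them, and hence an uncontrollable
% recoil kick on any electron that scatters off them:
\Delta p_y \gtrsim \frac{\hbar}{\Delta y} \gg \frac{\hbar}{d}
% That random transverse kick smears the electron's arrival position by roughly (L/p)\,\Delta p_y,
% which is comparable to or larger than the fringe spacing (itself of order \lambda L / d),
% and that is just enough to wash out the interference pattern.
```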
So let me summarize. And we'll have a few moments for some discussion and questions. So in a really quite limited span of time, really in less than a whole calendar year between spring of 1925 and spring of 1926, both Werner Heisenberg and, separately, Erwin Schrodinger introduced whole new approaches to what they were seeking, a kind of first principles theory of the quantum realm, a quantum mechanics, rather than this grab bag of ad hoc quantum conditions. They were looking for this quantitative description of the atomic realm that had certain quantum ideas built in from the start, discrete spectral lines from excited atoms, de Broglie waves, matter waves, and so on, rather than kind of stapling these on at the end. Now these approaches looked quite different at first and even kind of not very encouraging, to be polite about it, to each other's inventors. Heisenberg's approach really emphasized and built upon what he considered a kind of inherent discreteness. That was the main lesson of the quantum as far as Heisenberg was concerned. Schrodinger, in the meantime, built separately on de Broglie's suggestion that the real lesson was a surprising continuity, that even solid matter like electrons has associated wave-like behavior. They're building on different kinds of conceptual building blocks. And yet, by the summer of 1926, several physicists had built this mathematical map between these very, very different looking approaches. And even though there was now, arguably, one internally consistent mathematical approach, what it all meant about the world was as unclear then, often, as it remains even to this day. So one question was, what's this wave? The wave function is now an object that can be manipulated using familiar techniques. It's a differential equation. We can understand the spectrum of allowable eigenvalues. But what is it actually telling us about the world? And with Max Born's suggestion in the summer of 1926, physicists pretty quickly came to the idea that the wave function is not telling us about the physical extent of an object in space but actually about the likelihood for events to happen. And in particular, the absolute square of this wave-like quantity should give us the probability. Quickly on the heels of that, less than a whole year later, by spring of '27, Heisenberg had derived his now famous uncertainty principle, that there are these unavoidable trade-offs in the precision with which we could ever hope to specify characteristics of the kind of quantum realm, a kind of pairwise trade-off where we can make one quantity really sharp but only at the expense of making some corresponding or complementary quantity very imprecise. And the trade-off, the seesaw, again, being controlled by Planck's constant. And then these kinds of broader conceptual dualities, like this wave-particle duality we've seen now a few times. But the type of answer we could ever hope to find depends on the type of question we ask. So it's not that the electron really is a particle or a wave; it's a kind of contextual question: what kind of properties do we hope to investigate? We're going to foreclose some answers from ever even being possible, even as we get other quite definite answers. So Niels Bohr, partly from his intense discussions with Heisenberg, with other colleagues, he used that to enunciate a very general philosophy of the quantum realm by September of 1927, right on the heels of this, which he came to call complementarity. So it's not an either-or world. It's a both-and.
We are required, in this case, to draw on both ideas and actual mathematical techniques with which we characterize wave behavior, and ideas and techniques with which we characterize particle behavior. They are mutually exclusive. We literally can't use them at the same time. And yet we need both. A complete description requires both, even though they don't fit together at once. And that, to Bohr, was the deepest meaning, the deepest lesson of quantum theory. Not everyone agreed. But that's where Bohr arrived by 1927. OK, any questions on any of that? That is so straightforward, so boringly familiar, that there's not even a single question to ask about the fundamental nature of reality. Boy, holiday weekends. No. Well, if you'll permit me, on tomorrow's class I'm going to try to shake your quantum complacency even further. So tomorrow we'll explore quantum weirdness in even more directions. So if this stuff is humdrum and familiar, give me one more shot for tomorrow. So Fisher asks, was Schrodinger's work more quickly accepted because it was more basic mechanics? I think that's absolutely right, Fisher. And I think there was a kind of-- people could do stuff with it. So even while the concepts were still very far from clear, what's the wave function? What is a wave of? What is it telling us? That was by no means clear. And even after the answers that we would now recognize were put forward, those didn't claim consensus right away. And yet, in between, people could still do stuff. You could train PhD students to look at this or that specific question because the apparatus, the mathematical apparatus, was already something they'd practiced since their youngest, earliest studies. So I think that really had a lot to do with how rapidly Schrodinger's work was really welcomed and that it helped. Obviously it helped a lot when Schrodinger, and Jordan, and Dirac elucidated the kind of mathematical mapping. So then it really became a question of convenience and not either/or. It's not like you had to vote for team Schrodinger versus team Heisenberg. Now you could say, oh, this set of techniques is familiar to me. I have lots of experience from classical mechanics or Maxwell or whatever. And so then it became a kind of tool of choice. And these tools were much more familiar to many more people at the time. That absolutely played a role in hastening many people's attention to this stuff, good, good point. Aiden asks, is Heisenberg's principle kind of spiritual? For some people, yes. Although that gets-- there's a lot to say about that. Are there those who think a more literal analysis would be that we simply need to discover from observation, not involve test particles? So, Aiden, I thought you were going a different way at first. So one topic, which is-- I'll say controversial. Some people pursue it. Other people think it is full of holes. One idea would be to say, does the uncertainty principle give us a physical reason why there might be something like free will-- consciousness and the experience of free will? You can see part of the concern, going back to the days of both Newton and, for that matter, Maxwell. If we live in a universe in which determinism holds perfectly, if events in the future are uniquely determined by events in the past, and then we turn on our sci-fi imaginations and imagine we're made of matter, atoms, and molecules. If atoms and molecules have to obey the laws of nature then, in principle, they are subject to determinism, at least in a pre quantum mechanical way. 
Then aren't we all just pre-programmed? Aren't we merely machines with no more free will than comets that can't help but follow a Keplerian orbit as they orbit the sun? And believe it or not, that was a debate that got people very excited and exercised and mad at each other for a long, long time. And so there was some thought by some people early on, starting, really, in the 1920s and '30s, that maybe the uncertainty principle, by breaking the iron hold of determinism, leaves a kind of space within our scientific theorizing for things that seemed, or at least some might have hoped, were not fully uniquely determined, whether that's our own sense of individuality or the course of human events more broadly and so on. Now that gets a little tricky because, unfortunately, the uncertainty principle argues, at least most physicists would say, the upshot of the uncertainty principle is that everything's random. And that's not quite how my thoughts feel. Before COVID-19, my thoughts didn't always feel only random; now they do. So appealing to the uncertainty principle breaks the hold of determinism. But what it seems to put in its place is random mush, as opposed to something that otherwise might answer to the kind of lived experience of phenomena of something like free will. So it's not like that's a single answer and that's done. But that opens a space for kind of theorizing that goes on to this day, sometimes on the margins of both kind of physics and also of areas like neuroscience and philosophy. But at least it opens a different way of asking legitimate, hard questions with some new conceptual inputs. Let me put it that way. So even though I don't think people have found that answer yet, it's interesting that it opens a space to ask old questions, maybe in some new ways. I wrote a book a couple of years ago called How The Hippies Saved Physics, which is a very obnoxious title, where people thought maybe there were other kinds of unexplained or unaccounted for phenomena that quantum theory might be able to account for, like claims of ESP, or other so-called occult phenomena, mind reading, for example. And, again, I don't think they explained that at all. But it was a way of trying to reason about strange or unexpected human-scaled phenomena using a new set of tools that really weren't going to go away, like the uncertainty principle or like things we'll talk about in tomorrow's class, things like quantum entanglement. So I'm not telling you that's the answer. I'm telling you that opened up a space for people to ask either familiar questions from unfamiliar perspectives or sometimes ask new kinds of questions. So in that sense, what's the ultimate meaning of the uncertainty principle? I don't know. And, frankly, I don't think anyone knows. Or at least it's still a subject of really robust debate, like that quantum-to-classical transition. And I think it's fantastic that we have tools to even pose those questions now. So I get excited by the nature of the question rather than having a favorite answer. That's just me proselytizing. So Richard asks, once general relativity was widely accepted, were people already ringing alarm bells about the conflicts between general relativity and quantum mechanics? Yes, yes, they were, including Albert Einstein himself, who had a few opinions on the matter. And so this goes back in large part to things like the fall of determinism. And we'll come to, actually, some of these examples even in tomorrow's class.
So we'll come to some of Einstein's most enduring critiques of quantum theory in the light of his thinking about things like relativity tomorrow. So partly I'll just bracket that. And I'll just simply say, yes, it's a great question. We'll see some examples tomorrow. We could talk more. Lulu asks, how's my mental and emotional state? I think you can see, I'm still agitated; I love this stuff. And tomorrow I won't be able to contain myself. I'll talk about some of my own group's research in the foundations of quantum mechanics. So good, I mean, not that the research is so good. The question is so good. OK, Aiden says, does Heisenberg's principle mean there's neither free will nor determinism? So what we can do is recognize that now, as a question with which we, as a community, now have new tools to try to puzzle through answers. I'll put it that way. I mean, one of the things that happens is people try to explain certain esoteric features of consciousness using quantum theory. But then it turns out that human brains are dense, hot, and wet. And a lot of these quantum effects work really well when we're near vacuum at low temperatures and super dry. So even though some things could happen, in principle, among quantum objects, having to maintain a kind of inherent quantumness is really hard in an environment like the human brain. So, again, that launches new kinds of questions and investigations, rather than saying, oh, we've solved it. So those things become really interesting, and lots of smart people around the world ask hard questions about them, because we have this new injection of new conceptual things to really wonder about. Let me pause there. I don't want to run too long. Remember we have class tomorrow, twice in a row this week. We're back in our ordinary Zoom link. You can find it through Canvas. And so tomorrow we'll sit a little bit longer with some of these weird and wacky quantum phenomena. So stay well, see you tomorrow. Talk to you soon. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 13_Classical_Statistical_Mechanics_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So we've been wondering how a gas, such as the one in this room, with particles following Newtonian equations of motion, comes to equilibrium. We decided to explain that by relying on the Boltzmann equation. Essentially, we said, let's look at the density, or probability, that we will find particles with momentum p at location q at time t, and we found that within some approximations, we could represent the evolution of this through a linear operator on the left-hand side equal to some collision second order operator on the right-hand side. The linear operator is a bunch of derivatives. There is the time derivative, then the coordinate moves according to the velocity, which is P over m. And summation over the index alpha running from 1 to 3, or x, y, z, is assumed. And at this stage, there is symmetry within the representation in terms of coordinates and momenta. There is also a force that causes changes of momentum. Actually, for most of today, we are going to be interested in something like the gas in this room far away from the walls, where essentially there is no external potential. And for the interest of thinking about modes such as sound, et cetera, we can remove that. The second order operator-- the collision operator-- however, explicitly because we were thinking about collisions among particles that are taking place when they are in close contact, breaks the symmetry between coordinate and momenta that were present here. So this collision operator was an integral over the momentum of another particle that would come at some impact parameter with relative velocity, and then you had a subtraction due to collisions, and then addition due to reverse collisions. Now, what we said is because of the way that this symmetry is broken, we can distinguish between averages in position and in momentum. For example, typically we are interested in variations of various quantities in space. And so what we can do is we can define the density, let's say at some particular point, by integrating over momentum. So again, I said I don't really need too much about the dependence on momentum, but I really am interested in how things vary from position to position. Once we have defined density, we could define various averages, where we would multiply this integral by some function of P and q, and the average was defined in this fashion. What we found was that really what happens through this collision operator-- that typically is much more important, as far as the inverse time scales are concerned, than the operators on the left, it has a much bigger magnitude-- is that momenta are very rapidly exchanged and randomized. And the things that are randomized most slowly are quantities that are conserved in collision. And so we focused on quantities that were collision conserved, and we found that for each one of these quantities, we could write down some kind of a hydrodynamic equation. 
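For readers following along without the board, the equation being recalled here is roughly the following. This is my transcription of the spoken formulas into symbols, so the precise factors are worth checking against the course notes rather than trusted blindly.

```latex
% Boltzmann equation: streaming operator on the left, two-body collision operator on the right.
\left[\frac{\partial}{\partial t}
      + \frac{p_\alpha}{m}\frac{\partial}{\partial q_\alpha}
      + F_\alpha \frac{\partial}{\partial p_\alpha}\right] f(\vec p_1, \vec q, t)
  = \int d^3\vec p_2 \, d^2\vec b \, \left|\vec v_1 - \vec v_2\right|
    \Big[ f(\vec p_1{}', \vec q, t)\, f(\vec p_2{}', \vec q, t)
        - f(\vec p_1, \vec q, t)\, f(\vec p_2, \vec q, t) \Big]
\\[6pt]
% Local density and local averages used in what follows:
n(\vec q, t) = \int d^3\vec p \; f(\vec p, \vec q, t), \qquad
\langle \mathcal{O} \rangle = \frac{1}{n(\vec q, t)} \int d^3\vec p \; \mathcal{O}(\vec p, \vec q)\, f(\vec p, \vec q, t)
```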
In particular, if we looked at number conservation-- two particles come in, two particles go out-- we found that the equation that described that was that the time derivative of the density plus moving along the streamline had a special form. And here, I defined the quantity u alpha, which is simply the average of P alpha over m, defined in the way that averages are defined over here. And this is the number of particles, how it changes if I move along the streamline. And variations of this, if you are compressible, come from the divergence of your flow velocity. And actually, this operator we call the total, or streamline, derivative. Now, the next thing that we said is that OK, number is conserved, but momentum is also conserved. So we can see what equation I get if I look at momentum. But in fact, we put the quantity that was momentum divided by mass to make it like a velocity, and asked how much you deviate from the average that we just calculated. And this quantity we also call c. And when we looked at the equation that corresponded to the conservation of this quantity in collisions, we found that we had something like mass times acceleration along the streamline. So basically this operator, multiplied by mass, acting on this was the external force. Well, currently we've set this external force to be zero-- so when we are inside the box. Here there was an additional force that came from variations of pressure in the gas. And so we had a term here that was 1 over n d alpha of P alpha beta, and we needed to define a pressure tensor P alpha beta, which was nm times the expectation value of c alpha c beta. Finally, in collisions there is another quantity that is conserved, which is the kinetic energy. So here the third quantity could be the kinetic energy-- or actually, we chose the combination mc squared over 2, which is the additional kinetic energy on top of this. And it's very easy to check that if kinetic energy is conserved, this quantity is also conserved. We call the average of this quantity, in the way that we have defined above, epsilon. And then the hydrodynamic equation is that as you move along the streamlines-- so you have this derivative acting on this quantity epsilon-- what we are going to get is something like minus 1 over n d alpha of a new vector, h alpha. h alpha, basically, tells me how this quantity is transported. So all I need to do is to have something like mc squared over 2 transported along direction alpha. And then there was another term, which was P alpha beta-- the P alpha beta that we defined above-- times u alpha beta. U alpha beta was simply the derivative of this quantity, symmetrized. So the statement is that something like a gas-- or any other fluid, in fact-- we can describe through these quantities that are varying from one location to another location. There is something like a density, how dense it is at this location. How fast particles are streaming from one location to another location. And the energy content that is ultimately related to something like the temperature, how hot it is locally. So you have these equations. Solving these equations, presumably, is equivalent, in some sense, to solving the Boltzmann equation. The Boltzmann equation, we know, ultimately reaches an equilibrium, so we should be able to figure out how the system, such as the gas in this room, if disturbed, comes to equilibrium, if we follow the density, velocity, and temperature.
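Collected in one place, the three hydrodynamic equations just recalled read roughly as follows. Again this is my rendering of the spoken math, with D_t for the streamline derivative; the exact placement of the 1 over n factors follows the standard treatment and should be checked against the notes.

```latex
% Streamline derivative, flow velocity, and peculiar velocity:
D_t \equiv \frac{\partial}{\partial t} + u_\beta \partial_\beta, \qquad
u_\alpha \equiv \left\langle \frac{p_\alpha}{m} \right\rangle, \qquad
c_\alpha \equiv \frac{p_\alpha}{m} - u_\alpha
\\[6pt]
% Number, momentum, and energy conservation:
D_t\, n = -\, n\, \partial_\alpha u_\alpha
\\[4pt]
m\, D_t\, u_\alpha = F_\alpha - \frac{1}{n}\,\partial_\beta P_{\alpha\beta},
\qquad P_{\alpha\beta} \equiv n\, m \left\langle c_\alpha c_\beta \right\rangle
\\[4pt]
D_t\, \varepsilon = -\,\frac{1}{n}\,\partial_\alpha h_\alpha - \frac{1}{n}\, P_{\alpha\beta}\, u_{\alpha\beta},
\qquad \varepsilon \equiv \left\langle \frac{m c^2}{2} \right\rangle, \quad
h_\alpha \equiv n \left\langle \frac{m c^2}{2}\, c_\alpha \right\rangle, \quad
u_{\alpha\beta} \equiv \tfrac{1}{2}\left(\partial_\alpha u_\beta + \partial_\beta u_\alpha\right)
```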
Now, the problem with the equations as I have written is that they are not closed in terms of these three quantities, because I need to evaluate the pressure, I need to evaluate the heat transfer vector. And to calculate these quantities, I need to be able to evaluate these averages. In order to evaluate these averages, I need to know f. So how did we proceed? We said well, let's try to find approximate solutions for f. So the next task is to find maybe some f, which is a function of p, q, and t, and we can substitute over there. Now, the first thing that we said was, OK, maybe what I can do is I can look at the equation itself. Notice that this part of the equation is order of 1 over the time it takes for particles to move in the gas and find another particle to collide with, whereas the left-hand side is presumably something that is related to how far I go before I see some variation due to the external box. And we are really thinking about cases where the gas particles are not that dilute, in the sense that along the way to go from one side of the room to another side of the room, you encounter many, many collisions. So the term on the right-hand side is much larger. If that is the case, we said that maybe it's justifiable to just solve this equation on the right-hand side as a zeroth order. And to solve that, I really have to set, for example, the right-hand side that I'm integrating here, to 0. And I know how to do that. If log f involves collision conserved quantities, then ff before is the same as ff after. And the solution that I get by doing that has the form of exponential involving conserved quantities, which are the quantities that I have indicated over here-- let's say, such as c. And so log of that would be something that involves-- I can write as mc squared over 2 with some coefficient that, in principle, varies from location to another location. I want to integrate this and come up with the density, so I put the density out here, and I normalize the Gaussian. And so this is a reasonable solution. Indeed, this is the zeroth order solution for f-- I'll call that f0. So once you have the zeroth order solution, from that you can calculate these two quantities. For example, because the zeroth order solution is even in c, the heat vector will be 0, because it is symmetric in the different components. The pressure tensor will be proportional to delta alpha beta. OK? We started with that. Put those over here, and we found that we could get some results that were interesting. For example, we could see that the gas can have sound modes. We could calculate the speed of sound. But these sound modes were not damped. And there were other modes, such as sheer modes, that existed forever, confounding our expectation that these equations should eventually come to an equilibrium. So we said, OK, this was a good attempt. But what was not good enough to give us complete equilibrium, so let's try to find a better solution. So how did we find the better solution? We said that the better solution-- let's assume that the solution is like this, but is slightly changed by a correction. And the correction comes because of the effect of the left-hand side, which we had ignored so far. And since the left-hand side is smaller than the right-hand side by a factor involving tau x, presumably the correction will involve this tau x. OK. Good? We said that in order to see that as a correction, what I need to do is to essentially linearize this expression. 
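Written out, the zeroth order, local-equilibrium solution being described here, and the two consequences quoted for it, look like this (my transcription of the board work):

```latex
% Local Maxwell-Boltzmann form: log f^0 built only from collision-conserved quantities.
f^{0}(\vec p, \vec q, t)
  = \frac{n(\vec q, t)}{\left[2\pi m\, k_B T(\vec q, t)\right]^{3/2}}
    \exp\!\left[-\,\frac{m\, c^2}{2\, k_B T(\vec q, t)}\right],
\qquad \vec c \equiv \frac{\vec p}{m} - \vec u(\vec q, t)
\\[6pt]
% Because this weight is even and isotropic in c:
h_\alpha^{(0)} = 0, \qquad P_{\alpha\beta}^{(0)} = n\, k_B T\, \delta_{\alpha\beta}
```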
So what we did was we replaced this f's with f0's 1 plus g, 1 plus g, and so forth. The zeroth term, by construction, is 0. And so if we ignore terms that are order of g squared, we get something that is linear in g. OK? Now, it's still an integral operator, but we said let's approximate that integration and let's do a linearized one collision time approximation. What that approximation amounted to was that, independent of what this deviation is, we relax to the zeroth order solution over a time scale that is the same, and that time scale we'll call tau x. So essentially we wrote this as minus f0, essentially g, which is the difference between f and f0, divided by tau x. f0, this was g. No. I guess we don't have this. We wrote it in this fashion. It's just writing the way that I wrote before. We wrote this as g over tau x, where g is the correction that I have to write here. And you can see that the correction is obtained by multiplying minus tau x with l acting on f0 divided by f0, which is l acting on log of f0. So I would have here 1 minus tau x-- let's make this curly bracket-- and then in the bracket over here, I have to put the action of l on the log of f0. So I have to-- this is my f0. I take its log. So the log will have minus mc squared over 2 kt and the log of this combination, and I do d by dt plus P alpha over m acting on this log, and then a bunch of algebra will lead you to the following answer. Not surprisingly, you're going to get factors of mkT. So you will get m over kT. You get c alpha c beta minus delta alpha beta c squared over 3 acting on this rate of strain tensor that we have defined over here. And then there are derivatives that will act on temperature-- because temperature is allowed to vary from position to position-- so there will be a term that will involve the derivative of temperature. In fact, it will come in the form of the gradient of T over T, times c alpha, multiplying another combination which is mc squared over 2kT minus 5/2. Let me see if I got all of the factors of 1/2 correct. Yeah. OK. So this is the first order term. Presumably, there will be higher order corrections, but this is the improved solution to the Boltzmann equation beyond the zeroth order approximation. It's a solution that involves both sides of the equation now. We relied heavily on the right-hand side to calculate f0, and we used the left-hand side-- through this L acting on log of f0-- to get the correction that is order of tau x that is coming to this equation. So now, with this improved solution, we can go back and re-check some of the conclusions that we had before. So let's, for example, start by calculating this pressure tensor P alpha beta, and see how it was made different. So what I need to do is to calculate this average. How do I calculate that average? I essentially multiply this f, as I have indicated over there, by c alpha c beta, and then I integrate over all momenta. Essentially, I have to do integrations with this Gaussian weight. So when I do the average of c alpha c beta with this Gaussian weight, not surprisingly, I will get the delta alpha beta, and I will get from here kT over m. Multiplying by nm will give me nkT. So this is the diagonal form, where the diagonal elements are our familiar pressures. So that was the zeroth order term. Essentially, that's the 1 over here, multiplying c alpha c beta before I integrate. So that's the 1 in this bracket.
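Collecting the terms just listed, the improved solution has roughly the following form (a reconstruction from the description above; the bracketed combination is what gets averaged in the corrections computed next):
\[ f \simeq f^0 \left\{ 1 - \tau_x \left[ \frac{m}{k_B T}\left(c_\alpha c_\beta - \frac{\delta_{\alpha\beta}}{3}\, c^2\right) u_{\alpha\beta} + \left(\frac{m c^2}{2 k_B T} - \frac{5}{2}\right) \frac{c_\alpha\, \partial_\alpha T}{T} \right] \right\}. \]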
But there will be order of tau x corrections, because I will have to multiply two more factors of c with the c's that I have over here. Now, none of these terms are important, because these are, again, odd terms. So when I multiply two c's with three or one c the average will be zero. So all of the averages are really coming from these terms. Now, these terms involve four factors of c's. Right? There's the two c's that I put out here-- and actually, I really have to think of these as different indices, let's say mu and nu. Summation convention again assumed. And then when I multiply by c alpha c beta, I will have, essentially, four indices to play with-- c alpha c beta, c mu c nu. But it's all done with the Gaussian weight out here. And we showed and discussed how there was this nice Wick's theorem that enabled you to rapidly calculate these Gaussian weights with four factors of c, or more factors of c-- doesn't really matter. And so in principle you know how to do that, and I'll skip the corresponding algebra and write the answer. It is proportional to minus tau x. And what it gives you, once you do these calculations, is a factor of u alpha beta minus delta alpha beta over 3 u gamma gamma. And again, let me make sure I did not miss out anything. And apparently I missed out the factor of 2. So really, the only thing that's happened, once we went and included this correction, we added this term. But the nice thing about this term is that potentially, it has off-diagonal terms. This matrix initially was completely diagonal. The corrections that we have calculated are potentially off-diagonal coming from this term that corrected the original Gaussian weight. So where is that useful? Well, one of the problems that we discussed was that I can imagine a configuration of velocities, let's say close to a wall-- but it does not have to be a wall, but something where I have a profile of velocities which exist with components only along the x direction, but vary along the y direction. So this ux is different from this ux, because they correspond to different y's. So essentially, this is a pattern of shearing a gas, if you like. And the question is, well, hopefully, this will come to relax and eventually give us, let's say, uniform velocity, or zero velocity, even better-- if there's a wall, and the wall velocity is zero. So how does that happen? Well, the equation that I have to satisfy is this one that involves u. So I have m. Well, what do I have to have? I have du by dt plus something that acts on ux. ux only has variations along the y direction. So this term, if it was there, had to involve a derivative along the y direction multiplying uy, but it's not present, because there's no uy. On the right-hand side of the equation, I have to put minus 1 over n. Again, the only variations that I'm allowed to have are along the y direction. So I have, when I do the summation over alpha, the only alpha that contributes is y. And so I need the pressure tensor component with one index y, but I'm looking for velocities along the x direction, so the other index better be x. OK? So the time course of the velocity profile that I set up over here is determined by the y derivative of the yx component of the pressure tensor. Now, previously our problem was that we stopped at the zeroth order, and at the zeroth order, the pressure tensor was diagonal. It didn't have a yx component. So this profile would stay forever. But now we do have a yx component, and so what do I get? I will get here minus 1 over n d by dy.
The yx component will come from minus 2 tau x multiplying nkT and then multiplying-- well, this term, again, is diagonal. I can forget about that term. So it comes from the uxy. What is uxy? It is 1/2 of the x derivative of uy that doesn't exist, and the y derivative of ux that does exist. OK? So what we have, once we divide by m, is that the time derivative of ux is given by a bunch of coefficients. The n I can cancel if it does not vary. What I have is the 2's cancel. The n's cancel. I will have tau x kT over m. tau x kT over m, that's fine. And then the second derivative along the y direction of ux. So suddenly, I have a different equation. Rather than finding that the time derivative vanishes, I find that the time derivative of ux is proportional to the Laplacian. It's a diffusion equation. And we know the solution to the diffusion equation, how it looks qualitatively if I have a profile such as this. Because of the diffusion, eventually it will become more and more uniform in time. And the characteristic time over which it does so, if I assume that, let's say, in the y direction, I have a pattern that has some characteristic size lambda, then the characteristic relaxation time for diffusion will be proportional to lambda squared. There's a proportionality here. The constant of proportionality is simply the inverse of this diffusion coefficient. So it is m over kT tau x. Actually, we want to think about it further. kT over m is roughly the square of the thermal velocity of the particles. So lambda squared divided by v squared is roughly the square of the time that you would have ballistically traveled over this length scale of the variation. That, divided by the characteristic collision time, tells you the time scale over which this kind of relaxation occurs. Yes? AUDIENCE: So if ux depends linearly on y? You would still get 0 on the-- why can't-- PROFESSOR: OK. So what you are setting up is a variation where there is some kind of a shear velocity that exists forever. So indeed, this kind of pattern will persist, unless you say that there's actually another wall at the other end. So then you will have some kind of a pattern that you would relax. So unless you're willing to send this all the way to infinity, it will eventually relax. Yes? AUDIENCE: Can you just say again the last term on the second line, the 1/2 term. Where did that come from? Can you just explain that? PROFESSOR: This term? AUDIENCE: Yeah. The last half of it. PROFESSOR: The last half of it. So what did we have here? So what I have is that over here, in calculating the pressure, when I'm looking at the xy component, I better find some element here that is off diagonal. What's the off diagonal element here? It is u xy. What is u xy? u xy is this symmetrized derivative, calculated for x and y. So it is 1/2 of dx uy, which is what I don't have, because my uy is 0. And the other half of it, from the symmetrization, is 1/2 of dy ux. Yes? AUDIENCE: [INAUDIBLE] derivative [INAUDIBLE] for ux. You're writing out the second term, uy dy. Also, shouldn't there be a term ux dx? PROFESSOR: Yes. Yeah. So this is a-- yes. There is a term that is ux dx, but there is no variation along the x direction. I said that I set up a configuration where the only non-zero derivatives are along the y direction. But in general, yes. You are right. This is like a divergence. It has three terms, but the way that I set it up, only one term is non-zero. AUDIENCE: Also, if you have [INAUDIBLE] one layer moving fast at some point, the other layer moving slower than that.
And you potentially can create some kind of curl? But as far as I understand, this would be an even higher order effect? Like turbulence. PROFESSOR: You said two things. One of them was you started saying viscosity. And indeed, what we've calculated here, this thing is the coefficient of viscosity. So this is really the viscosity of the material. So actually, once I include this term, I have the full Navier-Stokes equations with viscosity. So all of the vortices, et cetera, should also be present and discussed, however you want to do it with the Navier-Stokes equations, within these equations. AUDIENCE: So components of speed which are not just x components, but other components which are initially zero, will change because of this, right? PROFESSOR: Yes, that's right. That's right. AUDIENCE: You just haven't introduced them in the equation? PROFESSOR: Yes. So this is-- I'm looking at the initial profile, in principle. I think here I have set up something that because of symmetry will always maintain this equation. But if you put in a little bit of a bump, if you make some perturbation, you will certainly generate other components of velocity. OK? And so this resolved one of the modes that was not relaxing. There was another mode that we were looking at where I set up a situation where temperature and density were changing across the system, but their product-- that is, the pressure-- was uniform. So you had a system that was always there in the zeroth order, despite having different temperatures at two different points. Well, that was partly because the heat transport vector was zero. Now, if I want to calculate this with this more complicated equation, well, what I need-- the problem before with the zeroth order was that I had three factors of c, and that was odd. But now, I have a term in the equation that is also odd. So from here, I will get a term, and I will in fact find eventually that the flow of heat is proportional to the gradient of temperature. And one can compute this coefficient, again, in terms of mass, density, et cetera, just like we calculated this coefficient over here. This will give you relaxation. We can also look at the sound modes including this, and you find that the wave equation that I had before for the sound modes will also get a second derivative term, and that will lead to damping of the modes of sound. So everything at this level, now we have some way of seeing how the gas will eventually come to equilibrium. And given some knowledge of rough parameters of the gas, like-- and most importantly-- what's the time between collisions, we can compute the typical relaxation time, and the relaxation manner of the gas. So we are now going to change directions, and forget about time dependence. So if you have questions about this section, now may be a good time. Yes? AUDIENCE: [INAUDIBLE] what is lambda? PROFESSOR: OK. So I assume that what I have to do is to solve the equation for some initial condition. Let's imagine that that initial condition, let's say, is a periodic pattern of some wavelength lambda. Or it could be any other shape that has some characteristic dimension. The important thing is that the diffusion constant has units of length squared over time. So eventually, you'll find that all times are proportional to some length squared times the inverse of the diffusion constant. And so you have to look at your initial system that you want to relax, identify the longest length scale that is involved, and then your relaxation time would be roughly of that order. OK.
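As a quick numerical illustration of the shear relaxation just discussed, here is a minimal sketch (not from the lecture; all parameter values are arbitrary, and nu stands in for the combination tau x times kT over m). It evolves the diffusion equation for a sinusoidal shear profile and checks the expected exp(-nu k^2 t) decay.

import numpy as np

# Illustrative parameters (arbitrary units); nu plays the role of tau_x * kT / m.
nu = 0.1                 # kinematic viscosity / diffusion coefficient
L = 1.0                  # size of the box in the y direction
N = 200                  # grid points
dy = L / N
dt = 0.2 * dy**2 / nu    # explicit time step, chosen well inside the stability limit
k = 2 * np.pi / L        # wavenumber of the initial shear pattern

y = np.arange(N) * dy
ux = np.sin(k * y)       # initial shear profile u_x(y)

t_final = 0.5 / (nu * k**2)        # evolve for half a relaxation time 1/(nu k^2)
steps = int(t_final / dt)
for _ in range(steps):
    # explicit finite-difference Laplacian with periodic boundaries
    lap = (np.roll(ux, 1) - 2 * ux + np.roll(ux, -1)) / dy**2
    ux = ux + dt * nu * lap

decay_numeric = np.max(np.abs(ux))
decay_exact = np.exp(-nu * k**2 * steps * dt)
print(decay_numeric, decay_exact)   # the two agree to well under a percent

The measured amplitude matches the diffusive decay, which is the relaxation over a time of order lambda squared divided by the diffusion coefficient described above.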
So now we get to the fourth section of our course, that eventually has to do with statistical mechanics. And at the very, very first lecture, I wrote the definition for you that I will write again, that statistical mechanics is a probabilistic approach to equilibrium-- macroscopic-- properties of large numbers of degrees of freedom. So what we did in thermodynamics was to identify what equilibrium macroscopic properties are. They are things such as identifying the energy, volume, number of particles of the gas. There could be other things, such as temperature, pressure, number, or other collections of variables that are independently sufficient to describe a macrostate. And what we're going to do is to indicate that macroscopic set of parameters that thermodynamically characterize the equilibrium system by big M. OK? Clearly, for a probabilistic approach, you want to know something about the probability. And we saw that probabilities involving large numbers-- and actually various things involving large numbers-- had some simplified character that we are going to exploit. And lastly, in the last section, we have been thinking about microscopic description. So these large numbers of degrees of freedom we said identify some point, let's say, for particles in a six-n dimensional phase space. So this would for gas particles be this collection p and q. And that these quantities we know are in fact subject to dynamics that is governed by some Hamiltonian. And we saw that if we sort of look at the entirety of the probability in this six n dimensional phase space, that it is subject to Liouville's equation that said that dp by dt is a Poisson bracket of H and p. And if the only thing that we take from equilibrium is that things should not change as a function of time, then requiring this probability in phase space to be independent of time would then require us to have a p which is a function of H, which is defined in p and q and potentially other conserved quantities. So what we are going to do in statistical mechanics is to forget about how things eventually reach equilibrium. We spent a lot of time and energy thinking about how a gas reaches equilibrium. Having established what it requires to devise that solution in one particular case, and one of few cases where you can actually get far, we're going to ignore that. We say that somehow my system reached this state that is time independent and is equilibrium, and therefore the probability should somehow have this character. And it depends, however, on what choice I make for the macrostate. So what I need here-- this was a probability of a microstate. So what I want to do is to have a statement about the probability of a microstate given some specification of the macrostate that I'm interested in. So that's the task. And we're going to do that first in an ensemble-- and I'll tell you again what ensemble means shortly. That's called microcanonical. And if you recall, when we were constructing our approach to thermodynamics, one of the first things that we did was we said, there's a whole bunch of things that we don't know, so let's imagine that our box is as simple as possible. So the system that we are looking at is completely isolated from the rest of the universe. It's a box. There's lots of things in the box. But the box has no contact with the rest of the universe. So that was the case where essentially there was no heat, no work that was going into the system and was being exchanged, and so clearly this system has a constant energy.
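To record the stationarity condition invoked a moment ago, which is the starting point for everything that follows (a compact restatement, with rho denoting the phase-space probability density): if the density is to be time independent, the Liouville equation written above requires
\[ \frac{\partial \rho}{\partial t} = \{H, \rho\} = 0 \quad \Longrightarrow \quad \rho_{\mathrm{eq}}(\vec p, \vec q) = \rho\big(H(\vec p, \vec q),\ \text{other conserved quantities}\big), \]
so an equilibrium probability over microstates can only depend on the Hamiltonian and on any other conserved quantities, which is what licenses assigning probabilities that depend only on the energy of the isolated box.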
And so you can certainly prescribe a particular energy content to whatever is in the box. And that's the chief identity of the microcanonical ensemble. It's basically a collection of boxes that represent the same equilibrium. So if there is essentially a gas, the volume is fixed, so that there would be no work that will be done. The number of particles is fixed, so that there is no chemical work that is being done. So essentially, in general, all of the quantities that we identified with displacements are held fixed, as well as the energy in this ensemble. So the microcanonical ensemble would be essentially E, x, and N would be the quantities that parametrize the equilibrium state. Of course, there's a whole collection of different microstates that would correspond to the same macrostate here. So presumably, this bunch of particles that are inside this explore a huge multidimensional microstate. And what I want to do is to assign a probability that, given that I have fixed E, x, and n, that a particular microstate occurs. OK? How do I do that? Well, I say that OK, if the energy of that microstate that I can calculate is not equal to the energy that I know I have in the box, then that's not one of the microstates that should be allowed in the box. But presumably there's a whole bunch of micro states whose energy is compatible with the energy that I put in the box. And then I say, OK, if there is no other conserved quantity-- and let's assume that there isn't-- I have no way a priori of distinguishing between them. So they are just like the faces of the dice, and I say they're all equally likely. For the dice, I would give 1/6 here. There's presumably some kind of a normalization that depends on E, x, and n, that I have to put in. And again, note that this is kind of like a delta function that I have. It's like a delta of H minus E, and it's therefore consistent with this Liouville equation. It's one of these functions that is related through H to the probability on the microstate. Now, this is an assumption. It is a way of assigning probabilities. It's called assumption of equal a priori probabilities. It's like the subjective assignment of probabilities and like the faces of the dice, it's essentially the best that you can do without any other information. Now, the statement is that once I have made this assumption, I can derive the three laws of thermodynamics. No, sorry. I can derive two of the three laws of thermodynamics. Actually, three of the four laws of thermodynamics, since we had the zeroth law. So let's proceed. OK? So we want to have a proof of thermodynamics. So the zeroth law had something to do with putting two systems in contact, and when they were in equilibrium, there was some empirical temperature from one that was the same as what you had for the other one. So basically, let's pick our two systems and put a wall between them that allows the exchange of energy. And so this is my system one, and they have an spontaneous energy bond. This is the part that is two, it has energy E2. So I start with an initial state where when I look at E2 and E2, I have some initial value of E1, 0, let's say, and E2, 0. So my initial state is here, in these two. Now, as I proceed in time, because the two systems can exchange energy, E1 and E2 can change. But certainly, what I have is that E1 plus E2 is something E total that is E1,0 plus E2,0. Which means that I'm always exploring the line that corresponds to E1 plus E2 is a constant, which it runs by 45-degree along this space. 
So once I remove the constraint that E1 is fixed and E2 is fixed, they can exchange. They explore a whole bunch of other states that are available to them. And I would say that the probability of the microstates is the same, up to some normalization. Now the entirety, 1 plus 2, is a microcanonical ensemble but with energy E total. So there is a corresponding omega that is associated with the combined system. And to obtain that, all I need to do is to sum or integrate over the energy, let's say, that I have in the first one. The number of states that I would have, or the volume of phase space that I would have if I was at E1, and then simultaneously multiplying by how many states the second part can have at the energy that corresponds to E total minus E1. So what I need to do is to essentially multiply the number of states that I would encounter going along this axis, and the number of states that I would encounter going along the other axis. So let's try to sort of indicate those things with some kind of a color density. So let's say that the density is kind of low here, it comes kind of high here, and then goes low here. If I move along this axis, let's say that along this axis, I maybe become high here, and stay high, and then become low later. Some kind of thing. So all I need to do is to multiply these two colors and generate the color along this-- there is this direction, which I will plot going, hopefully, coming out of the board. And maybe that product looks something like this. Stating that somewhere there is a most probable state, and I can indicate the most probable state by E1 star and E2 star, let's say. So you may say, OK, it actually could be something, who says it should have one maximum. It could have multiple maxima, or things like that. You could certainly allow all kinds of things. Now, my claim is that when I go and rely on this particular limit, I can state that if I explore all of these states, I will find my system in the vicinity of these energies with a probability that, in the n goes to infinity limit, becomes 1. And that kind of relies on the fact that each one of these quantities is really an exponentially large quantity. Before I do that, I forgot to do something that I wanted to do over here. I have promised that as we go through the course, at each stage we will define for each section its own definition of entropy. Well, almost, but not quite that. Once I have a probability, I had told you how to define an entropy associated with a probability. So here, I can say that when I have a probability p, I can identify the average of minus log p, the sum of minus p log p, to be the entropy of that probability. Kind of linked to what we had for mixing entropy, except that in thermodynamics, entropy had some particular units. It was related to heat or temperature. So we multiplied by a quantity kb that has the right units of energy divided by degrees Kelvin. Now, for the choice of the probability that we have, it is kind of like a step function in energy. The probability is either 0 or 1 over omega. So when you're at 0, this p log p will give you a 0. When you're over here, minus p log p, summed, will give you log of omega. So this is going to give you kb log of omega. So we can identify in the microcanonical ensemble, once we've stated what E, x, and n are, what the analog of the six that we have for the throwing of the dice, what's the number of microstates that are compatible with the energy.
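In symbols, the assignment and the entropy just described are, as a compact restatement,
\[ p_{E,\mathbf{x},N}(\mu) = \begin{cases} \dfrac{1}{\Omega(E,\mathbf{x},N)} & \text{if } \mathcal{H}(\mu) = E \\[4pt] 0 & \text{otherwise,} \end{cases} \qquad S = -k_B \langle \ln p \rangle = k_B \ln \Omega(E,\mathbf{x},N), \]
and for the two subsystems exchanging energy,
\[ \Omega(E_{\rm tot}) = \int dE_1\, \Omega_1(E_1)\, \Omega_2(E_{\rm tot} - E_1) = \int dE_1\, \exp\!\left[ \frac{S_1(E_1) + S_2(E_{\rm tot} - E_1)}{k_B} \right], \]
which is the exponential form used in the saddle point argument that follows.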
We will have to do a little bit of massaging that to understand what that means in the continuum limit. We'll fix that. But once we know that number, essentially the log of that number up to a factor would give something that would be the entropy of that probability that eventually we're going to identify with the thermodynamic entropy that corresponds to this system. But having done that definition, I can rewrite this quantity as E1 e to the 1 over kb S1 plus 1 over kb S2. This one is evaluated at E1. This one is evaluated at E2, which is E total minus E1. Now, the statement that we are going to gradually build upon is that these omegas are these types of quantities that I mentioned that depend exponentially on the number of particles. So let's say, if you just think about volume, one particle then, can be anywhere in this. So that's a factor of v. Two particles, v squared. Three particles, v cubed. n particles, v to the n. So these omegas have buried in them an exponential dependence on n. When you take the log, these are quantities that are extensing. They're proportionate to this number n that becomes very large. So this is one of those examples where to calculate this omega in total, I have to evaluate an integral where the quantities are exponentially large. And we saw that when that happens, I could replace this integral essentially, with its largest value. So I would have 1 over kb S1 of E1 star plus S2 of E2 star. Now, how do I identify where E1 star and E2 star are? Well, given that I scale along this axis or along that axis, essentially I want to find locations where the exponent is the largest. So how do I find those locations? I essentially set the derivative to 0. So if I take the derivative of this with respect to E1, what do I get? I get dS1 by dE1. And from here, I get dS2 with respect to an argument that is in fact has a minus E1, so I would have minus dS2 with respect to its own energy. With respect to its own energy argument, but evaluated with an energy argument that goes opposite way with E1. And all of these are calculated at conditions where the corresponding x's and n's are fixed. And this has to be 0, which means that at the maxima, essentially, I would have this condition. So again, because of the exponential character, there could be multiple of these maxima. But if one of them is slightly larger than the other in absolute terms, in terms of the intensive quantities, once I multiply by these n's, it will be exponentially larger than the others. We sort of discussed that when we were doing the saddle point approximation, how essentially the best maximum is exponentially larger than all the others. And so in that sense, it is exponentially much more likely that you would be here as opposed to here and anywhere else. So the statement, again, is that just a matter of probabilities. I don't say what's the dynamics by which the energies can explore these two axes. But I imagine like shuffling cards, the red and the black cards have been mixed sufficiently, and then you ask a question about the typical configuration. And in typical configurations, you don't expect a run of ten black cards, or whatever, because they're exponentially unlikely. So this is the same statement, that once you allow this system to equilibrate its energy, after a while you look at it, and with probability 1 you will find it at a location where these derivatives are the same. 
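The maximization just performed, written out:
\[ \left. \frac{\partial S_1}{\partial E_1} \right|_{\mathbf{x}_1, N_1,\ E_1^*} = \left. \frac{\partial S_2}{\partial E_2} \right|_{\mathbf{x}_2, N_2,\ E_2^*}, \]
which is the condition that the next part of the lecture identifies with equality of 1 over T for the two systems in equilibrium.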
Now, each one of these derivatives is completely something that pertains to its own system, so one of them could be a gas. The other could be a spring. It could be anything. And these derivatives could be very different quantities. But this equality would hold. And that is what we have for the zeroth law. That when systems come into equilibrium, there is a function of parameters of one that has to be the same as the function of the parameters of the other, which we call the empirical temperature. So in principal, we could define any function of temperature, and in practice, for consistently, that function is one over the temperature. So the zeroth law established that exists this empirical function. This choice of 1 over T is so that we are aligned with everything else that we have done so far, as we will see shortly. OK? The next thing is the first law, which had something to do with the change in the energy of the system. When we go from one state to another state had to be made up by a combination of heat and work. So for this, let's expand our system to allow some kind of work. So let's imagine that I have something-- let's say if it is a gas-- and the quantity that would change if there's work done on the gas is the volume. So let's imagine that there is a piston that can slide. And this piston exerts for the gas a pressure, or in general, whatever the conjugate variable is to the displacement that's we now allow to change. So there's potentially a change in the displacement. So what we are doing is there's work that is done that corresponds to j delta x. So what happens if this j delta x amount of work is done on the system? Then the system goes from one configuration that is characterized by x to another configuration that is characterized by x plus delta x. And let's see what the change in entropy is when that happens. Change in entropy, or the log of the number of states that we have defined. And so this is essentially the change starting from E and x to going to the case where x changed by an amount dx. But through this process I did work, and so the amount of energy that is inside the system increases by an amount that is j delta x. So if all of these quantities are infinitesimal, I have two arguments of S now have changed infinitesimally, and I can make the corresponding expansion in derivatives. There is a dS with respect to dE at constant x. The corresponding change in E is j delta x. But then there's also a dS by dx at constant E with the same delta x, so I factored out the delta x between the two of them. Now, dS by dE at constant x is related to the empirical temperature, which we now set to be 1 over T. Now, the claim is that if you have a situation such as this, where you have a state that is sitting in equilibrium-- let's say a gas with a piston-- then by definition of equilibrium, the system does not spontaneously change its volume. But it will change its volume because the number of states that is availability to it increases, or S increases. So in order to make sure that you don't go to a state that is more probable, because there are more possibilities, there are more microstates, I better make sure that this first derivative is 0. Otherwise, depending on whether this is plus or minus, I could make a corresponding change in delta x that would increase delta S. So what happens here if I require this to be 0 is that I can identify now that the derivative of this quantity dS by dx at constant E has to be minus j over T. 
Currently, I have only these two variables, and quite generically, I can say that dS is dS by dE at constant x dE plus dS by dx at constant E dx. And now I have identified these two derivatives. dS by dE is 1 over T, so I have dE over T. dS by dx is minus j over T, so I have minus j dx over T. And I can rearrange this, and I see that dE is T dS plus j dx. Now, the j dx I recognize as before, it's the mechanical work. So I have identified that generically, when you make a transformation, in addition to mechanical work, there's a component that changes the energy that is the one that we can identify with the heat. Yes? AUDIENCE: Could you explain why you set that S to 0? PROFESSOR: Why did I set delta S to 0? So I have a box, and this box has a certain volume, and there's a bunch of particles here, but the piston that is holding this is allowed to slide to go up and down. OK? Now, I can ask what happens if the volume goes up and down. If the volume goes up and down, how many states are available? So basically, the statement has been that for each configuration, there is a number of states that is available. And if I were to change that configuration, the number of states will change. And hence the log of it, which is the entropy, will change. So how much does it change? Well, let's see what arguments changed. So certainly, the volume changed. So x went to x plus dx. And because I did some amount of work, pdv, the amount of energy that was in the box also changed. OK? So after this transformation with this thing going up or down, what is the new logarithm of the number of states? How much has it changed? And how much of the change is given by this? Now, suppose that I make this change, and I find that suddenly I have a thousand more states available to me. It's a thousand times more likely that I will accept this change. So in order for me to be penalized-- or actually, not penalized, or not gain-- because I make this transformation, this change better be 0. If it is not 0, if this quantity is, let's say, positive, then I will choose a delta v that is positive, and delta S will become positive. If this quantity happens to be negative, then I will choose a delta v that is negative, and then delta S will be positive again. So the statement that this thing originally was sitting there by itself in equilibrium and did not spontaneously go up or down is this statement this derivative 0. AUDIENCE: That's only 0 when the system is in equilibrium? PROFESSOR: Yes. Yes. So indeed, this is then a relationship that involves parameters of the system in equilibrium. Yeah? So there is thermodynamically, we said that once I specify what my energy and volume and number of particles are in equilibrium, there is a particular S. And what I want to know is if I go from one state to another state in equilibrium, what is the change in dS? That's what this statement is. Then I can rearrange it and see that when I go from one equilibrium state to another equilibrium state, I have to change internal energy, which I can do either by doing work or by doing heat. OK? Now, the second law is actually obvious. I have stated that I start from this configuration and go to that configuration simply because of probability. The probabilities are inverse of this omega, if you like. So I start with some location-- like the pack of cards, all of the black on one side, all of the red on the other side-- and I do some dynamics, and I will end up with some other state. I will, because why? 
Because that state has a much larger volume of possibilities. There is a single state that we identify as all-black and all-red, and there's a myriad of states that we say they're all randomly mixed, and so we're much more likely to find ourselves in that subset of states. So I have certainly stated that S1 evaluated at E1 star, plus S2 evaluated at E2 star, essentially the peak of that object, is much larger than S1 evaluated at E1,0 plus S2 evaluated at E2,0. And more specifically, if we sort of follow the course of the system as it starts from here, you will find that it will basically go in the direction always such that the derivatives that are related to temperature are such that the energy will flow from the hotter body to the colder body, consistent with what we expect from thermodynamics. The one law of thermodynamics that I cannot prove from what I have given you so far is the third law. There is no reason why the entropy should go to 0 as you go to 0 temperature within this perspective. So it's good to look at our canonical example, which is the ideal gas. Again, for the ideal gas, if I am in a microcanonical ensemble, it means that I have told you what the energy of the box is, what the volume of the box is, and how many particles are in it. So clearly the microstate that corresponds to that is the collection of the 6N coordinates and momenta of the particles in the box. And the energy of the system is made up by the sum of the energies of all of the particles, and basically, ideal gas means that the energy that we write is the sum of the contributions that you would have from individual particles. So individual particles have a kinetic energy and some potential energy, and since we are stating that we have a box of volume v, this u essentially represents this box of volume v. It's a potential that is 0 inside this volume and infinity outside of it. Fine. So you ask what's the probability of some particular microstate, given that I specified E, v, and n. I would say, well, OK. This is, by the construction that we had, 0 or 1 over some omega, depending on whether the energy is right. And since for the box the energy is made up of the kinetic energy, it is 0 if sum over i of p i squared over 2m is not equal to E, or if some particle q is outside the box. And it is 1 over omega if sum over i of p i squared over 2m is equal to E and the q i's are all inside the box. So a note about normalization. I kind of skipped on this over the last one. p is a probability in this 6N dimensional phase space. So the normalization is that the integral over all of the p i's and qi's should be equal to 1. And so this omega, more precisely, when we are not talking about discrete numbers, is the quantity that I have to put so that when I integrate this over phase space, I will get 1. So basically, if I write this in the form that I have written, you can see that omega is obtained by integrating over p i q i that correspond to these accessible states. It's kind of like a delta function. Basically, there's huge portions of phase space that are 0 probability, and then there's a surface that has the probability that is 1 over omega and it is the area of that surface that I have to ensure is appearing here, so that when I integrate 1 over that area over the whole surface, I will get one. Now, the part that corresponds to the q coordinates is actually very simple, because the q coordinates have to be inside the box. Each one of them has a volume v, so the coordinate part of the integral gives me v to the N. OK?
Now, the momentum part, the momenta are constrained by something like this. I can rewrite that as sum over i p i squared equals to 2mE. And if I regard this 2mE as something like r squared, like a radius squared, you can see that this is like the equation that you would have for the generalization of the sphere in 3N dimensions. Because if I had just x squared, x squared plus y squared is r squared is a circle, x squared plus y squared plus z squared is r squared is a sphere, so this is the hypersphere in 3N dimensions. So when I do the integrations over p, I have to integrate over the surface of a 3N dimensional hypersphere of radius square root of 2mE. OK? And there is a simple formula for that. This surface area, in general, is going to be proportional to r raised to the power of dimension minus 1. And there's the generalization of 2 pi r or 4 pi r squared that you would have in two or three dimensions, which we will discuss next time. It is 2 pi to the power of 3N over 2, divided by 3N over 2 minus 1 factorial. So we have the formula for this omega. We will re-derive it and discuss it next time. |
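As a small numerical check of the surface-area formula just quoted, here is a sketch (not from the lecture). It writes the coefficient with the gamma function, Gamma(d/2), which equals (d/2 - 1) factorial for even dimensions, and compares against the familiar circle and sphere results; the example values of N, m, and E at the end are arbitrary.

import math

def hypersphere_area(d, R=1.0):
    # Surface area of a (d-1)-sphere of radius R embedded in d dimensions:
    # S_d(R) = 2 * pi**(d/2) / Gamma(d/2) * R**(d-1)
    return 2 * math.pi ** (d / 2) / math.gamma(d / 2) * R ** (d - 1)

# Checks against the familiar low-dimensional results:
print(hypersphere_area(2, 1.0), 2 * math.pi)    # circle circumference, 2*pi*R
print(hypersphere_area(3, 1.0), 4 * math.pi)    # sphere surface, 4*pi*R^2

# The momentum-space factor in omega for the ideal gas is then the area of a
# 3N-dimensional hypersphere of radius sqrt(2*m*E); for example, with N = 2:
N, m, E = 2, 1.0, 1.0
print(hypersphere_area(3 * N, math.sqrt(2 * m * E)))

Together with the v to the N from the coordinates, this gives the omega whose logarithm is the entropy, to be re-derived next time.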
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 3_Thermodynamics_Part_3.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So what we've been trying to do is to develop a description of properties of some system, could be something like a gas, that includes both mechanical as well as thermal properties. So the first thing that we decided to do was to wait until this system has properties that do not change. Further, with the observation time that we are assigning to this system, and then if it is a gas we can say, OK the gas has some pressure and volume. In general, we said that we can characterize the system through a set of displacements and the conjugate forces which are used when we describe systems that are mechanical. Work on them is done. We can say that the work is Jdx, for example. OK? Then we decided that this mechanical description was not enough because systems could exchange heat and gradually started to build a more general prescription that included the exchange of heat. We saw that, for example, from the zeroth law, we could define temperature as a function of these quantities. And that we could also, from the first law define energy. And in particular that the change in energy, which is a function of where you are in equilibrium. So once you say that you have a system in equilibrium you can say what the values of the x's and J's are. You know somehow what the value of temperature is, what the value of energy is. And if you were to make a change in energy by some external means the amount of change would either come from mechanical work that you did on the system, or the heat that you supplied to the system. Now this expression is the one that you would like to elevate somehow and be able to compute energy as a function of state. Kind of keeping in mind what we would do if there was no heat around. And we could do things mechanically, then mechanically we could in principle build up the energy function by changing displacements lightly and calculating the amount of work by this formula. And basically we emphasized that you would be able to use a formula such as this if you were to make things sufficiently slowly so that at each stage in the process you can calculate what Ji is, which means that at each stage in the process you should be in some kind of an equilibrium state. OK? If that was the case, if you didn't have dQ then this would be your energy function and actually once you had the energy function you could, for example, calculate J as a derivative of energy with respect to x. So then you also start thinking about how many independent degrees of freedom do I have. Do I really need all of the set of x's and J's to describe my system in equilibrium. If additionally once I know the e, I can derive all of the J's. So we have to also come eventually to grips with how many independent degrees of freedom will describe our system. Now the first step towards completing this expression was to find what dQ is. So we needed another law of thermodynamics beyond zeroth law and first law that somehow related heat and temperature to each other. As we expect that somehow this expression will leave you temperature. And what we used was some version of the second law. 
That was there how heat would flow from, say, a hot air body to a cold air body. And Clausius's theorem would say that you have only heat flowing in one particular direction. Well, we sort of introduced the idea of engines, which are machines that are used to do work by taking heat from the heart air body to the cold air body. And we could define an efficiency, which is work divided by heat input, which since work is the difference between these two we could write as 1 minus Qc over Qh. And then we introduced this special class of engines, which were these Carnot engines. And the idea of this class was that you could run them forward and backward. They were kind of reversible. The were kind of a analogue of the frictionless type of processes that you would use to construct mechanical energy. And we found that these engines were the most efficient. And so the efficiency of these engines was marked only by the two temperatures. So we had the functional form potentially of efficiency as a function of the two temperatures involved. And what we also showed was that the efficiency of any random engine was going to be less than the efficiency of the Carnot engine that is marked by these two temperatures. And by putting some Carnot engines in series, we saw that the good way to write this was something of this form. I can in principle remove the one from the two sides of this equation and rearrange it a little bit into the form Qh over Th plus minus Qc over Tc because of this inequality writing it as less than or equal to 0. OK? What does that tell us? As it stands, not much. But then I promised you at the end of last lecture then this is an example of a much more powerful result, which is the Clausius Theorem. So let's start with writing what that theorem is and how it relates to a simple expression. And the theory is that for any cyclic process I can, cyclic, what does it mean? Is that in the set of coordinates in principle that is used to describe the system, I will start with the position that is equilibrium, so I can put that point in this state, but then I make a transformation. And ultimately return to that point. So the cycle is the return to that point. So I carry out a set of steps. And I haven't indicated the set of steps through a connected continuous curve in this multi-dimensional coordinate space because I don't want to restrict myself to processes that are even in equilibrium. So I may take a gas that I have over there, expand it rapidly, close it rapidly as long as I wait when I come back to the same point that I have reached my equilibrium again. I have done a cyclic process. Now at each stage of this cyclic process presumably the system takes in a certain amount of heat. So let's say that there is a dQ that goes at this stage of the process. So I use s to indicate, say, stage 0, one, two, three, four all the way coming back. So this is just a numbering process. The statement of Clausius's theorem is that for any cyclic process if I go out around the cycle, add up these elements of heat that are delivered to this system and I allow the possibility therefore that some of the times these elements dQ will be negative just as I did over here by writing this as minus Qc so that what the dQ is what has gone into whatever this black box is. So Qh went into this black box. Qc went out, which means that minus Qc went in so I arranged it in this fashion. You can see that the elements of heat are divided by some temperature. I generalize that expression over here by writing it T of S here. 
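The two statements just written, in symbols, with dQ counted as heat delivered to the system and the Carnot efficiency expressed through the two temperatures as established earlier in the course:
\[ \eta = \frac{W}{Q_h} = 1 - \frac{Q_c}{Q_h} \ \le\ \eta_{\rm Carnot} = 1 - \frac{T_c}{T_h} \quad \Longrightarrow \quad \frac{Q_h}{T_h} + \frac{-Q_c}{T_c} \le 0, \]
and its generalization to an arbitrary cycle, which is the Clausius theorem about to be stated,
\[ \oint \frac{dQ(s)}{T(s)} \le 0 . \]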
And the statement of Clausius's theorem is that this quantity has the same sign constraint as here where I told you that dQ is heat delivered to system at temperature T sub s. Now this particular thing requires a certain amount of thinking about. Because I told you that your system is not in equilibrium. So what is this T sub S if you have a system that is not in equilibrium? You can't define its temperature. However, you can imagine that whatever machine was delivering it had sump particular temperature. And so that's the temperature of the device or whatever is delivering this heat to the system. And you would be justified to say, well, why is that even useful? Because certainly if I were to have exactly the same cycle, but deliver the heat elements through a different process this Ts would be different. So this is really also a function of the method through which I want to carry out this cycle. So given that it doesn't sound like a particularly useful theorem. It seems very arbitrary. So let's first prove it. The proof is simple. And then see whether it's useful or not. Any questions at this point? OK. So how do we go proving it? We are going to use these Carnot engines. So what I will imagine is that I have some big vat that is at some, let's say, for purposes of, use a high temperature, a vat of hot water. And I use this vat as the process that provides heat to this entity. Exactly what do I mean by that? I will use a Carnot engine to take heat. Let's call it Qh, although its sign is not that particularly important. And convert this into the heat element dQs that is delivered here. So since I'm using s, an infinitesimal element here, let me call this dQ. And let's call it dQ0 because I'm picking it up from the temperature of T0. So I have as a result of this process a certain amount of work. All of this corresponds to the s-th element through this particular cycle. So I divide my cycle to lots of elements. Each element takes in this amount of heat and I use a Carnot engine to deliver that heat and always taking the Carnot engine from the input from T0, but what it delivers is where this randomness in temperature T of S comes about. Why did I use a Carnot engine? Because I said that sometimes this cycle is going to take in heat. Sometimes it is going to give out heat. So I want to be able to run this both forward and backwards. Yes? AUDIENCE: Are you delivering the work to the cycle too? PROFESSOR: No. AUDIENCE: OK. PROFESSOR: The work goes whenever it goes. AUDIENCE: OK. PROFESSOR: OK? And indeed that's the next step of the argument, which is what happened to the work? OK? So we do this. At the end of this story I am back to where I started. AUDIENCE: Maybe I'm-- PROFESSOR: Yes? AUDIENCE: Maybe I missed something basic, but I don't understand why you can't just define the function dQ not of s and ignore the Carnot engine. Just deliver it the key. What was wrong with just the blue arrow? PROFESSOR: OK. Why can't I just take heat from here to here? Because in principle where I put it is at a different temperature when I eventually go to equilibrium. And then I would run into problems of second law, as to the ability to transfer heat from a hotter to colder engine. So if you like this intermediate stage, allows me ultimately two vary this temperature Ts. What you say is correct as long as the T of S is uniformly T0. It kind of becomes useless. OK? All right. So we enclose our cycle, the Carnot engine, everything into a box. And we see what this box is doing. So there is the box. 
The box takes in elements of heat, dQ from the reservoir. And does elements of work, dW at different stages. Now once the cycle has been completed, the net result is that I have extracted an amount of heat, so this is the net, extracted heat is the integral over the cycle of dQ0 of s. And converted it to work. Oh, one other step. Once I'm using the Carnot engine, and I think I emphasized this, there is a relationship between heat and temperature. It is this proportionally-- Qh is proportional to h for the Carnot engine. Qc is proportional to Tc. So there is a proportionality between the heat that I take and the heat that I deliver that is related to the temperatures. So dQ0 is in fact T0 dQ that goes into the engine divided by the temperature at which it is delivered to the engine. OK? Now I know also something about this because the second law of thermodynamics says that no process is possible whose soul result is the conversion of heat to work. I can do the opposite. I can convert work to heat, but that means that the integral here has to be negative. OK? Now T0 is just a positive constant. So once I divide through by that I have gotten the proof of the statement [INAUDIBLE]. OK? Questions? Yes? AUDIENCE: So how is this is really going to help with the definition of T of s because get theoretically even your extraction of heat and your doing the work in that black box or the opposite could happen irreversibly, right? So you-- PROFESSOR: Exactly. AUDIENCE: --still have a problem. PROFESSOR: Yeah. So as I have written it here I would agree. It looks completely useless. So let's make it useful. How am I going to make it useful? So let's start some set of corollaries to this theorem. For the first thing is we get rid of this definition by considering a reversible transformation. OK? So same picture that I was drawing before with multiple coordinates. Start some point and come back. But now I will draw is solid curve in this space, meaning that at each stage of the process I will do it sufficiently slowly so that my system comes to thermal equilibrium and I can identify it as a point in this generalized phase space. OK? So now I have points in this generalized phase space. To each point I can therefore assign the equilibrium temperature. That I am at that stage of the cycle. OK? So now I can request my Carnot engines to deliver the heat at the temperature of the system to be maintaining the whole system plus the Carnot engines at thermal equilibrium as I switch from one cycle to another cycle. OK? So now I have a relationship that is pertaining to things that are in equilibrium. As long as I follow the equilibrium temperature of the system and do reversible transformations I have the result that dQ reversible over Ts is less than 0. But the things about-- yes? AUDIENCE: Could you explain again why the path integral of dQ not over s is smaller than [INAUDIBLE]? PROFESSOR: OK. Clausius's theorem says that if I took heat from a vat and converted it to useful work I have violated the second law. So the sign of the net amount of heat that I extracted better be negative, right? OK. So you are not going to be able to have a vat and extract energy out of the lake and run your engine. It's not allowed. OK? Now my path is reversible, which means that I could go this way, or I could go the opposite way. And if I go the opposite way, what was the definition of reversible? I reverse all of a heat exchanges inputs and outputs. OK? Which means that what I was calling previously dQ becomes minus dQ. 
So the inequality holds for both minus or plus, which means that as long as I do reversible transformations I must have the integral to be equal to 0. OK? Good. So we constructed in a sense analog of doing frictionless ways of doing work and increasing the energy of the system. We have an idea about reversible ways and their relationship to T, well, one step further I need to go. Which is that I go from some point A to some point B in my coordinate space. OK? I can do it multiple ways. I can go this way and then come back. Reversibly, I can go this way and come back one way or the other. So every way I have a number of choices. Essentially what it says is that I can go from say A to B, dQ, reversible over T, over some path, and the answer is going to be the same for doing it along a different path. Because if I went along this path I can close the reversible cycle either by returning that way or by going the other way. So going this way or going that way, the result of the integration along reversible path should be the same. And it should be the same also for any other path that I take between these two points. So it kind of again reminds us of pushing a particle up a hill. And as long as you are doing things frictionlessly the amount of work that you do in this process does not depend on how you go there. It's just the potential energy difference between the two points. So this entity being independent of the path implies immediately that there is some function like a potential energy that only depends on the two end points and this is how you would decline the entropy. OK? So what we have essentially said is that if I take an infinitesimal step there's an infinitesimal amount of heat that I can do reversibly, that is related to the change of the state function through let's say dS is dQ reversible over T. Or vice versa, dQ [INAUDIBLE] TdS. OK? We've done it finally. We can get our expression because we have now the possibility to go from this point to this point reversibly using a reversible transformation. And calculate the change in energy. Now quite generally the change in energy conservation lay, it's dW plus dQ. If I do this reversible change so that each element I am in equilibrium. I know what J is. The dW is sum over iJidxi. And I have established that as long as I do the reversible transformations dQ is TdS. And so we have this formula. OK? So I will write that again because really this is the most important relationship that you need to know from thermal dynamics. And we have to put all kinds of colorful things around it so you remember. OK? Now in particular there is a very common misconception which is that results are relevant to transformations. And you derive this result for a reversible transformation. So this formula is only valid for reversible transformation. No. That's not the case. It is like saying, I calculated the potential energy of pushing something up the hill through a frictionless process. Therefore, the potential energy is only relevant for frictionless transformations. No. Potential energy exists. It's a function of states. Energy is a function of state. And we have been able to relate energy as a function of state to all the other equilibrium quantities that we have over here. OK? Any questions? OK. So now we can answer another side question that I raised at the beginning, which is I started with xi and Ji, as describing the system in equilibrium. Then I gradually added T. I added E, now I've added S. 
How many things do I need to describe the system in equilibrium? A lot of these are dependent on each other. And in particular we saw that mechanically, if that was the only thing that was happening in the story, J's could be obtained from the derivatives of energy with respect to x. So J's, in some sense, once you had E were given to you. You didn't have to list all of the J's. You just needed to add E as a function of x's. But when you have thermal transformations around, that's not enough. This equation says that if there are n ways of doing work on the system, there is one way of doing heat. You have n plus 1 independent degrees of freedom. So n ways of sum over i Ji dxi, one way of TdS; you have n plus one independent variables to describe your system. And once you realize that, you have a certain amount of freedom in choosing precisely which n plus one you want to select. So I could select all of the x's and one temperature. I could select all of the J's and one energy. So I have a number of choices. And once I have made my choice I can rearrange things accordingly. So suppose I had chosen the energy and xi, then I could write dS to be dE over T minus sum over i Ji over T dxi. So I just rearrange this expression and solve for dS. So this amounts to my having chosen x's and E as independent variables and prescribed through this important law S as a function of E and x's. What about everything else? Well, everything else you can calculate by derivatives. So 1 over T would be dS by dE at constant x. While Ji over T would be minus dS by dxi at constant E and xj that is not equal to i. OK? Fine. So that sort of extracts the mathematical content that we can get out of the second law through this Clausius's theorem. There is one important corollary that has to do with irreversible transformations. After all, in setting up the Clausius's theorem I did not say anything about the necessity of being in equilibrium in order to achieve the inequality. Reversibility allowed me to get it as an equality. So what can I do? I can take any complicated space. I can pick a point A and in principle, make an irreversible transformation to some point B. And maybe in that process I will get some heat inputted into the system, dQ. And what I would have is the integral going from A to B, dQ divided by T, along the path that I have described here which is irreversible. But I can't say anything about it at this point. Clausius's theorem has to do with cycles. So I can, in principle, connect back from B to A reversibly. So I go integral from B to A, dQ reversible over T. And having completed the cycle I know for this cycle that this is negative. If I were to take this to the other side of the equation it becomes essentially the difference of the entropies, or here it's also the difference of entropies. But I prefer it to be on the other side. And then to make these two points very close to each other, in which case, I can write that dQ is less than or equal to TdS. This is the change in entropy in going from A to B. OK? Again, there's some question here as to what this T is. So let's get rid of that uncertainty by looking at processes that are adiabatic. So I make sure that there is no heat exchange in going from A to B. What do I conclude then? It is that for these processes the change in entropy has to be positive. OK? Because T's are generally positive quantities. OK? So a consequence of Clausius's theorem is that entropy can only increase in any transformation.
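A concrete instance of this kind of statement, discussed further below as a constraint that gets removed (my example, not the lecture's): an ideal gas, assumed to obey PV = NkT, is initially clamped into a sub-volume Vi of an insulated box, and then the clamp is removed so it fills the full volume Vf. No heat is exchanged, and comparing the entropies of the initial and final equilibrium states gives a strictly positive change.

import numpy as np

Nk = 1.0                       # assumed value, for illustration only
Vi, Vf = 1.0, 3.0              # clamped volume, and the volume once the clamp is removed

# For an ideal gas the temperature is unchanged in a free expansion, and the
# entropy difference between the two equilibrium states is Nk ln(Vf/Vi).
dS = Nk * np.log(Vf / Vi)
print(dS, dS > 0)              # positive, consistent with dQ = 0 implying dS >= 0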
Its change in entropy would be 0 for these adiabatic transformations if you have reversible processes. So this is actually another one of those statements that you have to think a little bit to see whether it has any use or not. Because we all say that entropy has to increase, but is that what we have proven here? Not quite. Because what we've proven is that if you have a system that is isolated-- so there is a system that is isolated, because I want dQ to be 0-- and I want entropy to increase. OK? So let's say we start with some initial condition and we wait and go to some final condition of whatever is inside the box. I should be able to calculate entropy before and after in order to see that this change is positive. OK? But we say entropy is a function of state. And what have I done here to change this state? I cannot calculate entropy if the system is not in equilibrium. If the system is in equilibrium it presumably does not change as a function of time. So what is this expression relevant to? The expression is relevant to when you have some kind of a constraint that you remove. So imagine that let's say there is a gas in this room and some kind of piston that is initially clamped to some position. Then I can calculate the entropy of what's on the left side, what's on the right hand side, and that would be my initial entropy. I remove the clamp. I remove the constraint so this can slide back and forth. And eventually it comes to another position. So then I can again calculate the final entropy from the sum of what's on the left side and what's on the right hand side. And I can see that entropy has increased. So basically this statement really is useful when there is some internal constraint that you can remove. And as you remove the internal constraint, the entropy increases. So I guess the image that we all have is that if I give you two pictures of a room, one with books nicely arranged on a shelf and one with books randomly distributed, you would have no problem time ordering them. And actually presumably there was some constraint from the shelf that was removed and then these things fell down. So that's the kind of process for which this statement of the second law is appropriate. Any questions? OK. So that statement is something about approach to equilibrium. But as I said, there was some initial configuration with some constraint, and as a function of removing that constraint the system would go from one sort of equilibrium, a constrained equilibrium, to an unconstrained equilibrium, and this increase of entropy tells us in which direction it would go. You can time order the various processes. Now sometimes this adiabatic way of doing things is not the best way. And so depending on what it is that you are looking at, rather than looking at entropy you will be looking at different functions. And so the next step is to construct different types of functions that are useful for telling us something about equilibrium. Now I'll start doing that with some function with which you are very familiar, and which has practically nothing to do with entropy and temperature. So we are going to look at mechanical equilibrium. OK. And so the kind of prototype of this, since we've been thinking in terms of springs or wires, is let's imagine that I have a wire that has some natural length. And it is sitting there happily in equilibrium. And then I attach, let's say, a mass to it, and because of that I'm pulling it with a force that I will indicate by J. So J, if you are in gravity, would be the mass of this times g.
OK? Now if you were doing things in vacuum, what would happen to your x, which is the distortion that you would have from the equilibrium as a function of time? What would happen is presumably the thing would start to oscillate and you would get something like this. But that's certainly not equilibrium; in the real world there is always friction that is operating. And so if there is a lot of friction what is going to happen is that your x will come to some final value. So again, this is a constraint which I released to allow this to move. And because of that, I went from the initial equilibrium that corresponded to x equals to 0 to some final equilibrium. And you say, well, if I want to calculate that final equilibrium, what I need to do is to remove all of the kinetic energy so that the thing does not move anymore, which means that I have to minimize the potential energy. I'll give it the symbol H for reasons to become apparent shortly. And that is the sum of the energy that I have in the spring-- let's imagine that it is a Hookean spring, we would have a formula such as this-- plus the potential energy that I have in my mass, which is related to how far I go up and down. And say an appropriate description is minus Jx. OK? So this is the net potential energy of the spring plus that of whatever is supplying the mass that is applying the external force. And so in this case, what you have is that minimizing H with respect to the coordinate x will give you that the x equilibrium is J over K. OK? So basically it comes down to J over K. And the value of this function, the potential energy function in equilibrium, substituting this over here, is minus J squared over 2K. OK? So in many processes that involve thermodynamics we are going to do essentially the same thing. And we are going to call this net potential energy, in fact, an enthalpy. OK? And the generalized version of what I wrote down for you is the following: in the process that I described which was related to the increase in entropy we were dealing with a system that was isolated. dQ was 0 and there was no work because the coordinates were not changing. dW was 0. Whereas here, I am looking at a process where there is a certain amount of work because this external J that I am putting is doing work on the spring. It is adding energy to the spring. And the thing is that because of the friction it doesn't continue to oscillate. So some of the energy that I would have put in the system, if I really were using the formula Ji dxi, is lost somehow. We can see those oscillations have gone away. Now I'm always doing this process at constant J. Right? So if I prescribe for you some path of x as a function of time, J dx is the same thing as d of Jx, because J does not change. x changes. OK? I'm also going to imagine processes where there is mechanical equilibrium, but with no heat, so that dQ is 0. If I were to add these two together, then I will get an inequality involving dE, which is less than or equal to d of Jixi, which I can write as: the change in E minus Jixi has to be negative. So appropriate to processes which are conducted so that there is mechanical work at constant force is this function that is usually called H for enthalpy. And what we find is that the dH is going to be negative. And again, if you think about it, this enthalpy is precisely none other than the potential energy that is in the mechanical degrees of freedom. And there's always loss in the universe. And so the direction naturally is for this potential energy to decrease.
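Here is a minimal sympy sketch of that spring calculation, assuming the Hookean potential K x^2 / 2 used in the lecture: minimizing H = K x^2/2 - J x reproduces the equilibrium extension J/K and the minimum value -J^2/(2K).

import sympy as sp

x, J, K = sp.symbols('x J K', positive=True)
H = K * x**2 / 2 - J * x                 # spring potential energy minus J x
x_eq = sp.solve(sp.diff(H, x), x)[0]     # dH/dx = 0 gives the equilibrium extension
H_min = sp.simplify(H.subs(x, x_eq))
print(x_eq, H_min)                       # J/K and -J**2/(2*K)
print(sp.diff(H, x, 2))                  # second derivative K > 0: a genuine minimum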
One thing to note is that dH is dE. Thinking of it not now as a process going on at constant J, but as a change in a function of state because we said that once I prescribe where I am in the coordinate space I know what, say e is. I know what J is. I can certainly construct a function, which is e minus Jx. It's another function of state. And I can ask what happens if I make a change from one point in space to another point. Along the path that potentially allows variations in J dE I know, is Jidxi. And then, actually I'm using summation conversion. So let's remove that, plus TdS and here it becomes minus Jidxi minus xidJi. Jidxi is cancelled. And we that if I construct the function H as e minus Jx, the natural variations of it are TdS, just like dE but rather than Jdx I have minus xdJ. OK? So H is naturally expressed as a function of variables that are s and the Ji's. And remember that we said we are doing things at constant J. So it's nice that it happens that way. And in particular if I do partial derivatives I find that my xi is minus dH by dJi at constant s. So that in the same way before I said that if you have the energy function dE by dx will give J, if you have the H functions, dH by dJ would give you x. Let's check. Here we had an H function. If I do a dH by dJ with a minus sign, what do I get? I will get J over K. What was my equilibrium x? It was J over K. OK? Now both of these functions that we have encountered so far were expressing the system energy or enthalpy have as their argument entropy. Now if I give you a box you will have a hard time potentially figuring out what entropy is, but varying temperature is something that we do all the time. So it would be great if we could start expressing things not in terms of entropy, in terms of anything else. Well, what's the natural thing if not entropy? Here we went from x to the conjugate variable J. I can make a transformation that goes from s to the conjugate variable, which is temperature. And that would be relevant, therefore, when I'm thinking about isothermal processes. And so what I'm going to do is to reverse the role of the two elements of energy that I had previously for mechanical processes. So what we had there was that dQ was 0, but I want dQ to be non-0. What we had there was that dW was non-0. So let's figure out processes where dW is 0, but dQ, the same way that previously we were doing dW at constant J, let's do dQ not equals to 0 at constant T. And constant T is where the isothermal expression comes into play. OK? So if dW is 0, I have a natural inequality that involves dQ. I just erased it, I think. Yeah. It's not here. dQ is less than or equal to TdS. And if T is a constant throughout the process, I can write this as d of TS. Add these two together to get that dE less than or equal to d of TS, or taking it to the other side, I can define a function which is the Helmholtz free energy. F, which is E minus TS. And what we have is that the natural direction of flow for free energy when we remove some kind of a constraint at constant temperature is that we will tend to minimize the free energy. And then again doing the manipulation of dF is dE minus d of TS in general will have minus TdS minus SdT. The TdS part of the E will vanish and what I will have is Jidxi minus SdT. So now we can see that finally for the free energy, the Helmholtz free energy, and natural variable rather than being entropy is the temperature and all set of displacements. And then if you ask, well, what happened to the entropy? 
You say, well, I can get the entropy as minus dF by dT at constant x. If I take this picture of the gas that I had put at the beginning, which was just erased, picture of a piston that can slide. And I think of that as a box that I bring into this room, well, then if you'll quickly adjust to the temperature of the room, as well as the pressure of the room. So this is going to be a general transformation in which both heat and the work are exchanged. If is at external temperature and pressure that is fixed. So it's a transformation that is isothermal and constant J. In pressure you would call it [INAUDIBLE]. OK? And then quite generally you would say extending what we had before, in this case dQ is, again, less than TdS because of Clausius. dW is less than Jdx because you always lose something to friction. You always lose something to friction. You add the two things together, and you get that dE is less than or equal to TdS, plus Jidxi. For the transformations that we are considering that are at constant t and J I can take these expressions to mean the same thing as d of ts plus Jixi, which means that more generally, I can define a function which is the Gibbs free energy, G, which is the energy minus ts. This part is the Helmholtz free energy, minus Jixi. And that if I regard this as a function of state, which certainly will be minimized under conditions of constant J and T for some constraint that is removed. Quite generally as a state function depending on all of these variables, it's variations would be able to be expressed as minus SdT minus Jidxi. ie, our G is naturally a function of temperature and the set of displacements. OK. I don't think music was appropriate at this point. I would have put it earlier, but such is life. OK. Now there is something that I kind of did not pay sufficient-- yes? AUDIENCE: Sorry. Didn't you get it backwards between the J's and the x's? PROFESSOR: I did. I did. I wrote it wrong here. So maybe that's what the music was for. OK. Is it fine? Anything else? OK. The thing that I wasn't sufficiently careful with was the list of things that goes into mechanical work. And one additional care that what needs to have, which is that suppose I were to tell you, let's say, the pressure and temperature as my two variables, and I guess this is what I would have here if I go and look at pressure as my force. Then have I told you everything that I need to know about the system? So I have a box, let's say, in this room that is at the pressure of this room and at the temperature of this room. Have I completely specified the box? What have I left? AUDIENCE: The size. [INAUDIBLE]. PROFESSOR: How big is it, yes. So what I have left, previously maybe I had volume, but I sort of discarded the volume in terms of the pressure. So what I'd really need is to have the number of particles. I have to specify something. Maybe the mass, something that has still is going to distinguish these boxes in terms of their size. Now it is most useful to think in terms of the number of particles. So I give you a box and I can say that within this box I have so many molecules of oxygen and nitrogen, or whatever is the composition, but then we sort of start to get into the realm of chemistry and the fact that the different chemical components can start to react with each other, maybe even if you have a box that is one component, some of the molecules are going to get absorbed on the surface. So precisely what the number is is potentially something that is a variable. 
And in order to specify exactly what the energy content of a system is you have to specify how many particles and the energy carried by those particles. So the list of things that appears here, especially when you think in terms of chemical systems, has an additional element that is typically called chemical work. And so let's separate mechanical work from the chemical work. So mechanical work was sum over i Ji dxi, where these were real displacements, and we can actually write an expression that is very similar here where the displacements are the number of particles of different species, allowing potentially for their going to other species through chemical reactions, provided we multiply these by some appropriate chemical potential. So these mu's are the chemical potentials. So for the reason that I stated before-- this ambiguity that I had about the size of the box tells me that I should have at least one thing left in my system describing the variables that is proportional to the size of the system. So typically what you do when you construct a Gibbs free energy is that you subtract the mechanical work, but you don't subtract the chemical work. So the N's will remain and can tell you how big your system is. OK? Now there is a conjugate way; I can certainly do it the other way. So here I constructed something in which mechanical work was removed, but chemical work remained. I could do it the other way around. I can define a function that is obtained by subtracting from the energy the chemical work, but leaving the mechanical component aside. So this function-- OK, I should go one step before. So once we separate out the contributions of Ji dxi into mechanical and chemical components, I would write dE as TdS plus sum over i Ji dxi, plus sum over alpha mu alpha dN alpha. OK? I am just explicitly separating out those two sources of work. Now when I construct this new function, which subtracts from E the mu N's, but not the J x's, what happens? I will get a minus S dT. The Ji dxi will remain unchanged. And the mu alpha dN alpha will get transformed to minus N alpha d mu alpha. Just hold on a second. So this entity, that is called the grand potential, actually completes the list of these functions of state that I wanted to tell you about; it is naturally a function of T, the chemical potentials, as well as the displacements. For example, if it's a gas the displacements will include the volume, which tells you how much material you have. Yes? AUDIENCE: In that case, do you need to subtract an additional TS from E and [INAUDIBLE]? PROFESSOR: Very good. These are all examples of the energies so that you define them as a function of temperature rather than entropy. The only things that really remain, the one function that you sort of keep as a function of entropy, is energy itself. Enthalpy, too. Yes, it has its uses. Yes? AUDIENCE: So I have many questions about [INAUDIBLE]. PROFESSOR: OK. AUDIENCE: [INAUDIBLE]. Right now I wanted to ask: you say that a particle is added to the system, a particle of some species. And this is going to change the energy of the system. So my question is, while this particle can bring energy in several ways, like kinetic energy, or maybe it's this chemical bond that's been broken-- PROFESSOR: Yes. AUDIENCE: --binding energy from there. PROFESSOR: Absolutely. AUDIENCE: Or also its rest energy from special [INAUDIBLE]. PROFESSOR: Yes. AUDIENCE: So which ones are we counting? Which ones are we not counting? PROFESSOR: OK.
So what you see is that when we do all of these calculations we definitely need to include when we are talking about a gas, the kinetic energy component. The change in the covalent bond energy definitely has to be there if I take an oxygen molecule and separate it to two oxygen atoms. There is that change in the energy that has to be included corresponding kinetic energies. Typically the rest mass is not included. And so if I were to include that it will be a shift of chemical potential by an amount that is mc squared. Why do people not bother? Because typically what we will be trying to look at is the change when something happens. And you have to bring those particles presumably from somewhere. So as long as you bring in particles from somewhere then there's the difference between the rest mass that you had outside and rest mass that you have here, that's 0. It's really all of the other things that contribute to useful quantities that you would be involved with like temperature, pressure. The rest mass is irrelevant. Now I can't rule out that you will come up it some process that is going on in [INAUDIBLE] for whatever where the rest mass is an important contribution. So it may, to some extent, depend on the circumstance that you are looking at. But if you are asking truly what should I include in the chemical potential, it will include the rest mass. AUDIENCE: So for example in-- I don't know, in cosmology they often talk of the chemical potential of the different species, like elementary particles in the cosmos. PROFESSOR: Right. AUDIENCE: They say that the chemical potential of photons is 0. PROFESSOR: Yes. AUDIENCE: So here they have fixed some kind convention? PROFESSOR: No. You see there is a difference between photons and other particles such as electrons or whatever, which is that you have value in number or some other number conservation that is applicable. So any process that you have will not change the number of [INAUDIBLE]. Whereas there are processes in which there is something that is heated and will give you whatever number of photons that you wish. So there is a distinction between things that are conserved and things that are not conserved. So We will get to that maybe later on as to why it is appropriate to set chemical potential to 0 for non-conserved things. OK? Yeah, but it's certainly something that we have immediately a feel for what the pressure is for what the temperature is, so we have our senses seem to sort of tell you the reality of temperature and pressure, but maybe not so much the chemical potential. So I tried to see whether we have a sense that actually is sensitive to chemical potential and we do to some extent. As you drink something and you say it's too salty, or whatever, so somehow you are measuring the chemical potential of sort. So if you like that, that would be your sensual equivalent of chemical potential. OK? Yes? AUDIENCE: Is there a potential where you have both chemical and mechanical work? [INAUDIBLE] PROFESSOR: OK. So that will run into the problem of if I do that, then my parameter is going to be T, J, and mu. And then I can ask you, how much do I have? Because all of these quantities are intensive. Right? So it could be a box this is one cubic centimeters or miles long. So I am not allowed to do that. And there is a mathematical reason for that that I was going to come to shortly. Other questions? OK. So We are going to take an interlude. 
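To collect in one place the functions of state introduced up to this point (with the signs as corrected in the discussion above; the script G symbol for the grand potential is my shorthand, since the lecture does not assign it one), a compact summary is:

\begin{aligned}
dE &= T\,dS + J_i\,dx_i + \mu_\alpha\,dN_\alpha, & E &= E(S,\,x,\,N),\\
dH &= T\,dS - x_i\,dJ_i + \mu_\alpha\,dN_\alpha, & H &= E - J_i x_i \quad \text{(enthalpy)},\\
dF &= -S\,dT + J_i\,dx_i + \mu_\alpha\,dN_\alpha, & F &= E - TS \quad \text{(Helmholtz)},\\
dG &= -S\,dT - x_i\,dJ_i + \mu_\alpha\,dN_\alpha, & G &= E - TS - J_i x_i \quad \text{(Gibbs)},\\
d\mathcal{G} &= -S\,dT + J_i\,dx_i - N_\alpha\,d\mu_\alpha, & \mathcal{G} &= E - TS - \mu_\alpha N_\alpha \quad \text{(grand potential)},
\end{aligned}

with repeated indices summed, and each function naturally a function of the variables whose differentials appear on the right-hand side.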
Actually, the last question is quite relevant to what we are going to do next, which is that we have these functions defined, many coordinates, and one of the things that thermodynamics allows you to do is to relate measurements of one set of quantities to another set of quantities. And it does so through developing mathematical relationships that you can have between these different functions of state. So the next segment has to do with mathematical results, which I will subdivide into two sets of statements. One set of statements follows from the discussion that we have had so far, which has to do with extensivity. What do I mean? Let's take a look at our most fundamental expression, which is dE is TdS plus sum over i Ji dxi, plus sum over alpha mu alpha dN alpha. Now you recognize that certainly as the amount of material gets increased, the number of particles increases. The typical sizes and displacements get bigger. The energy content gets bigger. The entropy content is related to the heat content and so that gets bigger. So these differential forms that I have in this expression, they're all proportional to the size of the system. What does that mean? It means that E, the way that I have it here, has as its natural variables S, x, N. And extensivity means that if I were to make my system twice as large, so that all of these quantities would get multiplied by a factor of two, the energy content would get multiplied by a factor of two-- that is, E of lambda S, lambda x, lambda N is lambda times E of S, x, N. OK? Now it is important to state that this is not a requirement. OK? This is a statement about most things that we encounter around us, but once you go, let's say, to the cosmos, and you have a star, the gravitational energy of the star is not proportional to its volume. It goes as the size to some fractional power. So that is an example of a system that, because of the way gravity works as a long range force, is not an extensive system. Typically this would work as long as you have interactions among your elements that are sufficiently short-ranged so that you don't get non-extensivity. If that's the case then I can take a derivative of this expression with respect to lambda, evaluated at lambda equal to 1. If I take a derivative with respect to lambda here I will have dE by dS, times S. I mean, the argument here is lambda S. I am taking a derivative with respect to the first argument. So I bring out a factor of S: dE by dS at constant x and N. And then the next term is xi, dE with respect to all the different xi's at constant S and N. The next one is going to be N alpha, dE by dN alpha at constant x and S. And there's only one lambda on this side, which is going to give me E of S, x, and N. OK? Now if I look at my initial expression I can immediately see that dE by dS at constant x and N is none other than T. So this is the same thing as T. dE by dxi at constant S and N is none other than Ji. dE by dN alpha at constant x and S is none other than mu alpha. So once I set the argument to be 1, so that all of these things are evaluated at lambda equal to 1, I have the result that E is equal to TS plus Ji xi, plus mu alpha N alpha. So in some sense all I did was I took the more fundamental expression and removed all of the d's. So some places this is called the fundamental relation, but I don't like that because it is only valid for systems that are extensive, whereas the initial formula is valid irrespective of that. OK?
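A minimal sympy sketch of that extensivity argument (the particular function below is invented only so that it is homogeneous of degree one; it is not any physical equation of state): Euler's theorem gives S dE/dS + x dE/dx + N dE/dN = E, which becomes E = TS + Jx + mu N once the derivatives are identified as above.

import sympy as sp

S, x, N, a, b = sp.symbols('S x N a b', positive=True)
E = S**a * x**b * N**(1 - a - b)      # any function with E(l*S, l*x, l*N) = l*E(S, x, N)
euler = S * sp.diff(E, S) + x * sp.diff(E, x) + N * sp.diff(E, N)
print(sp.simplify(euler - E))         # 0: differentiating the scaling relation at lambda = 1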
Now once I have this, what I can do is I can think of this as a relationship among different functions of state. Take a derivative and write it as dE is d of TS plus d of Ji xi plus d of mu alpha N alpha. D of TS is TdS plus SdT; d of Ji xi gives me Ji dxi plus xi dJi; and d of mu alpha N alpha will give me mu alpha dN alpha, plus N alpha d mu alpha. OK? This is just a rewriting of this expression in differential form. But I know that dE is the same thing as TdS plus Ji dxi plus mu alpha dN alpha. So immediately what it tells me is that the intensive variables T, J, and mu are constrained to satisfy this relation, which is called a Gibbs-Duhem relation. OK? So this is, if you like, the mathematical reason for my answer before. That I cannot choose this set of variables to describe my system, because this set of variables are not independent. If I vary two of them the variation of the other one is fixed. And I said I need n plus 1 independent degrees of freedom. If I choose all of the intensive ones I'm really using one additional relationship that relates them, makes them dependent. And of course this also goes to the fact that I won't know what the size of the system is, both of them reflections of extensivity. Just as an example, let's see. Chemical potential along an isotherm. So for a gas, what would I have? I would write S dT minus V dP-- that is my contribution from the work here, remembering that hydrostatic work comes with the opposite sign-- and then I have N d mu, equal to 0. I ask, how do these vary along an isotherm? Isotherm means that I have to set dT equal to 0. If I'm along an isotherm then I have that d mu is, rearranging things a little bit, V over N dP. So this is the formula that I will have to use. Now let's specialize to the case of an ideal gas. So I'm going to use here an ideal gas. For the case of the ideal gas we had the relationship that PV was NkT. So V over N is the same thing as kT over P. Remember that we are dealing with an isotherm, so T is constant. dP over P I can integrate, and therefore conclude that mu along an isotherm, as a function of the other two intensive variables T and P, is some reference that comes from a constant of integration, and then I have kT, the integral of dP over P, which is log of P divided by some reference pressure. Actually, I already put in the constant. OK?
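The same isotherm calculation can be checked symbolically; a minimal sympy sketch, where P0 is just a reference pressure supplying the constant of integration:

import sympy as sp

P, P0, k, T = sp.symbols('P P0 k T', positive=True)
dmu_dP = k * T / P                       # d(mu)/dP = V/N = kT/P along the isotherm
mu_minus_mu0 = sp.integrate(dmu_dP, (P, P0, P))
print(sp.simplify(mu_minus_mu0))         # k*T*(log(P) - log(P0)), i.e. kT log(P/P0)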
Very briefly, to be expanded upon next lecture, the other set of mathematical relations go under the name of Maxwell relations. And they follow from the following observation. If I have a function of more than one variable, let's say f of x and y, two variables, then the natural way to write this, consistent with what we had before, is that df is df by dx, partial at constant y, dx, plus df by dy at constant x, dy. But one thing that we know from calculus is that these are first derivatives, and there are a bunch of second derivatives. In particular, the second derivative d2 f by dx dy is independent of the order of taking the derivatives. Which means that if I take a derivative of this object with respect to y, I will get the same thing if I were to take a derivative of the second object with respect to x. So in particular if I go back to my fundamental equation, dE-- let's forget about the chemical potential because time is running short-- Ji dxi plus TdS. I identify T and Ji as the first derivatives. That is, Ji is the first derivative of E with respect to xi at constant S. T is the first derivative of E with respect to S at constant x. Then if I were to take a second derivative to construct d2E with respect to xi and S, I could do it two ways. I already have a derivative with respect to x that gave me Ji, so I take another derivative of Ji with respect to S, now at constant x. Or I can take a derivative here, which is dT by dxi at constant S. And depending on which one of the many functions of state that we introduced-- E, F, G, H-- you can certainly make corresponding second derivative identities. And the key to sort of understanding all of them is this equality of second derivatives. So what I will start next time around is to just give you a set of things on the left hand side, the analog of this. So for example, I could choose dS by dxi at constant T and show, just by looking at the form of this, how we will be able to construct what this thing is related to through a Maxwell construction. OK? Thank you. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 4_Thermodynamics_Part_4.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Let's start. Any questions? We've been gradually building up thermodynamics, culminating in what I call the most important fundamental relation, which was the E is T dS plus Ji dxi. And essentially we said that this was some statement of conservation of energy. where there's a heat component that goes into the system, and there's a work component. And actually, we ended up breaking the work component into a mechanical part and a chemical part, which we wrote as mu alpha dN alpha. I guess if I don't write the sums, the summation convention is assumed. Now, this gave us an idea of what a thermodynamic system is, in the sense of how many coordinates do we need in order to place the system in equilibrium somewhere in some coordinate space. And we said that we can just count the number of variables that we have on this side. And indeed, for the case of energy, the natural set of variables would be S, x, and N. And given some place in the coordinate system that is S, x, and N, you could find a point that corresponds to that thermodynamic equilibrium state. Now, having gotten this fundamental relation, we can manipulate and write it in different fashion. And in particular, for example, you can write the S is dE by dT, minus Ji dxi over T, minus mu alpha dN alpha over T. So you could in principle express everything rather than in terms of entropy-- which in some circumstances may not be the right variable-- in terms of the energy content of the system and the variables that you need in order to make work type of changes on the system. And we saw that other versions of it is possible. So for example, we could rearrange. Rather than looking at E, we could look at E minus TS, the quantity that we call F. And then dF would look very much like the original equation. Rather than T dS, we would get minus S dT. And clearly, for this quantity F, the natural variables are T, x, and N. And there were other types of these functions that we construct. So basically at this stage, we have a formula that tells us how to navigate our way in this space of things that are in equilibrium. And by taking various derivatives of equilibrium functions with respect to each other, we can express other ones. And via this methodology, we can relate to many different things to each other and have mathematical relations between physical observables. So following up on that, we then ask what are some other mathematical forms that we can look at. And one set of forms followed from extensivity. And we saw that if we were under circumstances such that we were to multiply all of the extensive variables by some factor-- 2, 3, whatever-- and the energy content would increase by that same factor, then we could in fact integrate the equation and drop the d's and write it in this fashion-- TS Jx plus mu N, regarding these as kind of vectors. And importantly, that told us that the intensive variables were not independent of each other. And you had relationship that you-- S dT plus Ji xi dJi plus N alpha d mu alpha is 0. And last time, I did one calculation using this Gibbs-Duhem relation that I repeat. 
And that was for a gas isotherm. Isotherm corresponds to dT goes to 0. And the analogue of hydrostatic work for a gas was minus P for J. So this becomes minus V dP plus N d mu. If you have one component, it goes to 0. And we said that this gives us the relationship along an isotherm. So for a fixed T, d mu by dP is V over N. I'll come back to that shortly. Towards the end of last lecture, I mentioned that we can get a number of other so-called Maxwell relations by noting that mixed second derivatives are independent of the order of taking derivatives. So that if you have some df of x and y written as df by dx at constant y dx, plus df by dy at constant x dy, then the order of the second derivatives, which would be obtained by taking either a derivative of the first term with respect to y or the second term with respect to x-- doesn't matter which order. So let's start with our initial equation. And for simplicity, let's think of cases where we have a fixed number of particles so that we can write dE as T dS plus Ji dxi. So this looks like this dF that I wrote before for you. And I can identify temperature as dE by dS at constant x, and each one of the forces Ji as dE by dxi at constant S. And all of the J's that are not equal to i kept fixed. And then, since the second derivative is irrespective of order, I can form the second derivative with respect to the combination S xi into a different fashion. I can either take a derivative over here-- I already have derivative with respect to S giving me T, so I will get dT by dxi, essentially at constant S-- or I can take the derivative of this object. I already have the x derivative, so I take the S derivative. So I have dJi with respect to S at constant x. So this is an example of the Maxwell relation. And just to remind you, we can always invert these derivatives. So I can invert this and say that actually, if I had wondered how the entropy changes as a function of a force such as pressure at constant x, it would be related to how the corresponding displacement to the force changes as a function of temperature. So you would have a relationship between some observable such as how the length of a wire changes if you change temperature rapidly, and how the entropy would change as a function of pressure, or corresponding force, whatever it may be. Now, what I would like to remind you is that the best way to sort of derive these relationships is to reconstruct what kind of second derivative or mixed derivative you want to have to give you the right result. So for example, say somebody told me that I want to calculate something similar-- dS by dJ, let's say at constant temperature. How do I calculate that? A nice way to calculate that is if I somehow manipulate so that S appears as a first derivative. Now, S does not appear as a first derivative here, but certainly appears as a first derivative here. So who says I have to take E as a function of state? I can look at the second derivative of F. So if I look at F, which is E minus TS-- and actually, I don't even need to know what the name of this entity is, whether it's F, G, et cetera. All I know is that it will convert this to minus S dT-- the TS that I don't want, so that S appears now as a first derivative. Now I want to take a derivative of S with respect to J. If I just stop here, the next term that I would have is Ji dxi. And the natural way that I would construct a second derivative would give me dS with respect to xi. But I want the S with respect to Ji. 
So I say, OK, I'll do this function instead. As I said, I don't really care what the name of these functions is. This will get converted to xi dJi. Now I can say there is this first derivative-- d of minus S with respect to Ji at constant T is the same thing as d of minus xi with respect to T at constant J. So I can get rid of the minus sign if I want, and I have the answer that I want. It is dxi by dT at constant J-- at constant-- yeah, T-- J. And you can go and construct anything else that you like. Let me, for example, try to construct this entity we already calculated. So let's see. What can I say about d mu by dP at constant temperature, and I want to calculate that via a Maxwell relation? So what do I know? I know that dE is T dS minus P dV. Good. Oh, I want to have mu in the play, so now I have to add mu dN. That's good, because I have mu as a first derivative. I have it over here. But I have P as a first derivative, whereas I want P to appear as the work element, something like dP. So what I will do is I will make this d of E plus PV. This converts this to plus V dP plus mu dN. That's fine, except that when I do things now, I would have calculated things at constant S. And I want to calculate things at constant T. That's not really important. I do a minus TS. This becomes minus S dT. And then I have a function of state-- I don't care what it's called-- that has the right format for me to identify that d mu by dP at constant N and T, along an isotherm at a fixed number of particles, is the same thing as dV by dN at constant P and T. So that's my Maxwell relation. So why didn't I get this result? Did I make a mistake? AUDIENCE: [INAUDIBLE]. PROFESSOR: Louder. AUDIENCE: Is this valid for extensive systems? PROFESSOR: Yes, this is valid for extensive systems. I used extensivity in the derivation of this. And here, I never used extensivity. But if I have a system that I tell you its pressure and temperature, how is its volume related to the number of particles? I have a box in this room. I tell you pressure and temperature are fixed. If I make the box twice as big, number of particles, volume will go twice as big. So if I were to apply extensivity, I would have to conclude that this is the same thing as V over N, and I would be in agreement with what I had before. Any questions? Yes. AUDIENCE: What about the constraint of constant P and T [INAUDIBLE]? PROFESSOR: OK. Actually, the constraint of constant P and T is important because we said that there is a kind of Gibbs-Duhem relation. And the Gibbs-Duhem relation tells me for the case of the gas that S dT minus V dP plus N d mu is 0. And that constraint is the one that tells me that once you have set P and T, which are the two variables that I have specified here, you also know mu. That is, you know all the intensive quantities. So once you set P and T-- the constraints over there-- really, the only thing that you don't know is how big the system is. And that's the condition that I used over here. But it is only-- so let me re-emphasize it this way-- I will write dV by dN is the same thing as V over N only if I have P and T here. If I had here P and S, for example, I wouldn't be able to use this. I was going to build this equation to say, look at the elements of this. This part we got from the first law, the temperature we got from the zeroth law, and this we got from the second law, which is correct, but it leaves out something, which is that what the second law had was things going in some particular direction.
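Before picking up that thread about directionality, here is a minimal sympy sketch of the Maxwell-relation machinery just described. The free energy below is a made-up form chosen only so that the derivatives are simple (the constant c and the logarithmic temperature dependence are assumptions, not anything derived in the lecture); the point is just that once S = -dF/dT and P = -dF/dV come from the same function of state, the equality of mixed second derivatives forces dS/dV at constant T to equal dP/dT at constant V.

import sympy as sp

T, V, N, k, c = sp.symbols('T V N k c', positive=True)
F = -N * k * T * sp.log(V) - c * N * T * (sp.log(T) - 1)   # hypothetical free energy, for illustration
S = -sp.diff(F, T)                     # entropy as one first derivative of F
P = -sp.diff(F, V)                     # pressure as the other first derivative
print(sp.simplify(sp.diff(S, V) - sp.diff(P, T)))   # 0: the Maxwell relation holds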
And in particular, we had that for the certain changes that I won't repeat, the change in entropy to be positive. So when you are in equilibrium, you have this relationship between equilibrium state functions. But in fact, the second law of thermodynamics tells you a little bit more than that. So the next item that we are going to look at is, what are the consequences of having underlying this equality between functions of state some kind of an inequality about certain things being possible and things not being possible? And that relates to stability conditions. And I find it easier to again introduce to you these stability conditions by thinking of a mechanical analogue. So let's imagine that you have some kind of a spring, and that this spring has some kind of a potential energy that I will call phi as a function of the extension x. And let's say that it's some non-linear function such as this. There is no reason why we should use a hook and spring, but something like this. Now, we also discussed last time what happens if I were to pull on this with some force J. So I pull on this. As a result, presumably it will no longer be sitting at x, because what I need to do is to minimize something else-- the function that we call H-- which was the potential energy of the spring plus the entity that was exerting the force. So if I were to make it in the vertical direction, you can imagine this as being the potential energy of the mass that was pulling on the spring. So what that means is that in the presence of J, what I need to do is to find the x that minimizes this expression. Once essentially the kinetic energy has disappeared, my spring stops oscillating and goes to a particular value. And mathematically, that corresponds to subtracting a linear function from this. Actually, what will it look like? Let's not make this go flat, but go up in some linear fashion so that at least I can draw something like this. So what I need to do is to minimize this. So I find that for a given J, I have to solve d phi dx equals J. But especially if you have a spring that has the kind of potential that I drew for you, you start increasing the-- so basically, what it says is that you end up at the place where the slope is equal to the derivative of the potential function. What happens if you start increasing this slope, if it is a non-linear type of thing, at some point you can see it's impossible. Essentially, it's like a very flat noodle. You pull on it too much and then it starts to expand forever. So essentially, this thing that we are looking for a minimum, ultimately also means that you need to have a condition on the second derivative. d2 phi by dx squared better be positive actually, I have to take d2 H y dx squared. I took one derivative to identify where the location of the best solution is. If I take a second derivative, this became essentially the J. J is a constant. It only depends on the potential energy. So it says independent of the force that you apply. You know that if the shape of the potential energy is given, the only places that are accessible are the kinds of places where you take the second derivative and the second has a particular sign. So this portion of the curve is in principle physically accessible. The portion that corresponds to essentially changing the curvature and going the other way, there's no way, there's no force that you can put that would be able to access those kinds of displacements. So there is this convexity condition. This was for one variable. 
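A small numerical sketch of that flat-noodle picture (the potential below is invented purely for illustration): take phi(x) = x^2/2 - x^3/6, whose second derivative 1 - x is positive only for x < 1. The restoring force phi'(x) = x - x^2/2 therefore never exceeds 1/2, so solving phi'(x) = J on the stable branch works for J below 1/2 and fails beyond it, which is the pull-too-hard-and-it-extends-forever situation.

import numpy as np
from scipy.optimize import brentq

phi_prime = lambda x: x - x**2 / 2          # derivative of the invented potential

for J in (0.2, 0.45, 0.6):                  # applied forces; the last exceeds the maximum slope 1/2
    try:
        x_eq = brentq(lambda x: phi_prime(x) - J, 0.0, 1.0)   # search the branch with phi'' > 0
        print(J, x_eq)
    except ValueError:
        print(J, "no stable extension: the force exceeds what this potential can balance")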
Now suppose you have multiple variables-- x2, x3, et cetera. So you will have the potential energy as a function of many variables. And the condition that you have is that the second derivative of the potential with respect to all variables, which forms-- and if there are n variables, this would be an n by n matrix. This matrix is positive definite. What that means is that if you think of this as defining for you the bottom of a potential, in whatever direction you go, this would describe the change of potential for you. The first order change you already set to 0 because of the force J. The second order change better be positive. So this is the condition that you would have for stability, quite generally mechanically. And again, this came from the fact that if you were to exert a force, you would change the energy by something like J delta x, which is certainly the same thing that we have here. This is just one component that I have been focusing on. And so ultimately, I'm going to try to generalize that to this form. But one thing that I don't quite like in this form is that I'm focusing too much on the displacements. This is written appropriately using this form. And we have seen over here that whether or not I express things in terms of the displacements or in terms of entropy, temperature, conjugate forces, et cetera, these are all equivalent ways of describing the equilibrium system. So I'm going to do a slight manipulation of this to make that symmetry between forces and displacements apparent. So the generalization of this expression that we have over here is that once I know the potential energy, the force in this multi-component space is df by dxi. So in this multi-component space, I have some J, and there's a corresponding x. They're related by this. Now, if I were to make a change in J, there will be a change in the position of the equilibrium. And I can get that by doing a corresponding cange in the derivative. The corresponding change in the derivative is the second derivative of phi with respect to xi xj delta xj, sum over J. So what I'm saying is that this is a condition that relates the position to the forces. If I make a slight change in the force, there will be a corresponding slight change in the position. And the changes in the position and the force given by this equilibrium condition are related by this formula. Now, you can see that that sum is already present here. So I can rewrite the expression that is over here using that formula as delta xi delta Ji, sum over i-- better be positive. So this expression is slightly different and better way of writing essentially the same convexity condition, stability condition, but in a manner that treats the displacements and the forces equivalently. And I'm going to apply that to this and say that in general, there will be more ways of manipulating the energy of a thermodynamic system, and that having the system reach to equilibrium requires the generalization of this as a delta T delta S, plus sum over i delta Ji delta xi, plus sum over alpha, delta mu alpha, delta N alpha-- to be positive. And you may say I didn't really derive this for you because I started from mechanical equilibrium and stability condition and generalized to the thermodynamic one. But really, if you go back and I use this condition and do manipulations that are compatible to this, you will come up with exactly this stability condition. And I have that in the notes. 
It is somewhat more mathematically complicated than this, because the derivatives involve additional factors of 1 over T. And you have to do slightly more work, but you will ultimately come with the same expression. OK? You say, why is that of any use? It is possible that you've already seen a particular use of this expression. Let's look at the case of a gas-- one component. You would say that the condition that we have is that delta T delta S minus delta P delta V plus delta mu delta N has to be positive. Let's even be simpler. Look at the case where delta T is 0, delta N equals to 0, which means that I have a fixed amount of gas at a particular temperature. So I could have, for example, a box in this room which is in equilibrium at the same temperature as the temperature of the room. But its volume I'm allowed to adjust so that the pressure changes. So in that process, at a particular temperature, I will be finding a particular curve in the pressure volume space. And the reason that I drew this particular way is because I have the condition that minus dP dV better be positive, so that if I have P as a function of V, then I know that dP is dP by dV delta V. And if the product dP dV is positive, then this derivative dP by dV has to be negative. This has to be a decreasing function. And another way of saying that is that the compressibility-- kappa T, which is minus 1 over V, the inverse of this dV by dP at constant T-- has to be positive. So this is a curve where dP by dV is negative. Fine, you say. Nothing surprising about that. Well, it turns out that if you cool the gas, the shape of this isotherm can change. It can get started. And there is a critical isotherm at which you see a condition such as this, where there's a point where the curve comes with 0 slope. So you say that for that particular critical isotherm occurring at TC, if I were to make this expansion, the first term in this expansion is 0. You say, OK, let's continue and write down the second term-- 1/2 d2 P by dV squared at constant P times delta V squared. Now the problem is that if I multiply this by delta V-- which would have made this into delta V squared and then this into delta V cubed-- I have a problem because delta V can be both positive or negative. I can go one direction or the other direction. And a term that is a cubic can change sign-- can sometimes be positive, can sometimes be negative, depending on the sign of that. And that is disallowed by this condition. So that would say that if an analytical expression for the expansion exists, the second derivative has to be 0. And then you would have to go to the next term-- d cubed P dV cubed. And then delta V-- you would normally write delta V cubed, but then I've multiplied by delta V on the other side. We'd have delta V to the fourth. Delta V to the fourth you would say is definitely positive. Then I can get things done by having to make sure that the third derivative is negative. And you say, OK, exactly what have you done here? What you have done here is you've drawn a curve that locally looks like a cubic with the appropriate sign. And so that's certainly true that if you have an isotherm there the compressibility diverges, you have a condition on second derivatives. And you probably have used this for calculating critical points of van der Waals and other gases. And we will do that ourselves. 
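For the van der Waals case alluded to here, a minimal sympy sketch of that exercise (using the standard van der Waals equation of state P = NkT/(V - Nb) - a N^2/V^2, with a and b the usual constants; this is my sketch, not something worked in the lecture): imposing dP/dV = 0 and d2P/dV2 = 0 on the critical isotherm picks out the critical point.

import sympy as sp

V, T, N, k, a, b = sp.symbols('V T N k a b', positive=True)
P = N * k * T / (V - N * b) - a * N**2 / V**2        # van der Waals equation of state

T_flat = sp.solve(sp.diff(P, V), T)[0]               # temperature at which dP/dV vanishes for a given V
Vc = sp.solve(sp.diff(P, V, 2).subs(T, T_flat), V)[0]   # additionally require d2P/dV2 = 0
Tc = sp.simplify(T_flat.subs(V, Vc))
Pc = sp.simplify(P.subs({V: Vc, T: Tc}))
print(Vc, Tc, Pc)                                    # 3*N*b, 8*a/(27*b*k), a/(27*b**2)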
But it's actually not the right way to go about, because when you go and look at things experimentally with sufficient detail, you find that these curves at this point are not analytic. They don't admit a Taylor expansion. So while this condition is correct, that the shape should be something like this, it is neither cubic nor a fifth power, but some kind of a non-analytic curve. AUDIENCE: Could you repeat last statement? PROFESSOR: That statement, in order to fully understand and appreciate, you have to come to next term, where we talk about phase transitions. But the statement is that the shape of the function in the vicinity of something like this is delta P is proportional to delta V to some other gamma that is neither 3 nor 5 nor an integer. It is some number that experimentally has been determined to be of the order of, I don't know, 4.7. And why it is 4.7 you have to do a lot of interesting field theory to understand. And hopefully, you'll come next semester and I'll explain it. AUDIENCE: So your point is that gamma is not [INAUDIBLE]. PROFESSOR: Gamma is not-- so the statement is, in order for me to write this curve as something delta V plus some other thing-- delta V Squared-- plus some other thing-- delta V cubed-- which we do all the time, there is an inherent assumption that an analytical expansion is possible around this point. Whereas if I gave you a function y that is x to the 5/3, around x of 0, you cannot write a Taylor series for it. OK? Essentially, this stability condition tells you a lot of things about signs of response functions. In the case of mechanical cases, it is really that if you have a spring, you better have a force constant that is positive, so that when you pull on it, the change in displacement has the right sign. It wants to be contracted. And similarly here, you are essentially compressing the gas. There's a thermal analogue of that sine which is worth making sure we know. Let's look at the case where we have a fixed number of particles. And then what we would have is that our dE is T dS plus Ji dxi. I am free to choose any set of variables. The interesting set of variables-- well, OK. So then stability implies that delta T delta S plus delta Ji delta xi with the sum over i is positive. Now, I can take a look at this expression and choose any set of variables to express these changes in terms of. So I'm going to choose for my variables temperature and xi. You see, the way that I've written it here, S, T, J, x appear completely equivalently. I could have chosen to express things in terms of S and J, T and J, S and x. But I choose to use T and x. All right, so what does that mean? It means that I write my delta S in terms of dS by dT at constant x dT, plus dS by dxi at constant T and other xj not equal to i, delta xi. I also need an expression for delta Ji here. so my delta Ji would be dJi by dT at constant x, delta T, plus dJi with respect to xj at constant T, an appropriate set of J's, delta xi-- delta xj. Sum over J implicit here. I substitute these-- delta S and delta J-- in the general form of the stability condition to get constraints applicable to combinations involving delta T and delta x. So I have to multiply delta S with delta T. I have to multiply delta Ji with delta xi here. Yes? AUDIENCE: So here, J is not equal to i. And then which one is J? [INAUDIBLE] PROFESSOR: OK. So let's say that I have S that is a function of-- I have x1 and x2. So here, I would have derivatives dS by dx1 times delta x1. 
I would have derivatives dS by dx2 times delta x2. In both cases, T is kept fixed. In this case, additionally, x2 is kept fixed. In this case, additionally, x1 is kept fixed. That's all. OK? So let's do that multiplication. We multiply a delta T here and we get dS by dT at constant x, delta T squared. We get here a term which is delta T delta xi, multiplying dS by dxi at constant T, and the others that I don't bother to write. But I note that I will get another term that is delta xi delta T from the second term, whose coefficient is dJi by dT at constant x. And I group those two terms together. And finally, we have the last term here, which is delta xi delta xj, dJi by dxj at constant T. And the constraint, for all of these stability conditions that I have written, is that I have to be at the minimum of a potential. Any deviation that I make better lead to an increase in this combination, which is kind of equivalent to being at the minimum of some kind of potential energy. If you do some manipulations, this is also equivalent to being at the maximum of some entropy function. Now, let's look at that manipulation a little bit. Now, I happen to know that this form will simplify a little bit. And that's why I chose this combination of variables. And the simplification that will happen is that this entity is 0. So let me show you why that entity is 0. And that relies on one of these Maxwell relationships that I had. So let's find the Maxwell relation that is applicable to dS by dxi at constant T, which is what I have over here. OK, let's again start with dE being T dS plus Ji dxi. I'm at fixed number of particles. I don't need to write the other term-- mu dN. Unfortunately here, S is not a first derivative. So what I will do is I will look at E minus TS. That's going to turn this into minus S dT. Good. Ji dxi is exactly what I want because this will tell me something about the derivative in this fashion. dS by dxi at constant T is equal to what? It's taking the things in this fashion, which is minus dJi by dT at constant x. Good. So I can see that the sum total of these two terms is 0. And that's exactly what I have here. So this is 0 by a Maxwell relation. OK? OK. So having set that to 0, I have two terms. One of them is this entity that goes with delta T squared. And the other is this entity-- sum over i and j, delta xi delta xj, dJi by dxj at constant T. And this-- for any choice of delta x's and delta T's-- for any choice, it better be positive. So in particular, given this nice form that I have over here, I can certainly choose all of the delta x's to be 0, because I do things at constant displacement. And then what I have is that the whole thing is proportional to delta T squared and some coefficient. So that coefficient better be positive. So entropy should always be an increasing function of temperature at constant displacement. And the consequence of that is for heat capacities measured at constant displacement. So this is the heat capacity-- Cx. The general definition is you put some amount of heat into the system at constant displacement. So I have to specify the procedure by which heat is applied to the system and see how much the temperature changes. I can in principle do this sufficiently slowly so I can measure precisely the changes in temperature infinitesimally. And then I can use the relationship between dQ and entropy for reversible heat-- so it is T, dS by dT at constant x. And dS by dT at constant x is precisely the entity that we have looked at-- it has to be positive.
So in the same way that a spring is stable-- if you pull on it, its extension should increase a little bit. And so the coefficient that relates the change in displacement corresponding to the change in force has to be positive for stability. Heat capacities must have this positivity in the sense that if you add heat to a system, its temperature should go up. It it went the other way, it would actually also violate the second law of thermodynamics. Yes? AUDIENCE: I'm a bit confused about the J notation that we used. The initial equation that we start has only-- it deals with i, but where does J come from? PROFESSOR: This equation. Which equation? Up here? AUDIENCE: So index J is just a dummy variable? [INAUDIBLE] that dE equals T dS equals Ji dSi. PROFESSOR: OK. Fine. So let's start from here. And part of the confusion is that I'm not really consistent. Whenever I write an index that is repeated, there is a so-called summation convention that you may have seen in quantum mechanics and other courses which says that the index that is repeated twice is summed over. So when I write ai bi, using the summation convention, what I mean is, sum over all possible values of i, ai, bi. So what I meant here is that there may be multiple ways of doing work on the system. So there could be pressure, there could be spring, there could be magnetic work. And if I do mechanical work on the system, the change in energy is the sum total of all different ways to do mechanical works, plus the heat that is supplied to the system. So that's statement number one. Now, I did something else here, which is that when I take a derivative of a function that depends on multiple variables-- so f depends on x1, x2, x3, xn-- then if I look at the change in this, any one of these variables can change by an amount that is dxi. And if the second variable changes, there is a corresponding contribution, which is df with respect to the second variable. And I have to add all of these things together. So if I'm again inconsistent, this is really what I mean, that when you do a change in a function of multiple variables, you would write it as dxi, df by dxi. If you use the summation convention, I have to really sum over all possibilities. Now, suppose I took one of these things, like df by dx3. So it's one of the derivatives here. And I took another derivative, or I looked at the change in this function. This function depends also on variables x1 through xn. So I have to look at the sum over all changes with respect to xj of df by dx3 delta xj. So you can see that automatically, I have kind of assumed some of these things as I write things rapidly. And if it is not clear, you have to spend a little bit of time writing things more in detail, because it's after all kind of simple algebra. Yes, did I make another mistake? Yes? AUDIENCE: Oh, not a mistake, just-- PROFESSOR: Yes? AUDIENCE: When you were talking about mechanicals, you had a non-linear spring. You start pulling on it. At some point, it just increases. PROFESSOR: If it is a non-linear spring, yes, that's right. AUDIENCE: Can you provide example of thermodynamic system which, upon some threshold action upon it, becomes unstable? PROFESSOR: OK. I think I more or less had that example. Let's try to develop it a little bit further. So I said that typically, if I look at the pressure volume isotherms of a gas, it looks something like this. And I told you that there is actually some kind of a critical isotherm that looks something like this. 
And you may ask, well, what do things look below that for lower temperatures? And again, you probably know this, is that this is the temperature below which you have a transition to liquid gas coexistence. And so the isotherms would look like that. Now, it turns out that while these are true equilibrium things-- you would have to wait sufficiently long so that the system explores these isotherms-- it is also possible in certain systems to do things a little bit more rapidly. And then when you go below, you will see a kind of remnant of that isotherm which kind of looks like this. So that isotherm would have portions of it that don't satisfy these conditions of stability. And the reason that you still get some remnant of it-- not quite the entire thing. You get actually some portion here, some portion here-- is because you do things very rapidly and the system does not have time to explore the entire equilibrium. So that's one example that I can think of where some remnant of something that is like that non-linear noodle takes place. But even the non-linear noodle, you cannot really see in equilibrium. You can only see it when you're pulling it. And you can see that it is being extended. There is no equilibrium that you can see that has that. So if I want to find an analogue of that non-linear noodle showing instability, I'd better find some near-equilibrium condition. And this is the near-equilibrium condition that I can think of. There are a number of cases. And essentially, you've probably heard of hysteresis in magnets and things like that. And by definition, hysteresis means that for the same set of conditions, you see multiple different states. So not both of them-- or multiple of them-- can be in equilibrium. And over some sufficient time scale, you would go from one behavior to another behavior. But if you explore the vicinity of the curves that correspond to this hysteretic behavior, you would see the signatures of these kinds of thermodynamic instabilities. There was one other part of this that maybe I will write a little bit in more detail, given the questions that were asked. It is here: I looked at this general condition for the case where the delta x's were 0. Let's look at the same thing for the case where delta T equals to 0, because then I have the condition that dJi by dxj, delta xi, delta xj should be positive. And this was done at constant T. And to be more explicit, there is a sum over i and j in here. So if I'm doing things at constant temperature, I really have the analogue of the instability condition. What I have here, it is the same thing as this one. And if you think about this in a little bit more detail-- let's say I have something like a gas. We have two ways of doing work on the system. I have dJ1 by dx1. I have dJ2 by dx2. I have dJ1 by dx2. I have dJ2 by dx1. Essentially, this object, there are four of them. And those four I can put into something like this. And the statement that I have written is that this 2 by 2 matrix, if it acts on the displacement delta x1 delta x2 on the right, and then gets contracted on the left by the same vector delta x1, delta x2, for any choice of the vector delta x1 delta x2, this better be positive. This is for two, but in general, this expression is going to be valid for any number. Now, what are the constraints that a matrix must satisfy so that irrespective of the choice of this displacement, you will get a positive result? Mathematically, that's called a positive definite matrix. 
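As a concrete numerical aside (not part of the lecture; the matrix entries are invented purely for illustration, assuming numpy is available), the positive-definiteness requirement spelled out in the next passage is easy to test by looking at the eigenvalues of the symmetrized response matrix:

```python
import numpy as np

# Hypothetical response matrix dJ_i/dx_j at constant T for a system with two
# ways of doing work; the numbers are made up purely for illustration.
M = np.array([[2.0, 0.3],
              [0.3, 1.5]])

# Stability requires delta_x . M . delta_x > 0 for every displacement,
# i.e. the symmetric part of M must be positive definite (all eigenvalues > 0).
eigenvalues = np.linalg.eigvalsh(0.5 * (M + M.T))
print("eigenvalues:", eigenvalues, "positive definite:", bool(np.all(eigenvalues > 0)))

# Positive diagonal entries alone are not enough: this matrix has positive
# diagonals but one negative eigenvalue, so some displacement lowers the
# "potential" and the corresponding state would be unstable.
M_bad = np.array([[1.0, 2.0],
                  [2.0, 1.0]])
print("bad-case eigenvalues:", np.linalg.eigvalsh(M_bad))
```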
So one of the conditions that is certainly immediately obvious is that if I choose the case where delta x2 is 0, then the diagonal term that corresponds to that entity better be positive. So a matrix is positive definite only if for every one of the diagonal elements, I have this condition. So this is actually something that we already saw. In the case of the gas, this would be something like minus dP by dV at constant temperature-- better be positive. This is the nature of these isotherms always going one way that I was drawing. AUDIENCE: Question. PROFESSOR: Yes. AUDIENCE: You're not summing over alpha now, are you? Or are you-- PROFESSOR: Yes, I'm not. And I don't know how to write. [LAUGHS] Let's call it dJ1 by dx1. So all diagonal elements must be positive, but that's not enough. So I can write a matrix that has only positive elements along the diagonal, but off-diagonal elements such that this combination will give you negative values for appropriate choices of delta x1 and delta x2. The thing that you really need is that all eigenvalues must be positive. It is not that easy to write down mathematically in general what that means in terms of multiple response functions. That's why this form of writing is much more compact and effective. It basically says, no matter what combination of elements you choose, you are at the bottom of the potential. You should not be able to go away from it. AUDIENCE: So the second statement is a necessary and sufficient condition? PROFESSOR: This statement-- the positivity of the diagonal elements-- is necessary but not sufficient. This statement-- the quadratic form being positive for any displacement-- is equivalent to the meaning of what is a positive definite matrix. It is sufficient. Yes? AUDIENCE: Is there any physical interpretation to diagonalizing that matrix? PROFESSOR: It depends on the kind of deformations that you make. Like if I was really thinking about a gas where I allow some exchange with the surface or something else so that delta N is not equal to 0, the general form of this matrix in the case where I have pressure volume would be something like this. I would have minus dP by dV. I would have d mu by dN. I would have dP by dN. I would have d mu by dV. And I would have to make sure that this matrix is positive definite, so I know certainly that the compressibility by itself has to be positive. d mu by dN also has to be positive. But if I allow changes both in volume and the number of particles, then because of these off-diagonal terms, I have additional constraints. So for appropriate changes, it is sufficient to look at the diagonal terms. But for the entire set of possible transformations, it's not. Yes? AUDIENCE: If you look at different ways to do the work on the system-- so in this case, change volume, change number of particles, if you're saying forces have different dimensions, and if diagonalization of these matrices corresponds to inventing some new ways of acting on the system which are a linear combination of old ways like change volume a little and change particles-- PROFESSOR: Change the number of particles, yes. AUDIENCE: Those are pretty weird manipulations, in terms of how to physically explain what the diagonalization of the matrix means. PROFESSOR: OK. There is a question of dimensions, and there's a question of amount. So maybe what you are worried about is that if I make one of them volume and the other pressure, one is extensive, one is intensive, and the response functions would have to carry something that is proportional to size. But then there are other conditions, let's say, where all of the variables are intensive. 
And then it's moving around in the space that is characterized, let's say, by pressure and chemical potential from one point to another point. And there is some generalized response that you would have, depending on which direction you go to. So essentially, I could certainly characterize various thermodynamic functions in this space P mu. And what I know is that I cannot write any function in this two-parameter space that would be consistent with the thermodynamic principles of stability. I have to have some kind of convexity. So certainly, what typically-- let's say again, going back to my favorite example here-- what people do and what we will do later on is we specify that you have some particular pressure and volume of an interacting gas. And there is no way that we can calculate the thermodynamic properties of that system exactly. We try to construct a free energy as a function of these variables. And that free energy will be approximate based on some ideas. And then we will see that that free energy that I construct in some places will violate the conditions of stability. And then I know that that free energy that I constructed, let's say on the assumption that the box that I have is uniformly covered by some material, that assumption is violated. And maybe what has happened is that part of the volume is a liquid and part of the volume is gas. So we will do precisely that exercise later on. We will choose a pair of variables. We will try to construct a free energy function out. We make the assumption that the box is uniformly occupied with the gas at some density. Based on that, we estimate its energy, entropy, et cetera. We construct a free energy function out. And then we will see that that energy function violates the compressibility condition at some point. So we know that, oh, we made a mistake. The assumption that it is uniformly occupying this box is wrong. All right, any other questions? Yes? AUDIENCE: One of the things that confuses me about thermodynamics in general is the limits of its applicability. We think of it as being very general, applying to many different systems. For example, if we had two bar magnets and we had the north pole close to the south pole of the other one, and I thought of the magnetic field in between, is it conceivable to think of that as a thermodynamic system? PROFESSOR: You can think of the magnet as a thermodynamic system. And you can certainly look at two of them, including the electromagnetic field that surrounds them. And there would be some conditions, like if you were to jiggle these things with respect to each other-- and you would maintain the entire thing in a box that prevented escape of electromagnetic waves or heat to the outside-- that eventually this jiggling of the two magnets will lead to their coming to equilibrium but at a higher temperature. So this is what I'm saying. So imagine that you have-- let's for simplicity think of, again, one big magnet that you don't move, and one small magnet here that you externally jiggle around. And then you remove the source of work. You wait for a while. And then you find that after a while, this thing comes to an equilibrium. You ask, where did that kinetic energy go to? You had some kinetic energy that you had set into this, and the kinetic energy disappeared. And I assume that there are conditions that the kinetic energy could not have escaped from this box. 
So the only thing that could have happened is that that kinetic energy heated up the air of the box, the magnet, or whatever you have. So thermodynamics is applicable in that sense to your system. Now, if you ask me, is it applicable to having a single electron spin that is polarized because of the magnetic field and under some condition that I set it to oscillate, then it's not applicable, because then I know some quantum mechanical rules that govern this. There may be in some circumstances some conservation law that says this will oscillate forever. If it doesn't oscillate forever, there's presumably a transition matrix that says it will emit a particular photon of a particular energy. And I have to ask what happened to that photon. So what we will see later on is that thermodynamics is very powerful, but it's not a fundamental theory in the same sense as the quantum mechanics that governs the spins. And the reason that this works for the case of the magnet is because the magnet has billions of spins in it-- billions and billions of spins, or whatever. So it is the law of large numbers that ultimately allows you to translate the rules of motion into a well-defined theory that is consistent with thermodynamics. AUDIENCE: I know that my question seemed off-topic. But the reason why I asked about this particular system is because I would think that it doesn't have a positive compressibility. Like if you pull the two magnets apart very slowly, the force pulling them together decreases instead of increasing like with the rubber band. PROFESSOR: OK. It may be that you have to add additional agents in order to make the whole system work. So it may be that the picture that I had in mind was that you have the magnet that is being pushed up and down. And I really do put a number of springs that make sure there exists an equilibrium position. If there is no mechanical equilibrium position, then the story is not whether or not thermodynamics applies. It's a question of whether or not there is an equilibrium. So in order for me to talk about thermodynamics I need to know that there is an eventual equilibrium. And that's why I actually recast the answer to your question, eventually getting rid of the kinetic energy of this so that I have a system that is in equilibrium. If there are things that are not in equilibrium, all bets are off. Thermodynamics is the science of things that are in equilibrium. Other questions? All right, so there is only one minor part of our description-- rapid survey of thermodynamics-- that is left. And that's the third law of thermodynamics, and that has something to do with the behavior of things as you go towards the limit of the zero of thermodynamic temperature. And I think we will spend maybe 10, 15 minutes next lecture talking about the third law of thermodynamics. Probably from my perspective, that is enough time relative to the time we spend on the other laws of thermodynamics, because as I will emphasize to you also, the third law of thermodynamics is in some sense less valid than the others. It's certainly correct and it is valid. It is just that its validity rests on other things than what I was emphasizing right now, which is the large number of degrees of freedom. You could have a large number of degrees of freedom in a world that is governed by classical laws of physics, and the third law of thermodynamics would be violated. 
So it is a condition that we live in a world that is governed by quantum mechanics that tells us about the third law, as we will discuss next time. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 7_Kinetic_Theory_of_Gases_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu PROFESSOR: So we started by talking about thermodynamics. And then switched off to talking about probability. And you may well ask, what's the connection between these? And we will eventually try to build that connection through statistical physics. And maybe this lecture today will sort of provide you with why these elements of probability are important and essential to making this bridge. So last time, I started with talking about the Central Limit Theorem which pertains to adding lots of variables together to form a sum. And the control parameter that we will use is this number of terms in the sum. So in principle, there's a joint PDF that determines how these variables are distributed. And using that, we can calculate various characteristics of this sum. If I were to raise the sum to some power m, I could do that by doing a sum over i running from let's say i1 running from 1 to N, i2 running from-- im running from 1 to N, so basically speaking this sum. And then I have x of i1, x of i2, x of im. So basically I multiplied m copies of the original sum together. And if I were to calculate some moment of this, basically the moment of a sum is the sum of the moments. I could do this. Now the last thing that we did last time was to look at some characteristic function for the sum related to the characteristic function of this joint probability distribution, and conclude that actually exactly the same relation holds if I were to put index c for a cumulant. And that is basically, say the mean is the sum of the means, the variance is sum of all possible variances and covariances. And this holds to all orders. OK? Fine. So where do we go from here? We are going to gradually simplify the problem in order to get some final result that we want. But that result eventually is a little bit more general than the simplification. The first simplification that we do is to look at independent variables. And what happened when we had the independent variables was that the probability distribution could be written as the product of probability distributions pertaining to different ones. I would have a p1 acting on x1, a p2 acting on x2, a pn acting on the xn. Now, when we did that, we saw that actually one of the conditions that would then follow from this if we were to Fourier transform and then try to expand in powers of k, is we would never get in the expansion of the log terms that were coupling different k's. Essentially all of the joint cumulants involving things other than one variable by itself would vanish. So essentially in that limit, the only terms in this that would survive we're the ones in which all of the indices were the same. So basically in that case, I would write this as a sum i running from one to N, xi to the power of N. So basically for independent variables, let's say, the variance is the sum of the variances, the third cumulant is the sum of the third cumulants, et cetera. One more simplification. Again not necessary for the final thing that we want to have in mind. But let's just assume that all of these are identically distributed. 
By that I mean that this is basically the same probability that I would use for each one of them. So this I could write as a product over i one to N, the same p for each xi. Just to make sure you know some notation that you may see every now and then, variables that are independent and identically distributed are sometimes called IID's. And if I focus my attention on these IID's, then all of these things are clearly the same thing. And the answer would be simply N times the cumulant that I would have for one of them. This-- actually some version of this, we already saw for the binomial distribution in which the same coin, let's say, was thrown N independent times. And all of the cumulants for the sum of the number of heads, let's say, were related to the cumulants in one trial that you would get. OK? So fine. Nothing so far here. However let's imagine now that I construct a variable that I will call y, which is the variable x, this sum that I have. From it I subtract N times the mean, and then I divide by square root of N. I can certainly choose to do so. Then what we observe here is that the average of y by this construction is 0. Because essentially, I make sure that the average of x is subtracted. No problem. Average of y squared-- not average of y squared, but the variance. Surely it's easy to show the variance doesn't really depend on the subtraction. It is the same thing as the variance of x. So it is going to be essentially x squared c divided by square of this. So I will have N. And x squared, big x squared cumulant, according to this rule, is N times small x squared cumulant. And I get something like this. Still nothing interesting. But now let's look at the m-th cumulant. So let's look at y m c for m that is greater than 2. And then what do I get? I will get N times x m c divided by N to the m over 2. The N to the power of m over 2 just came from raising this to the power of m, since I'm looking at y to the m. And big x to the m c, according to this, is N times the m-th cumulant of a single x. Now we see that this is something that is proportional to the N to the power of 1 minus m over 2. And since I chose m to be greater than 2, in the limit that N becomes much, much larger than 1, this goes to 0. So if I look at the limit where the number of terms in the sum is much larger than 1, what I conclude is that the probability distribution for this variable that I have constructed has 0 mean, a finite variance, and all the other higher order cumulants are asymptotically vanishing. So I know that the probability of y, which is this variable that I have given you up there, is given by the one distribution that we know is completely characterized by its first and second cumulant, which is the Gaussian. So it is exponential of minus y squared over two times its variance, appropriately normalized. Essentially this sum is Gaussian distributed. And this result is true for things that are not IID's so long as this sum over i1 to im, one to N, of the xi1 through xim cumulants grows, as N goes to infinity, strictly slower than N to the m over 2. So basically, what I want to do is to ensure that when I construct the analog of this, I would have something that when I divide by N to the m over 2, I will asymptotically go to 0. 
So in the case of IID's, the numerator goes like N, it could be that I have correlations among the variables et cetera, so that there are other terms in the sum because of the correlations as long as the sum total of them asymptotically grows less than N to the m over 2, this statement that the sum is Gaussian distributed it is going to be valid. Yes. AUDIENCE: Question-- how can you compare a value of [INAUDIBLE] with number of variables that you [INAUDIBLE]? Because this is a-- just, if, say, your random value is set [? in advance-- ?] PROFESSOR: So basically, you choose a probability distribution-- at least in this case, it is obvious. In this case, basically what we want to know is that there is a probability distribution for individual variables. And I repeat it many, many times. So it is like the coin. So for the coin I will ensure that I will throw it hundreds of times. Now suppose that for some reason, if I throw the coin once, the next five times it is much more likely to be the same thing that I had before. Kind of some strange coin, or whatever. Then there is some correlation up to five. So when I'm calculating things up to five, there all kinds of results over here. But as long as that's five is independent of the length of the sequence, if I throw things 1,000 times, still only groups of five that are correlated, then this result still holds. Because I have the additional parameter N to play with. So I want to have a parameter N to play with to go to infinity which is independent of what characterizes the distribution of my variable. AUDIENCE: I was mainly concerned with the fact that you compare the cumulant which has the same dimension as your random variable. So if my random variable is-- I measure length or something. I do it many, many times length is measured in meters, and you try to compare it to a number of measurements. So, shouldn't there be some dimensionful constant on the right? PROFESSOR: So here, this quantity has dimensions of meter to m-th power, this quantity has dimensions of meter to the m-th power. This quantity is dimensionless. Right? So what I want is the N dependence to be such that when I go to large N, it goes to 0. It is true that this is still multiplying something that has-- so it is. AUDIENCE: It's like less than something of order of N to m/2? OK. PROFESSOR: Oh this is what you-- order. Thank you. AUDIENCE: The last time [INAUDIBLE] cumulant [INAUDIBLE]? PROFESSOR: Yes, thank you. Any other correction, clarification? OK. So again but we will see that essentially in statistical physics, we will have, always, to deal with some analog of this N, like the part number of molecules of gas in this room, et cetera, that enables us to use something like this. I mean, it is clear that in this case, I chose to subtract the mean and divide by N to the 1/2. But suppose I didn't have the division by N to the 1/2. Then what happens is that I could have divided for example by N. Then my distribution for something that has a well-defined, independent mean would have gone to something like a delta function in the limit of N going to infinity. But I kind of sort of change my scale by dividing by N to the 1/2 rather than N to sort of emphasize that the scale of fluctuations is of the order of square root of N. This is again something that generically happens. So let's say, we know the energy of the gas in this room to be proportional to volume or whatever. The amount of uncertainty that we have will be of the order of square root of volume. 
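To make the vanishing of the higher cumulants concrete, here is a small Monte Carlo sketch (my own illustration, not from the lecture, assuming numpy is available): it standardizes the sum of N exponential variables and shows the skewness and excess kurtosis shrinking as N grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def standardized_sum_cumulants(N, samples=100_000):
    # Draw `samples` independent sums of N exponential(1) variables (mean 1,
    # variance 1, clearly non-Gaussian), then standardize the sum.
    x = rng.exponential(scale=1.0, size=(samples, N))
    y = (x.sum(axis=1) - N * 1.0) / np.sqrt(N)
    m = y - y.mean()
    var = np.mean(m**2)
    skew = np.mean(m**3) / var**1.5           # about 2/sqrt(N) for exponentials
    excess_kurt = np.mean(m**4) / var**2 - 3  # about 6/N for exponentials
    return var, skew, excess_kurt

for N in (1, 10, 100):
    print(N, standardized_sum_cumulants(N))
# The variance stays close to 1 (the single-variable variance), while the
# skewness and excess kurtosis shrink toward zero: the rescaled sum becomes
# Gaussian even though each term is far from Gaussian.
```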
So it's clear that we are kind of building results that have to do with dependencies on N. So let's sort of look at some other things that happen when we are dealing with large number of degrees of freedom. So already we've spoken about things that intensive, variables such as temperature, pressure, et cetera. And their characteristic is that if we express them in terms of, say, the number of constituents, they are independent of that number. As opposed to extensive quantities, such as the energy or the volume, et cetera, that are proportional to this. We can certainly imagine things that would increase [INAUDIBLE] the polynomial, order of N to some power. If I have N molecules of gas, and I ask how many pairs of interactions I have, you would say it's N, N minus 1 over 2, for example. That would be something like this. But most importantly, when we deal with statistical physics, we will encounter quantities that have exponential dependence. That is, they will be something like e to the N with some something that will appear after. An example of that is when we were, for example, calculating the phase space of gas particles. A gas particle by itself can be in a volume V. Two of them, jointly, can occupy a volume V squared. Three of them, V cubed, et cetera. Eventually you hit V to the N for N particles. So that's a kind of exponential dependence. So this is e g V to the N that you would have for joined volume of N particles. OK? So some curious things happen when you have these kinds of variables. And one thing that you may not realize is what happens when you summing exponentials. So let's imagine that I have a sum composed of a number of terms i running from one to script N-- script n is the number of terms in the sum-- that are of these exponential types. So let's actually sometimes I will call this-- never mind. So let's call these e to the N phi-- Let me write it in this fashion. Epsilon i where epsilon i satisfies two conditions. One of them, it is positive. And the other is that it has this kind of exponential dependence. It is order of e to the N phi i where there could be some prefactor or something else in front to give you dimension and stuff like that that you were discussing. I assume that the number of terms is less than or of the order of some polynomial. OK? Then my claim is that, in some sense, the sum S is the largest term. OK? So let's sort of put this graphically. What I'm telling you is that we have a whole bunch of terms that are these epsilons i's. They're all positive, so I can sort of indicate them by bars of different lengths that are positive and so forth. So let's say this is epsilon 1, epsilon 2 all the way to epsilon N. And let's say that this guy is the largest. And my task is to add up the length of all of these things. So how do I claim that the length is just the largest one. It's in the following sense. You would agree that this sum you say is certainly larger than the largest term, because I have added lots of other things to the largest term, and they are all positive. I say, fine, what I'm going to do is I'm going to raise the length of everybody else to be the same thing as epsilon max. And then I would say that the sum is certainly less than this artificial sum where I have raised everybody to epsilon max. OK? So then what I will do is I will take log off this expression, and it will be bounded by log of epsilon max and log of N epsilon max, which is the same thing as log of epsilon max plus log of N. And then I divide by N. 
And then note that the conditions that I have set up are such that in the limit that N goes to infinity, log of script N over N would be p log N over N. And the limit of this as N becomes much larger than 1 is 0. Log N over N goes to 0 as N goes to infinity. So basically this sum is bounded on both sides by the same thing. So what we've established is that essentially log of S over N, its limit as N goes to infinity, is the same thing as a log of epsilon max over N, which is what? If I say my epsilon max's have this exponential dependence, is phi max. And actually this is again the reason for something that you probably have seen. That in statistical physics you can use, let's say, a micro-canonical ensemble where you say exactly what the energy is. Or you look at the canonical ensemble where the energy can be all over the place. Why do you get the same result? This is why. Any questions on this? Everybody's happy, obviously. Good. AUDIENCE: [INAUDIBLE] a question? PROFESSOR: Yes. AUDIENCE: The N on the end, [INAUDIBLE]? PROFESSOR: There's a script N, which is the number of terms. And there's the Roman N, which is the parameter that is the analog of the number of degrees of freedom. The one that we usually deal with in statistical physics would be, say, the number of particles. AUDIENCE: So number of measurements [INAUDIBLE] number of particles. PROFESSOR: Number of measurements? AUDIENCE: So the script N is what? PROFESSOR: The script N could be, for example, I'm summing over all pairs of interactions. So the number of pairs would go like N squared. Now in practice, in all cases that you will deal with, this p would be one. So the number of terms that we would be dealing with would be of the order of the number of degrees of freedom. So, we will see some examples of that later on. AUDIENCE: [INAUDIBLE] script N might be N squared? PROFESSOR: If I'm forced to come up with a situation where script N is N squared, I would say count the number of pairs. Number of pairs if I have N [? sides ?] is N, N minus 1 over 2. So this is something that goes like N squared over 2. Can I come up with a physical situation where I'm summing over the number of terms? Not obviously, but it could be something like that. The situations in statistical physics that we come up with is typically, let's say, in going from the micro-canonical to the canonical ensemble, you would be summing over energy levels. And typically, let's say, in a system that is bounded the number of energy levels is proportional to the number of particles. Now there are cases where actually, in going from micro-canonical to canonical, like the energy of the gas in this room, the energy axis goes all the way from 0 to infinity. So there is a continuous version of the summation procedure that we have that is then usually applied, which in mathematics is called the saddle point integration. So basically there, rather than having to deal with a sum, I deal with an integral. The integration is over some variable, let's say x. Could be energy, whatever. And then I have a quantity that has this exponential character. And then again, in some specific sense, I can just look at the largest value and replace this with e to the N phi evaluated at x max. I should really write this as a proportionality, but we'll see what that means shortly. So basically it's like the above picture, I have a continuous variable. And this continuous variable, let's say I have to sum a quantity that is e to the N phi. So maybe I will have to not sum, but integrate over a function such as this. 
And let's say this is the place where the maximum occurs. So the procedure of saddle point is to expand phi around its maximum. And then I can write i as an integral over x, exponential of N, phi evaluated at the maximum. Now if I'm doing a Taylor series, then the next term in the Taylor series typically would involve the first derivative. But around the maximum, the first derivative is 0. Again if it is a maximum, the second derivative phi double prime evaluated at this xm, would be negative. And that's why I indicate it in this fashion. To sort of emphasize that it is a negative thing, x minus xm squared. And then I would have higher order terms, x minus xm cubed, et cetera. Actually what I will do is I will expand all of those things separately. So I have e to the minus N over 6 phi triple prime, evaluated at xm, x minus xm cubed, which I expand as 1 minus N over 6 phi triple prime, evaluated at xm, x minus xm cubed, and then the fourth order term and so forth. So basically there is a series such as this that I would have to look at. So the first term you can take outside the integral. And the integration against the one of this is simply a Gaussian. So what I would get is square root of 2 pi divided by N phi double prime, where 1 over N phi double prime is the variance. So that's the first term I have taken care of. Now the next term actually, the way that I have it, since I'm expanding something that is third order around a potential that is symmetric, would give me 0. The next order term, which is x minus xm to the fourth power, you already know how to calculate averages of various powers with the Gaussian using Wick's Theorem. And it would be related essentially to the square of the variance. The square of the variance would be essentially the square of this quantity out here. So I will get a correction that is order of 1 over N. So if you have sufficient energy, you can actually numerically calculate what this is and the higher order terms, et cetera. Yes. AUDIENCE: Could you briefly remind us what the second term in the bracket means? PROFESSOR: This? This? AUDIENCE: The whole thing, on the second bracket. PROFESSOR: In the numerator, I would have N phi at xm, then N phi prime times the deviation-- let's call the deviation y. But phi prime is 0 around the maximum. So the next order term will be phi double prime y squared over 2. The next order term will be phi triple prime y cubed over 6. e to the minus N phi triple prime y cubed over 6, I can expand as 1 minus N phi triple prime y cubed over 6, which is what this is. And then you can go and do that with all of the other terms. Yes. AUDIENCE: Isn't it that you can also expand around the local maximum? PROFESSOR: Excellent. Good. So you are saying, why didn't I expand around this maximum, around this maximum. So let's do that. xm prime, xm double prime. So I would have a series around the other maxima. So the next one would be e to the N phi of xm prime, times root 2 pi over N phi double prime at xm prime. And then one plus order of 1 over N. And then the next one, and so forth. Now we are interested in the limit where N goes to infinity. Or N is much, much larger than 1. In the limit where N is much larger than 1, let's imagine that these two phi's-- if I were to plot not e to the phi but phi itself. Let's imagine that these two phi's are different by, I don't know, 0.1, 10 to the minus 4. It doesn't matter. I'm multiplying two things with N, and then I'm comparing two exponentials. So if this maximum was at 1, I would have here e to the N. If this one was at 1 minus epsilon, over here I would have e to the N minus N epsilon. 
And so I can always ignore this compared to that. And so basically, this is the leading term. And if I were to take its log and divide by N, what do I get? I will get phi of xm. And then I would get from this something like minus 1/2 log of N phi double prime xm over 2 pi. And I divided by N, so this is 1 over N. And the next term would be order of 1 over N squared. So systematically, in the large N limit, there is a series for the quantity log i divided by N that starts with phi of xm. And then subsequent terms to it, you can calculate. Actually I was kind of hesitant in writing this as asymptotically equal because you may have worried about the dimensions. There should be something that has dimensions of x here. Now when I take the log it doesn't matter that much. But the dimension appears over here. It's really the size of the interval that contributes, which is of the order of N to the 1/2. And that's where the log N comes from. Questions? Now let me do one example of this because we will need it. We can easily show that N factorial you can write as 0 to infinity dx x to the N, e to the minus x. And if you don't believe this, you can start with the integral 0 to infinity of dx e to the minus alpha x being one over alpha and taking many derivatives. If you take N derivatives on this side, you would have 0 to infinity dx x to the N, e to the minus alpha x, because every time, you bring down a factor of x. On the other side, if you take derivatives, 1 over alpha becomes 1 over alpha squared, then goes to 2 over alpha cubed, then 6 over alpha to the fourth. So basically we will get N factorial over alpha to the N plus 1. So I just set alpha equals to 1. Now if you look at the thing that I have to integrate, as a function of x, the quantity that I should integrate starts as x to the N, and then decays exponentially. So over here, I have x to the N. Out here I have e to the minus x. It is not quite of the form that I had before. Part of it is proportional to N in the exponent, part of it is not. But you can still use exactly the saddle point approach for even this function. And so that's what we will do. I will write this as integral 0 to infinity dx e to some function of x where this function of x is N log x minus x. And then I will follow that procedure despite this not being quite entirely proportional to N. I will find its maximum by setting phi prime to 0. phi prime is N over x minus 1. So clearly, setting phi prime to 0 will give me that x max is N. So the location of this maximum that I have is in fact N. And the second derivative, phi double prime, is minus N over x squared, which if I evaluate at the maximum, is going to be minus 1 over N. Because the maximum occurs at N. So if I were to make a saddle point expansion of this, I would say that N factorial is integral 0 to infinity, dx e to the phi evaluated at x max, which is N log N minus N. First derivative is 0. The second derivative will give me minus 1 over N with a factor of 2 because I'm expanding second order. And then I have x minus this location of the maximum squared. And there would be higher order terms from the higher order derivatives. So I can clearly take e to the N log N minus N out front. And then the integration that I have is just a standard Gaussian with a variance that is just proportional to N. So I would get a root 2 pi N. And then I would have higher order corrections that if you are energetic, you can actually calculate. It's not that difficult. 
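As a quick numerical sanity check of the saddle-point result just derived (my own sketch, not part of the lecture), one can compare the exact log of N factorial with the estimate N log N minus N plus one-half log of 2 pi N:

```python
import math

# Compare the exact log(N!) with the saddle-point (Stirling) estimate;
# lgamma gives log(N!) without overflow for large N.
for N in (5, 10, 50, 100):
    exact = math.lgamma(N + 1)
    stirling = N * math.log(N) - N + 0.5 * math.log(2 * math.pi * N)
    print(N, round(exact, 6), round(stirling, 6), round(exact - stirling, 6))
# The difference shrinks roughly like 1/(12 N), consistent with the
# order-1/N corrections mentioned above.
```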
So you get this Stirling's Formula that limit of large N, let's do log of N factorial is N log N minus N. And if you want, you can go one step further, and you have 1/2 log of 2 pi N. And the next order term would be order of 1/N. Any questions? OK? Where do I need to use this? Next part, we are going to talk about entropy, information, and estimation. So the first four topics of the course thermodynamics, probability, this kinetic theory of gases, and basic of statistical physics. In each one of them, you will define some version of entropy. We already saw the thermodynamic one as dQ divided by T meaning dS. Now just thinking about probability will also enable you to define some form of entropy. So let's see how we go about it. So also information, what does that mean? It goes back to work off Shannon. And the idea is as follows, suppose you want to send a message of N characters. The characters themselves are taken from some kind of alphabet, if you like, x1 through xM that has M characters. So, for example if you're sending a message in English language, you would be using the letters A through Z. So you have M off 26. Maybe if you want to include space, punctuation, it would be larger than that. But let's say if you're dealing with English language, the probabilities of the different characters are not the same. So S and P, you are going to encounter much more frequently than, say, Z or X. So let's say that the frequencies with which we expect these characters to occur are things like P1 through PM. OK? Now how many possible messages are there? So number of possible messages that's are composed of N occurrences of alphabet of M letters you would say is M to the N. Now, Shannon was sort of concerned with sending the information about this message, let's say, over a line where you have converted it to, say, a binary code. And then you would say that the number of bits that would correspond to M to the N is the N log base 2 of M. That is, if you really had the simpler case where your selections was just head or tail, it was binary. And you wanted to send to somebody else the outcome of 500 throws of a coin. It would be a sequence of 500 0's and 1's corresponding to head or tails. So you would have to send for the binary case, one bit per outcome. If it is something like a base of DNA and there are four things, you would have two per base. So that would be log 4 base 2. And for English, it would be log 26 or whatever the appropriate number is with punctuation-- maybe comes to 32-- possible characters than five per [? element ?]. OK. But you know that if you sort of were to look at all possible messages, most of them would be junk. And in particular, if you had used this simple substitution code, for example, to mix up your message, you replaced A by something else, et cetera, the frequencies would be preserved. So sort of clearly a nice way to decode this substitution code, if you have a long enough text, you sort of look at how many repetitions they are and match them with their frequencies that you expect for a real language. So the number of possible messages-- So in a typical message, what you expect Ni, which is Pi N occurrences, of xi. So if you know for example, what the frequencies of the letters in the alphabet are, in a long enough message, you expect that typically you would get that number. Of course, what that really means is that you're going to get correction because not all messages are the same. 
But the deviation that you would get, of the frequency from being exactly proportional to the probability, in the limit of a very long message would be of the order of N to the 1/2. So ignoring this N to the 1/2, you would say that the typical message that I expect to receive will have characters according to these proportions. So if I asked the following question, not what are the number of all possible messages, but what is the number of typical messages? I will call that g. The number of typical messages would be all the ways of distributing this number of characters in a message of length N. Again there are clearly correlations. But for the time being, forgetting all of the correlations-- if we include correlations, we only reduce this number. So this number is much, much less than M to the N. Now here I'm going to make an excursion. So far everything was clear. Now I'm going to say something that is kind of theoretically correct, but practically not so much. You could, for example, have some way of labeling all possible typical messages. So you would have-- this would be typical message number one, number two, all the way to typical message number g. This is the number of typical messages. Suppose I could point to one of these messages and say, this is the message that was actually sent. How many bits of information would I have to send to indicate one number out of g? The number of bits of information for a typical message, rather than being this object, would simply be log g. So let's see what this log g is. And for the time being, let's forget the base. I can always change base by dividing by the log of whatever quantity I'm using as the base. This is the log of N factorial divided by the product over i of Ni factorials, where the Ni's are these Pi N's. And in the limit of large N, what I can use is the Stirling's Formula that we had over there. So what I have is N log N minus N in the numerator. Minus sum over i Ni log of Ni minus Ni. Of course the sum over Ni's cancels this N, so I don't need to worry about that. And I can rearrange this. I can write this N as sum over i Ni. Put the terms that are proportional to Ni together. You can see that I get Ni log of Ni over N, which would be log of Pi. And I can actually then take out a factor of N, and write it as sum over i Pi log of Pi. And just as an excursion, this is something that you've already seen hopefully. This is also called mixing entropy. And we will see it later on, also. That is, if I had initially a bunch of, let's say, things that were of color red, and separately in a box a bunch of things that are color green, and then a bunch of things that are a different color, and I knew initially where they were in each separate box, and I then mix them up together so that they're put in all possible random ways, and I don't know which is where, I have done something that is irreversible. It is very easy to take these boxes of marbles of different colors and mix them up. You have to do more work to separate them out. And so this increase in entropy is given by precisely the same formula here. And it's called the mixing entropy. So what we can see now is that, rather than thinking of these as particles, we were thinking of these as letters. And then we mixed up the letters in all possible ways to make our messages. 
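Written out compactly, the counting just described goes as follows (a sketch using Stirling's formula, not a verbatim transcription of the board):

```latex
g = \frac{N!}{\prod_i N_i!}, \qquad N_i = p_i N ,
\qquad
\ln g \;\approx\; N\ln N - N \;-\; \sum_i \bigl( N_i \ln N_i - N_i \bigr)
      \;=\; -\,N\sum_i p_i \ln p_i ,
% using that the N_i add up to N.  The per-character quantity
% -\sum_i p_i \ln p_i is the entropy defined in the next paragraph.
```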
But quite generally for any discrete probability, so a probability that has a set of possible outcomes Pi, we can define an entropy S associated with these set of probabilities, which is given by this formula. Minus sum over i Pi log of Pi. If you like, it is also this-- not quite, doesn't makes sense-- but it's some kind of an average of log P. So anytime we see a discrete probability, we can certainly do that. It turns out that also we will encounter in cases later on, where rather than having a discrete probability, we have a probability density function. And we would be very tempted to define an entropy associated with a PDF to be something like minus an integral dx P of x log of P of x. But this is kind of undefined. Because probability density depends on some quantity x that has units. If this was probability along a line, and I changed my units from meters to centimeters, then this log will gain a factor that will be associated with the change in scale So this is kind of undefined. One of the miracles of statistical physics is that we will find the exact measure to make this probability in the continuum unique and independent of the choice of-- I mean, there is a very precise choice of units for measuring things that would make this well-defined. Yes. AUDIENCE: But that would be undefined up to some sort of [INAUDIBLE]. PROFESSOR: After you [INAUDIBLE]. AUDIENCE: So you can still extract dependencies from it. PROFESSOR: You can still calculate things like differences, et cetera. But there is a certain lack of definition. Yes. AUDIENCE: [INAUDIBLE] the relation between this entropy defined here with the entropy defined earlier, you notice the parallel. PROFESSOR: We find that all you have to do is to multiply by a Boltzmann factor, and they would become identical. So we will see that. It turns out that the heat definition of entropy, once you look at the right variables to define probability with, then the entropy of a probability distribution is exactly the entropy that comes from the heat calculation. So up to here, there is a measured numerical constant that we have to define. All right. But what does this have to do with this Shannon story? Going back to the story, if I didn't know the probabilities, if I didn't know this, I would say that I need to pass on this amount of information. But if I somehow constructed the right scheme, and the person that I'm sending the message knows the probabilities, then I need to send this amount of information, which is actually less than N log M. So clearly having knowledge of the probabilities gives you some ability, some amount of information, so that you have to send less bits. OK. So the reduction in number of bits due to knowledge of P is the difference between N log M, which I had to do before, and what I have to do now, which is N Pi sum over i Pi log of Pi. So which is N log M plus sum over i Pi log of Pi. I can evaluate this in any basis. If I wanted to really count in terms of the number of bits, I would do both of these things in log base 2. It is clearly something that is proportional to the length of the message. That is, if I want to send a book that these twice as big, the amount of bits will be reduced proportionately by this amount. So you can define a quantity that is basically the information per bit. And this is given the knowledge of the probabilities, you really have gained an information per bit which is the difference of log M and sum over i Pi log Pi. 
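To illustrate the entropy and information-per-character formulas numerically, here is a small sketch (my own illustration, not from the lecture; the alphabet size and frequencies are made up, and numpy is assumed to be available):

```python
import numpy as np

# Made-up alphabet of M = 4 characters with assumed frequencies p_i.
p = np.array([0.4, 0.3, 0.2, 0.1])
M = len(p)

entropy_bits = -np.sum(p * np.log2(p))     # S = -sum_i p_i log2 p_i, about 1.85 bits
info_per_char = np.log2(M) - entropy_bits  # log2 M + sum_i p_i log2 p_i, about 0.15 bits

print("entropy per character (bits):", entropy_bits)
print("bits saved per character by knowing p:", info_per_char)
# Limits: a uniform p gives entropy log2 M and zero saving; a deterministic p
# gives zero entropy and the maximal saving of log2 M bits, matching the two
# extreme cases discussed next.
```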
Up to a sign and this additional factor of log N, the entropy-- because I can actually get rid of this N-- the entropy and the information are really the same thing up to a sign. And just to sort of make sure that we understand the appropriate limits. If I have something like the case where I have a uniform distribution. Let's say that I say that all characters in my message are equally likely to occur. If it's a coin, it's unbiased coin, it's as likely in a throw to be head or tail. You would say that if it's an unbiased coin, I really should send one bit per throw of the coin. And indeed, that will follow from this. Because in this case, you can see that the information contained is going to be log M. And then I have plus 1 over M log of 1 over M. And there are M such terms that are uniform. And this gives me 0. There is no information here. If I ask what's the entropy in this case. The entropy is M terms. Each one of them have a factor of 1 over M. And then I have a log of 1 over M. And there is a minus sign here overall. So this is log of M. So you've probably seen this version of the entropy before. That if you have M equal possibilities, the entropy is related to log M. This is the case where all of outcomes are equally likely. So basically this is a uniform probability. Everything is equally likely. You have no information. You have this maximal possible entropy. The other extreme of it would be where you have a definite result. You have a coin that always gives you heads. And if the other person knows that, you don't need to send any information. No matter thousand times, it will be thousand heads. So here, Pi is a delta function. Let's say i equals to five or whatever number is. So one of the variables in the list carries all the probability. All the others carry 0 probability. How much information do I have here? I have log M. Now when I go and looked at the list, in the list, either P is 0, or P is one, but the log of 1 and M is 0. So this is basically going to give me 0. Entropy in this case is 0. The information is maximum. You don't need to pass any information. So anything else is in between. So you sort of think of a probability that is some big thing, some small things, et cetera, you can figure out what its entropy is and what is information content is. So actually I don't know the answer. But presume it's very easy to figure out what's the information per character of the text in English language. Once you know the frequencies of the characters you can go and calculate this. Questions. Yes. AUDIENCE: Just to clarify the terminology, so the information means the [INAUDIBLE]? PROFESSOR: The number of bits that you have to transmit to the other person. So the other person knows the probability. Given that they know the probabilities, how many fewer bits of information should I send to them? So their knowledge corresponds to a gain in number of bits, which is given by this formula. If you know that the coin that I'm throwing is biased so that it always comes heads, then I don't have to send you any information. So per every time I throw the coin, you have one bit of information. Other questions? AUDIENCE: The equation, the top equation, so natural log [INAUDIBLE] natural log of 2, [INAUDIBLE]? PROFESSOR: I initially calculated my standing formula as log of N factorial is N log N minus N. So since I had done everything in natural log, I maintained that. And then I used this symbol that log, say, 5 2 is the same thing that maybe are used with this notation. I don't know. 
So if I don't indicate a number here, it's the natural log. It's base e. If I put a number, so log, let's say, base 2 of 5 is log 5 divided by log 2. AUDIENCE: So [INAUDIBLE]? PROFESSOR: Log 2, log 2. Information. AUDIENCE: Oh. PROFESSOR: Or if you like, I could have divided by log 2 here. AUDIENCE: But so there [INAUDIBLE] all of the other places, and you just [? write ?] all this [INAUDIBLE]. All right, thank you, [? Michael. ?] PROFESSOR: Right. Yeah. So this is the general way to transfer between the natural log and any other log. In the language of electrical engineering, where Shannon worked, it is common to express everything in terms of the number of bits. So whenever I'm expressing things in terms of the number of bits, I really should use log base 2. So if I want to use information, I really should use log base 2. Whereas in statistical physics, we usually use the natural log in expressing entropy. AUDIENCE: Oh, so it doesn't really matter [INAUDIBLE]. PROFESSOR: It's just an overall coefficient. As I said, eventually, if I want to connect to the heat version of the entropy, I have to multiply by yet another number, which is the Boltzmann constant. So really the conceptual part is more important than the overall numerical factor. OK? I had the third item in my list here, which we can finish with, which is estimation. So frequently you are faced with the task of assigning probabilities. So there's a situation. You know that there's a number of outcomes. And you want to assign probabilities for these outcomes. And the procedure that we will use is summarized by the following sentence that I have to then define. The most unbiased-- let's actually just say it's the definition if you like-- the unbiased assignment of probabilities maximizes the entropy subject to constraints. Known constraints. What do I mean by that? So suppose I had told you that we are throwing a dice. Or let's say a coin, but let's go back to the dice. And the dice has possibilities 1, 2, 3, 4, 5, 6. And this is the only thing that I know. So if somebody says that I'm throwing a dice and you don't know anything else, there's no reason for you to privilege 6 with respect to 4, or 3 with respect to 5. So as far as I know, at this moment in time, all of these are equally likely. So I will assign each one of them a probability of 1/6. But we also saw over here what was happening. The uniform probability was the one that had the largest entropy. If I were to change the probability so that something goes up and something goes down, then I calculate that formula. And I find that the-- sorry-- the uniform one has the largest entropy. This has less entropy compared to the uniform one. So what we have done in assigning uniform probability is really to maximize the entropy subject to the fact that I don't know anything except that the probabilities should add up to 1. But now suppose that somebody threw the dice many, many times. And each time they were throwing the dice, they were calculating the number. But they didn't give us the numbers and frequencies; what they told us was that at the end of many, many runs, the average number that was coming up was 3.2, 4.7, whatever. So we know the average of M. So I know now some other constraint. I've added to the information that I had. So if I want to reassign the probabilities given that somebody told me that in a large number of runs, the average value of the faces that showed up was some particular value, what do I do? 
I say, well, I maximize S which depends on these Pi's, which is minus sum over i Pi log of Pi, subjected to constraints that I know. Now one constraint you already used previously is that the sum of the probabilities is equal to 1. This I introduce here through a Lagrange multiplier, alpha, which I will adjust later to make sure that this holds. And in general, what we do if we have multiple constraints is we can add more and more Lagrange multipliers. And the average of M is sum over, let's say, i Pi. So 1 times P of 1, 2 times P of 2, et cetera, will give you whatever the average value is. So these are the two constraints that I specified for you here. There could've been other constraints, et cetera. So then, if you have a function with constraint that you have to extremize, you add these Lagrange multipliers. Then you do dS by dPi. Why did I do this? dS by dPi, which is minus log of Pi from here. Derivative of log P is 1 over P, with this will give me minus 1. There is a minus alpha here. And then there's a minus beta times i from here. And extremizing means I have to set this to 0. So you can see that the solution to this is Pi-- or actually log of Pi, let's say, is minus 1 plus alpha minus beta i. So that Pi is e to the minus 1 plus alpha e to the minus beta times i. I haven't completed the story. I really have to solve the equations in terms of alpha and beta that would give me the final results in terms of the expectation value of i as well as some other quantities. But this is the procedure that you would normally use to give you the unbiased assignment of probability. Now this actually goes back to what I said at the beginning. That there's two ways of assigning probabilities, either objectively by actually doing lots of measurement, or subjectivity. So this is really formalizing what this objective procedure means. So you put in all of the information that you have, the number of states, any constraints. And then you maximize entropy that we defined what it was to get the best maximal entropy for the assignment of probabilities consistent with things that you know. You probably recognize this form as kind of a Boltzmann weight that comes up again and again in statistical physics. And that is again natural, because there are constraints, such as the average value of energy, average value of the number of particles, et cetera, that consistent with maximizing their entropy, give you forms such as this. So you can see that a lot of concepts that we will later on be using in statistical physics are already embedded in these discussions of probability. And we've also seen how the large N aspect comes about, et cetera. So we now have the probabilistic tools. And from next time, we will go on to define the degrees of freedom. What are the units that we are going to be talking about? And how to assign them some kind of a probabilistic picture. And then build on into statistical mechanics. Yes. AUDIENCE: So here, you write the letter i to represent, in this case, the results of a random die roll, that you can replace it with any function of a random variable. PROFESSOR: Exactly. So I could have, maybe rather than giving me the average value of the number that was appearing on the face, they would have given me the average inverse. And then I would have had this. I could have had multiple things. So maybe somebody else measures something else. And then my general form would be e to the minus beta measurement of type one, minus beta 2 measurement of type two, et cetera. 
And the rest of the thing over here is clearly just a constant of proportionality that I would need to adjust for the normalization. OK? So that's it for today. |
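The maximum-entropy assignment described in this lecture can also be carried out numerically. The sketch below is not from the lecture: it fixes the Lagrange multiplier beta for a die whose long-run average face value is assumed to be 4.5 (an arbitrary illustrative number), recovering weights of the Boltzmann-like form p_i proportional to e^{-beta i}; the normalization plays the role of the alpha multiplier.

```python
import numpy as np

faces = np.arange(1, 7)   # possible outcomes of the die

def maxent_dist(beta):
    """Maximum-entropy weights p_i ~ exp(-beta * i); alpha is fixed by normalization."""
    w = np.exp(-beta * faces)
    return w / w.sum()

def mean_face(beta):
    return np.dot(faces, maxent_dist(beta))

target = 4.5            # assumed long-run average reported to us
lo, hi = -50.0, 50.0    # bracket for beta; mean_face is decreasing in beta
for _ in range(100):    # plain bisection on the constraint <i> = target
    mid = 0.5 * (lo + hi)
    if mean_face(mid) > target:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)

p = maxent_dist(beta)
print(beta, p, np.dot(faces, p))   # Boltzmann-like weights reproducing the constraint
```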
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 23_Ideal_Quantum_Gases_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So we've been developing the procedure to describe a quantum gas of identical particles. That's our non-interacting. So what does that mean? Is that we are thinking about the Hamiltonian for a system of N particles. If they are not interacting, it can be written as the sum of the contributions for the individual particles. And if they are identical, it's basically the same particle for each one of them. Same Hamiltonian for each one of them. OK? How do we handle these? Well, if you regard this as a quantum problem, for a single particle-- so for each one of these Hamiltonians-- we can find the eigenstates. So there's a bunch of states that say label by k because we are ultimately have in mind the plane waves that describe gas particles in a box. And there is corresponding energy levels for the single particles, epsilon k. And the single particle spectrum can therefore can be described by a bunch of energies, a lot of energy levels, with different values of k and different allowed discretized energies. Now, when we want to describe a case of N particles that are non-interacting, we said that we could, in principle, pick one k here, one k here, one k here. And if we are dealing with bosons, potentially more than one at each side. And so the state would then be described by a bunch of occupation numbers Nk. And this Nk will be 0 or 1 if you are talking about fermions, but any integer if you are talking about bosons. So this eta being plus or minus was used to distinguish appropriate symmetrization or anti-symmetrization. Now, once we have a state such as this, clearly the N particle hamintonium acting on a state that is symmetrized with these bunch of occupation numbers will give us back this state. The energy that we would be getting is summing over all possible k values, epsilon k times the number of occurrance of that in this state. Again, multiplying this state back. Of course, we have a constraint that the number of particles matches the sum of the occupation numbers. So then we said, OK, this is just a description of the state that they want to describe, or the set of states that represent this system of identical non-interacting partners in quantum mechanics. So the next stage, we want to do some kind of first statistics for them. And we had seen that basically, in order to assign appropriate statistics in case of quantum systems, we have to deal with a density matrix. And we saw that in the canonical representation, the density matrix is complicated and amounts to some kind of interaction among particles. Although there is no interaction at this level, the symmetrization or anti-symmetrization create states that are intermixed in a manner that appears as if you have an interaction. We found that, however, we could get rid of that by going to the grand canonical ensemble where we would trade in the constraint that the number of particles is fixed with a chemical potential. And in this ensemble, we found that the density matrix is diagonal in this representation. So the diagonal elements can be thought of as the probability of one of these occupation number sets. 
And so the probability of a particular set of occupations in this grand canonical prescription is proportional to e to the beta-- of course we have mu if we are going to this grand canonical ensemble-- minus epsilon k, and then we had Nk. So the probability that a particular one of these is occupied Nk times in this grand canonical prescription is simply an exponential. And of course, the different k levels are completely independent of each other. So the probability factor arises in this fashion. Of course, as any probability, we have to normalize this. Normalization is appropriate. We call the grand canonical partition function and will be different whether we are dealing with boson statistics or fermion statistics. Actually, because we have a structure such as this, Q is going to be the product of the normalization of each individual one. And therefore, log of Q is going to be a sum of the contributions that I would get for the normalization of each individual ones. And of course, the log of whatever normalization I have for the individual one. If I have the case of a fermions where Nk gets 0 or 1 as the two possibilities, then the normalization will be simply 1 plus this exponential. And I choose to write that as z e to the minus beta epsilon of k where I have introduced z to be e to the beta mu. Because this combination occurs all the time, I don't want to keep repeating that. So this was for the case of fermions. For the case of bosons where N goes 0, 1, 2, 3, et cetera, this is a geometric series. For the geometric series, when I sum it, I would get 1 over this factor. So when I take the log I will have a minus sign here. And I can put the Boson and Fermi cases together by simply putting a factor of minus eta here so that I have the plus for the case of fermions and the minus for the case of bosons. OK? So this is one thing that we will be using. The other thing is that this is the probability for this level having a certain occupation number. In k we can always ask, what is the average of Nk? And from this probability it is easy to calculate the average. And the answer, again, can be collapsed into one expression for the case of bosons and fermions as 1 over z inverse e to the beta epsilon of k minus eta encompassing both fermions and bosons. So these results give some idea both at the microscopic level-- in terms of occupation numbers of these different levels-- and macroscopic level a grand partition function from which we can construct thermodynamics. Actually, full probability, again, in this ensemble which we can transform in principle and get density matrices, et cetera, in other basis. But these are the kinds of things that we can do for any interacting Hamiltonian. OK? Now, what I said over here was that I'm actually interested in the case of a gas. So let's again think about the type of gas such as the one in this room that we are interested. And again, to make a contrast with something that we may want to do later, let's emphasize that it is a non-relativistic gas. So the Hamiltonian H is essentially free particle in a box of volume V, the thing that we have been using all the time, just the kinetic energy of a one particle state. And for that we know that the single particle energy levels are characterized by a vector k. As three components, in fact. And h bar squared over 2mk squared. And we've discussed what the discretization of k is appropriate to the size of the box. We won't go over that. 
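The two central formulas here, the per-mode contribution to log Q and the mean occupation numbers, are easy to tabulate. A hedged sketch follows; the energies, inverse temperature, and fugacity are arbitrary values chosen only for illustration.

```python
import numpy as np

def log_q_mode(eps, beta, z, eta):
    """Per-mode contribution to log Q: -eta * log(1 - eta * z * exp(-beta*eps))."""
    return -eta * np.log(1.0 - eta * z * np.exp(-beta * eps))

def mean_occupation(eps, beta, z, eta):
    """<n_k> = 1 / (z^{-1} exp(beta*eps) - eta); eta = -1 fermions, +1 bosons."""
    return 1.0 / (np.exp(beta * eps) / z - eta)

eps = np.linspace(0.0, 5.0, 6)   # a few single-particle levels, arbitrary units
beta, z = 1.0, 0.5               # assumed inverse temperature and fugacity

print(mean_occupation(eps, beta, z, eta=-1))   # Fermi-Dirac: never exceeds 1
print(mean_occupation(eps, beta, z, eta=+1))   # Bose-Einstein: can exceed 1
print(log_q_mode(eps, beta, z, eta=-1).sum())  # summed contribution to log Q
```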
So again, maybe it is important to mention that because of the discretizatoin of the k values, the spacing between possible values of k are 2 pi over l in each direction. And ultimately what we want to do is to replace these sums over k's in the limit of large boxes with integrals over k. So these go over to an integral over k times the density factor that you get in going from the sum to the integral, which is 2 pi cubed times the volume. I will introduce one other factor here, which is that when we are talking about quantum particles and bosons and fermions, they are typically assigned also another state-- another parameter-- that characterizes their spin. And depending on the possible values of spin, you will have g copies of the system. And this g will be related to the quantum spin-- to the formula 2s 1-- so that when you have fermions, typically s is half integer and this will be an even number. Whereas for bosons it will be an odd number. So that's, again, something that we've seen. You just, essentially says multiply everything by some factor of g. Now if I go through this procedure that I have outlined here, I will ultimately get the grand partition function of the gas-- which as we will see is related to its pressure-- in terms of the chemical potential. Throughout the course we've seen that the more interesting thing to look at is how things depend on density rather than chemical potential. So I'd better use this result to get what the density is. How do I do that? Well, once I have determined what the chemical potential is, the average number of particles will be the sum over and the average occupation numbers that I have in all of these states. And according to the prescription that we have over here, in the limit of a large box, this sum over k becomes gV intergral d cubed k 2 pi Q. Ultimately I will divide by V so I will have the formula for the density. And what is Nk? Well, I have the formula for Nk. It is 1 over z inverse e to the beta. Epsilon k is h bar squared k squared over 2m, and then I have a minus 8. OK? So if I divide through by V, really I will have the density. Get rid of this V. But I have to do that integral. Now for this and a lot of other things, you can see that always I will have sums or integrations over the combination that appears in the exponent and enhances dimension list which is beta epsilon of k. So let me introduce a new variable. Let me call x to be the combination beta h bar squared k squared over 2m. So I'm going to call this e to the x, if you like. So I have changed variables from k to x. k in terms of x is simply square root of 2m inverse of beta-- which is kt-- divided by h bar squared. But I took a square root so it's just h bar. Times x to the 1/2. OK? Now, I realize that h bar is in fact h divided by 2pi. So the 2pi I can put in the numerator. And then I remember that actually the combination h divided by square root of 2pi mkt, I was calling these thermal wavelength lambda. And k should have the dimensions of an inverse landscape, so that's the appropriate thing to choose. So what do I get? I take a square root of pi inside and I'm left with 2 square root of pi divided by this parameter lambda x to the 1/2. Again, for later clarity, let's make sure we remember this was our lambda. So this is the variable k that I need. In fact, for the-- for the integration I have to also make a change in dk. So I not that dk, if I take a derivative here, I will get x-- 1/2x to the minus 1/2 becomes root pi over lambda x to the minus 1/2 dx. 
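For reference, the thermal wavelength lambda = h / sqrt(2 pi m k T) introduced in this step is straightforward to evaluate. A sketch only; the helium-4 mass and the temperatures are assumed values used purely for illustration.

```python
import math

h  = 6.62607015e-34   # Planck constant, J s
kB = 1.380649e-23     # Boltzmann constant, J/K

def thermal_wavelength(m, T):
    """lambda = h / sqrt(2 * pi * m * kB * T)."""
    return h / math.sqrt(2.0 * math.pi * m * kB * T)

m_He = 6.6464731e-27   # mass of a helium-4 atom in kg (assumed value)
print(thermal_wavelength(m_He, 300.0))   # ~5e-11 m at room temperature
print(thermal_wavelength(m_He, 1.0))     # grows as T^(-1/2) at low temperature
```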
Actually, the factor that I need over here is dk-- d cubed k over 2pi cubed. Now everything only depends on the magnitude of k. So I will take advantage of spherical symmetry, write d cubed k as 4pi k squared dk divided by 8pi cubed. And then I substitute from k squared and dk above in terms of this x. So I will have a 1 over 2pi squared from this division. k squared is simply 4pi over lambda squared x. This is k squared. dk is root pi over lambda x to the minus 1/2 dx. So you can see lots of things cancel out and the answer is going to be x to the 1/2 dx divided by lambda cubed. And what is left from all of these pi's et cetera, is a factor of square root of pi over 2. Or 2 in the numerator or a square root of pi over 2 in the denominator. OK? So if I do these set of changes that I have on the right part of the board, what do I have for density? The density becomes simply g. I have this integral over k that I have recast as an integral that is x to the 1/2 dx. Let me pull out the factor of lambda cubed on the outside. So I have g over lambda cubed. I will pull out to that factor of root pi over 2 that I had before. And then I have the integral from 0 to infinity dx x to the 1/2. Actually, let me write it in this fashion. dx x to the 1/2. And in the denominator, what do I have? I have z universe into the x minus 8. So we did all of these and we ended up with an integral that we don't recognize. So what we are going to do is to pretend that we recognize it and give it a name. All right? So we define a set of functions f sub m of variable z. Once I do the integration over x, I will get a function of z. So that function of z I will call fm of z. But clearly I have a different fm if I have eta minus 1 or eta plus 1 for bosons of fermions. So there is two classes of things. What's this m? OK. So here I had the integral x to the 1/2. It turns out that you will get a whole bunch of these integrals where the [INAUDIBLE] that is out front in the numerator is different. So I will generalize this x to the 1/2, this 1/2 to something that I'll call it m minus 1. And then I have z inverse e to the x minus eta. And if these normalized so that what is in front is 1 over m minus 1 factorial. OK? So these are very important functions. Everything we will do later on we will be expressing in terms of these functions. Now, clearly what is happening here corresponds to m minus 1 being equal to 1/2 or m being 3/2. What is this normalization? Well, for the normalization, if this is 1/2 I would need 1/2 factorial. And if you go and look at the various definitions that we have for the gamma function or the factorial, you can check that 1/2 factorial is root pi over 2. And, indeed, we saw this already when we wrote the expression for the surface area in general dimensions that involved factorial function appearing as fractions. And in order to get the right result in two or three dimensions, you can check that this root pi over 2 for 1/2 factorial is correct. So, indeed, I already have this root pi over 2 over here. It was properly normalized. So the whole thing here can be written as n is g over lambda cubed f 3/2 eta of z. That is the expression that relates the density to the chemical potential or this communication e to the beta mu, which is z or the fugacity is simply this nice formula. OK? But what we want is to calculate something like the pressure for the gas. 
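The functions f_m^eta(z) defined here can be evaluated directly from the integral by simple quadrature. A sketch, not part of the lecture; the cutoff, grid, and fugacity are arbitrary numerical choices.

```python
import math
import numpy as np

def f_eta_m(m, z, eta, x_max=60.0, n=200001):
    """f_m^eta(z) = (1/Gamma(m)) * int_0^inf dx x^(m-1) / (z^(-1) e^x - eta),
    evaluated with a plain trapezoid rule; eta = -1 fermions, +1 bosons."""
    x = np.linspace(1e-10, x_max, n)
    y = x ** (m - 1) / (np.exp(x) / z - eta)
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx / math.gamma(m)

z = 0.1
print(f_eta_m(1.5, z, -1))   # enters n = (g / lambda^3) f_{3/2}(z), fermions
print(f_eta_m(1.5, z, +1))   # bosonic counterpart
```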
Well, let me remind you that log Q, essentially, quite generally, log Q comes from normalizing something that is like mu times the total number of particles minus the energy-- you can kind of read that from here-- and then there would be TS from the number of states. This was the general formula that we had. And when we substitute for a system that is extensive, where E for a gas is mu n plus TS minus PV, we see that the log of the partition function is simply the combination beta PV. Of course, again, this pressure as a function of density or chemical potential would be different whether we are thinking about bosons of fermions. But it can be calculated from the kind of sum that we have over here. So let's do that. So I have beta P which is something like log Q divided by V. Is obtained by doing this sum over k over V. And so what do I get? I will get-- first of all, I have the minus eta sum over k [INAUDIBLE] with g integral over k divided by 2pi cubed. The volume factor cancels because I divided by volume. I have log of 1 minus eta z e to the minus beta-- again, h bar squared k squared over 2m-- are the energy that I'm dealing with. OK? Fine. What do we do with this? We make the same change of variables that we have over here from k to x. So then I would have minus eta d cubed k over 2pi cubed gets replaced with this 2 over root pi lambda cubed. And then I have the integral 0 to infinity dx x to the 1/2 log of 1 minus eta z e to the minus x. Huh. It seems like-- yes? AUDIENCE: What happened to your factor g? PROFESSOR: G? Good question. Anything else? So I was hoping to have integrals that are all expressible in terms of this-- these set of functions that I defined. This log does not fall into this category. Should I define something else? Well, the answer is no because I can make that integral look like this by simbly doing an integration by parts. So we do integrate by parts in which case what I do is I take a derivative of this and I take an integral of x to the 1/2. Integral of x to the 1/2 becomes x to the 3/2 over 3/2. There is, of course, a minus sign involved. So this minus becomes plus eta. Make sure I have the g this time. 2 over root pi lambda cubed. I have the integral from 0 to infinity dx. x to the 1/2 became x to the 3/2. The 3/2 I will bring out front. So I have 1 over 3/2 here. And then I have to take the derivative of this function. What's the derivative of the log? It is 1 over the argument of the log times the derivative of the argument of the log which is minus eta z, the derivative of e to the minus x, which is e to the minus x. And this minus becomes plus. OK? So let's arrange things a little bit. I can get rid of these etas because eta squared will be 1. What is the answer, therefore? It is g over lambda cubed. That's fine. If root pi over 2 is 1/2 factorial, and I multiply 1/2 factorial with 3/2, what I will get is 1 over 3/2 factorial. Because n factorial is n times n minus 1 factorial. And then, what do I have? I have 0 to infinity dx. x to the 3/2, divide the numerator and denominator by z e to the minus x and you get z inverse e to the x minus eta. And lo and behold, we have a function of the variety that we had defined on the board corresponding to n minus 1 being 3/2 or n being 5/2. So what we have here is that beta P, depending on whatever statistics we are dealing with, is g over lambda cubed f 5/2 eta of z. Yes? AUDIENCE: [INAUDIBLE] PROFESSOR: OK. So what you have to do is always ensure what is happening to the boundary terms. 
Boundary terms will be a product of x to the 3/2 and this factor. At x equals to 0, x to the 3/s will ensure that you are 0. At x goes to infinity, the exponential will ensure that you're to 0. So the boundary terms are important and indeed you have to check them. And you can check that they are 0. OK? AUDIENCE: I think the factor that multiples x to the 3/2 is [INAUDIBLE]. But it's-- the argument still holds. PROFESSOR: The argument still holds because as x goes to infinity, e to the minus x goes to 0. Log of 1 is 0. In fact, you can ask, how does it go to 0? It goes to 0 exponentially. Exponentially it will [INAUDIBLE]. Yes. Thank you for the correction. OK? All right. There's one other thing that we can take a look at. The total energy. OK. So the energy in each one of these states is given by that formula. So if I ask, what is the total expectation value of energy? And again, in thermodynamics we expect the extensive quantities such as the energy should have, in an aerodynamic sense, well defined values so I won't write the expectation value. Is going to be a sum over k Nk epsilon k. And this will depend on the statistics that I'm looking at. And what I will have is that this will go over to an integral over k divided by 2pi cubed multiplied g and multiplied by V. I have epsilon of k which is h bar squared k squared over 2m. The expectation value of N, which is z inverse e to the minus beta-- sorry-- z inverse e to the plus beta h bar squared k squared over 2m minus 8. OK? So the simplest way to do this is to first of all divide by V. So I can define an energy density. Actually, since I'm going to change variables to this combination that is x, I notice that up to a factor of betas the numerator is x. So let me actually calculate the combination beta e over V. That only depends on this combination x. So what is it? It is g-- the integral over k, I know how to replace it with something that involves x-- is going to give me 2 over root pi lambda cubed. Integral 0 to infinity dx. From the change of variables, I have x to the 1/2. But I introduced one factor of x here because I have this energy factor in the numerator. So that changes this to x to the 3/2. And then again, I have z inverse e to the x minus 8. OK? So if is one of the integrals that we have defined. In fact, it is exactly what I have above for the pressure, except that I don't have the factor of 3/2 that appeared nicely to give the right normalization. So what I conclude is that this quantity is in fact g over lambda cubed times 3/2 of f 5/2 eta of z. OK? But they are exactly the same functions for energy and pressure. So you can see that, irrespective of what the chemical potential is doing, I can state that for the quantum system e is always 3/2 PV. Of course, we have seen this expression classically. Now what happens quantum mechanically is that the pressure starts to vary in some complicated way with density. But what we learn is that energy will also vary in the same complicated way with density, such that always this formula EV 3/2 PV is valid. There is some nice symmetry reason for why that is the case that you explore in one of the problems that you will see. OK? So these are the characteristics. In some sense we have solved the problem because we know both density and pressure in terms of chemical potential. So all we need to do is to invert this, get the chemical potential as a function of density, insert it in the formula for pressure and we have pressure as a function of density. 
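Since both the density and beta*P are given parametrically in terms of z by the same family of functions, one can sweep the fugacity numerically and compare the quantum pressures with the classical value beta*P = n at the same density. This is only a sketch; the fugacity values are arbitrary, and the same f_{5/2} that appears here also gives the energy through E = (3/2) P V.

```python
import math
import numpy as np

def f(m, z, eta, x_max=100.0, n=100001):
    """f_m^eta(z) by trapezoid quadrature (eta = -1 fermions, +1 bosons)."""
    x = np.linspace(1e-10, x_max, n)
    y = x ** (m - 1) / (np.exp(x) / z - eta)
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx / math.gamma(m)

# Sweep the fugacity; tabulate d = n lambda^3/g and beta P lambda^3/g in parallel.
for z in [0.05, 0.2, 0.5, 0.9]:
    d_f, p_f = f(1.5, z, -1), f(2.5, z, -1)   # fermions
    d_b, p_b = f(1.5, z, +1), f(2.5, z, +1)   # bosons
    # Ratio of quantum pressure to the classical ideal-gas pressure at the same density.
    print(z, p_f / d_f, p_b / d_b)            # > 1 for fermions, < 1 for bosons
```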
But since these are kind of implicit through these f functions, we don't quite know what that means. So the next stage is to really study how these f functions behave and how we can relate their behaviors to dependence on density. And we will see that we have to distinguish between low density and high density regimes. What does it mean? Well, you can see that both have players on the right hand side is dimensionless. The only dimension full combination that we have here is N-- well, we can make it dimensionless by multiplying by lambda cubed over g. So it turns out that whether or not you see things that are classical or things that are very quantum mechanical depend on this degeneracy factor. Which, within our context of statistical physics of gases, we define to be the communication N lambda cubed over g. And we will regain classical behaviors when degeneracy is small, density is small enough such that d is much less than 1. We will see totally new behavior that is appropriate to quantum mechanics in other limit where [? d ?] becomes large. OK? So this distinguishes classical versus quantum behavior. Now, we expect to regain classical behavior in the limit of very low densities. That's what we've been kind of saying from the very first lecture that low density gas has this ideal behavior where PV is Nkt. And, therefore, let's focus on the limit where V is much less than 1, which is classical. And since d is f 3/2 eta of z, we need the behavior of the function when it is small-- and I'll show you [INAUDIBLE] consistently that the function is small when it's argument is small. So we'll see that d less than 1 implies or is implied by z being less than 1. OK? So let's take a look at one of these functions, fm eta z, 1 over m minus 1 factorial, 0 to infinity dx, x to the m minus 1, z universe e to the x minus 1. And I'd like to explore the limit of this when z is small. So that's currently not very good because I have z inverse. So what I will do is I will multiply by z e to the minus x both numerator and denominator. So I will have m minus 1 factorial integral 0 to infinity dx x to the n minus 1. I have z e to the minus x. I have-- oops, there was an eta here. 1 minus eta z e to the minus x. OK? Then I say, well, the thing that is appearing in the denominator, I can write as a sum of a geometric series in powers of z. And that's, after all, what I'm attempting to do, make an expansion in powers of z. It starts with z to the minus x. And 1 over 1 minus eta z to the minus x I can write as the sum alpha going from 0 to infinity of eta z e to the minus x raised to the power of [INAUDIBLE]. This geometric series sums to what I have over there. OK? Let's rearrange this a little bit. I will take the sum over alpha outside. Well, I notice that actually I have here, this first term, which is different from this by a factor of eta. So let me multiply by an eta and get another eta. And then I have a sum that starts from 1 to infinity of eta z e to the minus x to the power of alpha. So I have eta z to the power of alpha. I have one other eta here. And then I have the integration over x, x to the m minus 1. I didn't write e to the minus x to the alpha because I pulled it out the integration. I will write it now. e to the minus alpha x. And I didn't write 1 over n minus 1 factor. Now, once more, this is an integration that you have seen. The result of doing integral dx x to the m minus 1 e to the minus x is simply m minus 1 factorial. 
Except that it is not e to the minus x but it's e to the minus alpha x. I can rescale everything by alpha and I will pull out a factor of alpha to there. OK? So what do we have? We have that fm eta of z as a [? power series ?] representation, alpha running from 1 to infinity, z to the alpha divided by alpha to the m. It's an alternating series that starts with plus and then becomes a minus, et cetera. So if I were to explicitly write this, you will see how nice it is. It is z plus eta z squared 2 to the m plus z cubed 3 to the m plus eta z to the 4 4 to the m. And you can see why it is such a nice thing that really deserves to have a function named after it. So we call that nice series [? to Vf. ?] Now, for future reference, note the following. If I take a derivative of this function with respect to z-- its argument-- then in this power series, say, z to the 4 becomes 4z cubed. 4 will cancel one of the 4's that I have here and it becomes 4 to the m minus 1. This becomes 3 to the m minus 1. So you can see that the coefficients that were powers of m become powers of m minus 1. Numerator has gone down by one factor. So let's multiply by z. So the numerator is now what it was before. The denominators, rather than being raised to the power of m, are raised to the power of m minus 1. So clearly, this is the same thing as fm minus 1. So basically these functions have this nice character that they are like a ladder. You take a derivative of one and you get the one that is indexed by one less. OK? AUDIENCE: Is there a factor of eta that comes out to be [INAUDIBLE]. PROFESSOR: In the derivatives? AUDIENCE: I mean, it seems to me that there's an eta-- like, in that relation that-- PROFESSOR: In this relation? AUDIENCE: Yeah. PROFESSOR: OK. So let's do it by step. If I take a derivative, divide dz, I get 1 plus eta z squared z 2 to the m minus 1 plus z squared' 3 to the m minus 1. Then I multiply by z, this becomes z, this becomes z squared, this becomes z cubed. And you can see that the etas are exactly where they should be. AUDIENCE: Would you explain why the index of summation starts from-- PROFESSOR: One? AUDIENCE: --one. PROFESSOR: OK. So let's take a look at this function. You can see there is a z inverse in the denominator. So I made it into a z here. So you can see that if z is very small, if will start with z. All right? Now what happened here? I wrote this as a sum of geometric series. I pulled out this factor of z that was in the numerator, and the denominator I wrote as a series that starts with 0. But then there is this z out front. I pulled this z e to the minus x in front here and then it starts with 1. I could have written it-- I can put this in here and write it as alpha plus 1 with alpha starting from 0, et cetera. Or I can write it [INAUDIBLE]. Or, alternatively, you can say, let's put the eta here and eta here. And then I see that this is the geometric series where the first term is this z to the minus x [INAUDIBLE]. OK? So I want to erase this board but keep this formula. So let me erase it and then rewrite it. So this I will rewrite as N lambda q over g, which is my degeneracy factor, is f 3/2 eta of z. And in the limit where z goes to small, I can use the expansion that I just derived, which is z plus eta z squared 2 to the-- what is it-- 3/2 plus z cubed. 3 to the 3/2 and so forth. You can see that precisely because as z goes to 0, this series goes to 0, that smallness of z is the same thing or implied by the smallness of z. But that z is equal to d only at the lowest order. 
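Both the power series and the ladder identity z d/dz f_m = f_{m-1} can be checked numerically for |z| < 1. A sketch with an arbitrarily chosen z:

```python
import numpy as np

def f_series(m, z, eta, terms=400):
    """f_m^eta(z) = sum_{a>=1} eta^(a+1) z^a / a^m  (the small-z expansion)."""
    a = np.arange(1, terms + 1)
    return np.sum(eta ** (a + 1) * z ** a / a ** m)

z, eta, h = 0.3, -1, 1e-6
lhs = z * (f_series(1.5, z + h, eta) - f_series(1.5, z - h, eta)) / (2 * h)
print(lhs, f_series(0.5, z, eta))   # ladder identity: z d/dz f_{3/2} = f_{1/2}

# Leading terms of the alternating (fermionic) series, as written above:
print(f_series(1.5, z, eta), z - z**2 / 2**1.5 + z**3 / 3**1.5)
```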
So what happens at higher orders, z-- inverting this-- is d minus eta z squared 2 to the 3/2 plus z cubed 3 to the 3/2, and so forth. And we saw this when I was doing the calculation for the interacting gas, that clearly to lowest order z equals to d, if I want to get the next order, I substitute the result of the previous order into this expression. So z I substitute for d, I will d minus eta d squared 2 to the 3/2 as the first contribution. And then something that is order of d cubed. And if I want the answer to the d cubed order, I take this whole thing that is that order of d squared and substitute in this series. And so what do I get? I will get d minus eta 2 to the 3/2, square of z-- which is the square of this quantity-- square of this quantity is d squared minus twice the product of these two terms. So I have twice eta d cubed 2 to the 3/2. And then square of this term, but that's order of e to the 4. I don't care for that. And then I have, from here, plus d cubed 3 to the 3/2 ignoring terms of the order of d to the 4th. OK? So z is-- AUDIENCE: [INAUDIBLE] z equals d minus and the d [INAUDIBLE]. PROFESSOR: Ah, great. Thank you. Minus. Minus. Good. Saves me having to go back later when the answer doesn't match. OK. Fine. Now let's organize terms. The lowest order I have d. At order of d squared I have still just this one term, minus eta d squared over 2 to the 3/2. We knew that was the correct result at order of d squared. At order of d cubed I have two terms. One is the product of these two terms. The etas disappear, minus 2, and this will give me 2 over 9 which is 1/4. And from other part, I have 1 over 3 to the 3/1 d cubed. And I haven't calculate things at order of d to the 4th. OK? Fine. So we have calculated in a power series that we can go to whatever order you like. This is z which is related to chemical potential as a function of this degeneracy that is simply related to density. Now, the formula for the pressure is up there. It is that eta P is g over lambda cubed f 5/2 eta of z. Right? So reorganizing this a little bit, the dimensionless form is beta P lambda cubed over g f 5/2 eta of z. f 5/2 we can make an expansion. It is z plus eta z squared 2 to the 5/2 plus z cubed 3 to the 5/2 and so forth. For z I substitute from the line above. So I have d minus eta d squared 2 to the 3/2 plus 1/4 minus 1 over 3 to the 3/2 d cubed. So that's the first term. The next term is eta 2 to the 5/2 z squared. So I have to square what is up here and here keeping terms to order of d cubed. So what do I have? I have d squared from squaring this term. And a term that is twice the product of these two terms. That's the only term that is order of d cubed. So I have minus 2 eta 2 to the 3/2 d cubed. And finally, z cubed over 3 to the 5/2 would be d cubed over 3 to the 5/2. And I haven't calculated anything at all of d to the 4th. OK? So what do I have? I have that beta P lambda cubed over g. Well, you can see that everything is going to be proportional to d d squared d cubed. So let's put out the factor of d which is m lambda cubed over g. So this is just one factor of d that I have pulled out. So that then I start with 1, and then the next correction I have an eta d squared 2 to the 3/2 here, eta d squared 2 to the 5/2 here with opposite sign. So this is twice bigger than that. So the sum total of them would be minus eta divided by 2 to the 5/2 times d squared-- one factor of d I have out front-- the other factor I will write here and lambda cubed over g. 
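The inversion of d = f_{3/2}^eta(z) can also be done numerically and compared with the cubic series just derived. A sketch for fermions at an assumed small degeneracy d = 0.05:

```python
import numpy as np

def f32(z, eta, terms=400):
    """Series for f_{3/2}^eta(z), valid for |z| < 1."""
    a = np.arange(1, terms + 1)
    return np.sum(eta ** (a + 1) * z ** a / a ** 1.5)

def invert(d, eta):
    """Solve d = f_{3/2}^eta(z) for z by bisection (adequate here for small d)."""
    lo, hi = 0.0, 0.999
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f32(mid, eta) < d else (lo, mid)
    return 0.5 * (lo + hi)

d, eta = 0.05, -1   # small degeneracy factor d = n lambda^3 / g, fermionic case
z_exact = invert(d, eta)
z_series = d - eta * d**2 / 2**1.5 + (0.25 - 3**-1.5) * d**3
print(z_exact, z_series)   # agree up to corrections of order d^4
```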
And then at order of d cubed, I have a bunch of terms. I have a 1/4 here, and from the product of these terms, I will have etas disappear, I have 2 divided by 2 to the 8/2 which is 2 to the 4th which is 16. So the product of these things will give me a minus 1/8. I have 1/4 minus 1/8 I will get 1/8. AUDIENCE: [INAUDIBLE]. PROFESSOR: OK. And then we have minus 3-- 1 over 3 to the 1/2 and this 3 to the 5/2. This is three times larger than this. This is 3 over 3 to the 5/2. Subtract one of them, I will be left minus 2 divided by 3 to the 5/2 [? N ?] lambda cubed over g squared. And so forth. So you can see that the form that we have is beta P-- the pressure-- starts with density. I essentially divide it through by lambda q over g. I start with the ideal gas result, n, and then I have a correction that is order of n squared. I can write it as B2 n squared. In view up our [INAUDIBLE] co-efficients. And we see that our B2 is minus eta times-- sorry. Minus eta 2 to the 5/2 times n lambda cubed over g. Remember that B2s, et cetera, had dimensions of volume. So if the volume is provided by lambda cubed, it is an additional factor. For fermions it's a positive pressure. So the pressure goes up because fermions repel each other. For bosons it is negative. Bosons attract each other. And we have actually calculated this. We did a calculation of the canonical form where we had the one exchange corrections. And if you go back to the notes or if you remember, we had calculated precisely this. But then I said that if I went and tried to calculate the next correction using the canonical formulation, I would have to keep track of diagrams that involve three exchanges and all kinds of things and it was complicated. But in this way of looking at things, you can see that we can very easily compute the [INAUDIBLE] co-efficients and higher orders. In particular, the third [INAUDIBLE] co-efficient we compute to be 1/8 minus 2 3 to the 5/2 lambda cubed over g squared. And you can keep going and calculate higher and higher order corrections. AUDIENCE: Question. PROFESSOR: Yes. AUDIENCE: So if you're doing this expansion for the z being close to 0-- PROFESSOR: Yes. AUDIENCE: --it appears that chemical potential is a large negative number. PROFESSOR: Yes. AUDIENCE: What kind of interaction does it imply between our grand canonical and [INAUDIBLE] and the system from which it takes particles? Because chemical potential represents energy threshold for the particle to move from there to-- PROFESSOR: OK. So as you say, it corresponds to being large and negative. Which means that, essentially, I think the more kind of physical way of looking at it is-- this z is e to the beta mu-- is the likelihood of grabbing and adding one particle to your system. And what we have is that this is much, much less than 1. And the way to achieve that is to say that it is very hard to extract and bring in other particle from whatever reservoir you have. And because it is so hard that you have such low density of particles in the system that you have. So for example, if you are in vacuum and you put a surface, and some atoms from the surface detach and form a gas in your container, the energetics of binding those atoms to the substrate particles was huge and few of them managed to escape. OK? All right. But this is only one limit. And this is the limit that we already knew. We are really interested in the other limit where we are quantum dominated. And clearly we want to go to the limit where the argument here is not small. 
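As a numerical spot-check of the virial result, one can compute d and beta*P*lambda^3/g from the exact series at a small fugacity and compare the pressure with the three-term virial form. A sketch; the fugacity is an arbitrary small value, and switching eta to +1 gives the bosonic case.

```python
import numpy as np

def f(m, z, eta, terms=400):
    a = np.arange(1, terms + 1)
    return np.sum(eta ** (a + 1) * z ** a / a ** m)

eta = -1                 # fermions; use +1 for bosons
z = 0.08                 # small fugacity: dilute, nearly classical gas
d = f(1.5, z, eta)       # d = n lambda^3 / g
bP = f(2.5, z, eta)      # beta P lambda^3 / g

# Virial form: beta P = n [1 - eta d / 2^(5/2) + (1/8 - 2/3^(5/2)) d^2 + ...]
virial = d * (1 - eta * d / 2**2.5 + (1/8 - 2 / 3**2.5) * d**2)
print(bP, virial)        # agree up to terms of order d^4
```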
I have to include a large number of terms in this series. And then I have to figure out what happens. I mean, how do I have to treat the cases where I want to solve this equation and the left hand side, d, is a large number? So how can I make the value of the function large? What does that imply about argument of the function? OK? So different functions will become large for different values of the argument. And as we will show for the case of bosons, these functions become large where the argument becomes 1. Whereas, for fermions, they become large when the argument itself becomes very large. So the behavior of these functions is very different. When they become large they become large at different points when we are thinking about bosons and fermions. And because of that, at this stage I have to separately address the two problems. So up to here, the high temperature limit, we sort of started from the same point and branched off slightly by going into boson and fermion direction. But when I go to the other limit, I have to discuss the two cases separately. So let's start with the case of fermions. So degenerate-- degenerating a sense of d being larger than 1 rather than character flaws. So what do we do in this limit? So what I need to know is to look at one of these functions. So let's say d n lambda cubed over g is f 3/2 minus of z which is 1 over this 1/2 factorial integral 0 to infinity dx x to the 1/2. And here I have a plus because I'm looking at the fermionic version of these. I have z inverse e to the x and I can write that as e to the x minus log z. OK? Now, essentially, if I look at this function-- actually, let's look at the function that is just 1 over e to the x minus log z plus 1 as a function of x. What does it do in the limit where z is large? In the limit where z is large, log z is also very large. And then what I have is something that if x is much less than log z-- essentially, this factor will be 0 and I have 1. Whereas if x is much greater than log z, then this is exponentially large quantity and I will have 0. So basically, at around log z, this switches from being very close to 0 to being 1. Indeed, that x equals 2 log z is exactly 1/2. So here is 1/2. It is switching very rapidly. And you can see that, essentially, the value of the function depends on the combination x minus log z. So if I were to make log z much larger, I'm essentially transporting this combination without changing its shape, either in one direction or the other direction. OK? And if I go to the limit, essentially that log z is very large, I will have 1 for a huge distance and then, at some point, I would switch to 0. And what I'm doing, therefore, is to essentially approximately integrating from 0 to log z x to the 1/2 dx. Which would give me x to the 3/2-- sorry, log z to the 3/2 over 3/2. And you can see that I can repeat this for any other value of m here. I would be integrating x to the m minus 1. So you would conclude that the limit as z is much, much larger than 1, of one of these functions fm minus of z, is obtained by integrating 0 to log z dx x to the m. I will get another factor of log z. And then I will get log z to the power of m divided by 1 over m multiplying by 1 over m minus 1 factorial. It will give me m factorial. Now, of course there will be correction to this because this function would be equal to this if I was integrating a step function. And I don't have a step function. I have something that gradually vanishes. 
Although, the interval over which it vanishes is scaling much smaller than log z. So what you can do, is you can do some manipulations and calculate corrections to this formula that become significant as log z is not exactly infinite but becomes smaller and smaller. It turns out that that will generate for you a series that is inverse powers of log z. And in fact, inverse even powers of log z. So that the next term will fall off as 1 over log z squared. The next term will fall off as 1 over z to the fourth and so forth. And the coefficient of the first one-- and I show you this in the notes, how to calculate it-- is pi squared over 6 times m m minus 1. The coefficient of the next term is 7 pi to the fourth 360 m m minus 1, m minus 2, m minus 3. And there's a formula for subsequent terms in the series, how they are generated. And this thing is known as the Sommerfeld expansion. OK? So, essentially, if you plot as a function of z one of these fm minuses of z's, what we have according to the previous expansion, as you go to z equals to 0 you have a linear function z. And then the next correction is minus z squared. You start to bend in this direction. And then you have higher order corrections. What we see, that at larger values, it keeps growing. And the growth at larger values is this. So here you have z minus z squared 2 to the power of m. Out here you have log z to the m over m factorial. And then this corrections from the Sommerfeld expansion. OK? And so once we have that function, which is the right hand side of this equality, for a particular density we are asking, what is the value of z? And at low densities we are evaluating solutions here. At high values of density we are picking up solutions on this other. OK? So let's imagine that we are indeed in that limit. So what do we have? We have that n lambda cubed over g is log z to the 3/2 over 3/2 factorial. That's the leading term. Sub-leading term is 1 plus pi squared over 6. m, in our case, is 3/2. m minus 1 would be 1/2 divided by log z squared. I will stop at this order. That's all we will need. And so I will use this to calculate z as a function n. Again, same procedure as before. I can invert this and write this as log z being 3/2 factorial n lambda cubed over g. so I multiplied by this. I have to raise to the 2/3 power-- the inverse of 3/2. I take this combination to the other side. So I have 1. This becomes plus pi squared over 8 1 over log z squared raised to the minus 2/3 power because I had to take it to the other side. OK. So at the lowest order, what I need to do is to calculate log z from the zeroed order term and then, at the next order, substitute that value of log z in here and get the next correction. So the lowest order correction will tell me that log z--- which, remind you, is simply beta mu-- is this combination raised to the 2/3 power. What is that communication? 3/2 factorial. 3/2 factorial we saw was 3/2 1/2 factorial-- which was root pi over 2-- so this is 3/2 factorial. I have n over g. Oh, sorry. n lambda cubed. So write it in this fashion. n over g. And then I have lambda cubed. Let me put this to the 2/3 power because lambda cubed to the 2/3 power is simply lambda squared. And lambda squared is going to be h squared divided by 2 pi mkT. So this is lambda cubed. What I'm going to do is notice that one of these h over 2pi's is in fact already an h bar. I'd like to write the result in terms of h bar squared divided by 2m. The 1 over kT I can write as beta. 
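The Sommerfeld expansion can be compared against a direct quadrature of the fermionic integral deep in the degenerate regime. A sketch only; the chosen value log z = 20 and the integration grid are arbitrary.

```python
import math
import numpy as np

def f_minus(m, z, x_max=200.0, n=200001):
    """Fermionic f_m^-(z) by trapezoid quadrature of x^(m-1) / (z^(-1) e^x + 1)."""
    x = np.linspace(1e-10, x_max, n)
    y = x ** (m - 1) / (np.exp(x - math.log(z)) + 1.0)
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx / math.gamma(m)

def sommerfeld(m, z):
    """Large-z expansion: (ln z)^m / m! * [1 + pi^2/6 m(m-1)/ln^2 z + 7 pi^4/360 ... ]."""
    L = math.log(z)
    c2 = math.pi**2 / 6 * m * (m - 1)
    c4 = 7 * math.pi**4 / 360 * m * (m - 1) * (m - 2) * (m - 3)
    return L**m / math.gamma(m + 1) * (1 + c2 / L**2 + c4 / L**4)

z = math.exp(20.0)   # log z = beta*mu = 20: deep in the degenerate regime
print(f_minus(1.5, z), sommerfeld(1.5, z))   # the two agree closely
```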
Ultimately, it is my intention to cancel that beta which is the 1 over kT in the beta that I have out here. And I get the value for mu. OK? To go from h squared over 2pi to h bar squared over 2m, I need a factor of 4pi squared. Those 4pi squared I will put out front in here. And they will change the result to something that is like this. It is n over g is going to be remaining there. I have pi by the time it gets in front over here becomes pi to the 3/2. Pi to the 3/2 multiplied by pi to the 1/2 becomes pi squared. You can convince yourself that the factors of 2 and all of those things manage to give you something that is 6pi squared n over g to the 2/3 power. Yes? AUDIENCE: So when [INAUDIBLE] chemical potential as no temperature [INAUDIBLE]. PROFESSOR: Exactly. When you go to 0 temperature, the chemical potential of a Fermi gas goes to a constant that you have actually seen-- and I just was about to make it clear to you what that constant is. So we get that limit as essentially this T goes to 0, which is another way of achieving the combination n lambda cubed over g goes to infinity. When T goes to 0, lambda goes to infinity, this combination becomes large. Canceling the betas, the limit of mu is something that potentially in solid state courses you have seen as the fermion energy. And the fermion energy is h bar squared over 2m. Some Fermi momentum squared. And hopefully the Fermi momentum is going to come out to be 6pi squared n over g to the 1/3. And one way that you usually get your Fermi momentum is you say that, what I need to do is to fill up all of this state up to some value of kf. And the number of states that there are is the volume of the sphere- 4pi over 3k f cubed, of course times the volume times the degeneracy coming from spins if I have multiple possibilities. And this should give me the total number n. Ah. I made some big mistake here. Can somebody point it out to me? How do we calculate Fermi momentum and Fermi [INAUDIBLE]? Not coming out the answer I want. AUDIENCE: One [INAUDIBLE] is a sphere, right? PROFESSOR: No. It's a sphere. AUDIENCE: [INAUDIBLE] PROFESSOR: Thank you. Divide by density of state. I multiplied by the V but I forgot the 2pi cubed here. OK? So when you divide 4pi divided by 8pi cubed, you will have 2pi squared times 3, which will give you 6pi squared. n over V is the density. Divided by g. So you can see that we get the right result. Yes? AUDIENCE: [INAUDIBLE] cubed. PROFESSOR: f cubed. Yes. That's right. So kf is 1/3 of that. Let's go step by step. A is 6pi squared n over g to the 1/3. kf squared would be the 2/3 power [? of g. ?] Yes? AUDIENCE: So you said that this is the lowest order term for the chemical potential. PROFESSOR: Exactly. Yes. AUDIENCE: But aren't there going to be maybe a second order temperature dependent term? PROFESSOR: Yes. So when we substitute that formula in here, we will find the value of the chemical potential. So I will draw it for you and then recalculate the result later. So this is the chemical potential of the Fermi system as a function of its temperature. What did we establish so far? We established that at zero temperature it is in fact epsilon f. We had established that at high temperature-- this was the discussion we had before-- it's large and negative. It is down here. So indeed the behavior of the chemical potential of the Fermi gas as a function of temperature is something like this. And so the chemical potential at zero temperature is epsilon f. 
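As a concrete check of epsilon_F = (hbar^2 / 2m)(6 pi^2 n / g)^(2/3), one can plug in numbers for a free-electron gas. A sketch; the copper conduction-electron density is an assumed textbook-style value, not something quoted in the lecture.

```python
import math

hbar = 1.054571817e-34    # J s
m_e  = 9.1093837015e-31   # electron mass, kg
kB   = 1.380649e-23       # J/K

def fermi_energy(n, g=2):
    """epsilon_F = hbar^2 / (2 m) * (6 pi^2 n / g)^(2/3) for a free-electron gas."""
    kF = (6 * math.pi**2 * n / g) ** (1.0 / 3.0)
    return hbar**2 * kF**2 / (2 * m_e)

n_cu = 8.5e28   # conduction-electron density of copper, m^-3 (assumed value)
eF = fermi_energy(n_cu)
print(eF / 1.602176634e-19)   # ~7 eV
print(eF / kB)                # Fermi temperature, ~8e4 K
```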
As you go to finite temperature, it will get reduced by a factor that I will just calculate. OK? So if I now substitute this result in here to get the next correction, what do I have? OK. So this zero to order term we established is the same thing as beta epsilon f. And then the next order term will be 1 plus-- this log z is going to be small-- minus 2/3 times this is going to give me minus pi squared over 6. Right? And then I have 1 over log z squared. One over log z is simply 1 over beta epsilon f which is the same thing as kT over epsilon f squared So what we see is that the chemical potential starts to be epsilon f and then it starts to go down parabolically in temperature. And the first correction is of the order of kT over epsilon f. Not surprisingly, it will go to 0 at the value T that is off the order of epsilon f over kT. So the chemical potential of the Fermi gas actually changes sign. It's negative and large at high temperature, it goes to the Fermi energy which is positive at low temperatures. So once we know this behavior or the chemical potential, we can calculate what the energy and pressure on. And again, at zero temperature you will have some energy which comes from filling out this Fermi C. And then we will get corrections to energy and pressure that we can calculate using the types of formulas that we have that relate pressure and energy to f 5/2 of z. And so we can start to calculate the corrections that you have to completely Fermi sphere. And how those corrections give you, say, the linear heat capacity is something that we will derive next time. |
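The full temperature dependence of the chemical potential sketched in this closing discussion can be obtained by numerically inverting n lambda^3/g = f_{3/2}^-(e^{beta mu}) at fixed density. The code below is a sketch (grid, bracket, and temperatures are arbitrary numerical choices); it reproduces mu close to epsilon_F at low T and large negative values at high T.

```python
import math
import numpy as np

def f32_minus(logz, x_max=200.0, n=200001):
    """f_{3/2}^-(z) by quadrature, parameterized by log z = beta * mu."""
    x = np.linspace(1e-10, x_max, n)
    y = np.sqrt(x) / (np.exp(x - logz) + 1.0)
    dx = x[1] - x[0]
    return (y.sum() - 0.5 * (y[0] + y[-1])) * dx / math.gamma(1.5)

def mu_over_eF(t):
    """Solve n lambda^3/g = f_{3/2}^-(e^{beta mu}) for mu, with t = kT / epsilon_F."""
    d = t ** -1.5 / math.gamma(2.5)   # n lambda^3 / g at this temperature
    lo, hi = -50.0, 50.0              # bracket for log z = beta * mu
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f32_minus(mid) < d else (lo, mid)
    return t * 0.5 * (lo + hi)        # mu / epsilon_F = t * log z

for t in [0.05, 0.1, 0.5, 1.0, 2.0]:
    print(t, mu_over_eF(t))   # ~ +1 at low t, turning negative around t ~ 1
```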
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 22_Ideal_Quantum_Gases_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Particles in quantum mechanics. In particular, the ones that are identical and non-interacting. So basically, we were focusing on a type of Hamiltonian for a system of N particles, which could be written as the sum of contributions that correspond respectively to particle 1, particle 2, particle N. So essentially, a sum of terms that are all the same. And one-particle terms is sufficient because we don't have interactions. So if we look at one of these H's-- so one of these one-particle Hamiltonians, we said that we could find some kind of basis for it. In particular, typically we were interested in particles in the box. We would label them with some wave number k. And there was an associated one-particle energy, which for the case of one-particle in box was h bar squared k squared over 2m. But in general, for the one-particle system, we can think of a ladder of possible values of energies. So there will be some k1, k2, k3, et cetera. They may be distributed in any particular way corresponding to different energies. Basically, you would have a number of possible states for one particle. So for the case of the particle in the box, the wave functions and coordinate space that we had x, k were of the form e to the i k dot x divided by square root of V. We allowed the energies where h bar squared k squared over 2m. And this discretization was because the values of k were multiples of 2 pi over l with integers in the three different directions. Assuming periodic boundary conditions or appropriate discretization for closed boundary conditions, or whatever you have. So that's the one-particle state. If the Hamiltonian is of this form, it is clear that we can multiply a bunch of these states and form another eigenstate for HN. Those we were calling product states. So you basically pick a bunch of these k's and you multiply them. So you have k1, k2, kN. And essentially, that would correspond, let's say, in coordinate representation to taking a bunch of x's and the corresponding k's and having a wave function of the form e to the i k alp-- sum over alpha k alpha x alpha, and then divided by V to the N over 2. So in this procedure, what did we do? We had a number of possibilities for the one-particle state. And in order to, let's say, make a two-particle state, we would pick two of these k's and multiply the corresponding wave functions. If you had three particles, we could pick another one. If you had four particles, we could potentially pick a second one twice, et cetera. So in general, basically we would put N of these crosses on these one-particle states that we've selected. Problem was that this was not allowed by quantum mechanics for identical particles. Because if we took one of these wave functions and exchanged two of the labels, x1 and x2, we could potentially get a different wave function. And in quantum mechanics, we said that the wave function has to be either symmetric or anti-symmetric with respect to exchange of a pair of particles. And also, whatever it implied for repeating this exchange many times to look at all possible permutations. 
So what we saw was that product states are good as long as you are thinking about distinguishable particles. But if you have identical particles, you had to appropriately symmetrize or anti-symmetrize these states. So what we ended up was a wabe of, for example, symmetrizing things for the case of fermions. So we could take a bunch of these k-values again. And a fermionic, or anti-symmetrized version, was then constructed by summing over all permutations. And for N particle, there would be N factorial permutations. Basically, doing a permutation of all of these indices k1, k2, et cetera, that we had selected for this. And for the case of fermions, we had to multiply each permutation with a sign that was plus for even permutations, minus for odd permutations. And this would give us N factorial terms. And the appropriate normalization was 1 over square root of N factorial. So this was for the case of fermions. And this was actually minus. I should have put in a minus here. And this would have been minus to the P. For the case of bosons, we basically dispensed with this factor of minus 1 to the P. So we had a Pk. Now, the corresponding normalization that we had here was slightly different. The point was that if we were doing this computation for the case of fermions, we could not allow a state where there is a double occupation of one of the one-particle state. Under exchange of the particles that would correspond to these k-values, I would get the same state back, but the exchange would give me a minus 1 and it would give me 0. So the fermionic wave function that I have constructed here, appropriately anti-symmetrized, exists only as long as there are no repeats. Whereas, for the case of bosons, I could put 2 over same place. I could put 3 somewhere else, any number that I liked, and there would be no problem with this. Except that the normalization would be more complicated. And we saw that appropriate normalization was a product over k nk factorial. So this was for fermions. This is for bosons. And the two I can submerge into one formula by writing a symmetrized or anti-symmetrized state, respectively indicated by eta, where we have eta is minus 1 for fermions and eta is plus 1 for bosons, which is 1 over square root of N factorial product over k nk factorials. And then the sum over all N factorial permutations. This phase factor for fermions and nothing for bosons of the appropriately permuted set of k's. And in this way of noting things, I have to assign values nk which are either 0 or 1 for fermions. Because as we said, multiple occupations are not allowed. But there is no restriction for bosons. Except of course, that in this perspective, as I go along this k-axis, I have 0, 1, 0, 2, 1, 3, 0, 0 for the occupations. Of course, what I need to construct whether I am dealing with bosons or fermions is that the sum over k nk is the total number of particles that I have. Now, the other thing to note is that once I have given you a picture such as this in terms of which one-particle states I want to look at, or which set of occupation numbers I have nk, then there is one and only one symmetrized or anti-symmetrized state. So over here, I could have permuted the k's in a number of possible ways. But as a result of symmetrization, anti-symmetrization, various ways of permuting the labels here ultimately come to the same set of occupation numbers. So it is possible to actually label the state rather than by the set of k's. By the set of nk's. 
It is kind of a more appropriate way of representing the system. So that's essentially the kinds of states that we are going to be using. Again, in talking about identical particles, which could be either bosons or fermions. Let's take a step back, remind you of something that we did before that had only one particle. Because I will soon go to many particles. But before that, let's remind you what the one particle in a box looked like. So indeed, in this case, the single-particle states were the ones that I told you before, 2 pi over l some set of integers. Epsilon k, that was h bar squared k squared over 2m. If I want to calculate the partition function for one particle in the box, I have to do a trace of e to the minus beta h for one particle. The trace I can very easily calculate in the basis in which this is diagonal. That's the basis that is parameterized by these k-values. So I do a sum over k of e to the minus beta h bar squared k squared over 2m. And then in the limit of very large box, we saw that the sum over k I can replace with V integral over k 2 pi cubed. This was the density of states in k. e to the minus beta h bar squared k squared over 2m. And this was three Gaussian integrals that gave us the usual formula of V over lambda cubed, where lambda was this thermal [INAUDIBLE] wavelength h root 2 pi mk. But we said that the essence of statistical mechanics is to tell you about probabilities of various micro-states, various positions of the particle in the box, which in the quantum perspective is probability becomes a density matrix. And we evaluated this density matrix in the coordinate representation. And in the coordinate representation, essentially what we had to do was to go into the basis in which rho is diagonal. So we had x prime k. In the k basis, the density matrix is just this formula. It's the Boltzmann weight appropriately normalized by Z1. And then we go kx. And basically, again replacing this with V integral d cubed k 2 pi cubed e to the minus beta h bar squared k squared over 2m. These two factors of xk and x prime k gave us a factor of e to the kx, xk I have as ik dot x prime minus x. Completing the square. Actually, I had to divide by Z1. There is a factor of 1 over V from the normalization of these things. The two V's here cancel, but Z1 is proportional to V. The lambda cubes cancel and so what we have is 1 over V e to the minus x minus x prime squared pi over lambda squared. So basically, what you have here is that we have a box of volume V. There is a particle inside at some location x. And the probability to find it at location x is the diagonal element of this entity. It's just 1 over V. But this entity has off-diagonal elements reflecting the fact that the best that you can do to localize something in quantum mechanics is to make some kind of a wave packet. OK. So this we did last time. What we want to do now is to go from one particle to the case of N particles. So rather than having 1x prime, I will have a whole bunch of x primes labeled 1 through N. And I want to calculate the N particle density matrix that connects me from set of points x to another set of points x prime. So if you like in the previous picture, this would have been x1 and x1 prime, and then I now have x2 and x2 prime, x3 and x3 prime, xN and xN prime. I have a bunch of different coordinates and I'd like to calculate that. OK. Once more, we know that rho is diagonal in the basis that is represented by these occupations of one-particle states. 
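A short numerical check of the two one-particle results just quoted (a sketch in arbitrary units, not from the lecture): the discrete sum over k reproduces Z1 = V / lambda^3 for a large box, with lambda = h / sqrt(2 pi m kB T) the thermal de Broglie wavelength.

import numpy as np

hbar, m, kB, T, L = 1.0, 1.0, 1.0, 1.0, 30.0
h = 2 * np.pi * hbar
beta = 1.0 / (kB * T)
lam = h / np.sqrt(2 * np.pi * m * kB * T)
V = L**3

n = np.arange(-40, 41)                       # truncate the sum; terms decay fast
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
k2 = (2 * np.pi / L)**2 * (nx**2 + ny**2 + nz**2)
Z1_sum = np.exp(-beta * hbar**2 * k2 / (2 * m)).sum()

print(Z1_sum, V / lam**3)                    # the two agree to good accuracy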
And so what I can do is I can sum over a whole bunch of plane waves. And I have to pick N factors of k out of this list in order to make one of these symmetrized or anti-symmetrized wave functions. But then I have to remember, as I said, that I should not over-count distinct set of k-values because permutations of these list of k's that I have over here, because of symmetrization or anti-symmetrization, will give me the same state. So I have to be careful about that. Then, I go from x prime to k. Now, the density matrix in the k-basis I know. It is simply e to the minus beta, the energy which is sum over alpha h bar squared k alpha squared over 2m. So I sum over the list of k alphas that appear in this series. There will be n of them. I have to appropriately normalize that by the N-particle partition function, which we have yet to calculate. And then I go back from k to x. Now, let's do this. The first thing that I mentioned last time is that I would, in principle, like to sum over k1 going over the entire list, k2 going the entire list, k3 going over the entire list. That is, I would like to make the sum over k's unrestricted. But then I have to take into account the over-counting that I have. If I am looking at the case where all of the k's are distinct-- they don't show any double occupancy-- then I have over-counted by the number of permutations. Because any permutation would have given me the same number. So I have to divide by the number of permutations to avoid the over-counting due to symmetrization here. Now, when I have something like this, which is a multiple occupancy, I have overdone this division. I have to multiply by this factor, and that's the correct number of over-countings that I have. And as I said, this was a good thing because the quantity that I had the hardest time for, and comes in the normalizations that occurs here, is this factor of 1 over nk factorial. Naturally, again, all of these things do depend on the symmetry. So I better make sure I indicate the index. Whether I'm calculating this density matrix for fermions or bosons, it is important. In either case-- well, what I need to do is to do a summation over P here for this one and P prime here or P prime here and P here. It doesn't matter, there's two sets of permutations that I have to do. In each case, I have to take care of this eta P, eta P prime. And then the normalization. So I divide by twice, or the square of the square root. I get the N factorial product over k nk factorial. And very nicely, the over-counting factor here cancels the normalization factor that I would have had here. So we got that. Now, what do we have? We have P prime permutation of these objects going to x, and then we have here P permutation of these k numbers going to x. I guess the first one I got wrong. I start with x prime. Go through P prime to k. And again, symmetries are already taken into account. I don't need to write that. And I have the factor of e to the minus beta h bar squared sum over alpha k alpha squared over 2m divided by ZN. OK, so let's bring all of the denominator factors out front. I have a ZN. I have an N factorial squared. Two factors of N factorial. I have a sum over two sets of permutations P and P prime. The product of the associated phase factor of their parities, and then I have this integration over k's. Now, unrestricted. Since it is unrestricted, I can integrate independently over each one of the k's, or sum over each one of them. 
When I sum, the sum becomes the integral over d cubed k alpha divided by 2 pi-- yeah, 2 pi cubed V. Basically, the density in replacing the sum over k alpha with the corresponding integration. So basically, this set of factors is what happened to that. OK, what do we have here? We have e to the i x-- well, let's be careful here. I have e to the i x prime alpha acting on k of p prime alpha because I permuted the k-label that went with, say, the alpha component here with p prime. From here, I would have minus because it's the complex conjugate. I have x alpha k p alpha, because I permuted this by k. I have one of these factors for each V. With each one of them, there is a normalization of square root of V. So the two of them together will give me V. But that's only one of the N-particle So there are N of them. So if I want, I can extend this product to also encompass this term. And then having done so, I can also write here e to the minus beta h bar squared k alpha squared over 2m within the product. AUDIENCE: [INAUDIBLE] after this-- is it quantity xk minus [INAUDIBLE]. PROFESSOR: I forgot an a here. What else did I miss out? AUDIENCE: [INAUDIBLE] quantity. PROFESSOR: So I forgot the i. OK, good? So the V's cancel out. All right, so that's fine. What do we have? We have 1 over ZN N factorial squared. Two sets of permutations summed over, p and p prime. Corresponding parities eta p eta of p prime. And then, I have a product of these integrations that I have to do that are three-dimensional Gaussians for each k alpha. What do I get? Well, first of all, if I didn't have this, if I just was doing the integration of e to the minus beta h bar squared k squared over 2m, I did that already. I get a 1 over lambda cubed. So basically, from each one of them I will get a 1 over lambda cubed. But the integration is shifted by this amount. Actually, I already did the shifted integration here also for one particle. So I get the corresponding factor of e to the minus-- ah. I have to be a little bit careful over here because what I am integrating is over k alpha squared. Whereas, in the way that I have the list over here, I have x prime alpha and x alpha, but a different k playing around with each. What should I do? I really want this integration over k alpha to look like what I have over here. Well, as I sum over all possibilities in each one of these terms, I am bound to encounter k alpha. Essentially, I have permuted all of the k's that I originally had. So the k alpha has now been sent to some other location. But as I sum over all possible alpha, I will hit that. When I hit that, I will find that the thing that was multiplying k alpha is the inverse permutation of alpha. And the thing that was multiplying k alpha here is the inverse permutation of p. So then I can do the integration over k alpha easily. And so what do I have? I have x prime of p prime inverse alpha-- the inverse permutation-- minus x of p inverse alpha squared pi over lambda squared. Now, this is still inconvenient because I am summing over two N factorial sets of permutations. And I expect that since the sum only involves comparison of things that are occurring N times, as I go over the list of N factorial permutation squared, I will get the same thing appearing twice. So it is very much like when we are doing an integration over x and x prime, but the function only depends on x minus x prime. We get a factor of volume. 
Here, it is easy to see that one of these sums I can very easily do because it is just a repetition of all of the results that I have previously. And there will be N factorial such terms. So doing that, I can get rid of one of the N factorials. And I will have only one permutation left, Q. And what will appear here would be the parity of this Q, which is the combination, or if you like, the relative permutation of these two permutations. And I have an exponential of minus sum over alpha x alpha minus x prime Q alpha squared pi over lambda squared. And I think I forgot a factor of lambda to the 3N. This factor of lambda to the 3N. So this is actually the final result. And let's see what that precisely means for two particles. So let's look at two particles. So for two particles, I will have on one side coordinates of 1 prime and 2 prime. On the right-hand side, I have coordinates 1 and 2. And let's see what this density matrix tells us. It tells us that to go from x1 prime x2 prime, a two-particle density matrix connecting to x1 x2 on the other side, I have 1 over the two-particle partition function that I haven't yet calculated. Lambda to the sixth. N factorial in this case is 2. And then for two things, there are two permutations. So the identity maps 1 to 1, 2 to 2. And therefore, what I will get here would be exponential of minus x1 minus x1 prime squared pi over lambda squared minus x2 minus x2 prime squared pi over lambda squared. So that's Q being identity, and identity has essentially 0 parity. It's an even permutation. The next thing is when I exchange 1 and 2. That would have odd parity. So I would get minus 1 for fermions, plus for bosons. And what I would get here is exponential of minus x1 minus x2 prime squared pi over lambda squared minus x2 minus x1 prime squared pi over lambda squared. So essentially, one of the terms-- the first term is just the square of what I had before for one particle. I take the one-particle result, going from 1 to 1 prime, going from 2 to 2 prime, and multiply them together. But then you say, I can't tell apart 2 prime and 1 prime. Maybe the thing that you are calling 1 prime is really 2 prime and vice versa. So I have to allow for the possibility that rather than x1 prime here, I should put x2 prime and the other way around. And this also corresponds to a permutation that is an exchange. It's an odd parity and will give you something like that. You say, OK, I have no idea what that means. I'll tell you. OK, you were happy when I put x prime and x here because that was the probability to find the particle somewhere. So let me look at the diagonal term here, which is a probability. This should give me the probability to find one particle at position x1, one particle at position x2. Because the particles were non-interacting, one particle-- it could be anywhere. I had the 1 over V. Is it 1 over V squared or something like that? Well, we find that there is a factor out front that we haven't yet evaluated. It turns out that this factor will give me a 1 over V squared. And if I set x1 prime to x1, x2 prime to x2, which is what I've done here, this factor becomes 1. But then the other factor will give me eta e to the minus 2 pi over lambda squared x1 minus x2 squared. So the physical probability to find one particle-- or more correctly, a wave packet here and a wave packet there-- is not 1 over V squared. It's some function of the separation between these two particles. So that separation is contained here.
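For reference, one compact way to write the blackboard result just described (with eta to the Q denoting eta raised to the parity of the permutation Q) is

\[
\langle \{x\}|\rho_N|\{x'\}\rangle \;=\; \frac{1}{Z_N\, N!\, \lambda^{3N}} \sum_{Q} \eta^{Q}\, \exp\!\Big[-\frac{\pi}{\lambda^{2}}\sum_{\alpha}\big(x_\alpha - x'_{Q\alpha}\big)^{2}\Big],
\]

and for two particles the diagonal element becomes, to leading order in lambda cubed over V,

\[
\langle x_1\, x_2|\rho_2|x_1\, x_2\rangle \;\simeq\; \frac{1}{V^{2}}\Big[1 + \eta\, e^{-2\pi (x_1 - x_2)^{2}/\lambda^{2}}\Big].
\]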
If I really call that separation to be r, this is an additional weight that depends on r. This is r squared. So you can think of this as an interaction, which is because solely of quantum statistics. And what is this interaction? This interaction V of r would be minus kT log of 1 plus eta e to the minus 2 pi r squared over lambda squared. I will plot out for you what this V or r looks like as a function of how far apart the centers of these two wave packets are. You can see that the result depends on eta. If eta is minus 1, which is for the case of fermions, this is 1 minus something. It's something that is less than 1. [INAUDIBLE] would be negative. So the whole potential would be positive, or repulsive. At large distances, indeed it would be exponentially going to 0 because I can expand the log at large distances. So here I have a term that is minus 2 pi r squared over lambda squared. As I go towards r equals to 0, actually things become very bad because r goes to 0 I will get 1 minus 1 and the log will diverge. So basically, there is, if you like, an effective potential that says you can't put these two fermions on top of each other. So there is a statistical potential. So this is for eta minus 1, or fermions. For the case of bosons, eta plus 1. It is log of 1 plus something, so it's a positive number inside the log. The potential will be attractive. And it will actually saturate to a value of kT log 2 when r goes to 0. So this is again, eta of plus 1 for the case of bosons. So the one thing that this formula does not have yet is the value for this partition function ZN. It gives you the qualitative behavior in either case. And let's calculate what ZN is. Well, basically, that would come from noting that the trace of rho has to be 1. So ZN is trace of e to the minus beta H. And essentially, I can take this ZN to the other side and evaluate this as x e to the minus beta H x. That is, I can calculate the diagonal elements of this matrix that I have calculated-- that I have over there. So there is an overall factor of 1 over lambda cubed to the power of N. I have N factorial. And then I have a sum over permutations Q eta of Q. The diagonal element is obtained by putting x prime to be the same as x. So I have exponential of minus x-- sum over alpha x alpha minus x of Q alpha. I set x prime to be the same as x. Squared. And then there's an overall pi over lambda squared. And if I am taking the trace, it means that I have to do integration over all x's. So I'm evaluating this trace in coordinate basis, which means that I should put x and x prime to be the same for the trace, and then I have to sum or integrate over all possible values of x. So let's do this. I have 1 over N factorial lambda cubed raised to the power of N. OK. Now I have to make a choice because I have a whole bunch of terms because of these permutations. Let's do them one by one. Let's first do the case where Q is identity. That is, I map everybody to themselves. Actually, let me write down the integrations first. I will do the integrations over all pairs of coordinates of these Gaussians. These Gaussians I will evaluate for different permutations. Let's look at the case where Q is identity. When Q is identity, essentially I will put all of the x prime to be the same as x. It is like what I did here for two particles and I got 1. I do the same thing for more than one particle. I will still get 1. Then, I will do the same thing that I did over here. Here, the next term that I did was to exchange 1 and 2. 
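A tiny sketch of this statistical potential (an illustration in units where kT = lambda = 1, not from the lecture), showing the repulsive divergence for fermions and the attractive saturation at minus kT log 2 for bosons:

import numpy as np

def v(r, eta):
    # v(r) = -kT ln(1 + eta exp(-2 pi r^2 / lambda^2)), here with kT = lambda = 1
    return -np.log(1.0 + eta * np.exp(-2 * np.pi * r**2))

r = np.array([0.01, 0.2, 0.5, 1.0, 2.0])
print(v(r, eta=-1))   # fermions: positive, grows without bound as r -> 0
print(v(r, eta=+1))   # bosons: negative, approaches -ln(2) as r -> 0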
So this became x1 minus x2. I'll do the same thing here. I look at the case where Q corresponds to exchange of particles 1 and 2. And then that will give me a factor which is e to the minus pi over lambda squared x1 minus x2 squared. There are two of these making together 2 pi over lambda squared, which I hope I had there, too. But then there was a whole bunch of other terms that I can do. I can exchange, let's say, 7 and 9. And then I will get here 2 pi over lambda squared x7 minus x9 squared. And there's a whole bunch of such exchanges that I can make in which I just switch between two particles in this whole story. And clearly, the number of exchanges that I can make is the number of pairs, N N minus 1 over 2. Once I am done with all of the exchanges, then I have to go to the next thing that doesn't have an analog here for two particles. But if I take three particles, I can permute them like a triangle. So presumably there would be next set of terms, which is a permutation that is like 1, 2, 3, 2, 3, 1. There's a bunch of things that involve two permutations, four permutations, and so forth. So there is a whole list of things that would go to here where these two-particle exchanges are the simplest class. Now, as we shall see, there is a systematic way of looking at things where the two-particle exchanges are the first correction due to quantum effects. Three-particle exchanges would be higher-order corrections. And we can systematically do them in order. So let's see what happens if we compare the case where there is no exchange and the case where there is one exchange. When there is no exchange, I am essentially integrating over each position over the volume. So what I would get is V raised to the power of N. The next term? Well, I have to do the integrations. The integrations over x3, x4, x5, all the way to x to the N, there is no factors. So they will give me factors of V. And there are N minus 2 of them. And then I have to do the integration over x1 and x2 of this factor, but it's only a function of the relative coordinate. So there is one other integration that I can trivially do, which is the center of mass gives me a factor of V. And then I am left with the integral over the relative coordinate of e to the minus 2 pi r squared over lambda squared. And I forgot-- it's very important. This will carry a factor of eta because any exchange is odd. And so there will be a factor of eta here. And I said that I would get the same expression for any of my N N minus 1 over 2 exchanges. So the result of all of these exchange calculations would be the same thing. And then there would be the contribution from three-body exchange and so forth. So let's re-organize this. I can pull out the factor of V to the N outside. So I would have V over lambda cubed to the power of N. So the first term is 1. The next term has the parity factor that distinguishes bosons and fermions, goes with a multiplicity of pairs which is N N minus 1 over 2. Since I already pulled out a factor of V to the N and I really had V to the N minus 1 here, I better put a factor of 1 over V here. And then I just am left with having to evaluate these Gaussian integrals. Each Gaussian integral will give me 2 pi times the variance, which is lambda squared divided by 2 pi. And then there's actually a factor of 2. And there are three of them, so I will have 3/2. So what I get here is lambda cubed divided by 2 to the 3/2. 
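Writing the Gaussian integral quoted here out once for clarity:

\[
\int d^{3}r\; e^{-2\pi r^{2}/\lambda^{2}} \;=\; \Big(\frac{\pi}{2\pi/\lambda^{2}}\Big)^{3/2} \;=\; \Big(\frac{\lambda^{2}}{2}\Big)^{3/2} \;=\; \frac{\lambda^{3}}{2^{3/2}},
\]

so each of the N(N-1)/2 pair exchanges contributes eta times V to the N minus 1 times lambda cubed over 2 to the 3/2 to the sum over permutations.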
Now, you can see that any time I go further in this series of exchanges, I will have more of these Gaussian factors. And whenever I have a Gaussian factor, I have an additional integration to do that has an x minus something squared in it. I will lose a factor of V. I don't have that factor of V. And so subsequent terms will be even smaller in powers of V. And presumably, compensated by corresponding factors of lambda squared-- lambda cubed. Now, first thing to note is that in the very, very high temperature limit, lambda goes to 0. So I can forget even this correction. What do I get? I get 1 over N factorial V over lambda cubed to the power of N. Remember that many, many lectures back we introduced by hand the factor of 1 over N factorial for measuring phase spaces of identical particles. And I promise to you that we would get it when we did identical particles in quantum mechanics, so here it is. So automatically, we did the calculation, keeping track of identity of particles at the level of quantum states. Went through the calculation and in the high temperature limit, we get this 1 over N factorial emerging. Secondly, we see that the corrections to ideal gas behavior emerge as a series in powers of lambda cubed over V. And for example, if I were to take the log of the partition function, I would get log of what I would have had classically, which is this V over lambda cubed to the power of N divided by N factorial. And then the log of this expression. And this I'm going to replace with N squared. There is not that much difference. And since I'm regarding this as a correction, log of 1 plus something, I will replace with the something eta N squared 2V lambda cubed 2 to the 3/2 plus higher order. What does this mean? Once we have the partition function, we can calculate pressure. Reminding you that beta p was the log Z by dV. The first part is the ideal gas that we had looked at classically. So once I go to the appropriate large-end limit of this, what this gives me is the density n over V. And then when I look at the derivative here, the derivative of 1/V will give me a minus 1 over V squared. So I will get minus eta. N over V, the whole thing squared. So I will have n squared lambda cubed 2 to the 5/2, and so forth. So I see that the pressure of this ideal gas with no interactions is already different from the classical result that we had calculated by a factor that actually reflects the statistics. For fermions eta of minus 1, you get an additional pressure because of the kind of repulsion that we have over here. Whereas, for bosons you get an attraction. You can see that also the thing that determines this-- so basically, this corresponds to a second Virial coefficient, which is minus eta lambda cubed 2 to the 5/2, is the volume of these wave packets. So essentially, the corrections are of the order of n lambda cubed that is within one of these wave packets how many particles you will encounter. As you go to high temperature, the wave packets shrink. As you go to low temperature, the wave packets expand. If you like, the interactions become more important and you get corrections to ideal gas wave. AUDIENCE: You assume that we can use perturbation, but the higher terms actually had a factor [INAUDIBLE]. And you can't really use perturbation in that. PROFESSOR: OK. So what you are worried about is the story here, that I took log of 1 plus something here and I'm interested in the limit of n going to infinity, that finite density n over V. 
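As an order-of-magnitude illustration (a sketch with example numbers, not from the lecture; the helium-4 mass and the density are only placeholders), the size of this first quantum correction, minus eta n lambda cubed over 2 to the 5/2, is controlled by the degeneracy parameter n lambda cubed:

import numpy as np

kB = 1.380649e-23          # J/K
h = 6.62607015e-34         # J s
m = 6.6464731e-27          # kg, roughly the helium-4 mass (example value only)

def relative_correction(n, T, eta):
    # leading correction to beta*P, relative to the classical value n
    lam = h / np.sqrt(2 * np.pi * m * kB * T)
    return -eta * n * lam**3 / 2**2.5

n = 2.5e25                                  # about 1 atm worth of gas at 300 K
for T in (300.0, 4.0, 0.5):
    print(T, relative_correction(n, T, eta=+1))
# tiny at room temperature and growing as T drops; a real gas would of course
# stop being ideal long before the correction becomes of order one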
So already in that limit, you would say that this factor really is overwhelmingly larger than that. And as you say, the next factor will be even larger. So what is the justification in all of this? We have already encountered this same problem when we were doing these perturbations due to interactions. And the answer is that what you really want to ensure is that not log Z, but Z has a form that is e to the N something. And that something will have corrections, potentially that are powers of N, the density, which is N over V. And if you try to force it into a perturbation series such as this, naturally things like this happen. What does that really mean? That really means that the correct thing that you should be expanding is, indeed, log Z. If you were to do the kind of hand-waving that I did here and do the expansion for Z, if you also try to do it over here you will generate terms that look kind of at the wrong order. But higher order terms that you would get would naturally conspire so that when you evaluate log Z, they come out right. You have to do this correctly. And once you have done it correctly, then you can rely on the calculation that you did before as an example. And we did it correctly when we were doing these cluster expansions and the corresponding calculation we did for Q. We saw how the different diagrams were appearing in both Q and the log Q, and how they could be summed over in log Q. But indeed, this mathematically looks awkward and I kind of jumped a step in writing log of 1 plus something that is huge as if it was a small number. All right. So we have a problem. We want to calculate the simplest system, which is the ideal gas. So classically, we did all of our calculations first for the ideal gas. We had exact results. Then, let's say we had interactions. We did perturbations around that and all of that. And we saw that having to do things for interacting systems is very difficult. Now, when we start to do calculations for the quantum problem, at least in the way that I set it up for you, it seems that quantum problems are inherently interacting problems. I showed you that even at the level of two particles, it is like having an interaction between bosons and fermions. For three particles, it becomes even worse because it's not only the two-particle interaction. Because of the three-particle exchanges, you would get an additional three-particle interaction, four-particle interaction, all of these things emerge. So really, if you want to look at this from the perspective of a partition function, we already see that the exchange term involved having to do a calculation that is equivalent to calculating the second Virial coefficient for an interacting system. The next one, for the third Virial coefficient, I would need to look at the three-body exchanges, kind of like the point clusters, four-point clusters, all kinds of other things are there. So is there any hope? And the answer is that it is all a matter of perspective. And somehow it is true that these particles in quantum mechanics because of the statistics are subject to all kinds of complicated interactions. But also, the underlying Hamiltonian is simple and non-interacting. We can enumerate all of the wave functions. Everything is simple. So by looking at things in the right basis, we should be able to calculate everything that we need. 
So here, I was kind of looking at calculating the partition function in the coordinate basis, which is the worst case scenario because the Hamiltonian is diagonal in the momentum basis. So let's calculate ZN trace of e to the minus beta H in the basis in which H is diagonal. So what are the eigenvalues and eigenfunctions? Well, the eigenfunctions are the symmetrized/anti-symmetrized quantities. The eigenvalues are simply e to the minus beta H bar squared k alpha squared over 2m. So this is basically the thing that I could write as the set of k's appropriately symmetrized or anti-symmetrized e to the minus beta sum over alpha H bar squared k alpha squared over 2m k eta. Actually, I'm going to-- rather than go through this procedure that we have up there in which I wrote these, what I need to do here is a sum over all k in order to evaluate the trace. So this is inherently a sum over all sets of k's. But this sum is restricted, just like what I had indicated for you before. Rather than trying to do it that way, I note that these k's I could also write in terms of these occupation numbers. So equivalently, my basis would be the set of occupation numbers times the energy. The energy is then e to the minus beta sum over k epsilon k nk, where epsilon k is this beta H bar squared k alpha squared over 2m. But I could do this in principle for any epsilon k that I have over here. So the result that I am writing for you is more general. Then I sandwich it again, since I'm calculating the trace, with the same state. Now, the states have this restriction that I have over there. That is, for the case of fermions, my nk can be 0 or 1. But there is no restriction for nk on the bosons. Except, of course, that there is this overall restriction that the sum over k nk has to be N because I am looking at N-particle states. Actually, I can remove this because in this basis, e to the minus beta H is diagonal. So I can, basically, remove these entities. And I'm just summing a bunch of exponentials. So that is good because I should be able to do for each nk a sum of e to something nk. Well, the problem is this that I can't sum over each nk independently. Essentially in the picture that I have over here, I have some n1 here. I have some n2 here. Some n3 here, which are the occupation numbers of these things. And for that partition function, I have to do the sum of these exponentials e to the minus epsilon 1 n1, e to the minus epsilon 2 n2. But the sum of all of these n's is kind of maxed out by N. I cannot independently sum over them going over the entire range. But we've seen previously how those constraints can be removed in statistical mechanics. So our usual trick. We go to the ensemble in which n can take any value. So we go to the grand canonical prescription. We remove this constraint on n by evaluating a grand partition function Q, which is a sum over all N of e to the beta mu N ZN. So we do, essentially, a Laplace transform. We exchange our n with the chemical potential mu. Then, this constraint no longer we need to worry about. So now I can sum over all of the nk's without worrying about any constraint, provided that I multiply with e to the beta mu n, which is a sum over k nk. And then, the factor that I have here, which is e to the minus beta sum over k epsilon of k nk. So essentially for each k, I can independently sum over its nk of e to the beta mu minus epsilon of k nk. Now, the symmetry issues remain. 
This answer still depends on whether or not I am calculating things for bosons or fermions because these sums are differently constrained whether I'm dealing with fermions. In which case, nk is only 0 or 1. Or bosons, in which case there is no constraint. So what do I get? For the case of fermions, I have a Q minus, which is product over all k. And for each k, the nk takes either 0 or 1. So if it takes 0, I will write e to the 0, which is 1. Or it takes 1. It is e to the beta mu minus epsilon of k. For the case of bosons, I have a Q plus. Q plus is, again, a product over Q. In this case, nk going from 0 to infinity, I am summing a geometric series that starts as 1, and then the subsequent terms are smaller by a factor of beta mu minus epsilon of k. Actually, for future reference note that I would be able to do this geometric sum provided that this combination beta mu minus epsilon of k is negative. So that the subsequent terms in this series are decaying. Typically, we would be interested in things like partition functions, grand partition functions. So we have something like log of Q, which would be a sum over k. And I would have either the log of this quantity or the log of this quantity with a minus sign. I can combine the two results together by putting a factor of minus eta because in taking the log, over here for the bosons I would pick a factor of minus 1 because the thing is in the denominator. And then I would write the log of 1. And then I have in both cases, a factor which is e to the beta mu minus epsilon of k. But occurring with different signs for the bosons and fermions, which again I can combine into a single expression by putting a minus eta here. So this is a general result for any Hamiltonian that has the characteristic that we wrote over here. So this does not have to be particles in a box. It could be particles in a harmonic oscillator. These could be energy levels of a harmonic oscillator. All you need to do is to make the appropriate sum over the one-particle levels harmonic oscillator, or whatever else you have, of these factors that depend on the individual energy levels of the one-particle system. Now, one of the things that we will encounter having made this transition from canonical, where we knew how many particles we had, to grand canonical, where we only know the chemical potential, is that we would ultimately want to express things in terms of the number of particles. So it makes sense to calculate how many particles you have given that you have fixed the chemical potential. So for that we note the following. That essentially, we were able to do this calculation for Q because it was a product of contributions that we had for the individual one-particle states. So clearly, as far as this normalization is concerned, the individual one-particle states are independent. And indeed, what we can say is that in this ensemble, there is a classical probability for a set of occupation numbers of one particle states, which is simply a product over the different one-particle states of e to the beta mu minus epsilon k nk appropriately normalized. And again, the restriction on n's being 0 or 1 for fermions or anything for bosons would be implicit in either case. But in either case, essentially the occupation numbers are independently taken from distributions that I've discussed [INAUDIBLE]. So you can, in fact, independently calculate the average occupation number that you have for each one of these single-particle states. 
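A minimal numerical sketch of this combined formula (an illustration in arbitrary units, not from the lecture), evaluating log Q = minus eta times the sum over k of log(1 minus eta e to the beta(mu minus epsilon k)) over the discrete levels of a box for both statistics; mu is kept below the lowest level so that the bosonic series converges:

import numpy as np

hbar, m, L, T, mu = 1.0, 1.0, 10.0, 1.0, -0.5
beta = 1.0 / T

n = np.arange(-20, 21)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
eps = hbar**2 * (2 * np.pi / L)**2 * (nx**2 + ny**2 + nz**2) / (2 * m)

for eta, name in ((-1, "fermions"), (+1, "bosons")):
    x = np.exp(beta * (mu - eps))      # requires mu below the lowest epsilon for bosons
    lnQ = -eta * np.log(1.0 - eta * x).sum()
    print(name, lnQ)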
And it's clear that you could get that by, for example, bringing down a factor of nk here. And you can bring down a factor of nk by taking a derivative of Q with respect to beta epsilon k with a minus sign and normalizing it, so you would have log. So you would have an expression such as this. So you basically would need to calculate, since you are taking derivative with respect to epsilon k the corresponding log for which epsilon k appears. Actually, for the case of fermions, really there are two possibilities. n is either 0 or 1. So you would say that the expectation value would be when it is 1, you have e to the beta epsilon of k minus mu. Oops. e to the beta mu minus epsilon of k. The two possibilities are 1 plus e to the beta mu minus epsilon of k. So when I look at some particular state, it is either empty. In which case, contributes 0. Or, it is occupied. In which case, it contributes this weight, which has to be appropriately normalized. If I do the same thing for the case of bosons, it is a bit more complicated because I have to look at this series rather than geometric 1 plus x plus x squared plus x cubed is 1 plus x plus 2x squared plus 3x cubed, which can be obtained by taking the derivative of the appropriate log. Or you can fall back on your calculations of geometric series and convince yourself that it is essentially the same thing with a factor of minus here. So this is fermions and this is bosons. And indeed, I can put the two expressions together by dividing through this factor in both of them and write it as 1 over Z inverse e to the beta epsilon of k minus eta, where for convenience I have introduced Z to be the contribution e to the beta. So for this system of non-interacting particles that are identical, we have expressions for log of the grand partition function, the grand potential. And for the average number of particles, which is an appropriate derivative of this, expressed in terms of the single-particle energy levels and the chemical potential. So next time, what we will do is we will start this with this expression for the case of the particles in a box to get the pressure of the ideal quantum gas as a function of mu. But we want to write the pressure as a function of density, so we will invert this expression to get density as a function of-- chemical potential as a function of density [INAUDIBLE] here. And therefore, get the expression for pressure as a function of density. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 14_Classical_Statistical_Mechanics_Part_3.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MEHRAN KARDAR: You decide at first look at the our simple system, which was the ideal gas. And imagine that we have this gas contained in a box of volume V that contains N particles. And it's completely isolated from the rest of the universe. So you can know the amount of energy that it has. So the macroscopic description of the system consists of these three numbers, E, V, and N. And this was the characteristics of a microcanonical ensemble in which there was no exchange of heat or work. And therefore, energy was conserved. And our task was somehow to characterize the probability to find the system in some microstate. Now if you have N particles in the system, there is at the microscopic level, some microstate that consists of a description of all of the positions of momenta. And there is Hamiltonian that governs how that microstate evolves as a function of time. And for the case of ideal gas, the particles don't interact. So the Hamiltonian can be written as the sum of n terms that describe essentially the energy of the end particle composed of its kinetic energy. And the term that is really just confining the particle is in this box. And so the volume of the box is contained. Let's say that in this potential, that it's zero inside the box and infinity outside. And we said, OK, so given that I know what the energy, volume, number of particles are, what's the chance that I will find the system in some particular microstate? And the answer was that, obviously, you will have to put zero if the particles are outside box. Or if the energy, which is really just the kinetic energy, does not much the energy that we know is in the system, we sum over i of P i squared over 2m is not equal to the energy of the system. Otherwise, we say that if the microstate corresponds to exactly the right amount of energy, then I have no reason to exclude it. And just like saying that the dice can have six possible faces, you would assign all of those possible phases equal probability. I will give all of the microstates that don't conflict with the conditions that I have set out the same probability. I will call that probability 1 over some overall constant omega. And so this is one otherwise. So then the next statement is well what is this number omega that you have put it? And how do we determine it? Well, we know that this P is a probability so that if I were to integrate over the entirety of the face space of this probability, the answer should be 1. So that means this omega, which is a function of these parameters that I set out from the outside to describe the microstate, should be obtained by integrating over all q and p of this collection of 1s and 0s that I have out here. So I think this box of 1s and 0s, put it here, and I integrate. So what do I get? Well, the integration over the q's is easy. The places that I get 1 are when the q's are inside the box. So each one of them will give me a factor of V. And there are N of them. So I would get V to the N. 
The integrations over momenta essentially have to do with seeing whether or not the condition sum over i Pi squared over 2m equals to E is satisfied or not. So this I can write also as sum over i P i squared equals to 2mE, which I can write as R squared. And essentially, in this momentum space, I have to make sure that the sum of the components of all of the momenta squared add up to this R. squared, which as we discussed last time, is the surface of hypershpere in 3N dimensions of radius R, which is square root of 2mE. So I have to integrate over all of these momenta. And most of the time I will get 0, except when I heat the surface of this sphere. There's kind of a little bit of singularity here because you have a probability that there's 0, except at the very sharp interval, and then 0 again. So it's kind of like a delta function, which is maybe a little bit hard to deal with. So sometimes we will generalize this by adding a little bit be delta E here. So let's say that the energy does not have to be exactly E, but E minus plus a little bit, so that when we look at this surface in three n dimensional space-- let's say this was two dimensional space-- rather than having to deal with an exact boundary, we have kind of smoothed that out into an interval that has some kind of a thickness R, that presumably is related to this delta E that I put up there. Turns out it doesn't really make any difference. The reason it doesn't make any difference I will tell you shortly. But now when I'm integrating over all of these P's-- so there's P. There's another P. This could be P1. This could be P2. And there are different components. I then get 0, except when I hit this interval around the surface of this hypersphere. So what do I get as a result of the integration over this 3N dimensional space? I will get the volume of this element, which is composed of the surface area, which has some kind of a solid angle in 3N dimensions. The radius raised to the power of dimension minus 1, because it's a surface. And then if I want to really include a delta R to make it into a volume, this would be the appropriate volume of this interval in momentum space. Yes. AUDIENCE: Just to clarify, you're asserting that there's no potential inside the [INAUDIBLE] that comes from the hard walls. MEHRAN KARDAR: Correct. We can elaborate on that later on. But for the description of the ideal gas without potential, like in the box, I have said that potential to be just 0 infinity. OK? OK, so fine. So this is the description. There was one thing that I needed to tell you, which is the d, dimension, of solid angle, which is 2pi to the d over 2 divided by d over 2 minus 1 factorial So again, in two dimensions, such as the picture that I drew over here, the circumference of a circle would be 2 pi r. So this s sub 2-- right there'd be 2 pi-- and you can show that it is 2 pi divided by 0 factorial, which is 1. In three dimensions it should give you 4 pi r squared. Kind of looks strange because you get 2 pi to the 3/2 divided by 1/2 half factorial. But the 1/2 factorial is in fact root 2 over pi-- root pi over 2. And so this will work out fine. Again, the definition of this factorial in general is through the gamma function and an integral that we saw already. And factorial is the integral 0 to infinity, dx, x to the n into the minus x. Now, the thing is that this is a quantity that for large values of dimension grows exponentially v E d. 
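A quick check of this solid-angle formula (a small sketch, not from the lecture): S_d = 2 pi to the d/2 divided by (d/2 minus 1) factorial, with the factorial written through the gamma function.

import math

def log_Sd(d):
    # ln S_d = ln 2 + (d/2) ln(pi) - ln Gamma(d/2), since (d/2 - 1)! = Gamma(d/2)
    return math.log(2.0) + (d / 2.0) * math.log(math.pi) - math.lgamma(d / 2.0)

print(math.exp(log_Sd(2)), 2 * math.pi)    # ~6.283 for both (circumference of unit circle)
print(math.exp(log_Sd(3)), 4 * math.pi)    # ~12.566 for both (surface of unit sphere)
# for large d, ln S_d approaches (d/2) ln(2 pi e / d), the limit derived next
print(log_Sd(3000), 1500 * math.log(2 * math.pi * math.e / 3000))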
So what I claim is that if I take the log of this surface area and take the limit that d is much larger than 1, the quantity that I will get-- well, let's take the log of this. I will get log of 2. I will get d over 2 log pi and minus the log of this large factorial. And the log of the factorial I will use Sterling's formula. I will ignore in that large limit the difference between d over 2 and d over 2 minus 1. Or actually I guess at the beginning I may even write it d over 2 minus 1 log of d over 2 minus 1 plus d over 2 minus 1. Now if I'm in this limit of large d again, I can ignore the 1s. And I can ignore the log 2 with respect to this d over 2. And so the answer in this limit is in fact proportional to d over 2. And I have the log. I have pi. I have this d over 2 that will carry in the denominator. And then this d over 2 times 1 I can write as d over 2 log e. And so this is the answer we get. So you can see that the answer is exponentially large if I were to again write s of d. S of d grows like an exponential in d. OK, so what do I conclude from that? I conclude that s over Kd-- and we said that the entropy, we can regard as the entropy of this probability distribution. So that's going to give me the log of this omega. And log off this omega, we'll get a factor from this v to the n. So I will get N log V. I will get a factor from log of S of 3N. I figured out what that log was in the limit of large dimensions. So I essentially have 3N over 2 because my d is now roughly 3N. It's in fact exactly 3N, sorry. I have the log of 2 pi e. For d I have 3N. And then I actually have also from here a 3N log R, which I can write as 3N over 2 log of R squared. And my R squared is 2mE. The figure I have here. And then you say, OK, we added this delta R. But now you can see that I can also ignore this delta R, because everything else that I have in this expression is something that grows radially with N. What's the worse that I can do for delta R? I could make delta R even as big as the entirety of this volume. And then the typical volume would be of the order of the energy-- sorry, the typical value of R would be like the square root of energy. So here I would have to put this log of the square root of the energy. And log of a square roots of an extensive quantity is much less than the extensive quantity. I can ignore it. And actually this reminds me, that some 35 years ago when I was taking this course, from Professor Felix Villiers, he said that he had gone to lunch. And he had gotten to this very beautiful, large orange. And he was excited. And he opened up the orange, and it was all skin. And there was just a little bit in the middle. He was saying it is like this. It's all in the surface. So if Professor Villiers had an orange in 3N dimension, he would have exponentially hard time extracting an orange. So this is our formula for the entropy of this gas. Essentially the extensive parts, n log v and something that depends on n log E. And that's really all we need to figure out all of the thermodynamic properties, because we said that we can construct-- that's in thermodynamics-- dE is TdS minus PdV plus YdN in the case of a gas. And so we can rearrange that to dS be dE over T plus P over T dV minus Y over T dN. And the first thing that we see is by taking the derivative of S with respect to the quantities that we have established, E, V, and N, we should be able to read off appropriate quantities. And in particular, let's say 1 over T would be dS by dE of constant v and n. 
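Collecting the three contributions just listed (and dropping sub-extensive pieces such as the log of delta R), one consistent way to write the resulting entropy is

\[
\frac{S(E,V,N)}{k_B} \;\simeq\; N\ln V \;+\; \frac{3N}{2}\,\ln\!\Big(\frac{4\pi m E}{3N}\Big) \;+\; \frac{3N}{2}.
\]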
S will be proportional to kB. And then the dependence of this object on E only appears in this log E. Except that there's a factor of 3N over 2 out front. And the derivative of log E with respect to E is 1 over E. So I can certainly immediately rearrange this to get that the energy is 3/2 N k T in this system of ideal point particles in three dimensions. And then the pressure, P over T, is dS by dV at constant E and N. And it's again kB. The only dependence on V is through this N log V. So I will get a factor of N over V, which I can rearrange to PV is N kB T, the ideal gas law. And in principle, the next step would be to calculate the chemical potential. But we will leave that for the time being for reasons that will become apparent. Now, one thing to note is that what you have postulated here, right at the beginning, is much, much more information than what we extracted here about thermodynamic properties. It's a statement about a joint probability distribution in this six N dimensional phase space. So it has a huge amount of information. Just to show you part of it, let's take note of the following. What I have is a probability as a function of all coordinates and momenta across your system. But let me ask a specific question. I can ask what's the probability that some particular particle-- say particle number one-- has a momentum P1. It's the only question that I care to ask about this huge amount of degrees of freedom that are encoded in P of mu. And so what I do, if I don't really care about all of the other degrees of freedom, is I will integrate them. So I don't really care where particle number one is located. I didn't ask where it is in the box. I don't really care where particles number two through N are located or which momenta they have. So I integrate over all of those things of the full joint probability, which depends on the entirety of the phase space. Fine, you say, OK. This joint probability actually has a very simple form. It is 1 over this omega E, V, and N, multiplying either 1 or 0. So I have to integrate over all of these q1's, all of these qi Pi. Of 1 over omega or 0 over omega, this delta-like function that we put in a box up there-- so this is this delta function that says that the particle should be inside the box. And the sum of the momenta should be on the surface of this hypersphere. Now, let's do these integrations. Let's do it here. I may need space. The integration over q1 is very simple. It will give me a factor of V. I have this omega E, V, N, in the denominator. And I claim that the numerator is simply the following. Omega of E minus P1 squared over 2m, V, N minus 1. Why? Because what I need to do over here in terms of integrations is pretty much what I would have to integrate over here that gave rise to that surface and all of those factors, with one exception. First of all, I integrated over one particle already, so the coordinates and momenta here that I'm integrating pertain to the remaining N minus 1. Hence, the omega pertains to N minus 1. It's still in the same box of volume V. So V, the other argument, is the same. But the energy is changed. Why? Because I told you how much momentum I want the first particle to carry. So given the knowledge that I'm looking at the probability of the first particle having momentum P1, then I know that the remainder of the energy should be shared among the momenta of all the remaining N minus 1 particles. So I have already calculated these omegas up here. All I need to do is to substitute them over here.
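A small symbolic check of these two derivatives (a sketch using the entropy written above, not part of the lecture):

import sympy as sp

E, V, N, m, kB = sp.symbols("E V N m k_B", positive=True)
S = kB * (N * sp.log(V)
          + sp.Rational(3, 2) * N * sp.log(4 * sp.pi * m * E / (3 * N))
          + sp.Rational(3, 2) * N)

inv_T = sp.simplify(sp.diff(S, E))     # 1/T = dS/dE at fixed V, N
P_over_T = sp.simplify(sp.diff(S, V))  # P/T = dS/dV at fixed E, N
print(inv_T)     # 3*N*k_B/(2*E)   ->   E = 3/2 N kB T
print(P_over_T)  # N*k_B/V         ->   P V = N kB T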
And I will get this probability. So first of all, let's check that the volume part cancels. I have one factor a volume here. Each of my omegas is in fact proportional to V to the N. So the denominator has V to the N. The numerator has a V to the N minus 1. And all of the V's would cancel out. So the interesting thing really comes from these solid angle and radius parts. The solid angle is a ratio of-- let's write the denominator. It's easier. It is 2 pi to the 3N over 2 divided by 3N over 2 minus one factorial. The numerator would be 2 pi to the 3 N and minus 1 over 2 divided by 3 N minus 1 over 2 minus 1 factorial. And then I have these ratio of the radii. In the denominator I have 2mE to the power of 3 N minus 1 over 2 minus 1. So minus 3N-- it is 3N minus 1 over 2. Same thing that we have been calculating so far. And in the numerator it is 2m E minus P1 squared over 2m. So I will factor out the E. I have 1 minus P1 squared over 2m E. The whole thing raised to something that is 3 N minus 1 minus 1 over 2. Now, the most important part of this is the fact that the dependence on P1 appears as follows. I have this factor of 1 minus P1 squared over 2m E. That's the one place that P1, the momentum of the particle that I'm interested appears. And it raised to the huge problem, which is of the order of 3N over 2. It is likely less. But it really doesn't make any difference whether I write 3N over 2, 3N over minus 1 over 2, et cetera. Really, ultimately, what I will have is 1 minus the very small number, because presumably the energy of one part they can is less than the energy of the entirety of the particle. So this is something that is order of 1 out of N raised to something that is order of N. So that's where an exponentiation will come into play. And then there's a whole bunch of other factors that if I don't make any mistake I can try to write down. There is the 2s certainly cancel when I look at the factors of pi. The denominator with respect to the numerator has an additional factor of pi to the 3/2. In fact, I will have a whole bunch of things that are raised to the powers of 3/2. I also have this 2mE that compared to the 2mE that comes out front has an additional factor of 3/2. So let's put all of them together. 2 pi mE raised to the power of 3/2. And then I have the ratio of these factorials. And again, the factorial that I have in the denominator has one and a half times more or 3/2 times more than what is in the numerator. Roughly it is something like the ratio of 3 N over 2 factorial divided by 3 N minus 1 over 2 factorial. And I claim that, say, N factorial compared to N minus 1 factorial is larger by a factor of N. If I go between N factorial N minus 2 factorial is a factor that is roughly N squared. Now this does not shift either by 1 or by 2, but by 1 and 1/2. And if you go through Sterling formula, et cetera, you can convince yourself that this is roughly 3 N over 2 to the power of 1 and 1/2-- 3/2. And so once you do all of your arrangements, what do you get? 1 minus a small quantity raised to a huge power, that's the definition of the exponential. So I get exponential of minus P1 squared over 2m. And the factor that multiplies it is E. And then I have 3N over 2. And again, if I have not made any mistake and I'm careful with all of the other factors that remain, I have here 2 pi m E. And this E also gets multiplied by the inverse of 3N over 2. So I will have this replaced by 2E over 3N. 
So statement number one, this assignment of probabilities according to just throwing the dice and saying that everything that has the same right energy is equally likely is equivalent to looking at one of the particles and stating that the momentum of that part again is Gaussian distributed. Secondly, you can check that this combination, 2E divided by 3N is the same thing as kT. So essentially this you could also if you want to replace 1 over kT. And you would get the more familiar kind of Maxwell type of distribution for the momentum of a single particle in an ideal gas. And again, since everything that we did was consistent with the laws of probability, if we did not mix up the orders of N, et cetera, the answer should be properly normalized. And indeed, you can check that this is the three dimensional normalization that you would require for this gas. So the statement of saying that everything is allowed is equally likely is a huge statement in space of possible configurations. On the one hand, it gives you macroscopic information. On the other hand, it retains a huge amount of microscopic information. The parts of it that are relevant, you can try to extract this here. OK? So those were the successes. Question is why didn't I calculate for you this u over T? It is because this expression as we wrote down has a glaring problem with it, which in order to make it explicit, we will look at mixing entropies. So the idea is this is as follows. Let's imagine that we start with two gases. Initially, I have N1 particles of one type in volume 1. And I have N2 particles of another type in volume 2. And for simplicity I will assume that both of them are of the same temperature. So this is my initial state. And then I remove the partition. And I come up with this situation where the particles are mixed. So the particles of type 1 could be either way. Particles of type 2 could be in either place. And let's say I have a box of some toxic gas here. And I remove the lid. And it will get mixed in the room. It's certainly an irreversible situation where is an increase of entropy i associated with that. And we can calculate that increase of entropy, because we know what the expression for entropy is. So what we have to do is to compare the entropy initially. So this is the initial entropy. And I calculate everything in units of kB so I don't have to write kB all over the place. For particle number one, what do I have? I have N1 log V1. And then I have a contribution, which is 3 N 1 over 2. But I notice that whatever appears here is really only a function of E over N. E over N is really only a function of temperature. So this is something that I can call a sigma of T over here. And the contribution of box 2 is N2 log V plus 3N 2 over 2. This-- huh-- let's say that they are-- we ignore the difference in masses. You could potentially have here sigma 1, sigma 2. It really doesn't make any difference. The final state, what do we have? Essentially, the one thing that changed is that the N1 particles now are occupying the box of volume V. So if call V to the V1 plus V2, what we have is that we have N1 log of V plus N2 log of V. My claim is that all of these other factors really stay the same. Because essentially what is happening in these expressions are various ratios of E over N. And by stating that initially I had the things at the same temperature, what I had effectively stated was that E1 over N1 is the same thing as E2 over N2. I guess in the ideal gas case this E over N is the same thing as 3/2 kT. 
But if I have a ratio such as this, that is also the same as E1 plus E2 divided by N1 plus N2. This is a simple manipulation of fractions that I can make. And E1 plus E2 over N1 plus N2, by the same kinds of arguments, would give me the final temperature. So what I have to compute is that the final temperature is the same thing as the initial temperature. Essentially, in this mixing of the ideal gases, temperature does not change. So basically, these factors of sigma are the same before and after. And so when we calculate the increase in entropy, Sf minus Si, really the contribution that you care about comes from these volume factors. And really the statement is that in one particle currently are occupying a volume of size V, whereas previously they were in V1. And similarly for the N2 particles. And if you have more of these particles, more of these boxes, you could see how the general expression for the mixing entropy goes. And so that's fine. V is certainly greater than V1 or V2. Each of these logs gives you a positive contribution. There's an increase in entropy as we expect. Now, there is the following difficulty however. What if the gases are identical-- are the same? We definitely have to do this if I take a box of methane here and I open it, we all know that something has happened. There is an irreversible process that has occured. But if the box-- I have essentially taken the air in this room, put it in this box, whether I open the lid or not open the lid, it doesn't make any difference. There is no additional work that I have to do in order to close or open the lid. Is no there no increase of entropy one way or the other. Whereas if I look at this expression, this expression only depends on the final volume and the initial volumes, and says that there should an increase in entropy when we know that there shouldn't be. And of course, the resolution for that is something like this. That if I look at my two boxes-- and I said maybe one of them is a box that contains methane. Let's call it A. And the other is the room that contains the air. Now this situation where all of the methane is in the box and the oxygen freely floating in the room is certainly different from a configuration where I exchange these two and the methane is here and the oxygen went into the box. They're different configurations. You can definitely tell them apart. Whereas if I do the same thing, but the box and outside contain the same entity, and the same entity is, let's say, oxygen, then how can you tell apart these two configurations? And so the meaning of-- yes. AUDIENCE: Are you thinking quantum mechanically or classically. Classically we can tell them apart, right? MEHRAN KARDAR: This is currently I am making a macroscopic statement. Now when I get to the distinction of microstates we have to-- so I was very careful in saying whether or not you could tell apart whether it is methane or oxygen. So this was a very macroscopic statement as to whether or not you can distinguish this circumstance versus that circumstance. So as far as our senses of this macroscopic process is concerned, these two cases have to be treated differently. Now, what we have calculated here for these factors are some volume of phase space. And where in the evening you might say that following this procedure you counted these as two distinct cases. In this case, these were two distinct cases. But here, you can't really tell them apart. So if you can't tell them apart, you shouldn't call them two distinct cases. 
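The mixing-entropy expression derived above for two distinct gases at equal temperature is short enough to evaluate directly. A small sketch (function name and numbers are illustrative):

```python
import numpy as np

def mixing_entropy(N1, V1, N2, V2):
    """Delta S / k_B for removing the partition between two distinct ideal
    gases at the same temperature: N1 ln(V/V1) + N2 ln(V/V2), V = V1 + V2."""
    V = V1 + V2
    return N1 * np.log(V / V1) + N2 * np.log(V / V2)

# Equal amounts in equal volumes -> Delta S / k_B = 2 N ln 2, always positive.
print(mixing_entropy(1.0e3, 1.0, 1.0e3, 1.0))
```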
You have over counted phase space by a factor of two here. And here, I just looked at two particles. If I have N particles, I have over counted the phase space of identical particles by all possible permutations of n objects, it is n factorial. So there is an over counting of phase space or configurations of N identical particles by a factor of N factorial. I.e., when we said that particle number one can be anywhere in the box, particle number two can be anywhere in the box, all the way to particle number n, well, in fact, I can't tell which each is which. If I can't tell which particle is which, I have to divide by the number of permutations and factors. Now, as somebody was asking the question, as you were asking the question, classically, if I write a computer program that looks at the trajectories of N particles in the gas in this room, classically, your computer would always know the particle that started over here after many collisions or whatever is the particle that ended up somewhere else. So if you ask the computer, the computer can certainly distinguish these classical trajectories. And then it is kind of strange to say that, well, I have to divide by N factorial because all of these are identical. Again, classically these particles are following specific trajectories. And you know where in phase space they are. Whereas quantum mechanically, you can't tell that apart. So quantum mechanically, as we will describe later, rather than classical statistical mechanics-- when we do quantum statistical mechanics-- if you have identical particles, you have to write down of a wave function that is either symmetric or anti-symmetric under the exchange of particles. And when we do eventually the calculations for these factors of 1 over N factorial will emerge very naturally. So I think different people have different perspectives. My own perspective is that this factor really is due to the quantum origin of identity. And classically, you have to sort of fudge it and put it over there. But some people say that really it's a matter of measurements. And if you can't really tell A and B sufficiently apart, then you don't know. I always go back to the computer. And say, well, the computer can tell. But it's kind of immaterial at this stage. It's obvious that for all practical purposes for things that are identical you have to divide by this factor. So what happens if you divide by that factor? So I have changed all of my calculations now. So when I do the log of-- previously I had V to the N. And it gave me N log V. Now, I have log of V to the N divided by N factorial. So I will get my Sterling's approximation additional factor of minus N log N plus N, which I can sort of absorb here in this fashion. Now you say, well, having done that, you have to first of all show me that you fixed the case of this change in entropy for identical particles, but also you should show me that the previous case where we know there has to be an increase in entropy just because of the gas being different that that is not changed because of this modification that you make. So let's check that. So for distinct gases, what would be the generalization of this form Sf minus Si divided by kV? Well, what happens here? In the case of the final object, I have to divide N1 log of V. But that V really becomes V divided by N1, because in the volume of size V, I have N1 oxygen that I can't tell apart. So I divide by the N1 factorial for the oxygens. 
And then I have N2 methanes that I can't tell apart in that volume, so I divide by essentially N2 factorial that goes over there. The initial change is over here I would have N1 log of V1 over N1. And here I would have had N2 log of V2 over N2. So every one of these expressions that was previously log V, and I had four of them, gets changed. But they get change precisely in a manner that this N1 log of N1 here cancels this N1 log of and N1 here. This N2 log of N2 here cancels this N2 log of N2 here. So the delta S that I get is precisely the same thing as I had before. I will get N1 log of V over V1 plus N2 log of V over V2. So this division, because the oxygens were identical to themselves and methanes were identical to themselves, does not change the mixing entropy of oxygen and nitrogen. But let's say that both gases are the same. They're both oxygen. Then what happens? Now, in the final state, I have a box. It has a N1 plus N2 particles that are all oxygen. I can't tell them apart. So the contribution from the phase space would be N1 plus N2 log of the volume divided by N1 plus N2 factorial. That ultimately will give me a factor of N1 plus N2 here. The initial entropy is exactly the one that I calculated before. For the line above, I have N1 log of V1 over N1 minus N2 log of V2 over N2. Now certainly, I still expect to see some mixing entropy if I have a box of oxygen that is at very low pressure and is very dilute, and I open it into this room, which is at much higher pressure and is much more dense. So really, the case where I don't expect to see any change in entropy is when the two boxes have the same density. And hence, when I mix them, I would also have exactly the same density. And you can see that, therefore, all of these factors that are in the log are of the inverse of the same density. And there's N1 plus N2 of them that's positive. And N1 plus N2 of them that is negative. So the answer in this case, as long as I try to mix identical particles of the same density, if I include this correction to the phase space of identical particles, the answer will be [? 0. ?] Yes? AUDIENCE: Question, [INAUDIBLE] in terms of the revolution of the [INAUDIBLE] there is no transition [INAUDIBLE] so that your temporary, and say like, oxygen and nitrogen can catch a molecule, put it in a [? aspertometer. ?] and have different isotopes. You can take like closed isotopes of oxygen and still tell them apart. But this is like their continuous way of choosing a pair of gases which would be arbitrarily closed in atomic mass. MEHRAN KARDAR: So, as I said, there are alternative explanations that I've heard. And that's precisely one of them. And my counter is that what we are putting here is the volume of phase space. And to me that has a very specific meaning. That is there's a set of coordinates and momenta that are moving according to Hamiltonian trajectories. And in principle, there is a computer nature that is following these trajectories, or I can actually put them on the computer. And then no matter how long I run and they're identical oxygen molecules, I start with number one here, numbers two here. The computer will say that this is the trajectory of number one and this is the trajectory of numbers two. So unless I change my definition of phase space and how I am calculating things, I run into this paradox. So what you're saying is forget about that. It's just can tell isotopes apart or something like that. And I'm saying that that's fine. 
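The two cases worked out above, distinct gases versus the same gas with the 1 over N factorial correction, can be checked side by side. A sketch under the same assumptions as the lecture (equal initial temperatures, ideal gases):

```python
import numpy as np

def delta_S_distinct(N1, V1, N2, V2):
    # With 1/N! included: each log V becomes log(V/N) for the species in it.
    V = V1 + V2
    return (N1 * np.log(V / N1) + N2 * np.log(V / N2)
            - N1 * np.log(V1 / N1) - N2 * np.log(V2 / N2))

def delta_S_identical(N1, V1, N2, V2):
    # Same gas on both sides: the final state has (N1+N2) identical particles.
    V, N = V1 + V2, N1 + N2
    return (N * np.log(V / N)
            - N1 * np.log(V1 / N1) - N2 * np.log(V2 / N2))

print(delta_S_distinct(1000, 1.0, 1000, 1.0))    # ~ 2000 ln 2 > 0
print(delta_S_identical(1000, 1.0, 1000, 1.0))   # ~ 0 at equal densities
print(delta_S_identical(1000, 1.0, 1000, 2.0))   # > 0 if the densities differ
```

At equal densities the identical-gas result vanishes, while the distinct-gas result is unchanged by the correction, which is exactly the cancellation described above.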
That's a perspective, but it has nothing to do with phase space counting. OK? Fine, now, why didn't I calculate this? It was also for the same reason, because we expect to have quantities that are extensive and quantities that are intensive. And therefore, if I were to, for example, calculate this object, it should be something that is intensive. Now the problem is that if I take a derivative with respect to N, I have log V. And log V is clearly something that is not independent of size, but grows logarithmically with size. So if I make the volume twice as big, I will get an additional contribution of log 2 here to the chemical potential. And that does not make sense. But when I include this identity correction, then this V becomes V over N. And then everything becomes nicely intensive. So if I am allowed now to replace this with V over N, then I can calculate mu over T as dS by dN at constant E and V. And so then essentially I will get to drop the factor of log N that comes in front, so I will get kT log of V over N. And then I would have 3/2 log of something, which I can put together as 4 pi m E over 3N raised to the 3/2 power. And you can see that there were these E's from Stirling's approximation up there that got dropped here, because you can also take the derivative with respect to the N's that are inside. And you can check that the effect of the derivatives with respect to the N's that are inside is precisely to get rid of those factors. OK? Now, there is still one other thing that is not wrong, but kind of jarring about the expressions that we've had so far, in that right from the beginning, I said that you can certainly calculate entropies out of probabilities as minus log of P average if you like. But it makes sense only if you're dealing with discrete variables, because when you're dealing with continuous variables, you have a probability density. And the probability density depends on the units of measurement. And if you were to change measurement from meters to centimeters or something else, then there will be changes in the probability densities, which would then modify the various factors over here. And that's really also reflected ultimately in the fact that these combinations of terms that I have written here have dimensions. And it is kind of, again, jarring to have expressions inside the logarithm or in the exponential that are not dimensionless. So it would be good if we had some way of making all of these dimensionless. And you say, well, really the origin of it is all the way back here, when I was calculating volumes in phase space. And volumes in phase space have dimensions. And that dimension of pq raised to the 3N power really survives all the way down here. So I can say, OK, I choose some quantity as a reference that has the right dimensions of the product of p and q, which is an action. And I divide all of my measurements by that reference unit, so that, for example, here I have 3N factors of this-- or let's say for each one of them there are 3. I divide by some quantity that has units of action. And then I will be set. So basically, the units of this h are the product of p and q. Now, at this point we have no way of choosing some h as opposed to another h. And so by adding that factor, we can make things look nicer. But then things are undefined up to this factor of h. When we do quantum mechanics, another thing that quantum mechanics does is to provide us with precisely h, Planck's constant, as a measure of these kinds of integrations. 
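For reference, the chemical potential computation alluded to above can be written compactly. This is a sketch using the standard Sackur-Tetrode form of the corrected entropy; the h squared inside the logarithm anticipates the phase-space measure that is being introduced here, and the additive constants cancel when the inner N's are differentiated, as stated:

$$
\frac{S}{k_B} \;=\; N\ln\frac{V}{N} \;+\; \frac{3N}{2}\ln\!\frac{4\pi m E}{3N h^{2}} \;+\; \frac{5N}{2},
\qquad
\frac{\mu}{T} \;=\; -\left(\frac{\partial S}{\partial N}\right)_{E,V}
\;=\; -k_B\left[\ln\frac{V}{N} + \frac{3}{2}\ln\!\frac{4\pi m E}{3N h^{2}}\right].
$$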
So when we eventually you go to calculate, say, the ideal gas or any other mechanic system that involves p and q in quantum mechanics, then the phase space becomes discretized. You would have-- The appropriate description would have energies that are discretized corresponding to various other discretization that are eventually the equivalent to dividing by this Planck's constant. Ultimately, I will have additionally a factor of h squared appearing here. And it will make everything nicely that much [? less. ?] None of these other quantities that I mentioned calculated would be affected by this. So essentially, what I'm saying is that you are going to use a measure for phase space of identical particles. Previously we had a product, d cubed Pi, d cubed Qi. This is what we were integrating and requiring that this integration will give us [INAUDIBLE]. Now, we will change this to divide by this N factorial, if the particles are identical. And we divide by h to the 3N because of the number of pairs of pq that appear in this. The justification to come when we ultimately do quantum study. Any questions? So I said that this prescription when we look at a system at complete isolation, and therefore, specify fully its energy is the microcanonical ensemble, as opposed to the canonical ensemble, whereas the set of microscopic parameters that you identified with your system, you replace the energy with temperature. So in general, let's say there will be some bunch of displacements, x, that give you the work content to the system. Just like we fixed over there the volume and the number of particles, let's say that all of the work parameters, such as x microscopically, we will fix in this canonical ensemble. So however, the ensemble is one in which the energy is not specified. And so how do I imagine that I can maintain a system at temperature T? Well, if this room is at some particular temperature, I assume that smaller objects that I put in this room will come to the same temperature. So the general prescription for beginning something other than temperature T is to put it in contact with something that is much bigger. So let's call this to be a reservoir. And we put our system, which we assume to be smaller, in contact with it. And we allow it to exchange heat with the reservoir. Now I did this way of managing the system to come to a temperature T, which is the characteristic of a big reservoir. Imagine that you have a lake. And you put your gas or something else inside the lake. And it will equilibrate to the temperature of the lake. I will assume that the two of them, the system and the reservoir, just for the purpose of my being able to do some computation, are isolated from the rest of the universe, so that the system plus reservoir is microcanonical. And the sum total of their energies is sum E total. So now this system still is something like a gas. It's a has a huge number of potential degrees of freedom. And these potential number of degrees of freedom can be captured through the microstate of the system, u sub s. And similarly, the water particle in the lake have their own state. An there's some microstate that describes the positions and the momenta of all of the particles that are in the lake. Yes? AUDIENCE: When you're writing the set of particles used to describe it, why don't you write N? Since it said the number of particles in the system is not fixed. MEHRAN KARDAR: Yes, so I did want to [INAUDIBLE] but in principle I could add N. I wanted to be kind of general. If you like X, [? it ?] 
is allowed to include chemical work type of an X. So what do I know? I know that if I want to describe microstates and their evolution, I need to specify that there's a Hamiltonian that governs the evolution of these microstates. And presumably there's a Hamiltonian that describes the evolution of the reservoir microstate. And so presumably the allowed microstates are ones in which E total is made up of the energy of the system plus the energy of the reservoir. So because the whole thing is microcanonical, I can assign a probability, a joint probability, to finding some particular mu s, mu r combination, just like we were doing over there. You would say that essentially this is a combination of these 1s and 0s. So it is 0 if H of-- again, for simplicity I drop the s on the system-- H of mu s plus H reservoir of mu reservoir is not equal to E total. And it is 1 over some omega of reservoir and system otherwise. So this is just again, throwing the dice, saying that it has so many possible configurations, given that I know what the total energy is. All the ones that are consistent with that are allowed. Which is to say, I don't really care about the lake. All I care about is the states of my gas, and say, OK, no problem, if I have the joint probability distribution just like I did over here, I get rid of all of the degrees of freedom that I'm not interested in. So if I'm interested only in the states of the system, I sum over or integrate over-- so this would be a sum. This would be an integration, whatever-- of the joint probability distribution. Now I actually follow the steps that I had over here when we were looking at the momentum of a gas particle. I say that what I have over here, this probability, is this 1 over omega of reservoir and system. This is a function that is either 1 or 0. And then I have a sum over all configurations of the reservoir. But given that I said what the microstate of the system is, then I know that the reservoir has to take the total energy minus the amount the microstate has taken. And I'm summing over all of the microstates that are consistent with the requirement that the energy in the reservoir is E total minus H of microstate. So what that is, is the omega that I have for the reservoir-- and I don't know what it is, but whatever it is-- evaluated at the total energy minus the energy that is taken out by the microstate. So again, exactly the reason why this became E minus P1 squared over 2m. This becomes E total minus H of mu S. Except that I don't know either what this is or what this is. Actually, I don't really even care about this because all of the dependence on the microstate is in the numerator. So I write that as proportional to an exponential. And the log of omega is the entropy. So I have the entropy of the reservoir in units of kB, evaluated at the argument E total minus H of mu S. So my statement is that when I look at the entropy of the reservoir as a function of E total minus the energy that is taken out by the system, by construction I assume that I'm putting a small volume of gas in contact with a huge lake. So this total energy is overwhelmingly larger than the amount of energy that the system can occupy. So I can make a Taylor expansion of this quantity and say that this is S R of E total minus the derivative of S with respect to its energy-- the derivative of the S reservoir with respect to the energy of the reservoir-- times H of the microstate, and presumably higher order terms that are negligible. 
Now the next thing that is important about the reservoir is you have this huge lake. Let's say it's exactly at some temperature of 30 degrees. And you take some small amount of energy from it to put in the system. The temperature of the lake should not change. So that's the definition of the reservoir. It's a system that is so big that for the range of energies that we are considering, this dS by dE is 1 over the temperature that characterizes the reservoir. So just like here, but eventually the answer that we got was something like the energy of the particle divided by kT. Once I exponentiate, I find that the probability to find the system in some microstate is proportional to E to the minus of the energy of that microstate divided by kT. And of course, there's a bunch of other things that I have to eventually put into a normalization that we will call Z. So in the canonical prescription you sort of replace this throwing of the dice and saying that everything is equally likely with saying that, well, each microstate can have some particular energy. And the probabilities are partitioned according to the Boltzmann weights of these energies. And clearly this quantity, Z, the normalization, is obtained by integrating over the entire space of microstates, or summing over them if they are discrete, of this factor of E to the minus beta H of mu S. And we'll use this notation beta equals 1 over kT sometimes for simplicity. Now, the thing is that thermodynamically, we said that you can choose any set of parameters, as long as they are independent, to describe the macroscopic equilibrium state of the system. So what we did in the microcanonical ensemble is we specified a number of things, such as energy. And we derived the other things, such as temperature. So here, in the canonical ensemble, we have stated what the temperature of the system is. Well, then what happened? On one hand, maybe we have to worry because energy is constantly being exchanged with the reservoir. And so the energy of the system does not have a specific value. There's a probability for it. So probability of the system having energy epsilon-- it doesn't have a fixed energy. There is a probability that it should have a particular energy. And this probability, let's say we indicate with P of epsilon, given that we know what the temperature is. Well, on one hand we have this factor of E to the minus epsilon over kT. That comes from the Boltzmann weight. But there isn't a single state that has that energy. There's a whole bunch of other states of the system that have that energy. So as I scan the microstates, there will be a huge number of them, omega of epsilon in number, that have this right energy. And so that's the probability of the energy. And I can write this as E to the minus 1 over kT that I've called beta. I have epsilon. And then the log of omega that I take in the numerator is S divided by kB. I can take that kB here and write this as T S of epsilon. And so this kind of should remind you of something like a free energy. But what it tells you is that this probability to have some particular energy has this kind of a form. Now note that again for something like a gas or whatever, we expect typical values of both the energy and entropy to be quantities that are proportional to the size of the system. As the size of the system becomes large, we would expect that this probability would be one of those things that has portions that, let's say, are exponentially larger than any other portion. 
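Collecting the steps just described into formulas, a compact summary in the lecture's notation with beta equal to 1 over k_B T:

$$
S_R\!\left(E_{tot} - \mathcal{H}(\mu_s)\right) \;\simeq\; S_R(E_{tot}) - \frac{\mathcal{H}(\mu_s)}{T}
\;\;\Longrightarrow\;\;
p(\mu_s) = \frac{e^{-\beta \mathcal{H}(\mu_s)}}{Z},
\quad
Z = \sum_{\mu_s} e^{-\beta \mathcal{H}(\mu_s)},
\quad
p(\epsilon) \propto \Omega(\epsilon)\, e^{-\beta \epsilon} = e^{-\beta\left[\epsilon - T S(\epsilon)\right]}.
$$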
There will be a factor of E to the minus N something that will really pick out, let's say, the extremum and make the extremum overwhelmingly more likely than other places. Let's try to quantify that a little bit better. Once we have a probability, we can also start calculating averages. So let's define what the average energy of the system is. The average energy of the system is obtained by summing over all microstates the energy of that microstate times the probability of that microstate, which is E to the minus beta H of microstate divided by the partition function, which is the sum-- OK, the normalization, which we will call the partition function, which is the sum over all of these microstates. Now this is something that we've already seen. If I look at this expression in the denominator that we call Z and has a name, which is the partition function, then it's certainly a function of beta. If I take a derivative of Z with respect to beta, what happens is I'll bring down a factor of H over here. So the numerator up to a sign is the derivative of Z with respect to beta. And from the denominator there is the 1 over Z. And so this is none other than minus the derivative of log Z with respect to beta. So OK, fine, so the mean value of this probability distribution is given by some expression such as this. Well, you can see that if I were to repeat this process and rather than taking one derivative, I take n derivatives and then divide by Z, each time I do that, I will bring down a factor of H. So this is going to give me the average of H to the n. The nth moment of this probability distribution of energy is obtainable by this procedure. So now you recognize, oh, I've seen things such as this. So clearly this partition function is something that generates the moments by taking subsequent derivatives. I can generate different moments of this distribution. But then there is something else that maybe this should remind you of, which is that if there's a quantity that generates moments, then its log generates cumulants. So you would say, OK, the nth cumulant should be obtainable, up to this factor of minus 1 to the n, as the nth derivative with respect to beta of log Z. And it's very easy to check that indeed if I were to take two derivatives, I will get the expectation value of H squared minus the average of H squared, et cetera. But the point is that clearly this log Z is, again, something that is extensive. Another way of getting the normalization-- I guess I forgot to put this 1 over Z here. So now it is a perfectly normalized object. So another way to get Z would be to look at the normalization of the probability. I could integrate over epsilon this factor of E to the minus beta times epsilon minus T S of epsilon. And that would give me Z. Now, again the quantities that appear in the exponent, energy-- entropy, their difference, free energy-- are quantities that are extensive. So this Z is going to be dominated again by where this peak is. And therefore, log of Z will be proportional to the log of what we have over here. And it will be an extensive quantity. So ultimately, my statement is that this log of Z is something that is order of N. So we are, again, in something kind of reminiscent of the central limit theorem. We have a probability distribution, at large N, in which all of the cumulants are proportional to N. The mean is proportional to N. The variance is proportional to N. 
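The generating-function property described above is easy to verify numerically on a small discrete spectrum: minus the first derivative of log Z gives the mean energy, and the second derivative gives the variance (which, as discussed next, equals k_B T squared times the heat capacity). A sketch with a hypothetical three-level spectrum:

```python
import numpy as np

# Illustrative check, not from the lecture: moments and cumulants from Z(beta).
eps = np.array([0.0, 1.0, 2.5])            # hypothetical energy levels
beta, d = 0.7, 1e-4                        # inverse temperature, finite-difference step

def lnZ(b):
    return np.log(np.sum(np.exp(-b * eps)))

p = np.exp(-beta * eps); p /= p.sum()      # Boltzmann probabilities
E_direct = np.sum(p * eps)
E_from_Z = -(lnZ(beta + d) - lnZ(beta - d)) / (2 * d)        # -d lnZ / d beta
var_direct = np.sum(p * eps**2) - E_direct**2
var_from_Z = (lnZ(beta + d) - 2 * lnZ(beta) + lnZ(beta - d)) / d**2   # d^2 lnZ / d beta^2

print(E_direct, E_from_Z)      # first cumulant: the two should agree
print(var_direct, var_from_Z)  # second cumulant: the two should agree
```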
All of the cumulants are proportional to N, which means that essentially the extent of the fluctuations that you have over here are going to go the order of the square root of N. So the bridge, the thing that again allows us, while we have in principle in the expression that we have said, a variable energy for the system. In fact, in the limit of things becoming extensive, I know where that energy is, up to fluctuations or up to uncertainty that is only of the order of square root of N. And so the relative uncertainty will vanish as the N goes to infinity, limit is approached. So although again we have something that is in principle probabilistic, again, in the thermodynamic sense we can identify uniquely an energy for our system as, let's say, the mean value or the most likely value. They're all the same thing of the order of 1 over N. And again, to be more precise, the variance is clearly the second derivative of log Z. 1 derivative of a log Z is going to give me the energy. So this is going to be d by d beta up to a minus sign of the energy or the expectation value of the Hamiltonian, which we identified as the energy of the system. The derivative with respect to beta, I can write as kB T squared. The derivative of energy with respect to T, everything here we are doing that conditions of no work. So the variance is in fact kB T squared, the heat capacity of the system. So the extent that these fluctuations squared is kB T squared times the heat capacity the system. OK, so next time what we will do is we will calculate the results for the ideal gas. First thing, the canonical ensemble to show that we get exactly the same macroscopic and microscopic descriptions. And then we look at other ensembles. And that will conclude the segment that we have on statistical mechanics of non-interacting systems. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 21_Quantum_Statistical_Mechanics_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So last lecture, we laid out the foundations of quantum stat mech. And to remind you, in quantum mechanics we said we could very formally describe a state through a vector in Hilbert space that has complex components in terms of some basis that we haven't specified. And quantum mechanics describes, essentially what happens to one of these states, how to interpret it, et cetera. But this was, in our language, a pure state. We were interested in cases where we have an ensemble and there are many members of the ensemble that correspond to the same macroscopic state. And we can distinguish them macroscopically. If we think about each one of those states being one particular member, psi alpha, some vector in this Hilbert space occurring with some probability P alpha, then that's kind of a more probabilistic representation of what's going on here for the mech state. And the question was, well, how do we manipulate things when we have a collection of states in this mixed form? And we saw that in quantum mechanics, most of the things that are observable are expressed in terms of matrices. And one way to convert a state vector into a matrix is to conjugate it and construct the matrix whose elements are obtained-- a particular rho vector by taking element and complex conjugate elements of the vector. And if we were to then sum over all members of this ensemble, this would give us an object that we call the density matrix, rho. And the property of this density matrix was that if I had some observable for a pure state I would be able to calculate an expectation value for this observable by sandwiching it between the state in the way that quantum mechanic has taught us. In our mech state, we would get an ensemble average of this quantity by multiplying this density matrix with the matrix that represents the operator whose average we are trying to calculate, and then taking the trace of the product of these two matrices. Now, the next statement was that in the same way that the micro-state classically changes as a function of time, the vector that we have in quantum mechanics is also changing as a function of time. And so at a different instant of time, our state vectors have changed. And in principle, our density has changed. And that would imply a change potentially in the average that we have over here. We said, OK, let's look at this time dependence. And we know that these psis obey Schrodinger equation. I h bar d psi by dt is h psi. So we did the same operation, I h bar d by dt of this density matrix. And we found that essentially, what we would get on the other side is the commutator of the density matrix and the matrix corresponding to the Hamiltonian, which was reminiscent of the Liouville statement that we had classically. Classically, we had that d rho by dt was the Poisson bracket of a rho with H versus this quantum formulation. Now, in both cases we are looking for some kind of a density that represents an equilibrium ensemble. 
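The mixed-state average described above, the trace of rho times the observable, can be checked directly against the ensemble definition. A minimal sketch with a hypothetical three-dimensional Hilbert space and random states (all names and values are illustrative):

```python
import numpy as np

# Build rho = sum_alpha p_alpha |psi_alpha><psi_alpha| and verify that
# tr(rho A) equals the ensemble average of <psi_alpha| A |psi_alpha>.
rng = np.random.default_rng(0)
dim, members = 3, 5
p = rng.random(members); p /= p.sum()                 # ensemble probabilities
psis = rng.normal(size=(members, dim)) + 1j * rng.normal(size=(members, dim))
psis /= np.linalg.norm(psis, axis=1, keepdims=True)   # normalized state vectors

A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
A = (A + A.conj().T) / 2                              # Hermitian observable

rho = sum(pa * np.outer(psi, psi.conj()) for pa, psi in zip(p, psis))
avg_trace = np.trace(rho @ A).real
avg_direct = sum(pa * (psi.conj() @ A @ psi).real for pa, psi in zip(p, psis))
print(avg_trace, avg_direct)    # identical up to rounding
```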
And presumably, the characteristic of the equilibrium ensemble is that various measurements that you make at one time and another time are the same. And hence, we want this rho equilibrium to be something that does not change as a function of time, which means that if we put it in either one of these equations, the right-hand side for d rho by dt should be 0. And clearly, a simple way to do that is to make rho equilibrium be a function of H. In the classical context, H was a function in phase space. Rho equilibrium was, therefore, made a function of various points in phase space implicitly through its dependence on H. In the current case, H is a matrix in Hilbert space. A function of a matrix is another function, and that's how we construct rho equilibrium. And that function will also commute with H because clearly H with H is 0. Any function of H with H will be 0. So the prescription that we have in order now to construct the quantum statistical mechanics is to follow what we had done already for the classical case. And classically following this equation, we made postulates relating rho-- what its functional dependence was on H. We can now do the same thing in the quantum context. So let's do that following the procedure that we followed for the classical case. So classically, we said, well, let's look at an ensemble that was micro-canonical. So in the micro-canonical ensemble, we specified what the energy of the system was, but we didn't allow either work to be performed on it mechanically or chemically so that the number of elements was fixed. The coordinates, such as volume, length of the system, et cetera, were fixed. And in this ensemble, classically the statement was that rho indexed by E is a function of H. And clearly, what we want is to only allow H's that correspond to the right energy. So I will use a kind of shorthand notation as kind of a delta H E. Although, when we were doing it last time around, we allowed this function to allow a range of possible E-values. I can do the same thing, it's just that writing that out is slightly less convenient. So let's stick with the convenient form. Essentially, it says within some range delta A allow the Hamiltonians. Allow states, micro-states, whose Hamiltonian would give the right energy that is comparable to the energy that we have said exists for the macro-state. Now clearly also, this definition tells me that the trace of rho has to be 1. Or classically, the integral of rho E over the entire phase space has to be 1. So there is a normalization condition. And this normalization condition gave us the quantity omega of E, the number of states of energy E, out of which we then constructed the entropy and we were running away calculating various thermodynamic quantities. So now let's see how we evaluate this in the quantum case. We will use the same expression, but now me realize that rho is a matrix. So I have to evaluate, maybe elements of that matrix to clarify what this matrix looks like in some basis. What's the most convenient basis? Since rho is expressed as a function of H, the most convenient basis is the energy basis, which is the basis that diagonalizes your Hamiltonian matrix. Basically, there's some vectors in this Hilbert space such that the action of H on this will give us some energy that I will call epsilon n, some eigenvalue. An n. So that's the definition of the energy basis. Again, as all basis, we can make these basis vectors n to be unit length and orthogonal to each other. There is an orthonormal basis. 
If I evaluate rho E in this basis, what do I find? I find that n rho m. Well, 1 over omega E is just the constant. It comes out front. And the meaning of this delta function becomes obvious. It is 1 or 0. It is 1 if, let's say, Em equals to the right energy. Em is the right energy for the ensemble E. And of course, there is a delta function. So there is also an m equals to n. And it is 0, clearly, for states that have the wrong energy. But there is an additional thing here that I will explain shortly for m not equal to n. So let's parse the two statements that I have made over here. The first one, it says that if I know the energy of my macro-state, I clearly have to find wave functions to construct possible states of what is in the box that have the right energy. States that don't have the right energy are not admitted. States that are the right energy, I have nothing against each one of them. So I give them all the same probability. So this is our whole assumption of equal equilibrium a priori probabilities. But now I have a quantum system and I'm looking at the matrix. And this matrix potentially has off-diagonal elements. You see, if I am looking at m equals to n, it means that I am looking at the diagonal elements of the matrix. What this says is that if the energies are degenerate-- let's say I have 100 states, all of them have the right energy, but they are orthonormal. These 100 states, when I look at the density matrix in the corresponding basis, the off-diagonal elements would be 0. So this is the "or." So even if this condition is satisfied, even if Em equals to E, but I am looking at off-diagonal elements, I have to put 0. And this is sometimes called the assumption of random phases. Before telling you why that's called an assumption of random phases, let's also characterize what this omega of E is. Because trace of rho has to be 1, clearly omega of E is the trace of this delta h E. Essentially as I scan all of my possible energy levels in this energy basis, I will get 1 for those that have the right energy and 0 otherwise. So basically, this is simply number of states of energy E, potentially with minus plus some delta E if I want to include that. Now, what these assumptions mean are kind of like this. So I have my box and I have been told that I have energy E. What's a potential wave function that I can have in this box? What's the psi given that I know what the energy E is? Well, any superposition of these omega E states that have the right energy will work fine. So I have a sum over, let's say, mu that belongs to the state such that H-- such that this energy En is equal to the energy that I have specified. And there are omega sub E such states. I have to find some kind of an amplitude for these states, and then the corresponding state mu. I guess I should write here E mu. Now, the first statement over here is that as far as I'm concerned, I can put all of these a mu's in whatever proportion that I want. Any linear combination will work out fine. Because ultimately, psi has to be normalized. Presumably, the typical magnitude a m squared, if I average over all members of the ensemble, should be 1 over omega. So this is a superposition. Typically, all of them would contribute. But since we are thinking about quantum mechanics, these amplitudes can, in fact, be complex. And this statement of random phases is more or less equivalent to saying that the phases of the different elements would be typically uncorrelated when you average over all possible members of this ensemble. 
Just to emphasize this a little bit more, let's think about the very simplest case that we can have for thinking about probability. And you would think of, say, a coin that can have two possibilities, head or tail. So classically, you would say, head or tail, not knowing anything else, are equally likely. The quantum analog of that is a quantum bit. And the qubit can have, let's say, up or down states. It's a Hilbert space that is composed of two elements. So the corresponding matrix that I would have would be a 2 by 2. And according to the construction that I have, it will be something like this. What are the possible wave functions for this system? I can have any linear combination of, say, up and down, with any amplitude here. And so essentially, the amplitudes, presumably, are quantities that I will call alpha up and alpha down. That, on average, alpha squared up or down would be 1/2. That's what would appear here. The elements that appear here according to the construction that I have up there, I have to really take this element, average it against the complex conjugate of that element. So what will appear here would be something like e to the i phi of up minus phi of down, where, let's say, I put here phi of up. And what I'm saying is that there are a huge number of possibilities. Whereas, the classical coin has really two possibilities, head or tail, the quantum analog of this is a huge set of possibilities. These phis can be anything, 0 to 2 pi. Independent of each other, the amplitudes can be anything as long as the eventual normalization is satisfied. And as you sort of sum over all possibilities, you would get something like this also. Now, the more convenient ensemble for calculating things is the canonical one, where rather than specifying what the energy of the system is, I tell you what the temperature is. I still don't allow work to take place, so these other elements we kept fixed. And our classical description for rho sub T was e to the minus beta H divided by some partition function, where beta was 1 over kT, of course. So again, in the energy basis, this would be diagonal. And the diagonal elements would be e to the minus beta epsilon n. I could calculate the normalization Z, which would be trace of e to the minus beta H. This trace is calculated most easily the basis in which H is diagonal. And then I just pick out all of the diagonal elements, sum over n e to the minus beta epsilon. Now if you recall, we already did something like this without justification where we said that the states of a harmonic oscillator, we postulated to be quantized h bar omega n plus 1/2. And then we calculated the partition function by summing over all states. You can see that this would provide the justification for that. Now, various things that we have in classical formulations also work, such that classically if we had Z, we could take the log of Z. We could take a derivative of log Z with respect to beta. It would bring down a factor of H. And then we show that this was equal to the average of the energy. It was the average of Hamiltonian, which we then would identify with the thermodynamic energy. It is easy to show that if you take the same set of operations over here, what you would get is trace of H rho, which is the definition of the average that you find. Now, let's do one example in the canonical ensemble, which is particle in box. Basically, this is the kind of thing that we did all the time. We assume that there is some box of volume v. 
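Before the particle-in-a-box example, the harmonic oscillator mentioned above is a quick check of Z = trace of e to the minus beta H in the energy basis. A sketch in units where hbar, omega, and k_B are 1 (the finite sum stands in for the exact geometric series):

```python
import numpy as np

# Quantum harmonic oscillator: epsilon_n = n + 1/2, Z(beta) = sum_n exp(-beta*(n+1/2)).
def lnZ(beta, nmax=2000):
    n = np.arange(nmax)
    return np.log(np.sum(np.exp(-beta * (n + 0.5))))

def energy(beta, d=1e-6):
    return -(lnZ(beta + d) - lnZ(beta - d)) / (2 * d)   # <H> = -d lnZ / d beta

for T in [0.1, 1.0, 10.0]:
    beta = 1.0 / T
    exact = 0.5 + 1.0 / np.expm1(beta)    # 1/2 + 1/(e^beta - 1)
    print(T, energy(beta), exact)         # approaches k_B T at high temperature
```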
And for the time being, I just put one particle in it. Assume that there is no potential. So the Hamiltonian for this one particle is just its kinetic energy p1 squared over 2m, maybe plus some kind of a box potential. Now, if I want to think about this quantum mechanically, I have to think about some kind of a basis and calculating things in some kind of basis. So for the time being, let's imagine that I do things in coordinate basis. So I want to have where the particle is. And let's call the coordinates x. Let's say, a vector x that has x, y, and z-components. Then, in the coordinate basis, p is h bar over i, the gradient vector. So this becomes minus h bar squared Laplacian over 2m. And if I want to look at what are the eigenstates of his Hamiltonian represented in coordinates basis, as usual I want to have something like H1 acting on some function of position giving me the energy. And so I will indicate that function of position as x k. And I want to know what that energy is. And the reason I do that is because you all know that the correct form of these eigenfunctions are things like e to the i k dot x. And since I want these to be normalized when I integrate over all coordinates, I have to divide by square root of V. So that's all nice and fine. What I want to do is to calculate the density matrix in this coordinate basis. So I pick some point x prime rho x. And it's again, a one particle. Yes? AUDIENCE: I was thinking the box to be infinite or-- I mean, how to treat a boundary because [INAUDIBLE]. PROFESSOR: So I wasn't careful about that. The allowed values of k that I have to pick here will reflect the boundary conditions. And indeed, I will be able to use these kinds of things. Let's say, by using periodic boundary condition. And if I choose periodic boundary conditions, the allowed values of k will be quantized. So that, say, kx would be multiples of 2 pi over Lx times some integer. If I use really box boundary conditions such as the wave function having to vanish at the two ends, in reality I should use the sine and cosine type of boundary wave functions, which are superpositions of these e to the i kx and e to the minus i kx. And they slightly change the normalization-- the discretization. But up to, again, these kinds of superpositions and things like that, this is a good enough statement. If you are really doing quantum mechanics, you have to clearly be more careful. Just make sure that I'm sufficiently careful, but not overly careful in what I'm doing next. So what do I need to do? Well, rho 1 is nicely diagonalized in the basis of these plane waves. In these eigenstates of the Hamiltonian. So it makes sense for me to switch from this representation that is characterized by x and x prime to a representation where I have k. And so I can do that by inserting a sum over k here. And then I have rho 1. Rho 1 is simply e to the minus beta. It is diagonal in this basis of k's. There is a normalization that we call Z1, partition function. And because it is diagonal, in fact I will use the same k on both sides of this. So the density matrix is, in fact, diagonal in momentum. This is none other than momentum, of course. But I don't want to calculate the density matrix in momentum where it is diagonal because that's the energy basis we already saw, but in coordinate basis. So now let's write all of these things. First of all, the sum over k when l becomes large, I'm going to replace with an integral over k. And then I have to worry about the spacing that I have between k-values. 
So that will multiply with a density of states, which is 2 pi cubed product of lx, ly, lz, which will give me the volume of the box. And then I have these factors of-- how did I define it? kx is e to the i k dot x. And this thing where k and x prime are interchanged is its complex conjugate. So it becomes x minus x prime. I have two factors of 1 over square root of V, so that will give me a factor of 1 over V from the product of these things. And then I still have minus beta h bar squared k squared over 2m. And then I have the Z1. Before proceeding, maybe it's worthwhile for me to calculate the Z1 in this case. So Z1 is the trace of rho 1. It's essentially the same thing. It's a sum over k e to the minus beta h bar squared k squared over 2m. I do the same thing that I did over there, replace this with V integral over k divided by 2 pi cubed e to the minus beta h bar squared k squared over 2m. I certainly recognize h bar k as the same thing as momentum. This is really p squared. So just let me write this after rescaling all of the k's by h bar. So then what I would have is an integral over momentum. And then for each k, I essentially bring down a factor of h bar. What is 2 pi h bar? 2 pi h bar is h. So I would have a factor of h cubed. I have an integration over space that gave me volume. e to the minus beta h bar-- p squared over 2m. Why did I write it this way? Because it should remind you of how we made dimensionless the integrations that we had to do for classical calculations. Classical partition function, if I wanted to calculate within a single particle, for a single particle Hamiltonian, I would have this Boltzmann weight. I have to integrate over all coordinates, over all momenta. And then we divided by h to make it dimensionless. So you can see that that naturally appears here. And so the answer to this is V over lambda cubed with the lambda that we had defined before as h over root 2 pi m kT. So you can see that this traces that we write down in this quantum formulations are clearly dimensionless quantities. And they go over to the classical limit where we integrate over phase space appropriately dimensional-- made dimensionless by dividing by h per combination of p and q. So this V1 we already know is V-- Z1 we already know is V over lambda cubed. The reason I got a little bit confused was because I saw that the V's were disappearing. And then I remembered, oh, Z1 is actually proportional to mu over lambda cubed. So that's good. So what we have here from the inverse of Z1 is a 1 over V lambda cubed here. Now, then I have to complete this kind of integration, which are Gaussian integrations. If x was equal to x prime, this would be precisely the integration that I'm doing over here. So if x was equal to x prime, that integration, the Gaussian integration, would have given me this 1 over lambda cubed. But because I have this shift-- in some sense I have to shift k to remove that shift in x minus x prime. And when I square it, I will get here a factor, which is exponential, of minus x minus x prime squared-- the Square of that factor-- times the variance of k, which is m kT over h bar squared. So what do I get here? I will get 2. And then I would get essentially m kT. And then, h bar squared here. Now, this combination m kT divided by h, same thing as here, m kT and h. Clearly, what that is, is giving me some kind of a lambda. So let me write the answer. 
Eventually, what I find is that evaluating the one-particle density matrix between two points, x and x prime, will give me 1 over V. And then this exponential factor, which is x minus x prime squared. The coefficient here has to be proportional to lambda squared. And if you do the algebra right, you will find that the coefficient is pi over 2. Or is it pi? Let me double check. It's a numerical factor that is not that important. It is pi, pi over lambda squared. OK, so what does this mean? So I have a particle in the box. It's a quantum mechanical particle at some temperature T. And I'm trying to ask, where is it? So if I want to ask where is it, what's the probability it is at some point x? I refer to this density matrix that I have calculated in coordinate space. Put x prime and x to be the same. If x prime and x are the same, I get 1 over V. Essentially, it says that the probability that I find a particle is the same anywhere in the box. So that makes sense. It is the analog of throwing the coin. We don't know where the particle is. It's equally likely to be anywhere. Actually, remember that I used periodic boundary conditions. If I had used sine and cosines, there would have been some dependence on approaching the boundaries. But the matrix tells me more than that. Basically, the matrix says it is true that the diagonal elements are 1 over V. But if I go off-diagonal, there is this [INAUDIBLE] scale lambda that is telling me something. What is it telling you? Essentially, it is telling you that through the procedures that we have outlined here, if you are at the temperature T, then this factor tells you about the classical probability of finding a momentum p. There is a range of momenta because p squared over 2m, the excitation is controlled by kT. There is a range of momenta that is possible. And through a procedure such as this, you are making a superposition of plane waves that have this range of momentum. The superposition of plane waves, what does it give you? It essentially gives you a Guassian blob, which we can position anywhere in the space. But the width of that Gaussian blob is given by this factor of lambda, which is related to the uncertainty that is now quantum in character. That is, quantum mechanically if you know the momentum, then you don't know the position. There is the Heisenberg uncertainty principle. Now we have a classical uncertainty about the momentum because of these Boltzmann weights. That classical uncertainty that we have about the momentum translates to some kind of a wave packet size for these-- to how much you can localize a particle. So really, this particle you should think of quantum mechanically as being some kind of a wave packet that has some characteristic size lambda. Yes? AUDIENCE: So could you interpret the difference between x and x prime appearing in the [INAUDIBLE] as sort of being the probability of finding a particle at x having turned to x prime within the box? PROFESSOR: At this stage, you are sort of introducing something that is not there. But if I had two particles-- so very soon we are going to do multiple particles-- and this kind of structure will be maintained if I have multiple particles, then your statement is correct. That this factor will tell me something about the probability of these things being exchanged, one tunneling to the other place and one tunneling back. And indeed, if you went to the colloquium yesterday, Professor [? Block ?] showed pictures of these kinds of exchanges taking place. OK? 
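The size of that wave packet, the thermal de Broglie wavelength lambda = h over the square root of 2 pi m k_B T, is easy to evaluate; the off-diagonal element of rho 1 then decays as a Gaussian over that scale. A sketch with illustrative numbers for a helium atom (SI units):

```python
import numpy as np

h, kB = 6.626e-34, 1.381e-23       # Planck and Boltzmann constants
m_He = 6.646e-27                   # mass of a helium atom, kg

def thermal_wavelength(m, T):
    return h / np.sqrt(2 * np.pi * m * kB * T)

def rho1_offdiag(dx, m, T, V=1.0):
    lam = thermal_wavelength(m, T)
    return np.exp(-np.pi * dx**2 / lam**2) / V    # <x'| rho_1 |x>, dx = |x - x'|

for T in [300.0, 4.0, 0.001]:
    print(T, thermal_wavelength(m_He, T))   # lambda grows as T drops
print(rho1_offdiag(1e-10, m_He, 300.0))     # coherence is lost beyond ~lambda
```

At room temperature lambda is a fraction of an angstrom, which is why the classical description works; at very low temperatures the packets grow and quantum effects set in.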
But essentially, really at this stage for the single particle with no potential, the statement is that the particle is, in reality, some kind of a wave packet. OK? Fine. So any other questions? The next thing is, of course, what I said. Let's put two particles in the potential. So let's go to two particles. Actually, not necessarily even in the box. But let's say what kind of a Hamiltonian would I write for two particles? I would write something that has a kinetic energy for the first particle, kinetic energy for the second particle, and then some kind of a potential of interaction. Let's say, as a function of x1 minus x2. Now, I could have done many things for two particles. I could, for example, have had particles that have different masses, but I clearly wanted to write things that were similar. So this is a Hamiltonian that describes particles labeled by 1 and 2. And if they have the same mass and the potential is a function of separation, it's certainly the same as the H in which 2 and 1 are exchanged. So this Hamiltonian has a certain symmetry under the exchange of the labels. Now typically, what you should remember is that any type of symmetry that your Hamiltonian has or any matrix has will be reflected ultimately in the form of the eigenvectors that you would construct for that. So indeed, I already know that for this, I should be able to construct wave functions that are either symmetric or anti-symmetric under exchange. But that statement aside, let's think about the meaning of the wave function that we have for the two particles. So presumably, there is a psi as a function of x1 and x2 in the coordinate space. And for a particular quantum state, you would say that the square of this quantity is related to quantum probabilities of finding particles at x1 and x2. So this could, for example, be the case of two particles. Let's say oxygen and nitrogen. They have almost the same mass. This statement would be true. But the statement becomes more interesting if we say that the particles are identical. So for identical particles, the statement of identity in quantum mechanics is stamped in at this stage. You would say that I can't tell apart the probability that one particle, number 1, is at x1, particle 2 is at x2 or vice versa. The labels 1 and 2 are just things that I assign for convenience, but they really don't have any meaning. There are two particles. To all intents and purposes, they are identical. And there is some probability for seeing the event where one particle-- I don't know what its label is-- is at this location. The other particle-- I don't know what its label is-- is at that location. Labels don't have any meaning. So this is different. This does not have a classical analog. Classically, if I put something on the computer, I would say that particle 1 is here at x1, particle 2 is here at x2. I have to write some index on the computer. But if I want to construct a wave function, I wouldn't know what to do. I would just essentially have a function of x1 and x2 that is peaked here and there. And we also know that somehow quantum mechanics tells us that there is actually a stronger version of this statement, which is that psi of x2, x1 is either plus psi of x1, x2 or minus psi of x1, x2 for identical particles, depending on whether they are bosons or fermions. So let's kind of build upon that a little bit more and go to many particles. So for N particles that are identical, I would have some kind of a state psi that depends on the coordinates. 
For example, in the coordinate representation, but I could choose any other representation. Coordinate kind of makes more sense. We can visualize it. And if I can't tell them apart, previously I was just exchanging two of the labels but now I can permute them in any possible way. So I can add the permutation P. And of course, there are N factorial in number. And the generalization of the statement that I had before was that for the case of bosons, I will always get the same thing back. And for the case of fermions, I will get a number back that is either minus or plus depending on the type of permutation that I apply to what I started with. So I have introduced here something that I call minus 1 to the power of P, which is called the parity of the permutation. A permutation is either even-- in which case, this minus 1 to the power of P is plus-- or is odd-- in which case, this minus 1 to P is minus 1. How do I determine the parity of a permutation? The parity of the permutation is set by the number of exchanges that lead to this permutation. Basically, take any permutation. Let's say we stick with four particles. And I go from 1, 2, 3, 4, which was, let's say, some regular ordering, to some other ordering. Let's say 4, 2, 1, 3. And my claim is that I can achieve this transformation through a series of exchanges. So I can get here as follows. I want 4 to come all the way back to here, so I do an exchange of 1 and 4. I call the exchange in this fashion. I do exchange of 1 and 4. The exchange of 1 and 4 will make for me 4, 2, 3, 1. OK. I compare this with this. I see that all I need to do is to switch 3 and 1. So I do an exchange of 1 and 3. And what I will get here is 4, 2, 1, 3. So I could get to that permutation through two exchanges. Therefore, this is even. Now, this is not the only way that I can get from one to the other. I can, for example, sit and do multiple exchanges of this 4, 2 a hundred times, but not ninety-nine times. As long as I do an even number, I will get back to the same thing. The parity will be conserved. And there's another way of calculating parity. You just start with this original configuration and you want to get to that final configuration. You just draw lines. So 1 goes to 1, 2 goes to 2, 3 goes to 3, 4 goes to 4. And you count how many crossings you have, and the parity of those crossings will give you the parity of the permutation. A short code sketch of this parity calculation is given a little further below. So somehow within quantum mechanics, the idea of what is an identical particle is stamped into the nature of the wave vectors, in the structure of the Hilbert space that you can construct. So let's see how that plays out in this simple example of the particle in the box, if we continue to add particles into the box. So we want to now put N particles in the box. Otherwise, no interaction-- completely free. So the N particle Hamiltonian is a sum over alpha running from 1 to N of p alpha squared over 2m-- kinetic energies of all of these things. Fine. Now, note that this Hamiltonian is in some sense built up of lots of non-interacting pieces. And we saw already classically that when things are not interacting, calculating probabilities, partition functions, et cetera, is very easy. This has that same structure. It's the ideal gas. Now, quantum mechanically. So it should be sufficiently easy. And indeed, we can immediately construct the eigenstates for this. So we can construct the basis, and then do the calculations in that basis. So let's look at something that I will call product state.
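Before turning to product states, here is the promised Python sketch of the parity calculation. Counting inversions is the "count the crossings" picture described above; the function name is mine.

```python
from itertools import combinations

def parity(perm):
    """Parity of a permutation given as a tuple like (4, 2, 1, 3): +1 if even, -1 if odd.

    Counts 'crossings' (inversions), as in the line-drawing picture:
    the number of pairs that end up out of order.
    """
    inversions = sum(1 for i, j in combinations(range(len(perm)), 2)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else +1

# The example from the lecture: 1, 2, 3, 4 -> 4, 2, 1, 3 is reached by two
# exchanges (1<->4, then 1<->3), so it is even.
print(parity((4, 2, 1, 3)))   # +1
print(parity((2, 1, 3, 4)))   # -1, a single exchange
```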
And let's say that I had this structure that I described on the board above where I have a plane wave that is characterized by some wave number k for one particle. For N particles, I pick the first one to be k1, the second one to be k2, the last one to be kN. And I will call this a product state in the sense that if I look at the positional representation of this product state, it is simply the product of one-particle states. So this is a product over alpha of x alpha k alpha, which is this e to the i k alpha dot x alpha over square root of V. So this is perfectly well normalized if I were to integrate over x1, x2, x3. Each one of the integrals is individually normalized. The overall thing is normalized. It's an eigenstate because if I act Hn on this product state, what I will get is a sum over alpha h bar squared k alpha squared over 2m, and then the product state back. So it's an eigenstate. And in fact, it's a perfectly good eigenstate as long as I want to describe a set of particles that are not identical. I can't use this state to describe particles that are identical because it does not satisfy the symmetries that I said quantum mechanics stamps into identical particles. And it's clearly the case that, for example-- so this is not symmetrized since clearly, if I look at k1, k2, k1 goes with x1, k2 goes with x2. And it is not the same thing as k2, k1, where k2 goes with x1 and k1 goes with x2. Essentially, the two particles can be distinguished. One of them has momentum h bar k1. The other has momentum h bar k2. I can tell them apart because of this unsymmetrized nature of this wave function. But you know how to make things that are symmetric. You basically add k1 k2 plus k2 k1 to make it symmetrized, or take k1 k2 minus k2 k1 to make it anti-symmetrized. Divide by square root of 2 and you are done. So now, let's generalize that to the case of the N-particle system. Let's start with the case of the fermionic state. In the fermionic state, I will indicate by k1, k2, kN with a minus index because of the antisymmetry or the minus signs that we have for fermions. The way I do that is I sum over all N factorial permutations that I have. I let p act on the product state. And again, for two particles, you have the k1 k2, then you do minus k2 k1. For general particles, I do this minus 1 to the power of p. So all the even permutations are added with plus. All the odd permutations are added with minus. Except that this is a whole bunch of different terms that are being added. Again, for two particles, you know that you have to divide by a square root of 2 because you add 2 vectors. And the length of the overall vector is increased by a square root of 2. Here, you have to divide in general by the number of terms that you have, square root of N factorial. The only thing that you have to be careful with is that you cannot have any two of these k's to be the same. Because let's say these two are the same, then along the list here I have the exchange of these two. And when I exchange them, I go from even to odd. I put a minus sign and I have a subtraction. So here, I have to make sure that all k alpha must be distinct. Now, the bosonic one is simpler. I basically construct it, k1, k2, kN with a plus. By simply adding things together, so I will have a sum over p. No sign here. Permutation k1 through kN. And then I have to divide by some normalization. Now, the only tricky thing about this is that the normalization is not N factorial. A short code sketch of this construction is given below.
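Here is a minimal one-dimensional Python sketch of this (anti)symmetrization. It assumes a unit box length, uses eta = +1 for bosons and eta = -1 for fermions, and the function names are mine; it is only meant to illustrate the sum over permutations and the normalization, not to be an efficient construction.

```python
import numpy as np
from itertools import permutations
from math import factorial
from collections import Counter

def permutation_parity(perm):
    """+1 for even permutations, -1 for odd ones (inversion count)."""
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else +1

def nparticle_amplitude(ks, xs, eta, L=1.0):
    """Position-space amplitude of the (anti)symmetrized plane-wave state (1D).

    ks  : wave numbers of the occupied one-particle states
    xs  : particle positions
    eta : +1 for bosons, -1 for fermions
    Implements sum_P eta^P prod_alpha exp(i k_{P(alpha)} x_alpha) / sqrt(L),
    divided by sqrt(N! * prod_k n_k!), following the construction described above.
    """
    N = len(ks)
    norm = np.sqrt(factorial(N) * np.prod([factorial(n) for n in Counter(ks).values()]))
    total = 0.0 + 0.0j
    for perm in permutations(range(N)):
        sign = permutation_parity(perm) if eta == -1 else 1
        term = np.prod([np.exp(1j * ks[p] * xs[a]) / np.sqrt(L)
                        for a, p in enumerate(perm)])
        total += sign * term
    return total / norm

# A fermionic state with two equal k's vanishes identically (all k's must be distinct):
print(abs(nparticle_amplitude([1.0, 1.0], [0.2, 0.7], eta=-1)))   # ~ 0
print(abs(nparticle_amplitude([1.0, 2.0], [0.2, 0.7], eta=-1)))   # nonzero
print(abs(nparticle_amplitude([1.0, 1.0], [0.2, 0.7], eta=+1)))   # bosons are fine
```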
So to give you an example, let's imagine that I choose to start with a product state where two of the k's are alpha and one of them is beta. So let's sort of put here 1, 1, 1, 2, 3 for my k1, k2, k3. I have chosen that k1 and k2 are the same. And what I have to do is to sum over all possible permutations. So there is a permutation here that is 1, 3, 2. So I get here alpha, beta, alpha. Then I will have 2, 1, 3, 2, 3, 1, 3, 1, 2, 3, 2, 1, which basically would be alpha, alpha, beta, alpha, beta, alpha, beta, alpha, alpha, beta, alpha, alpha. So there are for three things, 3 factorial or 6 permutations. But this entity is clearly twice alpha, alpha, beta, plus alpha, beta, alpha, plus beta, alpha, alpha. And the norm of this is going to be 4 times 1 plus 1 plus 1. 4 times 3, which is 12. So in order to normalize this, I have to divide not by 1/6, but 1/12. So the appropriate normalization here then becomes 1 over root 3. Now, in general what would be this N plus? To calculate N plus, I have to make sure that the norm of this entity is 1. Or, N plus is the square of this quantity. And if I were to square this quantity, I will get two sets of permutations. I will call them p and p prime. And on one side, I would have the permutation p of k1 through kN. On the other side, I would have a permutation k prime of k1 through kN. Now, this is clearly N factorial square terms. But this is not N factorial squared distinct terms. Because essentially, over here I could get 1 of N factorial possibilities. And then here, I would permute over all the other N factorial possibilities. Then I would try the next one and the next one. And essentially, each one of those would give me the same one. So that is, once I have fixed what this one is, permuting over them will give me one member of N factorial identical terms. So I can write this as N factorial sum over Q. Let's say I start with the original k1 through kN, and then I go and do a permutation of k1 through kN. And the question is, how many times do I get something that is non-zero? If these two lists are completely distinct, except for the identity any transformation that I will make here will make me a vector that is orthogonal to this and I will get 0. But if I have two of them that are identical, then I do a permutation like this and I'll get the same thing back. And then I have two things that are giving the same result. And in general, this answer is a product over all multiple occurrences factorial. So let's say here there was something that was repeated twice. If it had been repeated three times, then all six possibilities would've been the same. So I would have had 3 factorial. So if I had 3 and then 2 other ones that were the same, then the answer would have been multiplied by 3 factorial 2 factorial. So the statement here is that this N plus that I have to put here is N factorial product over k nk factorial, which is essentially the multiplicity that is associated with repeats. Oh So we've figured out what the appropriate bosonic and fermionic eigenstates are for this situation of particles in the box where I put multiple particles. So now I have N particles in the box. I know what the appropriate basis functions are. And what I can now try to do is to, for example, calculate the analog of the partition function, the analog of what I did here for N particles, or the analog of the density matrix. So let's calculate the analog of the density matrix and see what happens. 
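Before moving to the density matrix, here is a quick numerical check of the counting in the example above. The letters 'a' and 'b' stand in for the two k values alpha and beta, and the distinct orderings are treated as orthonormal basis states; this is pure bookkeeping, not a physical calculation.

```python
import numpy as np
from itertools import permutations
from math import factorial
from collections import Counter

# Start from the product state |alpha, alpha, beta>, sum over all 3! permutations,
# and verify that the squared norm is N! * prod_k n_k! = 6 * 2 = 12 (not 6), so the
# coefficient in front of the three distinct orderings is 2 / sqrt(12) = 1 / sqrt(3).

labels = ('a', 'a', 'b')                      # two k's equal (alpha), one different (beta)
basis = sorted(set(permutations(labels)))     # the distinct orderings, treated as orthonormal

def vector(ordering_sum):
    """Represent a sum of product states as a vector over the distinct orderings."""
    v = np.zeros(len(basis))
    for ordering in ordering_sum:
        v[basis.index(ordering)] += 1.0
    return v

unnormalized = vector(list(permutations(labels)))   # sum over all 3! permutations
norm_sq = np.dot(unnormalized, unnormalized)
print(norm_sq)                                      # 12.0, not 3! = 6
print(factorial(3) * np.prod([factorial(n) for n in Counter(labels).values()]))   # also 12
```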
So I want to calculate an N particle density matrix, completely free particles in a box, no interactions. For one particle, I went from x to x prime. So here, I go from x1, x2, xN. To here, x1 prime, x2 prime, xN prime. Our answer ultimately will depend on whether I am dealing with fermions or bosons. So I introduce an index here eta. Let me put it here, eta is plus 1 for bosons and minus 1 for fermions. So kind of goes with this thing over here whether or not I'm using bosonic or fermionic symmetrization in constructing the wave functions here. You say, well, what is rho? Rho is e to the minus this beta hn divided by some partition function ZN that I don't know. But what I certainly know about it is because I constructed, in fact, this basis so that I have the right energy, which is sum over alpha h bar squared alpha squared k alpha squared over 2m. That is, this rho is diagonal in the basis of the k's. So maybe what I should do is I should switch from this representation to the representation of k's. So the same way that for one particle I sandwiched this factor of k, I am going to do the same thing over here. Except that I will have a whole bunch of k's. Because I'm doing a multiple system. What do I have? I have x1 prime to xN prime k1 through kN, with whatever appropriate symmetry it is. I have e to the minus sum over alpha beta h bar squared k alpha squared over 2m. Essentially, generalizing this factor that I have here, except that I have to normalize it with some ZN that I have yet to calculate. And then I go back from k1, kN to x1, xN. Again, respecting whatever symmetry I want to have at the end of the day. Now, let's think a little bit about this summation that I have to do over here. And I'll put a prime up there to indicate that it is a restricted sum. I didn't have any restriction here. I said I integrate over-- or sum over all possible k's that were consistent with the choice of 2 pi over l, or whatever. Now, here I have to make sure that I don't do over-counting. What do I mean? I mean that once I do this symmetrization, let's say I start with two particles and I have k1, k2. Then, I do the symmetrization. I get something that is a symmetric version of k1, k2. I would have gotten exactly the same thing if I started with k2, k1. That is, a particular one of these states corresponds to one list that I have here. And so over here, I should not sum over k1, k2, k3, et cetera, independently. Because then I will be over-counting by N factorial. So I say, OK, let me sum over things independently and then divide by the over-counting. Because presumably, these will give me N factorial similar states. So let's just sum over all of them, forget about this. You say, well, almost. Not quite because you have to worry about these factors. Because now when you did the N factorial, you sort of did not take into account the kinds of exchanges that I mentioned that you should not include because essentially if you have k1, k1, and then k3, then you don't have 6 different permutations. You only have 3 different permutations. So actually, the correction that I have to make is to multiply here by this quantity. So now that sum be the restriction has gone into this. And this ensures that as I sum over all k's independently, the over-counting of states that I have made is taken into account by the appropriate factor. Now, this is actually very nice because when I look at these states, I have the normalization factors. The normalizations depend on the list of k's that I have over here. 
So the normalization of these objects will give me a 1 over N factorial product over k nk factorial. And these nk factorials will cancel. Now you say, hang on. Going too fast. This eta says you can do this both for fermions and bosons. But this calculation that you did over here applies only for the case of bosons. You say, never mind because for fermions, the allowed values of nk are either 0 or 1. I cannot put two of them. So these nk factorials are either-- again, 1 or 1. So even for fermions, this will work out fine. There will be appropriate cancellation of wave functions that should not be included when I do the summation over permutations with the corresponding factors of plus and minus. So again, the minus signs-- everything will work. And this kind of ugly factor will disappear at the end of the day. OK? Fine. Now, each one of these is a sum over permutations. Actually, before I write those, let me just write the factor of e to the minus sum over alpha beta h bar squared k alpha squared over 2m. And then I have ZN. I don't know what ZN is yet. Each one of these states is a sum over permutations. I have taken care of the overall normalization. I have to do the sum over the permutations. And let's say I'm looking at this one. Essentially, I have to sandwich that with some combination of x's. What do I get? I will get a factor of eta to the p. So actually, maybe I should have really within the following statement at some point. So we introduced these different states. I can combine them together and write k1, k2, kN with a symmetry that is representative of bosons or fermions. And that's where I introduced a factor of plus or minus, which is obtained by summing over all permutations. This factor eta, which is plus or minus, raised to the p. So for bosons, I would be adding everything in phase. For fermions, I would have minus 1 to the p. I have the permutation of the set of indices k1 through kN in the product state. Whoops. This should be product. This should be a product state. And then I have to divide by square root of N factorial product over k nk factorial. And clearly, and we will see a useful way of looking at this. I can also look at the set of states that are occupied among the original list of k's. Some k's do not occur in this list. Some k's occurs once. Some k's occur more than once. And that's how I construct these factors. And then the condition that I have is that, of course, sum over k nK should be the total number N. That nk is 0 or 1 for fermions. That nk is any value up to that constraint for bosons. Now, I take these functions and I substitute it here and here. What do I get? I get the product state. So I have eta to the p. The product state is e to the i sum over, let's say, beta k of p beta x beta prime. And then from the next one, I will get e to the minus i sum over-- again, some other beta. Or the same beta, it doesn't matter. p prime beta x beta. I think I included everything. Yes? AUDIENCE: Question. Why do you only have one term of beta to the p? PROFESSOR: OK. So here I have a list. It could be k1, k2, k2-- well, let's say k3, k4. Let's say I have four things. And the product means I essentially have a product of these things. When I multiply them with wave functions, I will have e to the i, k1, x1 et cetera. Now, I do some permutation. Let's say I go from 1, 2, 3 to 3, 2, 1. So there is a permutation that I do like that. I leave the 4 to be the same. This permutation has some parity. So I have to put a plus or minus that depends on the parity. 
AUDIENCE: Yeah. But my question is why in that equation over there-- PROFESSOR: Oh, OK. AUDIENCE: --we have a term A to the p for the k. PROFESSOR: Yes. AUDIENCE: k factor and not for the bra. PROFESSOR: Good. You are telling me that I forgot to put p prime and eta of p prime. AUDIENCE: OK. I wasn't sure if there was a reason. PROFESSOR: No. I was going step by step. I had written the bra part and I was about to get to the k part. I put that part of the ket, and I was about to do the rest of it. AUDIENCE: Oh, sorry. PROFESSOR: It's OK. OK. So you can see that what happens at the end of the day here is that a lot of these nk factorials disappear. So those were not things that we would have liked. There is a factor of N factorial from here and a factor of N factorial from there that remains. So the answer will be 1 over ZN N factorial squared. I have a sum over two permutations, p and p prime, of something. I will do this more closing next time around, but I wanted to give you the flavor. This double sum at the end of the day, just like what we did before, becomes one sum up to repetition of N factorial. So the N factorial will disappear. But what we will find is that the answer here is going to be something that is a bunch of Gaussians that are very similar to the integration that I did for one particle. I have e to the minus k squared over 2m. I have e to the i k x minus x prime. Except that all of these k's and x's and x primes have been permuted in all kinds of strange ways. Once we untangle that, we find that the answer is going to end up to be eta of Q x alpha minus x prime Q alpha squared divided by pi 2 lambda squared. And we have a sum over alpha, which is really kind of like a sum of things that we had for the case of one particle. So for the case of one particle, we found that the off-diagonal density matrix had elements that's reflected this wave packet nature of this. If we have multiple particles that are identical, then the thing is if I have 1, 2 and then I go 1, 1 prime, 2, 2 prime for the two different locations that I have x1 prime, x2 prime, et cetera. And 1 and 2 are identical, then I could have really also put here 2 prime, 1 prime. And I wouldn't have known the difference. And this kind of sum will take care of that, includes those kinds of exchanges that I mentioned earlier and is something that we need to derive more carefully and explain in more detail next time. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 11_Kinetic_Theory_of_Gases_Part_5.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Let's start. So let's review what we've been trying to do in the past few lectures. It's to understand a typical situation involving something like a gas, which, let's say, initially is in one half of the container. And then expands until it occupies the entire container, going from one equilibrium state to another equilibrium state. Question is, how to describe this from the perspective of microscopic degrees of freedom, getting the idea that eventually you will come to some form of equilibrium. And actually, more seriously, quantifying the process and timescales by which the gas comes to equilibrium. So the first stage was to say that we will describe the process by looking at the density, which describes the number of representative examples of the system, whose collections of coordinates and momenta occupy a particular point in the 6N-dimensional phase space. So basically, this is a function that depends on 6N coordinates. And of course, it changes as a function of time. And we saw that, basically, if we were to look at how this set of points were streaming in phase space, and this can be mathematically represented by looking at the direct time dependence and the implicit time dependence of all of the coordinates captured through the Poisson bracket. And the answer was that it is a characteristic of the type of Hamiltonian evolution equations, that the flow is such that the phase space does not change volume. And hence, this quantity is 0. We said, OK, all of that is fine. But what does that tell me about how this gas is expanding? So for that, we said that we need to look at descriptions that are more appropriate to thinking about the volume and density of a gas expanding. And we really want to capture that through the one-particle density. But along the way, we introduced s particle densities, normalized up to something, to correspond to, essentially, integrating over all of the coordinates that we're not interested in. Say coordinates pertaining to numbers s plus 1 to N, corresponding 6N-dimensional phase space and the entire [INAUDIBLE] of this density or probability. Then the next stage was to be more specific about what governs the evolution of these gas particles, i.e., what is the Hamiltonian. And we said, let's focus on an N particle Hamiltonian that is composed, by necessity certainly, of one-body terms that is the kinetic energy and the potential energy, the latter describing the box, for example. And we added a two-particle interaction. qi minus qj, which, for the sake of simplicity, let's imagine is spherically symmetric-- only depends on the relative position of these particles. Then we say that if I look at the evolution of this s particle density, I start by writing an equation that kind of looks like the full Liouville equation. There is a Hamiltonian that describes this set of particles interacting among themselves, just replacing n by s. And so if these were the only particles in the universe, this is what I would get. Except that we know that there are other particles. 
And since we have two-body collisions possible, any one of the particles in the set that I'm describing here, composed of s particles, can potentially collide with a particle from this set that I don't want to include. And that will change the volume of phase space. This incompressibility condition was valid only if the entire thing of the Hamiltonian was considered. Here, I'm only looking at the partial subset. And so then what did I have here? I had the force exerted from particle s plus 1 on particle n changing the momentum of the particle n. But the likelihood that this happens depends on finding the other particle also. So I had this dependence on the density that was s plus 1. So this was the BBGKY hierarchy. Then we said, let's take a look at the typical value of the terms that we have in this equation. And for s that is larger than 2, this Hamiltonian that pertains to s particles will have a collision term among the particles. And that collision term will give you some typical inverse timescale here, which is related to the collision time. Whereas on the right-hand side, for this collision to take place, you have to find another particle to interact with in the range of interactions that we indicated by d. And for density n, this was smaller by a factor of nd cubed for dilute gases. Therefore, the left-hand side is much larger than the right-hand side. And within particular approximation, we said that we are going to approximately set this right-hand side to 0. OK? Except that we couldn't do this for the equation that pertained to s equals to 1, because for s equals to 1, I did not have the analog of this term. If I were to write the precise form of this Poisson bracket, I had the partial derivative with respect to time, the momentum divided by mass, which is the velocity driving the change in coordinate. And the force that comes from the external potential, let's call it f1, we arrive in the change in momentum of the one-particle density, which is a function-- let's write it explicitly-- of p1, q1, and t. And on the other side, I had the possibility to have a term such as this that describes a collision with a second particle. I have to look at all possible second particles that I can collide with. They could come with all variety of momenta, which we will indicate p2. They can be all over the place. So I have integrations over space. And then I have a bunch of terms. So for this, we said, let's take a slightly better look at the kind of collisions that are implicitly described via the term that we would have here as f2. And since the right-hand side of f2 we set to 0, essentially, f2 really describes the Newtonian evolution of two particles coming together and having a collision. And this collision, I can-- really, the way that I have it over there is in what I would call a lab frame. And the perspective that we should have is that in this frame, I have a particle that is coming with momentum, let's say, p1, which is the one that I have specified on the left-hand side of the equation. That's the density that I'm following in the channel represented by p1. But then along comes a particle with momentum p2. And there is some place where they hit each other, after which this one goes off with p1 prime. Actually, let me call it p1 double prime. And this one goes off with p2 double prime. Now, we will find it useful to also look at the same thing in the center of mass frame. 
Now, in the center of mass frame-- oops-- my particle p1 is coming with a momentum that is shifted by the p of center of mass, which is really p1 plus p2 over 2. So this is p1 minus p2 over 2. And my particle number two comes with p2 minus center of mass, which is p2 minus p1 over 2. So they're minus each other. So in the center of mass frame, they are basically coming at each other along the same line. So we could define one of the axes of coordinates as the distance along which they are approaching. And let's say we put the center of mass at the origin, which means that I have two more directions to work with. So essentially, the coordinates of this second particle can be described through a vector b, which we call the impact parameter. And really, what happens in the collision is classically determined by how close they approach each other. To quantify that, we really need this impact vector. So b equals to 0, they are coming at each other head on. For a different b, they are coming at some angle. So really, I need to put here a d2b. Now, once I have specified the form of the interaction that is taking place when the two things come together, then the process is completely deterministic. Particle number one will come. And after the location at which you have the interaction, will go off with p1 double prime minus center of mass. And particle number two will come and get deflected to p2 prime minus-- or double prime minus p center of mass. OK? And again, it is important to note that this parameter A is irrelevant to the collision, in that once I have stated that before collision, I have momenta, say, p1 and p2. And this impact parameter b is specified, then p1 double prime is known completely as a function of p1, p2, and b. And p2 double prime is known completely as a function of p1, p2, and b. And everything in that sense is well-defined. A short code sketch of this center-of-mass picture is given below. So really, this parameter A is not relevant. And it's really the B that is important. But ultimately, we also need to get these other terms, something that has the appropriate dimensions of inverse time. And if I have a density of particles coming hitting a target, how frequently they hit depends on the product of the density as well as the velocity. So really, I have to add here a term that is related to the velocity with which these things are approaching each other. And so I can write it either as v2 minus v1, or p2 minus p1 over m. So that's the other thing that I have to look at. And then, essentially, what we saw last time is that the rest of the story-- if you sort of think of these as billiard balls, so that the collision is instantaneous and takes place at a well-specified point, instantaneously, at some location, you're going with some momentum, and then turn and go with some other momentum. So the channel that was carrying in momentum p1, which is the one that we are interested in, suddenly gets depleted by an amount that is related to the probability of having simultaneously particles of momenta p1 and p2. And the locations that I have to specify here-- well, again, I'm really following this channel. And in this channel, I have specified q1. And suddenly, I see that at the moment of collision, the momentum changes, if you are thinking about billiard balls. So really, it is at that location that I'm interested for particle one.
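Here is the promised sketch of the center-of-mass kinematics, in Python. It assumes equal masses and works in two dimensions for brevity, and it takes the scattering angle theta as an input; in the actual problem that angle would be fixed by the impact parameter b and the interaction potential, which are left unspecified here.

```python
import numpy as np

# Minimal sketch of an elastic two-body collision in the center-of-mass frame:
# split (p1, p2) into the total momentum and the relative momentum, rotate the
# relative momentum by a scattering angle theta (which the impact parameter b and
# the potential would determine), and recombine.  Total momentum and kinetic
# energy are conserved by construction.

m = 1.0   # equal masses, illustrative units

def collide(p1, p2, theta):
    """Outgoing momenta for an elastic collision with scattering angle theta (2D)."""
    P = p1 + p2                    # total momentum (unchanged)
    q = (p1 - p2) / 2.0            # relative momentum in the CM frame
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    q_out = R @ q                  # same magnitude, rotated direction
    return P / 2.0 + q_out, P / 2.0 - q_out

p1 = np.array([1.0, 0.0])
p2 = np.array([-0.2, 0.5])
p1_out, p2_out = collide(p1, p2, theta=0.8)

ke = lambda p: p @ p / (2 * m)
print(np.allclose(p1 + p2, p1_out + p2_out))                    # momentum conserved
print(np.isclose(ke(p1) + ke(p2), ke(p1_out) + ke(p2_out)))     # energy conserved
```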
For particle two, I'm interested at something that is, let's say, q1 plus, because it depends on the location of particle two shifted from particle one by an amount that is related by b and whatever this amount slightly of A is that is related to the size of your billiard ball. OK? Now, we said that there is a corresponding addition, which comes from the fact that it is possible suddenly to not have a particle moving along, say, this direction. And the collision of these two particles created something along that direction. So the same way that probabilities can be subtracted amongst some channels, they can be added to channels, because collision of two particles created something. And so I will indicate that by p1 prime, p2 prime, again, coordinates in t, something like this. So basically, I try to be a little bit more careful here, not calling these outgoing channels p prime, but p double prime. Because really, if I were to invert p1 double prime and p2 double prime, I will generate things that are going backward. So there's a couple of sign issues involved. But essentially, there is a similar function that relates p1 prime and p2 prime. It is, basically, you have to search out momenta whose outcome will create particles in the channels prescribed by p1 and p2. So as we said, up to here, we have done a variety of approximations. They're kind of physically reasonable. There is some difficulty here in specifying if I want to be very accurate where the locations of q1 and q2 are within this interaction size d that I am looking at. And if I don't think about billiard balls, where the interactions are not quite instantaneous where you change momenta, but particles that are deformable, there is also certainly a certain amount of time where the potential is acting. And so exactly what time you are looking at here is also not appropriate-- is not completely, no. But if you sort of make all of those things precise, then my statement is that this equation still respects the reversibility of the equations of motions that we had initially. But at this stage, we say, well, what can I do with this equation? I really want to solve an equation that involves f of p1, q1, and t. So the best that I can do is, first of all, simplify all of the arguments here to be more or less at the same location. I don't want to think about one particle being shifted by an amount that is related by b plus something that is related to the interaction potential size, et cetera. Let's sort of change our focus, so that our picture size, essentially, is something that is of the order of the size of the interaction. And also, our resolution in time is shifted, so that we don't really ask about moments of time slightly before and after collision, where the particle, let's say, could be deformed or whatever. So having done that, we make these approximations that, essentially, all of these f2's, we are going to replace as a product of f1's evaluated all at the same location-- the location that is specified here, as well as the same time, the time that is specified there. So this becomes-- OK? So essentially, the result of the collisions is described on the left-hand side of our equation by a term that integrates over all momenta and overall impact parameters of a bracket, which is a product of two one-particle densities. And just to simplify notation, I will call this the collision term that involves two factors of f1. So that's just a shorthand for that. Yes. AUDIENCE: Is that process equivalent of a similar molecule [INAUDIBLE]? 
PROFESSOR: That, if you want to give it a name, yes. It's called the assumption of molecular chaos. And physically, it is the assumption that the probability to find the two particles is the product of one-particle probabilities. And again, we say that you have to worry about the accuracy of that if you want to focus on sizes that are smaller than the interaction parameter size. But we change that as part of our resolution. So we don't worry about that aspect. AUDIENCE: So molecular chaos doesn't include eliminating this space? It's a separate thing. PROFESSOR: I've only seen the word "molecular chaos" applied in this context. I think you are asking whether or not if q1 and q2 are different. I can replace that and call that molecular chaos. I would say that that's actually a correct statement. It's not an assumption. Basically, when the two particles are away from each other to a good approximation, they don't really know about-- well, actually, what I am saying is if I really want to sort of focus on time dependence, maybe that is also an assumption. Yes. So they're really, if you like, indeed, two parts. It is the replacing of f2 with the product of two f2's, and then evaluating them at the same point. Yes. AUDIENCE: But based on your first line-- I mean, you could break that assumption of different distribution functions have different events that occur when they get close, and the framework still works, right? It would just become-- when you got down to the bottom line, it would just become a much larger equation, right? PROFESSOR: I don't understand. AUDIENCE: Like if you assume different interactions. PROFESSOR: The interactions are here in how p1 prime and p2 prime depend on p1 and p2. AUDIENCE: Right. What I'm suggesting is if you change H by keeping the top [INAUDIBLE]. That's a pretty simple H. Would this framework work for a more complicated H. But then your bottom two lines would become [INAUDIBLE]. PROFESSOR: What more can I do here? How can I make this more general? Are you thinking about adding it? AUDIENCE: No, no, no. I'm not suggesting you make it more general. I'm suggesting you make it more specific to break that assumption of molecular chaos. PROFESSOR: OK. So how should I modify if I'm not saying-- AUDIENCE: I didn't have something in mind. I was just suggesting [INAUDIBLE]. PROFESSOR: Well, that's where I have problem, because it seems to me that this is very general. The only thing that I left out is the body interactions may be having to do with this being spherically symmetric. But apart from that, this is basically as general as you can get with two bodies. Now, the part that is pertaining to three bodies, again, you would basically-- not need to worry about as long as nd cubed is less than 1. So I don't quite know what you have in mind. AUDIENCE: I guess I was thinking about three- and four-body interactions. But if you're covered against that, then what I said, it's not valid or relevant. PROFESSOR: Right. So certainly, three-body and higher body interactions would be important if you're thinking about writing a description for a liquid, for example. And then the story becomes more complicated. You can't really make the truncations that would give you the Boltzmann equation. You need to go by some other set of approximations, such as the Vlasov equation, that you will see in the problem set. And the actual theory that one can write down for liquids is very complicated. So it's-- yeah. 
AUDIENCE: Can it get more complicated yet if, based on the same framework, isn't this similar to, like, the formulation of the neutron transport equation, where if you have, in addition to scattering, you have particles that don't attract, but you have fission events and you have inelastic center? PROFESSOR: OK. Yes. That's good. AUDIENCE: And then your H would change dramatically, right? PROFESSOR: Yes. That's right. Yes. AUDIENCE: But my point is that it's the same framework. PROFESSOR: Yes. Yes. So basically, you're right. One assumption that I have here is that the number of particles is conserved. We say that if the particles can break or diffuse, one has to generalize this approximation. And there would be an appropriate generalization of the Boltzmann equation to cover those terms. And I don't have a problem set related to that, but maybe you are suggesting that I should provide one. [LAUGHTER] OK. That's a good point. OK? All right. So we have made some assumptions here. And I guess my claim is that this equation-- well, what equation? So basically, what I'm going to do, I call the right-hand side of this equation after this assumption of molecular chaos, CFF. For simplicity, let me call this bunch of first derivatives that act on f1 L for Liouvillian. It's kind of like a Liouville operator. So I have an equation that is the bunch of first derivatives acting on f1, which is essentially a linear partial differential equation in six or seven dimensions, depending on how you count time-- is equal to a non-linear integral on the right-hand side. OK? So this is the entity that is the Boltzmann equation. Now, the statement is that this equation no longer has time reversal symmetry. And so if I were to solve this equation for the one function that I'm interested, after all, the f1, which really tells me how the particles stream from the left-hand side of the box to the right-hand side of the box, that equation is not reversible. That is, the solution to this equation will indeed take the density that was on the left-hand side of the box and uniformly distribute it on the two sides of the box. And that will stay forever. That's, of course, a consequence of the various approximations that we made here, removing the reversibility. OK? And the statement to show that mathematically is that if there is a function-- and now, for simplicity, since from now on, we don't care about f2, f3, et cetera, we are only caring about f1, I'm going to drop the index one and say that if there is a density f, which is really my f1, that satisfies the Boltzmann-- that is, I have a form such as this-- then there is a quantity H that only depends on time that, under the evolution that is implicit in this Boltzmann equation will always decrease, where H is an integral over the coordinates and momenta that are in this function f of p, q, and t. And all I need to do is to multiply by the log. And then all of these are vectors if I drop it for saving time. And we said that once we recognize that f, up to a normalization, is the same thing as the one-particle probability-- and having previously seen objects such as this in probability theory corresponding to entropy information, entropy of mixing, you will not be surprised at how this expression came about. And I had told you that in each one of the four sections, we will introduce some quantity that plays the role of entropy. So this is the one that is appropriate to our section on kinetic theory. So let's see how we go about-- AUDIENCE: Question. PROFESSOR: Yes? 
AUDIENCE: When you introduced the-- initialized the concept of entropy of information of the message that's p log p, [INAUDIBLE] entropy. And as far as I understand, this concept came up in, like, mid 20th century at least, right? PROFESSOR: I think slightly later than that. Of that order, yes. AUDIENCE: OK. And these are Boltzmann equations? PROFESSOR: Yes. AUDIENCE: And so this was, like, 100 years before, at least? PROFESSOR: Not 100 years, but maybe 60 years away, yeah. 50, 60. AUDIENCE: OK. I'm just interested, what is the motivation for picking this form of functionality, that it's integral of f log f rather than integral of something else? PROFESSOR: OK. I mean, there is another place that you've seen it, that has nothing to do with information. And that's mixing entropy. If I have [? n ?] one particles of one type and two particles of another type, et cetera, and mixed them up, then the mixing entropy, you can relate to a form such as this. It is some ni log ni. OK? Fine. It's just for discrete variables, it makes much more sense than for the continuum. But OK. So let's see if this is indeed the case. So we say that dH by dt is the-- I have to take the time derivative inside. And as we discussed before, the full derivative becomes a partial derivative inside the integral. And I have either acting on f and acting on log f, which will give me 1 over f, canceling the f outside. And this term corresponded to the derivative of the normalization, which is fixed. And then we had to replace what we have for df by dt. And for df by dt, we either have the Poisson bracket of the one-particle Hamiltonian with f coming from the L part of the equation, once we get d by dt on one side. And then the collision part that depends on two f's. And we have to multiply this by f. And again, let's remind you that f really has argument-- oops, log f-- that has argument p1. I won't write q1 and t. And the next thing that we did at the end of last lecture is that typically, when you have integral of a Poisson bracket by doing some combination of integrations by part, you can even eventually show that this contribution is 0. And really, the important thing is the integration against the collision term of the Boltzmann equation. So writing that explicitly, what do we have? We have that it is integral d cubed. Let's say q1. Again, I don't really need to keep track of the index on this dummy variable. I have the integral over q1. I have the two integral over p1. Let's keep it so that it doesn't confuse anybody. Then I have the collision term. Now, the collision term is a bunch of integrals itself. It involves integrals over p2, integral over an impact vector b, the relative velocity. And then I had minus f of p1, f of p2 for the loss in the channel because of the collisions, plus the addition to the channel from the inverse collisions. And this whole thing has to be multiplied by log f evaluated at p1. OK? So that's really the quantity that we are interested. And the quantity that we are interested, we notice, has some kind of a symmetry with respect to indices one and two. Something to do with prime and un-prime coordinates. But multiplied this bracket that has all of these symmetries, with this function that is only evaluated for one of the four coordinates for momenta that are appearing here. So our next set of tasks is to somehow make this as symmetric as possible. So the first thing that we do is we exchange indices one and two, because really, these are simply dummy integration variables. 
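As a side note on the quantity H itself before continuing with the symmetrization, here is a minimal Python sketch of how one would evaluate the integral of f log f on a discretized phase space. The one-dimensional grid and the particular double-Gaussian f are arbitrary illustrations, not anything from the lecture.

```python
import numpy as np

# Minimal sketch: evaluate Boltzmann's H = integral of f * log(f) over a
# discretized phase space (one q and one p variable for illustration).

q = np.linspace(0.0, 1.0, 200)          # position inside a box of length 1
p = np.linspace(-5.0, 5.0, 400)         # momentum grid
Q, P = np.meshgrid(q, p, indexing="ij")
dq, dp = q[1] - q[0], p[1] - p[0]

def H_functional(f):
    mask = f > 0                         # avoid log(0); f log f -> 0 there
    return np.sum(f[mask] * np.log(f[mask])) * dq * dp

# An arbitrary normalized distribution: particles bunched near q = 0.25,
# drifting with mean momentum 1.
f = np.exp(-((Q - 0.25) ** 2) / 0.01) * np.exp(-((P - 1.0) ** 2) / 2.0)
f /= np.sum(f) * dq * dp

print("H =", H_functional(f))
```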
I can call the thing that I was calling p1 p2, and vice versa. And the integration then becomes the integral over q. Maybe that's why it's useful not to have, for the q, any index, because I don't care to change the index for that. d cubed p2, p1. Doesn't really matter in what order I write them. d2b is the relative separation-- again, it's the impact parameter. I can call this v1 minus v2. But since it's an absolute value, it doesn't matter really. And so the only thing that happens is that I will have-- essentially, the product is symmetric. The only thing that was separately f of p1 before now becomes f of p2. And then what I can do is I can say, OK, those are really two forms of the same integral, and I can average them, and write the average, which is 1/2 the integral d cubed q, d cubed p1, d cubed p2, d2 b, relative velocity, minus f of p1, f of p2, plus f of p1 prime, f of p2 prime. And then I have a log of f of p1 plus log of f of p2. And I have the factor of 1/2 from the averaging. OK? So what I will do is to now write the next line. And then we'll spend some time discussing what happens. So artificially, what will happen looks like I essentially move prime and un-prime things the same way that I exchanged one and two indices before. I will exchange the superscripts that are prime or non-prime. And if I were to do that, I will get an integral d cubed q, d cubed, let's say, p1 prime, d cubed p2 prime, d2b, v2 prime minus v1 prime, minus f of p1-- [SIGHS] --prime, f of p2 prime, plus f of p1, f of p2. And then here, we would have log of f of p1 prime, log of f of p2 prime. OK? So this is a statement that just saying, I moved the primes and un-primes, it's very easy to come up with this answer. But you have to think very much about what that means. Essentially, what I had originally was integration variables p1 and p2. And p1 prime and p2 prime were never integration variables. So there was no symmetry at the level that I have written between one and two, which are integration variables, and one prime and two prime, which are the quantities that, through the collision operator, are related to p1 and p2. So essentially, p1 prime and p2 prime are the kind of functions that we have written here. Here, I had p1 double prime, p2 double prime. But up to sort of inverting some signs, et cetera, this is the kind of functional relationship that you have between p primes and p's. OK? So now we have to worry about a number of things. The simplest one is actually what happened to this factor here-- the relative velocity. Now think back about collisions that you have in the center of mass. In the center of mass, two particles come with velocities that are opposite each other. They bang onto each other, go different direction. The magnitude of the velocities does not change in an elastic collision, which is what we are looking at. Right? So after the collision, what we have is that p2 prime minus p1 prime is the same thing as p1 minus p2 for elastic collisions. So as far as this factor is concerned, I could very well replace that with v1 minus v2 without the primes. It really doesn't make any difference. Another thing that you can convince yourself is that from the perspective of the picture that I drew, there was some kind of an impact vector B. You say, well, what happens if I look at it from the perspective of the products of the interaction? So the products of the interaction are going in a different direction. I would have to redraw my coordinates that would correspond to A prime and B prime.
But what you can convince yourself that essentially, the impact parameter, as far as these products is concerned, is just a rotated version of the original one without changing its magnitude. So you have also that magnitude of b prime is the same thing as magnitude of b. There's a rotation, but the impact parameter does not change. If you like, that is also related to conservation of energy, because if it wasn't so, the potential energy would be different right after the collisions. OK? So I really was kind of careless here. I should have written d2 b prime. But I could have written also d2b. It didn't make any difference. It's the same. And then, of course, these functions-- essentially, you can see that you say, OK, I start with, here, p1 and p2. And these were functions of p1 and p2. And here, I have reversed this. p1 prime and p2 prime are the integration variables. p1 and p2 are functions of p1 prime and p2 prime that, in principle, are obtained by inverting this set of equations. Right? But it really, again, doesn't matter, because once you have specified what incoming p1, p2, and b are, you know what the outcome of the collision is. So it's really the same function. Whether you are inverting it, you will, again, presume the different location of the arguments, but you will have the same function that would tell you what the inverse relationship is. So really, these functional forms are the same as the functional forms that we had before. And finally, I claim that in order to go through this change of variable, I have to also deal with the Jacobian of the transformation. But that we have the d3p1d3p2 is the same thing as d3p1 prime d3p2 prime. And that, in some sense, you can sort of also think about what we were doing before with the Liouville equation maintaining the volume of phase space. So I have some volume of phase space prior to the collision. After the collision, I would need to have the same volume. There is, of course, in the volumes that we are considering for the case of the Liouville operator, in addition, products of dq1, dq2. But you can see that all of those things are really put into the same volume when we are considering the collision. So really, the only thing that is left is that the Liouvillian preservation of the volume of phase space would say that the Jacobian here, you can also ignore. Yes. AUDIENCE: So are you assuming that this transformation [INAUDIBLE] is [? an article, ?] because it's [INAUDIBLE]? PROFESSOR: Yes. So everything that we do is within the framework of a Hamiltonian evolution equation. OK? So if you like, these kinds of things that I'm saying here sometimes are said to be consequence of microscopic reversibility, in that I'm sort of using, in this panel over here, things that are completely Newtonian and microscopic. The place that I use statistical argument was over here. OK? So essentially, having done that, you can basically also rename variables. Things that I was calling p1 prime I can call p1. It's another renaming of some variable. And again, it really doesn't matter which relative velocity I write down. They're all the same thing. So what will happen is that I will get minus f or p1, f of p2, plus f of p1 prime, f of p2 prime. And the only thing that happened is that the logs-- oops. I should have maintained. I don't know why-- where I made a mistake. But I should ultimately have come up with this answer. So what I wanted to do, and I really have to maybe look at it again, is do one more step of symmetrization. 
The answer will come down to an integral over p1 and p2, d2b, at relative velocity. We would have minus f of p1, f of p2, plus f of p1 prime, f of p2 prime. So this is our collision operator that we have discussed here-- the difference between what comes in and what comes out. And from the first part here, we should get log of f of p1, plus log of f of p2, which is the same thing as the log of the product. And if I had not made errors, the subtraction would have involved f of p1 prime, f of p2 prime. And the idea is that if you think about the function log of something as a function of that something, the log is a monotonic function. So that if you were to evaluate the log at two different points, the sign of the log would grow in the same way as the sign of the two points, which means that here, you have the difference of two logs. This is like log of s1 minus log of s2. And it is multiplied with minus s1 minus s2. So if s1 is greater than s2, log s1 would be greater than log s2. If s1 is less than s2, log s1 would be less than log s2. In either case, what you would find is that the product of the two brackets is negative. And this whole thing has to be negative. OK. So the thing that confused me-- and maybe we can go back and look at is that probably, what I was doing here was a little bit too impatient, because when I changed variables, p1 prime f of p2. I removed the primes. Log. Reduced primes. Actually, I had it here. I don't know why I was worried. Yeah. So the signs are, I think, right, that when I add them, I would get the subtraction. So in one case, I had the f's of the p's with the negative sign. After I did this change, they appear with the positive sign. So there is the sign difference between the two of them. OK? So we have a function, therefore, that, just like entropy throughout the process, will increase. This is like a negative entropy. If I were to really solve this set of equations, starting with the f that describes things that are distributed in this box, and then allow it to expand into the other box, it will follow-- this solution will follow some particular trajectory that will ultimately no longer change as a function of time. It will not go back and forth. It will not have the full reversibility that these set of equations does have. Now, that's actually not a bad thing. It is true, indeed, that these equations are reversible. And there is a theorem that if you wait sufficiently long, this will go back into here. But the time it take for that to happen grows something of the order of the size of the system divided these. The time, up to various pre-factors, would grow exponentially with the number of particles that you have. And remember that you have of the order of 10 to the 23 particles. So there is, indeed, mathematically rigorously, a recursion time if you really had this box sitting in vacuum and there was no influence on the walls of the container. After many, many, many, many ages of the universe, the gas would go back for an instant of time in the other box, and then would go over there. But that's something that really is of no practical relevance when we are thinking about the gas and how it expands. OK? But now let's see, with these approximations, can we do better. Can we figure out exactly how long does it take for the gas to go from one side to the other side. And what is the shape of the streamlines and everything else as this process is taking place? Yes. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yes. Yes. 
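As a quick numerical sanity check of the monotonic-log argument used above, the following Python snippet samples random positive pairs and confirms that (s1 - s2)(log s1 - log s2) is never negative, which is what forces dH by dt to one sign.

```python
import numpy as np

# Because log is monotonic, (s1 - s2) * (log s1 - log s2) >= 0 for any positive
# s1, s2, so the symmetrized integrand -(s1 - s2)(log s1 - log s2) is never positive.

rng = np.random.default_rng(0)
s1 = rng.uniform(1e-6, 10.0, size=100_000)
s2 = rng.uniform(1e-6, 10.0, size=100_000)
product = (s1 - s2) * (np.log(s1) - np.log(s2))
print(product.min() >= 0.0)     # True: every sampled value is non-negative
```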
So if you like, we want to think of this in terms of information, in terms of entropy. It doesn't matter. It is something that is changing as a function of time. And the timescale by which that happens has to do with the timescales that we set up and imprinted into this equation. OK? So the next thing that I have to answer is, what is that timescale? And in particular, if I look at this equation by itself, I would say that it has two timescales. On the left-hand side, you have this tau that has to do with the size of the box. We said it's very long. On the right-hand side, it has the time that I have to wait for a collision to occur, which is certainly much shorter than this time, for an ordinary gas. And so we expect, therefore, that the right-hand side of this equation would determine things much more rapidly compared to the left-hand side of the equation. And we want to now quantify that and make that into something that is relevant to things that we know and understand about how a gas expands. OK? So let's sort of think about, well, what would happen? We said that this H, as a function of time, is going in one direction. Presumably, after some time-- we don't know how long here, and that's a good question-- eventually, hopefully, it will reach an equilibrium, like, the entropy will cease to change. And the question is, what is this equilibrium? So let's see if we can gain some information about equilibrium. So we said that dH by dt presumably has to be 0, because if it is not 0, then it can continue to decrease. You are not quite where you want to be. Now let's look at what is the condition for dH by dt to be 0. dH by dt is this integral that I have to perform over the entire phase space. Now, each element of that integral evaluated at some particular location in phase space is by itself of a definite sign. This argument that we had before is not about the entire integral. It's valid about every little piece of the integration that I am making. It has a particular sign. So if the entire integral has to be 0, every little piece of it has to be individually 0. So that means that I would require that the sum of log of f at any q evaluated, let's say, for p1 and p2 should be the same thing as the sum of log of f evaluated at p1 prime and p2 prime. This would be true for all q. Right? You look at this and you say, well, what? This seems kind of-- yes? AUDIENCE: What about the other one in parentheses, beside the integral? PROFESSOR: It is the same thing. So this is s1 minus s2. And the other is log of s1 minus log of s2. So I'm saying that the only time it is 0 is when s1 is equal to s2. When s1 equals to s2, both parentheses are 0. Although, even if it wasn't, it would have been sufficient for one parenthesis to be 0. But this is necessary, because the sign is determined by this. OK? Now, you look at this and you say, well, what the he-- what does this mean? How can I-- and the functions p1 prime and p2 prime I have to get by integrating Newton's equations. I have no idea how complicated they are. And you want me to solve this equation. Well, I tell you that the answer to this is actually very simple. So let me write the answer to it in stages. Log of f of p and q. So it's a function of p and q that I'm after that has to be evaluated at p1, p2, and then should be equal to p1 prime and p2 prime for any combination of p1, p2, et cetera, that I choose. Well, I say, OK, I will put any function of q that I like. And that will work, because it's independent of p. So I have minus alpha, minus alpha is the same as minus alpha, minus alpha. OK?
Then I claim that, OK, I will put a minus gamma dot p. Gamma could be dependent on q. Dot p. And I claim that that would work, because that term would give me gamma dotted with p1 plus p2 on one side, and gamma dotted with p1 prime plus p2 prime on the other. What do I know for sure? Momentum is conserved. p1 plus p2 is the same thing as p1 prime plus p2 prime. It's just the conservation of momentum. I know that to be true for sure. And finally, kinetic energy is conserved. Once I'm away from the location where the interaction is taking place, I can choose any function of q multiplied with p squared over 2m. And then on the left-hand side, I would have the sum of the incoming kinetic energies. On the right-hand side, I would have the sum of outgoing kinetic energies. So what have I done? I have identified quantities that are conserved in the collision. So basically, I can keep going if I have additional such things. And so basically, the key to this is to identify collision-conserved quantities. And so momentum, energy, and-- the first one corresponds to the number of particles. These are quantities that are conserved. And so you know that this form will work, which means that I have a candidate for equilibrium, which is that f of p and q should have a form that is exponential of some function of q that, at this stage, could be quite arbitrary, and some other function of q times p squared over 2m. And let me absorb this other gamma term. I can certainly do so by writing something like p minus pi, the whole thing squared, over 2m. OK? And this certainly is a form that will set dH by dt to 0. And furthermore, it sets the right-hand side of the Boltzmann equation to 0, because, again, for the Boltzmann equation, all I needed was that the product of two f's should be the same before and after the collision. And this is just the logs. If I were to exponentiate that, I have exactly what I need. So basically, this also sets the right-hand side of the Boltzmann equation to 0. And this is what I will call a local equilibrium. What do I mean by that? Essentially, we can see that right now, I have no knowledge about the relationship between different points in space, because these alpha, beta, and pi are completely arbitrary, as far as my argumentation so far is concerned. So locally, at each point in q, I can have a form such as this. And this form should remind you of something like a Boltzmann weight of the kinetic energy, but moving in some particular direction. And essentially, what this captures is that through relaxing the right-hand side of the Boltzmann equation, we randomize the magnitude of the momenta, so that the magnitudes of the kinetic energy are kind of Boltzmann-distributed. This is what this term does. If I say that this term, being much larger than that term, is the more important one, and I neglect the other, then this is what happens. I will very rapidly reach a situation in which the momenta have been randomized through the collisions. Essentially, the collisions change the direction of the momenta. They preserve the average of the momenta. So there is some sense of the momentum of the incoming thing that is left over still. But there is some relaxation that took place. But clearly, this is not the end of the story, because this f that I have written here-- let's say local equilibrium. f local equilibrium does not satisfy the Boltzmann equation. Right? It sets the right-hand side of the Boltzmann equation to 0. But it doesn't set the left-hand side. The left-hand side is completely unhappy with this. 
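Putting the conserved quantities together, the candidate just described can be written as (in the notation used on the board, with alpha, beta, and pi arbitrary functions of q):

\[ f_{\text{local eq.}}(\mathbf p,\mathbf q) \;\propto\; \exp\!\left[-\alpha(\mathbf q) \;-\; \beta(\mathbf q)\,\frac{\big(\mathbf p-\boldsymbol\pi(\mathbf q)\big)^2}{2m}\right]. \]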
For the left-hand side to be 0, I would require the Poisson bracket of H1 and f to be 0. After all, the Liouvillian operator had the Poisson bracket of H1 and f. And we have seen functions that satisfy this have to be of the form f1 of H. And in our case, my f1 is simply the kinetic energy of one particle, plus the potential due to the box. OK? So you can see that the only way that I can make this marriage with this is to have a global equilibrium, where f for the global equilibrium is proportional, let's say, to exponential of minus beta p squared over 2m plus U of q. OK? So I identify a piece that looks like this p squared over 2m. I have to make pi to be 0 in order to achieve this. I have to make beta to be independent of q. I have to make alpha of f marriage with beta to give me the potential. OK? So essentially, what happens is this. I open this hole. The gas starts to stream out. And then the particles will collide with each other. The collision of particles over some timescale that is related to this collision time will randomize their momenta so that locally, suddenly, I have a solution such as this. But then the left-hand side of the Boltzmann equation takes over. And over a timescale that is much longer, it will then change the parameters of this. So basically, these parameters should really be thought of as functions of time, whose evolution changes. So initially, maybe locally over here, there is a stream velocity that goes towards the right. So I should have an average at that time that prefers to go to the right. I wait sufficiently long. Presumably, the whole thing comes to equilibrium. And that average goes to 0 [INAUDIBLE]. So presumably, the next thing that I need to do in order to answer the question, how does this thing come to equilibrium, is to find out how the left-hand side of the equation manipulates these parameters alpha, beta, and pi. But that's not quite consistent either. Because I first assume that the right-hand side is 0, and then said, oh, that's not a good solution. Let's go and see what the left-hand side does. But then if the left-hand side is changing things, I can't be over here where dH by dt is 0. So I have not done things consistently yet, although I have clearly captured the whole lot of what I want to describe eventually. OK? So let's see how we can do it systematically. AUDIENCE: Question. PROFESSOR: Yes. AUDIENCE: When you [? write ?] the condition for [INAUDIBLE], [? did you ?] write f1 of Hamiltonian. And ends up Hamiltonian [INAUDIBLE] includes interaction terms. PROFESSOR: It's the one-particle Hamiltonian. AUDIENCE: Oh. PROFESSOR: Right. So by the time I got to the equation for f1, I have H1. H1 does not have a second particle in it to collide with. OK? AUDIENCE: Then equilibrium system's distribution of particles in container does not depend on their interactions after the [INAUDIBLE]? PROFESSOR: In the story that we are following here, yes, it does not. So this is an approximation that actually goes with the approximations that we made in neglecting some of the higher order terms, with nd cubed being much less than 1, and not looking at things at resolution that is of the order of d. And we would imagine that it is only at the resolutions-- AUDIENCE: [INAUDIBLE] neglected [INAUDIBLE] atomized gas conjoined onto the molecules. Something like that. PROFESSOR: That would require a different description. AUDIENCE: Yeah. It would require [INAUDIBLE]. PROFESSOR: Yeah. So indeed. 
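For reference, the global form that also takes care of the left-hand (streaming) side, as just argued, is the special case with pi set to 0, a position-independent beta, and alpha tied to the potential:

\[ f_{\text{global eq.}}(\mathbf p,\mathbf q) \;\propto\; \exp\!\left[-\beta\left(\frac{p^2}{2m} + U(\mathbf q)\right)\right]. \]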
Eventually, in the description that we are going to get, we are looking at the distribution that would be appropriate to a dilute gas and which interactions are not seen. OK? All right. So let's see how we can do it better. So clearly, the key to the process is that collisions that take place frequently at timescale tau x randomize a lot of things. But they cannot randomize quantities that are conserved in collisions. So those are the quantities that we have to look at. And again, these are quantities that are conserved in collision, as opposed to quantities that are conserved according to H1. Like, momentum is conserved in the collision. Clearly, momentum was not conserved in H1 because of the size of the box and did not appear in the eventual solution. OK. Let's see what we get. So we are going to make a couple of definitions. First, we define a local density. I say that I take my solution to Boltzmann equation. And now I don't know what it is. But whatever it is, I take the solution to the Boltzmann equation as a function of p and q and t. But all I am interested is, at this particular location, have the particles arrived yet, what's the density. So to get that, I essentially integrate over the momenta, which is the quantity that I am not interested. OK? Obvious. I define-- maybe I need the average kinetic energy at that location. So maybe, in general, I need the average or some other quantity that I define at location q and t. What I do is I integrate at that position f of p, and q, and t, and this function O. Actually, I also define a normalization. I divide by n of q and t. OK? Now, I implicitly used something over there that, again, I define here, which is a collision-conserved quantity, is some function of momentum-- I could put q also-- such that when evaluated for p1 plus p2 for any pair of p1's and p2's that I take is the same thing as when evaluated for functions of p1 and p2 that would correspond to the outcomes of the collision from or giving rights to p1, p2. OK? And then there's a following theorem that we were going to use, which is that for any collision-conserved quantity, what I have is that I will define a quantity J, which is a function of q and t in principle, which is obtained by integrating over all momenta. Chi-- let's say, evaluated at p1; it could be evaluated at q-- times this collision operator-- C of f, f. And the answer is that for any collision-conserved quantity, if you were to evaluate this, the answer would be 0. OK? So let's write that down. So J of q and t is an integral over some momentum p1. Because of the collision integral, what do I have? I have the integration over momentum. I have the integration over b. I have the relative velocity v2 minus v1. And then I have the subtraction of interaction between p1 and p2-- addition of the interactions between p1 prime, p2 prime. And this collision integral has to be multiplied with chi evaluated at p1. OK? Fine. So then I will follow exactly the steps that we had in the proof of the H theorem. I have some function that only depends on one of four momenta that are appearing here. And I want to symmetrize it. So I change the index one to two. So this becomes 1/2 of chi of p1 plus chi of p2. I do this transition between primed and un-primed. And when I do that, I pick this additional factor of minus chi of p1 prime, chi of p2 prime. And this becomes division by 2 times 2. But then I stated that chi is a collision-conserved quantity, so that this term in the bracket is 0 for each point that I'm integrating. 
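As a small numerical illustration of these definitions -- this is my own sketch, not part of the lecture; the units, the grid, and the drifting Maxwell-Boltzmann form chosen for f are all assumptions -- one can tabulate f on a momentum grid at a single point q and recover the local density, mean velocity, and energy density by direct integration:

```python
import numpy as np

# Units chosen so that m = 1 and k_B = 1 (an arbitrary choice for illustration).
m, kT, n_true, u_true = 1.0, 0.5, 2.0, np.array([0.3, 0.0, 0.0])

# Momentum grid at one fixed position q, wide enough to capture the Maxwellian tails.
p = np.linspace(-8, 8, 121)
P1, P2, P3 = np.meshgrid(p, p, p, indexing="ij")
dp3 = (p[1] - p[0]) ** 3

# A drifting Maxwell-Boltzmann one-particle density f(p; q fixed), used as a stand-in for a solution.
c2 = (P1 - m * u_true[0]) ** 2 + (P2 - m * u_true[1]) ** 2 + (P3 - m * u_true[2]) ** 2
f = n_true * np.exp(-c2 / (2 * m * kT)) / (2 * np.pi * m * kT) ** 1.5

# n(q,t) = integral d^3p f   and   <O>(q,t) = (1/n) integral d^3p O f
n = f.sum() * dp3
u = np.array([(P1 * f).sum(), (P2 * f).sum(), (P3 * f).sum()]) * dp3 / (m * n)
c_sq = (P1 - m * u[0]) ** 2 + (P2 - m * u[1]) ** 2 + (P3 - m * u[2]) ** 2
eps = (c_sq / (2 * m) * f).sum() * dp3 / n   # <m c^2 / 2>, the local energy density

print(n, u, eps, 1.5 * kT)   # eps should come out close to (3/2) kT for this f
```

For this particular f the printed energy density comes out close to (3/2) kT, anticipating the zeroth-order results computed below.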
And hence, the answer is 0. Yes. AUDIENCE: In the [INAUDIBLE] theorem, in the second [INAUDIBLE], do you want to put minus sign of the--? PROFESSOR: There was a minus sign worry that I had somewhere. AUDIENCE: I think in front of 1/2, at the right term. PROFESSOR: In front of the 1/2, minus f of p1. You say I have to do this? AUDIENCE: No, no, no. The next term. PROFESSOR: Here? AUDIENCE: Yeah. I mean, if you put minus sign in front of the 1/2, then [INAUDIBLE]? PROFESSOR: OK. Let's put a question mark here, because I don't want to go over that. AUDIENCE: [INAUDIBLE] but you switched the order of the subtraction [? 1/2. ?] So it's minus [INAUDIBLE]. PROFESSOR: But I changed also the arguments here. AUDIENCE: Oh, OK. PROFESSOR: Right? So there is some issue. But this eventual thing is correct. So we will go back to that. OK. So what is the use of stating that for every conserved quantity, you have this? It will allow us to get equations that will determine how these parameters change as a function of time? OK. So let's see how that goes. If then I have that Lf is C, f, f, and I showed that the integral of chi against C, f, f is 0, then I also know that the integral of chi against this L operator on f is 0. OK? And let's remind you that the L operator is a bunch of first derivatives-- d by dt, d by dq, d by dp. And so I can use the following result. I can write the result as this derivative acting on both chi and f. And then I will get chi prime f plus f chi prime. Chi f prime I have, so I have to subtract fL acting on chi. OK? And proceeding a little bit. One more line maybe. d cubed p. I have dt plus-- well, actually, let me do the rest next time around. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 12_Classical_Statistical_Mechanics_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So the question is, how do we describe the various motions that are taking place? And in principle, the Boltzmann equation that we have been developing should be able to tell us something about all of that. Because we put essentially all of the phenomena that we think are relevant to a dilute gas, such as what's going on in this room, into that equation, and reduced the equation to something that had a simple form where on the left hand side there was a set of derivatives acting on the one particle, probability as a function of position, and momentum, let's say, for the gas in this room. And on the right hand side, we had the collision operator. And again, to get the notation straight, this set of operations on the left hand side includes a time derivative, a thing that involves the velocity causing variations in the coordinate, and any external forces causing variations in momentum. I write that because I will, in order to save some space, use the following notation in the remainder of this lecture. I will use del sub t to indicate partial derivatives with respect to time. This term I will indicate as p alpha over m d alpha. So d sub alpha stands for derivative with respect to, say, x, y, and z-coordinates of position. And this summation over the repeated index is implied here. And similarly here, we would write F alpha d by dp alpha. We won't simplify that. So basically, L is also this with the summation implicit. Now, the other entity that we have, which is the right hand side of the equation, had something to do with collisions. And the collision operator Cff was an integral that involved bringing in a second coordinate, or second particle, with momentum p2, would come from a location such that the impact parameter would be indicated by B. We would need to know the flux of incoming particles. So we had the relative velocities. And then there was a term that was essentially throwing you off the channel that you were looking at. And let's say we indicate the variable here rather than say p by p1. Then I would have here F evaluated at p1, F evaluated at p2, p2 being this momentum that I'm integrating. And then there was the addition from channels that would bring in probability. So p1 prime and p2 prime, their collision would create particles in the channel p1, p2. And there was some complicated relation between these functions and these functions for which you have to solve Newton's equation. But fortunately, for our purposes, we don't really need all of that. Again, all of this is evaluated at the same location as the one that we have over here. Now let's do this. Let's take this, which is a function of some particular q and some particular p1, which is the things that we have specified over there. Let's multiply it with some function that depends on p1, q, and potentially t, and integrate it over P1. Once I have done that, then the only thing that I have left depends on q and t, because everything else I integrated over. But in principle, I made a different integration. I didn't have the integration over q. So eventually, this thing will become a function of q. 
OK, so let's do the same thing on the right hand side of this equation. So what I did was I added the integration over p1. And I multiplied by some function. And I'll remember that it does depend on q. But since I haven't written the q argument, I won't write it here. So it's chi of p1. So I want to-- this quantity j that I wrote is equal to this integral on the right. Now, we encountered almost the same integral when we were looking for the proof of the H-theorem where the analog of this chi of p1 was in fact log of f evaluated at p1. And we did some manipulations that we can do over here. First of all, here, we have dummy integration variables p1 and p2. We can just change their name and then essentially average over those two possibilities. And the other thing that we did was this equation has the set of things that come into the collision, and set the things that, in some sense, got out of the collision, or basically things that, as a result of these collisions, create these two. So you have this symmetry between the initiators and products of the collision. Because essentially the same function describes going one way and inverting things and going backwards. And we said that in principle, I could change variables of integration. And the effect of doing that is kind of moving the prime coordinates to the things that don't have primes. I don't know how last time I made the mistake of the sign. But it's clear that if I just put the primes from here to here, there will be a minus sign. So the result of doing that symmetrization should be a minus chi of p1 prime minus chi of p2 prime. And again, to do the averaging, I have to put something else. Now, this statement is quite generally true. Whatever chi I choose, I will have this value of j as a result of that integration. But now we are going to look at something specific. Let's assume that we have a quantity that is conserved in collision. This will be 0 for collision conserved quantity. Like let's say if my chi that I had chosen here was some component of momentum px, then whatever some of the incoming momenta will be the sum of the outgoing momenta. So essentially anything that I can think of that is conserved in the collision, this function that relates p primes to p1 and p2 has the right property to ensure that this whole thing will be 0. And that's actually really the ultimate reason. I don't really need to know about all of these cross sections and all of the collision properties, et cetera. Because my focus will be on things that are conserved in collisions. Because those are the variables that are very slowly relaxing, and the things that I'm interested in. So what you have is that for these collision conserved quantities, which is the things that I'm interested in, this equation is 0. Now, if f satisfies that equation, I can certainly substitute over here for Cff Lf. So if f satisfies that equation, and I pick a collision conserved quantity, the integral over p1 of that function of the collision conserved quantity times the bunch of first derivatives acting on f has to be 0. So this I can write in the following way-- 0 is the integral over p1. Actually, I have only one momentum. So let's just ignore it from henceforth. The other momentum I just introduced in order to be able to show that when integrated against the collision operator, it will give me 0. I have the chi, and then this bunch of derivatives dt plus p alpha over m p alpha plus f alpha d by dp alpha acting on f. And that has to be 0. 
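In equation form, the statement just derived is (with my shorthand \(\chi_i \equiv \chi(\mathbf p_i)\) and \(f_i \equiv f(\mathbf p_i)\), and the factor of 1/4 coming from the two symmetrizations described):

\[ J(\mathbf q,t) \;=\; \frac{1}{4}\int d^3p_1\, d^3p_2\, d^2b\; |\mathbf v_1-\mathbf v_2|\;\big[\chi_1+\chi_2-\chi_1'-\chi_2'\big]\,\big[f_1' f_2' - f_1 f_2\big] \;=\; 0 \]

whenever chi is a collision-conserved quantity, i.e. \(\chi_1+\chi_2=\chi_1'+\chi_2'\). (The overall sign, which was queried in class, does not matter for this conclusion, since the first bracket vanishes identically.)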
STUDENT: So alpha stands for x, y, z? PROFESSOR: Yes, alpha stands for the three components x, y, and z throughout this lecture. And summation over a repeated index is assumed. All right, so now what I want to do is to move this chi so that the derivatives act on both of them. So I'll simply write the integral of dqp-- if you like, this bunch of derivatives that we call L acting on the combination chi f. But a derivative of f chi gives me f prime chi. But also it keeps me chi prime f that I don't have here. So I have to subtract that. And why did I do that? Because now I end up with integrals that involve integrating over f against something. So let's think about these typical integrals. If I take the integral over momentum of f of p and q and t-- remember, f was the one particle density. So I'm integrating, let's say, at a particular position in space over all momentum. So it says, I don't care what momentum. I just want to know whether there's a particle here. So what is that quantity? That quantity is simply the density at that location. Now suppose I were to integrate this against some other function, which could depend on p, q, and t, for example? I use that to define an average. So this is going to be defined to be the average of O at that location q and t. So for example, rather than just calculating whether or not there are particles here, I could be asking, what is the average kinetic energy of the particles that are here? Then I would integrate this against p squared over 2m. And this average would give me the local expectation value of p squared over m, just a normalization n so that it's appropriately normalized. So with this definition, I can write the various terms that I have over here. So let me write it a little bit more explicitly. What do we have? We have integral d cubed p. We have this bunch of derivatives acting on chi f minus f times this bunch of derivatives acting on chi. So let's now look at things term by term. The first term is a time derivative. The time derivative I can take outside the integral. Once I take the time derivative outside the integral, what is left? What is left is the integral of chi f, exactly what I have here. O has been replaced by chi. So what I have is the time derivative of n expectation value of chi using this definition. Let's look at the next term. The next term, these derivatives are all over position. The integration is over momentum. I can take it outside. So I can write it as d alpha. And then I have a quantity that I'm integrating against f. So I will get n times this local average of that quantity. What's that quantity? It's p alpha over m times chi. What's the third term? The third term is actually an integral over momentum. But I'm integrating over momentum. So again, you can sort of remove things to boundaries and convince yourself that that integral will not give you a contribution. The next bunch of terms are simply directly this-- f integrated against something. So they're going to give me minus n times the various averages involved-- d t chi minus n, the average of p alpha over m, the alpha chi. And then f alpha I can actually take outside, minus n f alpha, the average of d chi by d p alpha. And what we've established is that that whole thing is 0 for quantities that are conserved under collisions. So why did I do all of that? It's because solving the Boltzmann equation in six dimensional phase space with all of its integrations and derivatives is very complicated. 
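Collecting the terms just evaluated gives the general moment equation that will be used repeatedly below, valid for any collision-conserved chi:

\[ \partial_t\big(n\langle\chi\rangle\big) \;+\; \partial_\alpha\Big(n\big\langle \tfrac{p_\alpha}{m}\,\chi\big\rangle\Big) \;-\; n\big\langle\partial_t\chi\big\rangle \;-\; n\big\langle \tfrac{p_\alpha}{m}\,\partial_\alpha\chi\big\rangle \;-\; n\,F_\alpha\big\langle \partial\chi/\partial p_\alpha\big\rangle \;=\; 0 . \]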
But really, the things that are slowly relaxing are quantities that are conserved collisions, such as densities, average momentum, et cetera. And so I can focus on variations of these through this kind of equation. Essentially, what that will allow me to do is to construct what are known as hydrodynamic equations, which describe the time evolution of slow variables of your system, the variables that are kind of relevant to making thermodynamic observations, as opposed to variables that you would be interested in if you're thinking about atomic collisions. So what I need to do is to go into that equation and pick out my conserved quantities. So what are the conserved quantities, and how can I describe them by some chi? Well, we already saw this when we were earlier on trying to find some kind of a solution to the H by dt equals 0. We said that log f has to be the sum of collision conserved quantities. And we identified three types of quantities. One of them was related to the number conservation. And essentially, what you have is that 1 plus 1 equals to 1 plus 1. So it's obvious. The other is momentum. And there are three components of this-- px, py, pz. And the third one is the kinetic energy, which is conserved in collisions. In a potential, clearly the kinetic energy of a particle changes as a function of position. But within the short distances of the collisions that we are interested in, the kinetic energy is a conserved quantity. So my task is to insert these values of chi into that equation and see what information they tell me about the time evolution of the corresponding conserved quantities. So let's do this one by one. Let's start with chi equals to 1. If I put chi equals to 1, all of these terms that involve derivatives clearly needed to vanish. And here, I would get the time derivative of the density. And from here, I would get the alpha n expectation value of p alpha over m. We'll give that a name. We'll call that u alpha. So I have introduced u alpha to be the expectation value of p alpha over m. And it can in principle depend on which location in space you are looking at. Somebody opens the door, there will be a current that is established. And so there will be a local velocity of the air that would be different from other places in the room. And that's all we have. And this is equal to 0. And this is of course the equation of the continuity of the number of particles. You don't create or destroy particles. And so this density has to satisfy this nice, simple equation. We will sometimes rewrite this slightly in the following way. This is the derivative of two objects. I can expand that and write it as dt of n plus, let's say, u alpha d alpha of n. And then I would have a term that is n d alpha u alpha, which I will take to the other side of the equation. Why have I done that? Because if I think of n as a function of position and time-- and as usual, we did before define a derivative that moves along this streamline-- you will have both the implicit time derivative and the time derivative because the stream changes position by an amount that is related to velocity. Now, for the Liouville equation, we have something like this, except that the Liouville equation, the right hand side was 0. Because the flows of the Liouville equation, the Hamiltonian flows, were divergenceless. But in general, for a compressable system, such as the gas in this room, the compressibility is indicated to a nonzero divergence of u. And there's a corresponding term on the right hand side. 
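So the choice chi = 1 gives the continuity equation in the two forms just mentioned:

\[ \partial_t n + \partial_\alpha\big(n\,u_\alpha\big) = 0, \qquad\text{equivalently}\qquad D_t\, n \equiv \big(\partial_t + u_\alpha\partial_\alpha\big)\, n \;=\; -\,n\,\partial_\alpha u_\alpha . \]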
So that's the first thing we can do. What's the second thing we can do? I can pick p-- let's say p beta, what I wrote over there. But I can actually scale it. If p is a conserved quantity, p/m is also a conserved quantity. Actually, as far as this chi is concerned, I can add anything that depends on q and not p. So I can subtract the average value of this quantity. And this is conserved during collisions. Because this part is the same thing as something that is related to density. It's like 1 plus 1 equals to 1 plus 1. And p over beta is conserved also. So we'll call this quantity that we will use for our candidate chi as c beta. So essentially, it's the additional fluctuating speed, velocity that the particles have, on top of the average velocity that is due to the flow. And the reason maybe it's useful to do this is because clearly the average of c is 0. Because the average of p beta over m is u beta. And if I do that, then clearly at least the first thing in the equation I don't have to worry about. I have removed one term in the equation. So let's put c beta for chi over here. We said that the first term is 0. So we go and start with the second term. What do I have? I have d alpha expectation value of c beta. And then I have p alpha over m. Well, p alpha over m is going to be u alpha plus c alpha. So let's write it in this fashion-- u alpha plus c alpha for p alpha over m. And that's the average I have to take. Now let's look at all of these terms that involve derivatives. Well, if I want to take a time derivative of this quantity, now that I have introduced this average, there is a time derivative here. So the average of the time derivative of chi will give me the time derivative from u beta. And actually the minus sign will cancel. And so I will have plus n, the expectation value of d-- well, there's no expectation value of something like this. When I integrate over p, there's no p dependence. So it's just itself. So it is n dt of u beta. OK, what do we have for the next term? Let's write it explicitly. I have n. p alpha over m I'm writing as u alpha plus c alpha. And then I have the position derivative of c beta. And that goes over here. So I will get d alpha of u beta with a minus sign. So this becomes a plus. The last term is minus n f alpha. And I have to take a derivative of this object with respect to p alpha. Well, I have a p here. The derivative of p beta with respect to p alpha gives me delta alpha beta. So this is going to give me delta alpha beta over m. And the whole thing is 0. So let's rearrange the terms over here. The only thing that I have in the denominator is a 1/m. So let me multiply the whole equation by m and see what happens. This term let's deal with last. This term, the first term, becomes nmdt of u beta. The change in velocity kind of looks like an acceleration. But you have to be careful. Because you can only talk about acceleration acting for a particle. And the particle is moving with the stream. OK, and this term will give me the appropriate derivative to make it a stream velocity. Now, when I look at this, the average of c that appears here will be 0. So the term that I have over there is u alpha d alpha u beta. There's no average involved. It will give me n. m is common, so I will get u alpha d alpha u beta, which is nice. Because then I can certainly regard this as one of these stream derivatives. 
So these two terms, the stream derivative of velocity with time, times mass, mass times the density to make it mass per unit volume, looks like an acceleration. So it's like mass times acceleration. Newton's law, it should be equal to the force. And what do we have if we take this to the other side of the equation? We have f beta. OK, good, so we have reproduced Newton's equation. In this context, if we're moving along with the stream, mass times the acceleration of the group of particles moving with the stream is the force that is felt from the external potential. But there's one other term here. In this term, the term that is uc will average to 0, because the average of u is 0. So what I will have is minus-- and I forgot to write down an n somewhere here. There will be an n. Because all averages had this additional n that I forgot to put. I will take it to the right hand side of the equation. And it becomes d by dq alpha of n. I multiply the entire equation by m. And then I have the average of c alpha c beta. So what happened here? Isn't force just the mass times acceleration? Well, as long as you include all forces involved. So if you imagine that this room is totally stationary air, and I heat one corner here, then the particles here will start to move more rapidly. There will be more pressure here. Because pressure is proportional to temperature, if you like. There will be more pressure, less pressure here. The difference in pressure will drive the flow. There will be an additional force. And that's what it says. If there's variation in these speeds of the particles, the change in pressure will give you a force. And so this thing, p alpha beta, is called the pressure tensor. Yes. STUDENT: Shouldn't f beta be multiplied by n, or is there an n on the other side of that? PROFESSOR: There is an n here that I forgot, yes. So the n was in the first equation, somehow got lost. STUDENT: So the pressure is coming from the local fluctuation? PROFESSOR: Yes. And if you think about it, the temperature is also the local fluctuation. So it has something to do with temperature differences. Pressure is related to temperature. So all the things are connected. And in about two minutes, I'll actually evaluate that for you, and you'll see how. Yes. STUDENT: Is the pressure tensor distinct from the stress tensor? PROFESSOR: It's the stress tensor that you would have for a fluid. For something more complicated, like an elastic material, it would be much more complicated-- not much more complicated, but more complicated. Essentially, there's always some kind of a force per unit volume depending on what kind of medium you have. And for the gas, this is what it is. Yes. STUDENT: So a basic question. When we say u alpha is averaged, averaged over what? Is it by the area? PROFESSOR: OK, this is the definition. So whenever I use this notation with these angles, it means that I integrated over p. Why do I do that? Because of this asymmetry between momenta and collision and coordinates that is inherent to the Boltzmann equation. When we wrote down the Liouville equation, p and q were completely equivalent. But by the time we made our approximations and we talked about collisions, et cetera, we saw that momenta quickly relax. And so we can look at the particular position and integrate over momenta and define averages in the sense of when you think about what's happening in this room, you think about the wind velocity here over there, but not fluctuations in the momentum so much, OK? 
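Written out, with the factor of n that was left out restored, the equation for the flow velocity reads

\[ m\,n\,D_t\, u_\beta \;=\; n\,F_\beta \;-\; \partial_\alpha P_{\alpha\beta}, \qquad P_{\alpha\beta} \;\equiv\; n\,m\,\langle c_\alpha c_\beta\rangle . \]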
All right, so this clearly is kind of a Navier-Stokes like equation, if you like, for this gas. That tells you how the velocity of this fluid changes. And finally, we would need to construct an equation that is relevant to the kinetic energy, which is something like p squared over 2m. And we can follow what we did over here and subtract the average. And so essentially, this is kinetic energy on top of the kinetic energy of the entire stream. This is clearly the same thing as mc squared over 2, c being the quantity that we defined before. And the average of mc squared over 2 I will indicate by epsilon. It's the heat content. Or actually, let's say energy density. It's probably better. So now I have to put mc squared over 2 in this equation for chi and do various manipulations along the lines of things that I did before. I will just write down the final answer. So the final answer will be that dt of epsilon. We've defined dt. I move with the streamline. So I have dt plus u alpha d alpha acting on this density, which is a function of position and time. And the right hand side of this will have two terms. One term is essentially how this pressure kind of moves against the velocity, or the velocity and pressure are kind of hitting against each other. So it's kind of like if you were to rub two things-- "rub" was the word I was looking. If you were to rub two things against each other, there's heat that is generated. And so that's the term that we are looking at. So what is this u alpha beta? u alpha beta-- it's just because p alpha beta is a symmetric object. It doesn't make any difference if you exchange alpha and beta. You symmetrize the derivative of the velocity. And sometimes it's called the rate of strain. And there's another term, which is minus 1/n d alpha of h alpha. And for that, I need to define yet another quantity, this h alpha, which is nm over 2 the average of 3 c's, c squared and then c alpha. And this is called the heat transport. So for a simpler fluid where these are the only conserved quantities that I have, in order to figure out how the fluid evolves over time, I have one equation that tells me about how the density changes. And it's related to the continuity of matter. I have one equation that tells me how the velocity changes. And it's kind of an appropriately generalized version of Newton's law in which mass times acceleration is equated with appropriate forces. And mostly we are interested in the forces that are internally generated, because of the variations in pressure. And finally, there is variations in pressure related to variations in temperature. And they're governed by another equation that tells us how the local energy density, local content of energy, changes as a function of time. So rather than solving the Boltzmann equation, I say, OK, all I need to do is to solve these hydrodynamic equations. Question? STUDENT: Last time for the Boltzmann equation, [INAUDIBLE]. PROFESSOR: What it says is that conservation laws are much more general. So this equation you could have written for a liquid, you could have written for anything. This equation kind of looks like you would have been able to write it for everything. And it is true, except that you wouldn't know what the pressure is. This equation you would have written on the basis of energy conservation, except that you wouldn't know what the heat transport vector is. 
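And for chi = m c squared over 2, the quoted result is (with the factor of 1/n on the pressure term included -- this is the factor that gets corrected later in the lecture):

\[ D_t\,\varepsilon \;=\; -\frac{1}{n}\,\partial_\alpha h_\alpha \;-\; \frac{1}{n}\,P_{\alpha\beta}\,u_{\alpha\beta}, \qquad u_{\alpha\beta}\equiv\tfrac12\big(\partial_\alpha u_\beta+\partial_\beta u_\alpha\big), \qquad h_\alpha\equiv\frac{n\,m}{2}\,\langle c^2\,c_\alpha\rangle . \]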
So what we gained through this Boltzmann prescription, on top of what you may just guess on the basis of conservation laws, are expressions for quantities that you would need in order to sort these equations, because of the internal pressures that are generated because of the way that the heat is flowing. STUDENT: And this quantity is correct in the limit of-- PROFESSOR: In the limit, yes. But that also really is the Achilles' heel of the presentation I have given to you right now. Because in order to solve these equations, I should be able to put an expression here for the pressure, and an expression here for the h. But what is my prescription for getting the expression for pressure and h? I have to do an average that involves the f. And I don't have the f. So have I gained anything, all right? So these equations are general. We have to figure out what to do for the p and h in order to be able to solve it. Yes. STUDENT: In the last equation, isn't that n epsilon instead of epsilon? PROFESSOR: mc squared over 2-- I guess if I put here mc squared over 2, probably it is nf. STUDENT: I think maybe the last equation it's n epsilon, the equation there. PROFESSOR: This equation is OK. OK, so what you're saying-- that if I directly put chi here to be this quantity, what I would need on the left hand side of the equations would involve derivatives of n epsilon. Now, those derivatives I can expand, write them, let's say, dt of n epsilon is epsilon dt of n plus ndt of epsilon. And then you can use these equations to reduce some of that. And the reason that I didn't go through the steps that I would go from here to here is because it would have involved a number of those cancellations. And it would have taken me an additional 10, 15 minutes. All right, so conceptually, that's the more important thing. We have to find some way of doing these things. Now, when I wrote this equation, we said that there is some kind of a separation of time scales involved in that the left hand side of the equation, the characteristic times are order of the time it takes for a particle to, say, go over the sides of the box, whereas the collision times, 1 over tau x, are such that the right hand side is much larger than the left hand side. So as a 0-th order approximation, what I will assume is that the left hand side is so insignificant that I will set it to 0. And then my approximation for the collision is the thing that essentially sets this bracket to 0. This is the local equilibrium that we wrote down before. So that means that I'm assuming a 0-th order approximation to the solution of the Boltzmann equation. And very shortly, we will improve up that. But let's see what this 0-th order approximation gives us, which is-- we saw what it is. It was essentially something like a Gaussian in momentum. But the coefficient out front of it was kind of arbitrary. And now that I have defined the integral over momentum to be density, I will multiply a normalized Gaussian by the density locally. And I will have an exponential. And average of p I will shift by an amount that depends on position. And I divide by some parameter we had called before beta. But that beta I can rewrite in this fashion. So I have just rewritten the beta that we had before that was a function of q and t as 1 over kBT. And this has to be properly normalized. So I will have 2 pi mkBT, which is a function of position to the 3/2. 
And you can check that the form that I have written here respects the definitions that I gave, namely that if I were to integrate it over momentum, since the momentum part is a normalized Gaussian, I will just get the density. If I were to calculate the average of p/m, I have shifted the Gaussian appropriately so that the average of p/m is the quantity that I'm calling u. The other one-- let's check. Essentially what is happening here, this quantity is the same thing as mc squared over 2kT if I use the definition of c that I have over there. So it's a Gaussian weight. And from the Gaussian weight, you can immediately see that the average of c alpha c beta, it's in fact diagonal. It's cx squared, cy squared. So the answer is going to be delta alpha beta. And for each particular component, I will get kT over m. So this quantity that I was calling epsilon, which was the average of mc squared over 2, is essentially multiplying this by m/2 and summing over delta alpha alpha, which gives me a factor of 3. So this is going to give me 3/2 kT. So really, my energy density is none other than the local 3/2 kT. Yes? STUDENT: So you've just defined, what is the temperature. So over all previous derivations, we didn't really use the classical temperature. And now you define it as sort of average kinetic energy. PROFESSOR: Yeah, I have introduced a quantity T here, which will indeed eventually be the temperature for the whole thing. But right now, it is something that is varying locally from position to position. But you can see that the typical kinetic energy at each location is of the order of kT at that location. And the pressure tensor p alpha beta, which is nm expectation value of c alpha c beta, simply becomes kT over m-- sorry, nKT delta alpha beta. So now we can sort of start. Now probably it's a better time to think about this as temperature. Because we know about the ideal gas type of behavior where the pressure of the ideal gas is simply density times kT. So the diagonal elements of this pressure tensor are the things that we usually think about as being the pressure of a gas, now at the appropriate temperature and density, and that there are no off diagonal components here. I said I also need to evaluate the h alpha. h alpha involves three factors of c. And the way that we have written down is Gaussian. So it's symmetric. So all odd powers are going to be 0. There is no heat transport vector here. So within this 0-th order, what do we have? We have that the total density variation, which is dt plus u alpha d alpha acting on density, is minus nd alpha u alpha. That does not involve any of these factors that I need. This equation-- let's see. Let's divide by mn. So we have Dt of u beta. And let's again look at what's happening inside the room. Forget about boundary conditions at the side of the box. So I'm going to write this essentially for the case that is inside the box. I can forget about the external force. And all I'm interested in is the internal forces that are generated through pressure. So this is dt plus u alpha d alpha of u beta. I said let's forget the external force. So what do we have? We have the contribution that comes from pressure. So we have minus the alpha. I divided through by nm. So let me write it correctly as 1 over nm. I have the alpha. My pressure tensor is nkT delta alpha beta. Delta alpha beta and this d alpha, I can get rid of that and write it simply as d beta. 
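Collecting the zeroth-order results just computed:

\[ f^0(\mathbf p,\mathbf q,t) \;=\; \frac{n(\mathbf q,t)}{\big[2\pi m k_B T(\mathbf q,t)\big]^{3/2}}\,\exp\!\left[-\frac{\big(\mathbf p - m\mathbf u\big)^2}{2 m k_B T}\right], \]
\[ \langle c_\alpha c_\beta\rangle = \frac{k_B T}{m}\,\delta_{\alpha\beta}, \qquad P_{\alpha\beta} = n\,k_B T\,\delta_{\alpha\beta}, \qquad \varepsilon = \tfrac{3}{2}\,k_B T, \qquad h_\alpha = 0 . \]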
So that's the equation that governs the variations in the local stream velocity that you have in the gas in response to the changes in temperature and density that you have in the gas. And finally, the equation for the energy density, I have dt plus u alpha d alpha. My energy density is simply related to this quantity T. So I can write it as variations of this temperature in position. And what do I have on the right hand side? I certainly don't have the heat transport vectors. So all I have to do is to take this diagonal p alpha beta and contract it with this strain tensor u alpha beta. So the only term that I'm going to get after contracting delta alpha beta is going to be d alpha u alpha. So let's make sure that we get the factors right. So I have minus p alpha beta is nkT d alpha u alpha. So now we have a closed set of equations. They tell me how the density, temperature, and velocity vary from one location to another location in the gas. They're completely closed. That's the only set of things that come together. So I should be able to now figure out, if I make a disturbance in the gas in this room by walking around, by talking, by striking a match, how does that eventually, as a function of time, relax to something that is uniform? Because our expectation is that these equations ultimately will reach equilibrium. That's essentially the most important thing that we deduce from the Boltzmann equation, that it was allowing things to reach equilibrium. Yes. STUDENT: For the second equation, that's alpha? The right side of the second equation? PROFESSOR: The alpha index is summed over. STUDENT: The right side. Is it the derivative of alpha or beta? Yeah, that one. PROFESSOR: It is beta. Because, you see, the only index that I have left is beta. So if it's an index by itself, it better be beta. How did this index d alpha become d beta? Because the alpha beta was delta alpha beta. STUDENT: Also, is it alpha or is it beta? PROFESSOR: When I sum over alpha of d alpha delta alpha beta, I get d beta. Yes. STUDENT: Can I ask again, how did you come up with the f0? Why do you say that option? PROFESSOR: OK, so this goes back to what we did last time around. Because we saw that when we were writing the equation for the hydt, we came up with a factor of what that was-- this multiplying the difference of the logs. And we said that what I can do in order to make sure that this equation is 0 is to say that log is additive in conserved quantities, so log additive in conserved quantities. I then exponentiate it. So this is log of a number. And these are all things that are, when I take the log, proportional to p squared and p, which are the conserved quantities. So I know that this form sets the right hand side of the Boltzmann equation to 0. And that's the largest part of the Boltzmann equation. Now what happens is that within this equation, some quantities do not relax to equilibrium. Some-- let's call them variations. Sometimes I will use the word "modes"-- do not relax to equilibrium. And let's start with the following. When you have a sheer velocity-- what do I mean by that? So let's imagine that you have a wall that extends in the x direction. And along the y direction, you encounter a velocity field. The velocity field is always pointing along this direction. So it only has the x component. There's no y component or z component. But this x component maybe varies as a function of position. So my ux is a function of y. This corresponds to some kind of a sheer. 
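So the closed zeroth-order set, with the external force dropped as in the discussion, is

\[ D_t\, n = -\,n\,\partial_\alpha u_\alpha, \qquad D_t\, u_\beta = -\,\frac{1}{m n}\,\partial_\beta\big(n\,k_B T\big), \qquad \tfrac{3}{2}\,D_t\big(k_B T\big) = -\,k_B T\,\partial_\alpha u_\alpha , \]

with \(D_t = \partial_t + u_\alpha\partial_\alpha\).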
Now, if I do that, then you can see that the only derivatives that would be nonzero are derivatives that are along the y direction. But this derivative along the y direction in all of these equations has to be contracted typically with something else. It has to be contacted with u. But the u's have no component along the y direction. So essentially, all my u's would be of this form. Basically, there will be something like uy. Something like this would have to be 0. You can see that if I start with an initial condition such as that, then the equations are that dt of n-- this term I have to forget-- is 0. Because for this, I need a divergence. And this flow has no divergence. And similarly over here what I see as dt of the temperature is 0. Temperature doesn't change. And if I assume that I am under circumstances in which the pressure is uniform, there's also nothing that I would get from here. So essentially, this flow will exist forever. Yes. STUDENT: Why does your u alpha d alpha n term go away? Wouldn't you get a uxdxn? PROFESSOR: OK, let's see, you want a uxdxn. What I said is that all variations are along the y direction. STUDENT: Oh, so this is not just for velocity, but for everything. PROFESSOR: Yes, so I make an assumption about some particular form. So this is the reasoning. If these equations bring everything to equilibrium, I should be able to pick any initial condition and ask, how long does it take to come to equilibrium? I pick this specific type of equation in which the only variations for all quantities are along the y direction. It's a non-equilibrium state. It's not a uniform state. Does it come relax to equilibrium? And the answer is no, it doesn't. STUDENT: What other properties, other than velocity, is given [INAUDIBLE]? PROFESSOR: Density and temperature. So these equations describe the variations of velocity, density, and temperature. And the statement is, if the system is to reach equilibrium, I should be able to start with any initial configuration of these three quantities that I want. And I see that after a while, it reaches a uniform state. Yes. STUDENT: But if your initial conditions aren't exactly that, but you add a slight fluctuation, it is likely to grow, and it will eventually relax. PROFESSOR: It turns out the answer is no. So I'm sort of approaching this problem from this more kind of hand-waving perspective. More correctly, what you can do is you can start with some initial condition that, let's say, is in equilibrium, and then do a perturbation, and ask whether the perturbation will eventually relax to 0 or not. And let's in fact do that for another quantity, which is the sound mode. So let's imagine that we start with a totally nice, uniform state. There is zero velocity initially. The density is uniform. The temperature is uniform. And then what I do is I will start here. And I will start talking, creating a variation that propagates in this x direction. So I generated a stream that is moving along the x direction. And presumably, as I move along the x direction, there is a velocity that changes with position and temperature. Now initially, I had the density. I said that was uniform. Once I make this sound, as I move along the x direction, and the air is flowing back and forth, what happens is that the density will vary from the uniform state. And the deviations from the uniform state I will indicate by mu. Similarly, the third quantity, let's assume, will have a form such as this. 
And currently, I have written the most general form of variations that I can have along the x direction. You could do it in different directions. But let's say for simplicity, we stick with this. I haven't told you what mu theta and u are. So I have to see what they are consistent with the equations that we have up there. One thing that I will assume is that these things are small perturbations around the uniform state. And uniform-- sorry, small perturbations typically means that what I intend to do is to do a linearized approximation. So basically, what I will do is I will essentially look at the linear version of these equations. And again, maybe I didn't emphasize it before. Clearly these are nonlinear equations. Because let's say you have u grad u. It's the same nonlinearity that you have, let's say, in Navier-Stokes equation. Because you're transporting something and moving along with the flow. But when you do the linearization, then these operators that involve dt plus something like u-- I guess in this case, the only direction that is varying is x-- something like this of whatever quantity that I have, I can drop this nonlinear term. Why? Because u is a perturbation around a uniform state. And gradients will pick up some perturbations around the uniform state. So essentially the linearization amounts to dropping these nonlinear components and some other things that I will linearizer also. Because all of these functions here, the derivatives act on product of n temperature over here. These are all nonlinear operations. So let's linearize what we have. We have that Dt of the density-- I guess when I take the time derivative, I get n bar the time derivative of the quantity that I'm calling mu. And that's it. I don't need to worry about the convective part, the u dot grad part. That's second order. On the right hand side, what do I have? I have ndu. Well, divergence of u is already the first variation. So for n, I will take its 0-th order term. So I have minus n bar dxux. The equation for ux, really the only component that I have, is dt of ux. Actually, let's write down the equation for temperature. Let's look at this equation. So I have that dt acting on 3/2 kB times T. I will pick up T bar. And then I would have dt of theta. What do I have on the right hand side? I have a derivative here. So everything else here I will evaluate at the 0-th order term, so n bar k T bar dxux. So I can see that I can certainly divide through n bar here. And one of my equations becomes dt of mu is minus dxux. But from here, I see that dxux is also related once I divide by kT to 3/2 dt theta. And I know this to be true. And I seem to have an additional factor of 1 over n bar here. And so I made the mistake at some point, probably when I wrote this equation. STUDENT: It's the third equation. PROFESSOR: Yeah, so this should not be here. And that should not be here means that I probably made a mistake here. So this should be a 1/n, sorry. There was indeed a 1/n / here. And there is no factor here. So we have a relationship between the time derivatives of these variations in density and dx of ux. Fine, what does the equation for u tell us? It tells us that dt of ux is minus 1 over m n bar. Because of the derivative, I can set everything at the variation. And what do I have here? I have d by dx of n bar kB T bar. And if I look at the variations, I have 1 plus mu plus theta. The higher order terms I will forget. Yes. STUDENT: Shouldn't that be plus 3/2 dt theta? PROFESSOR: It should be plus, yes. 
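The linearized equations just obtained, with n = n-bar (1 + mu), T = T-bar (1 + theta), and a small u_x, are

\[ \partial_t\,\mu = -\,\partial_x u_x, \qquad \tfrac{3}{2}\,\partial_t\,\theta = -\,\partial_x u_x, \qquad \partial_t\,u_x = -\,\frac{k_B \bar T}{m}\,\partial_x\big(\mu+\theta\big) . \]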
There is a minus sign here. And that makes it plus. So the n bar we can take outside. This becomes minus kT over m at space variations of mu plus theta. Now, what we do is that what I have here is information about the time derivatives of mu and theta. And here I have space derivatives. So what do I do? I basically apply an additionally dt here, which we'll apply here. And then we can apply it also here. And then we know how dt of mu and dt of theta are related to dx of ux. The minus signs disappear. I have kB T bar divided by m. I have dt of mu is dxux. dt of theta is 2/3 dxux. So I will get 1 plus 2/3 dx squared of ux. So the second derivative of ux in time is proportional to the second derivative of ux in space. So that's the standard wave equation. And the velocity that we have calculated for these sound waves is 5/3 kB the average T over m. So that part is good. These equations tell me that if I create these disturbances, there are sound waves, and we know there are sound waves. And sound waves will propagate with some velocity that is related up to some factors to the average velocity of the gas particles. But what is not good is that, according to this equation, if my waves, let's say, bounce off perfectly from the walls, they will last in this room forever. So you should still be hearing what I was saying last week and the week before. And clearly what we are missing is the damping that is required. So the statement is that all of these equations are fine. They capture a lot of the physics. But there is something important that is left out in that there are some modes-- and I describe two of them here-- that basically last forever, and don't come to equilibrium. But we said that the Boltzmann equation should eventually bring things to equilibrium. So where did we go wrong? Well, we didn't solve the Boltzmann equation. We solved an approximation to the Boltzmann equation. So let's try to do better. STUDENT: I'm sorry, but for the last equation, you took another derivative with respect to t. PROFESSOR: Yes, I took a derivative with respect to t. And it noted that the derivative with respect to t of these quantities mu is related to derivative with respect to x or u. And there was one other derivative with respect to x already, making it two derivatives. So this is the kind of situation that we are facing. Yes. STUDENT: Is the 5/3 k in any way related to the heat capacity ratio of [INAUDIBLE] gas? PROFESSOR: Yes, that's right, yes. So there are lots of these things that are implicit in these questions. And actually, that 3/2 is the same thing as this 3/2. So you can trace a lot of these things to the Gaussian distribution. And they appear in cp versus cv and other things. Yes. STUDENT: Just clarifying something-- this v is different from the mu in the top right? PROFESSOR: Yes, this is v, and that's mu. This v is the velocity of the sound. So I defined this combination, the coefficient relating the second derivatives in time and space as the sound velocity. So let's maybe even-- we can call it vs. All right? STUDENT: And how did you know that that is the [INAUDIBLE] oscillation, the solution that you got? PROFESSOR: Because I know that the solution to dx squared anything is v squared-- sorry, v squared dx squared anything is dt squared anything, is phi is some function of x minus vt. That is a pulse that moves with uniform velocity is a solution to this equation. So we want to do better. And better becomes so-called first order solution. 
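As a quick numerical check -- my own example, not from the lecture; the mean molecular mass of air and the temperature are round-number assumptions -- the predicted sound speed v = sqrt((5/3) k_B T / m) comes out in the right range for air at room temperature:

```python
import numpy as np

k_B = 1.380649e-23         # Boltzmann constant, J/K
m_air = 29e-3 / 6.022e23   # assumed mean molecular mass of air, kg (29 g/mol / Avogadro)
T = 300.0                  # assumed room temperature, K

v_sound = np.sqrt(5.0 / 3.0 * k_B * T / m_air)
print(v_sound)             # roughly 3.8e2 m/s
```

The result is around 380 m/s, a bit above the measured ~343 m/s; the 5/3 here treats the molecules as structureless particles, whereas for diatomic air the adiabatic index is closer to 7/5, which connects to the question about heat capacity ratios above.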
Now, the kind of equation that we are trying to solve at the top is something that its algebraic analog would be something like this-- 2 times x. It's a linear on the left hand side, is quadratic on the right hand side. Let's write it in this form-- except that the typical magnitude of one side is much larger than the other side, so let's say something like this. So if I wanted to solve this equation, I would say that unless x is very close to 2, this 10 to the 6 will blow things up. So my x0 is 2. And that's what we have done. We've solved, essentially, the right hand side of the equation. But I can get a better solution by taking 2 and saying there's a small variation to that that I want to calculate. And I substitute that into the original equation. On the left hand side, I will get 2 times 2, 1 plus epsilon. On the right hand side, I will get 10 to the sixth. And then essentially, I subtract 2 plus 2 epsilon squared from 5. What do I get? I will get 4 epsilon plus 4 epsilon squared. Then I say that epsilon is small. So essentially, I linearize the right hand side. I forget about that. I say that I keep the epsilon here, because it's multiplying 10 to the 6. But the epsilon on the other side is multiplying nothing. So I forget that. So then I will have my epsilon to be roughly, I don't know, 2 times 2 divided by 4 times 10 to the 6. So I have gotten the correction to my first 0-th order solution to this first order. Now we will do exactly the same thing, not for our algebraic equation, but for our Boltzmann equation. So for the Boltzmann equation, which was Lf is C of ff, we said that the right hand side is larger by a factor of 1 over tau x compared to the left hand side. And so what we did was we found a solution f0 that when we put in the collision integral, the answer was 0. Now I want to have a better solution that I will call f1. Just like I did over there, I will assume that f0 is added to a small function that I will call g. And then I substitute this equation, this thing, to the equation. So when I substitute, I will get L acting on f0 1 plus g. Now, what did I do over here? On the left hand side, I ignored the first order term. Because I expect the first order term to be already small. And the left hand side is already small. So I will ignore this. And on the right hand side, I have to linearize. So I have to put f0 1 plus g, f0 1 plus g. Essentially what I have to do is to go to the collisions that I have over here and write for this f0 1 plus g. There are four of such things. Now, the 0-th order term already cancels. Because f0 f0 was f0 f0 by the way that I constructed things. And then I can pull out one factor of f0 out of the integration. So when I linearize this, what I will get is something like f0 that goes on the outside. I have the integral d2p2 d2b v2 minus v1. And then I have something like g of p1 plus g of p2 minus g of p1 prime minus g of p2 prime. So basically, what we have done is we have defined a linearized version of the collision operator that is now linear in this variable g. Now in general, this is also still, although a linear operator, much simpler than the previous quadratic operator-- still has a lot of junk in it. So we are going to simply state that I will use a form of this linearized approximation that is simply g over tau, get rid of all of the integration. And this is called the single collision time approximation. So having done that, what I have is that the L acting on f0 on the left hand side is minus f0 g over tau x on the right hand side. 
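The algebraic analogy can be made concrete with a small numerical sketch. The particular equation below, 2x = 10^6 x(x - 2), is only an illustrative guess consistent with the numbers quoted in the lecture (zeroth-order root x0 = 2, correction of order 10^-6); the transcript does not pin down the exact expression on the board.

coeff = 1.0e6          # the large factor playing the role of 1/tau_x

# Zeroth order: make the large right-hand side vanish  ->  x0 = 2.
x0 = 2.0

# First order: write x = x0 * (1 + eps) and keep eps only where it multiplies
# the large coefficient:  2*x0 ~ coeff * x0 * (x0 * eps)  =>  eps ~ 2 / (coeff * x0).
eps = 2.0 / (coeff * x0)
x1 = x0 * (1.0 + eps)

# Compare with the exact nonzero root of 2x = coeff * x * (x - 2):
x_exact = 2.0 + 2.0 / coeff
print(eps, x1, x_exact)   # eps ~ 1e-6, and x1 matches x_exact to this order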
And so I can immediately solve for g. Because L is a first order derivative operator. My g is minus tau x, the Liouville operator acting on f0-- sorry, log of f0. I divide it through by f0. So derivative of f0 divided by f0 is the derivative acting on log of f0. So all I need to do is to essentially do the operations involved in taking these derivatives. Let's say we forget about the force. Because we are looking in the middle of the box, acting on log of what I had written before. So what I have is log of n minus p minus mu squared over 2mkT. Remember, ukTn are all functions of position. So there will be derivatives involved here. And I will just write down what the answer is. So the answer becomes minus tau x. You would have, once you do all of these derivatives and take advantage of the equations that you have written before-- so there's some lines of algebra involved. The final answer is going to be nm c alpha c beta minus-- I should really look at this. m over kT c alpha c beta minus delta alpha beta over 3 c squared u alpha beta, and then mc squared over 2kT minus 5/2 c alpha over T d alpha of T. Yes. STUDENT: Sorry, what's the thing next to the c squared, something alpha beta? PROFESSOR: Delta alpha beta, sorry. So there is a well-defined procedure-- it's kind of algebraically involved-- by which more or less in the same fashion that you can improve on the algebraic solution, get a better solution than the one that we had that now knows something about these relaxations. See, the 0-th order solution that we had knew nothing about tau x. We just set tau x to be very small, and set the right hand side to 0. And then nothing relaxed. Now we have a better solution that involves explicitly tau x. And if we start with that, we'll find that we can get relaxation of all of these modes once we calculate p alpha beta and h alpha with this better solution. We can immediately, for example, see that this new solution will have terms that are odd. There is c cubed term here. So when you're evaluating this average, you will no longer get 0. Heat has a chance to flow with this improved equation. And again, whereas before our pressure was diagonal because of these terms, we will have off diagonal terms that will allow us to relax the shear modes. And we'll do that next. |
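Written out, the first-order correction quoted here takes the standard form below. This is a reconstruction consistent with what is read off the board, with $\mathbf{c} = \mathbf{p}/m - \mathbf{u}$ the velocity measured relative to the local mean, $u_{\alpha\beta} = \tfrac{1}{2}(\partial_\alpha u_\beta + \partial_\beta u_\alpha)$ the rate-of-strain tensor, and $\tau_\times$ the collision time:

$$g = -\tau_\times \left[ \frac{m}{k_B T}\left( c_\alpha c_\beta - \frac{\delta_{\alpha\beta}}{3}\, c^2 \right) u_{\alpha\beta} + \left( \frac{m c^2}{2 k_B T} - \frac{5}{2} \right) \frac{c_\alpha}{T}\, \partial_\alpha T \right].$$

The second bracket is odd in c (an overall c cubed), which is what lets the heat flux become nonzero, and the first bracket supplies the off-diagonal pressure components that relax the shear modes, exactly as described at the end of this passage.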
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 8_Kinetic_Theory_of_Gases_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Let's say that I tell you that I'm interested in a gas that has some temperature. I specify, let's say, temperature. Pressure is room temperature, room pressure. And I tell you how many particles I have. So that's [INAUDIBLE] for you what the macro state is. And I want to then see what the corresponding micro state is. So I take one of these boxes, whereas this is a box that I draw in three dimensions, I can make a correspondence and draw this is 6N-dimensional coordinate space, which would be hard for me to draw. But basically, a space of six n dimensions. I figure out where the position, and the particles, and the momenta are. And I sort of find that there is a corresponding micro state that corresponds to this macro state. OK, that's fine. I made the correspondence. But the thing is that I can imagine lots and lots of boxes that have exactly the same macroscopic properties. That is, I can imagine putting here side by side a huge number of these boxes. All of them are described by exactly the same volume, pressure, temperature, for example. The same macro state. But for each one of them, when I go and find the macro state, I find that it is something else. So I will be having different micro states. So this correspondence is certainly something where there should be many, many points here that correspond to the same thermodynamic representation. So faced with that, maybe it makes sense to follow Gibbs and define an ensemble. So what we say is, we are interested in some particular macro state. We know that they correspond to many, many, many different potential micro states. Let's try to make a map of as many micro states that correspond to the same macro state. So consider n copies of the same macro state. And this would correspond to n different points that I put in this 6N-dimensional phase space. And what I can do is I can define an ensemble density. I go to a particular point in this space. So let's say I pick some point that corresponds to some set of p's and q's here. And what I do is I draw a box that is 6N-dimensional around this point. And I define a density in the vicinity of that point, as follows. Actually, yeah. What I will do is I will count how many of these points that correspond to micro states fall within this box. So at the end is the number of mu points in this box. And what I do is I divide by the total number. I expect that the result will be proportional to the volume of the box. So if I make the box bigger, I will have more. So I divide by the volume of the box. So this is, let's call d gamma is the volume of box. Of course, I have to do this in order to get a nice result by taking the limit where the number of members of the ensemble becomes quite large. And then presumably, this will give me a well-behaved density. In this limit, I guess I want to also have the size of the box go to 0. OK? Now, clearly, with the definitions that I have made, if I were to integrate this quantity against the volume d gamma, what I would get is the integral dN over N. N is, of course, a constant. And the integral of dN is the total number. So this is 1. 
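To make the counting definition concrete, here is a minimal numerical sketch in a two-dimensional toy phase space (a single q and a single p rather than 6N dimensions). The Gaussian ensemble it samples is an arbitrary illustration; the point is only the recipe dN divided by N times the box volume.

import numpy as np

rng = np.random.default_rng(0)
N_members = 1_000_000                     # number of ensemble members
q = rng.normal(0.0, 1.0, N_members)       # positions of the representative points
p = rng.normal(0.0, 1.0, N_members)       # momenta of the representative points

q0, p0, dq, dp = 0.5, -0.2, 0.05, 0.05    # a small box centered at (q0, p0)
in_box = (np.abs(q - q0) < dq / 2) & (np.abs(p - p0) < dp / 2)
rho_est = in_box.sum() / (N_members * dq * dp)   # dN / (N * dGamma)

# Compare with the exact density of the distribution we happened to sample from:
rho_exact = np.exp(-(q0**2 + p0**2) / 2) / (2 * np.pi)
print(rho_est, rho_exact)   # the estimate converges as N grows and the box shrinks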
So we find that this quantity rho that I have constructed satisfies two properties. Certainly, it is positive, because I'm counting the number of points. Secondly, it's normalized to 1. So this is a nice probability density. So this ensemble density is a probability density function in this phase space that I have defined. OK? All right. So once I have a probability, then I can calculate various things according to the rules of probability that we defined before. So for example, I can define an ensemble average. Maybe I'm interested in the kinetic energy of the particles of the gas. So there is a function O that depends on the sum of all of the p squareds. In general, I have some function of O that depends on p and q. And what I defined the ensemble average would be the average that I would calculate with this probability. Because I go over all of the points in phase space. And let me again emphasize that what I call d gamma then really is the product over all points. For each point, I have to make a volume in both momentum and in coordinate. It's a 6N-dimensional volume element. I have to multiply the probability, which is a function of p and q against this O, which is another function of p and q. Yes? AUDIENCE: Is the division by M necessary to make it into a probability density? PROFESSOR: Yes. AUDIENCE: Otherwise, you would still have a density. PROFESSOR: Yeah. When I would integrate then, I would get the total number. But the total number is up to me, how many members of the ensemble I took, it's not a very well-defined quantity. It's an arbitrary quantity. If I set it to become very large and divide by it, then I will get something that is nicely a probability. And we've developed all of these tools for dealing with probabilities. So that would go to waste if I don't divide by. Yes? AUDIENCE: Question. When you say that you have set numbers, do you assume that you have any more informations than just the microscopic variables GP and-- PROFESSOR: No. AUDIENCE: So how can we put a micro state in correspondence with a macro state if there is-- on the-- like, with a few variables? And do you need to-- from-- there's like five variables, defined down to 22 variables for all the particles? PROFESSOR: So that's what I was saying. It is not a one-to-one correspondence. That is, once I specify temperature, pressure, and the number of particles. OK? Yes? AUDIENCE: My question is, if you generate identical macro states, and create-- which macro states-- PROFESSOR: Yes. AUDIENCE: Depending on some kind of a rule on how you make this correspondence, you can get different ensemble densities, right? PROFESSOR: No. That is, if I, in principle and theoretically, go over the entirety of all possible macroscopic boxes that have these properties, I will be putting infinite number of points in this. And I will get some kind of a density. AUDIENCE: What if you, say, generate infinite number of points, but all in the case when, like, all molecules of gas are in right half of the box? PROFESSOR: OK. Is that a thermodynamically equilibrium state? AUDIENCE: Did you mention it needed to be? PROFESSOR: Yes. I said that-- I'm talking about things that can be described macroscopically. Now, the thing that you mentioned is actually something that I would like to work with, because ultimately, my goal is not only to describe equilibrium, but how to reach equilibrium. 
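In symbols, the two properties just established, and the ensemble average they allow, read (a compact restatement of the definitions given in words above):

$$\rho(\mathbf{p},\mathbf{q},t) \ge 0, \qquad \int \mathrm{d}\Gamma\, \rho = 1, \qquad \langle \mathcal{O} \rangle = \int \mathrm{d}\Gamma\; \rho(\mathbf{p},\mathbf{q},t)\, \mathcal{O}(\mathbf{p},\mathbf{q}), \qquad \mathrm{d}\Gamma = \prod_{i=1}^{N} \mathrm{d}^3 p_i\, \mathrm{d}^3 q_i .$$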
That is, I would like precisely to answer the question, what happens if you start in a situation where all of the gas is initially in one half of the room? And as long as there is a partition, that's a well-defined macroscopic state. And then I remove the partition. And suddenly, it is a non-equilibrium state. And presumably, over time, this gas will occupy that. So there is a physical process that we know happens in nature. And what I would like eventually to do is to also describe that physical process. So what I will do is I will start with the initial configuration with everybody in the half space. And I will calculate the ensemble that corresponds to that. And that's unique. Then I remove the partition. Then each member of the ensemble will follow some trajectory as it occupies eventually the entire box. And we would like to follow how that evolution takes place and hopefully show that you will always have, eventually, at the end of the day, the gas occupying the system uniform. AUDIENCE: Yeah, but just would it be more correct to generate many, many different micro states as the macro states which correspond to them? And how many different-- PROFESSOR: What rule do you use for generating many, many micro states? AUDIENCE: Like, all uniformly arbitrary perturbations of particles always to put them in phase space. And look to-- like, how many different micro states give rise to the same macro state? PROFESSOR: Oh, but you are already then talking about the macro state? AUDIENCE: A portion-- which description do you use as the first one to generate the second one? So in your point of view, let's do-- [INAUDIBLE] to macro states, and go to micro state. But can you reverse? PROFESSOR: OK. I know that the way that I'm presenting things will lead ultimately to a useful description of this procedure. You are welcome to try to come up with a different prescription. But the thing that I want to ensure that you agree is that the procedure that I'm describing here has no logical inconsistencies. I want to convince you of that. I am not saying that this is necessarily the only one. As far as I know, this is the only one that people have worked with. But maybe somebody can come up with a different prescription. So maybe there is another one. Maybe you can work on it. But I want you to, at this point, be convinced that this is a well-defined procedure. OK? AUDIENCE: But because it's a well-defined procedure, if you did exist on another planet or in some universe where the physics were different, the point is, you can use this. But can't you use this for information in general when you want to-- like, if you have-- the only requirement is that at a fine scale, you have a consistent way of describing things; and at a large scale, you have a way of making sense of generalizing. PROFESSOR: Right. AUDIENCE: So it's sort of like a compression of data, or I use [INAUDIBLE]. PROFESSOR: Yeah. Except that part of this was starting with some physics that we know. So indeed, if you were in a different universe-- and later on in the course, we will be in a different universe where the rules are not classical or quantum-mechanical. And you have to throw away this description of what a micro state is. And you can still go through the entire procedure. But I want to do is to follow these set of equations of motion and this description of micro state, and see where it leads us. And for the gas in this room, it is a perfectly good description of what's happened. Yes? AUDIENCE: Maybe a simpler question. 
Is big rho defined only in spaces where there are micro states? Like, is there anywhere where there isn't a micro state? PROFESSOR: Yes, of course. So if I have-- thinking about a box, and if I ask what is rho out here, I would say the answer is rho is 0. But if you like, you can say rho is defined only within this space of the box. So the description of the macro state which has something to do with the box, over which I am considering, will also limit what I can describe. Yes. And certainly, as far as if I were to change p with velocity, let's say, then you would say, a space where V is greater than speed of light is not possible. That's the point. So your rules of physics will also define implicitly the domain over which this is there. But that's all part of mechanics. So I'm going to assume that the mechanics part of it is you are comfortable. Yes? AUDIENCE: In your definition of the ensemble average, are you integrating over all 6N dimensions of phase space? PROFESSOR: Yes. AUDIENCE: So why would your average depend on p and q? If you integrate? PROFESSOR: The average is of a function of p and q. So in the same sense that, let's say, I have a particle of gas that is moving on, and I can write the symbol, p squared over 2m, what is this average? The answer will be KT over 2, for example. It will not depend on p. But the quantity that I'm averaging inside the triangle is a function of p and q. Yes? AUDIENCE: So if it's an integration, or basically the--? PROFESSOR: The physical limit of the problem. AUDIENCE: Given a macro state? PROFESSOR: Yes. So typically, we will be integrating q over the volume of a box, and p from minus infinity to infinity, because classically, without relativity, this is a lot. Yes? AUDIENCE: Sorry. So why is the [INAUDIBLE] from one end for every particle, instead of just scattering space, would you have a [INAUDIBLE]? Or is that the same thing? PROFESSOR: I am not sure I understand the question. So if I want to, let's say, find out just one particle that is somewhere in this box, so there is a probability that it is here, there is a probability that it is there, there is a probability that it is there. The integral of that probability over the volume of the room is one. So how do I do that? I have to do an integral over dx, dy, dz, probability as a function of x, y, and z. Now I just repeat that 6N times. OK? All right. So that's the description. But the first question to sort of consider is, what is equilibrium in this perspective? Now, we can even be generous, although it's a very questionable thing, to say that, really, when I sort of talk about the kinetic energy of the gas, maybe I can replace that by this ensemble average. Now, if I'm in equilibrium, the results should not depend as a function of time. So I expect that if I'm calculating things in equilibrium, the result of equations such as this should not depend on time, which is actually a problem. Because we know that if I take a picture of all of these things that I am constructing my ensemble with and this picture is at time t, at time t plus dt, all of the particles have moved around. And so the point that was here, the next instant of time is going to be somewhere else. This is going to be somewhere else. Each one of these is flowing around as a function of time. OK? So the picture that I would like you to imagine is you have a box, and there's a huge number of bees or flies or whatever your preferred insect is, are just moving around. OK? Now, you can sort of then take pictures of this cluster. 
And it's changing potentially as a function of time. And therefore, this density should potentially change as a function of time. And then this answer could potentially depend on time. So let's figure out how is this density changing as a function of time. And hope that ultimately, we can construct a solution for the equation that governs the change in density as a function of time that is in fact invariant in time. It is going back to my flies or bees or whatever, you can imagine a circumstance in which the bees are constantly moving around. Each individual bee is now here, then somewhere else. But all of your pictures have the same density of bees, because for every bee that left the box, there was another bee that came in its place. So one can imagine a kind of situation where all of these points are moving around, yet the density is left invariant. And in order to find whether such a density is possible, we have to first know what is the equation that governs the evolution of that density. And that is given by the Liouville's equation. So this governs evolution of rho with time. OK. So let's kind of blow off the picture that we had over there. Previously, they're all of these coordinates. There is some point in coordinate space that I am looking at. Let's say that the point that I am looking at is here. And I have constructed a box around it like this in the 6N-dimensional space. But just to be precise, I will be looking at some particular coordinate q alpha and the conjugate momentum p alpha. So this is my original point corresponds to, say, some specific version of q alpha p alpha. And that I have, in this pair of dimensions, created a box that in this direction has size dq alpha, in this direction has size dp alpha. OK? And this is the picture that I have at some time t. OK? Then I look at an instant of time that is slightly later. So I go to a time that is t plus dt. I find that the point that I had initially over here as the center of this box has moved around to some other location that I will call q alpha prime, p alpha prime. If you ask me what is q alpha prime and p alpha prime, I say, OK, I know that, because I know with equations of motion. If I was running this on a computer, I would say that q alpha prime is q alpha plus the velocity q alpha dot dt, plus order of dt squared, which hopefully, I will choose a sufficient and small time interval I can ignore. And similarly, p alpha prime would be p alpha plus p alpha dot dt order of dt squared. OK? So any point that was in this box will also move. And presumably, close-by points will be moving to close-by points. And overall, anything that was originally in this square actually projected from a larger dimensional cube will be part of a slightly distorted entity here. So everything that was here is now somewhere here. OK? I can ask, well, how wide is this new distance that I have? So originally, the two endpoints of the square were a distance dq alpha apart. Now they're going to be a distance dq alpha prime apart. What is dq alpha prime? I claim that dq alpha prime is whatever I had originally, but then the two N's are moving at slightly different velocities, because the velocity depends on where you are in phase space. And so the difference in velocity between these two points is really the derivative of the velocity with respect to the separation that I have between those two points, which is dq times dq alpha. And this is how much I would have expanded plus higher order. 
And I can apply the same thing in the momentum direction. The new vertical separation dp alpha prime is different from what it was originally, because the two endpoints got stretched. The reason they got stretched was because their velocities were different. And their difference is just the derivative of velocity with respect to separation times their small separation. And if I make everything small, I can, in principle, write higher order terms. But I don't have to worry about that. OK? So I can ask, what is the area of this slightly distorted square? As long as the dt is sufficiently small, all of the distortions, et cetera, will be small enough. And you can convince yourself of this. And what you will find is that the product of dq alpha prime, dp alpha prime, if I were to multiply these things, you can see that dq alpha and dp alpha is common to the two of them. So I have dq alpha, dp alpha. From multiplying these two terms, I will get one. And then I will get two terms that are order of dt, that I will get from dq alpha dot with respect to dq alpha, plus dp alpha dot with respect to dp alpha. And then there will be terms that are order of dt squared and higher order types of things. OK? So the distortion in the area of this particle, or cross section, is governed by something that is proportional to dt. And dq alpha dot over dq alpha plus dp alpha dot dp alpha. But I have formally here for what P i dot and qi dot are. So this is the dot notation for the time derivative other than the one that I was using before. So q alpha dot, what do I have for it? It is dh by dp alpha. So this is d by dq alpha of the H by dp alpha. Whereas p alpha dot, from which I have to evaluate d by dp alpha, p alpha dot is minus dH by dq alpha. So what do I have? I have two second derivatives that appear with opposite sign and hence cancel each other out. OK? So essentially, what we find is that the volume element is preserved under this process. And I can apply this to all of my directions. And hence, conclude that the initial volume that was surrounding my point is going to be preserved. OK? So what that means is that these classical equations of motion, the Hamiltonian equations, for this description of micro state that involves the coordinates and momenta have this nice property that they preserve volume of phase space as they move around. Yes? AUDIENCE: If the Hamiltonian has expontential time dependence, that doesn't work anymore. PROFESSOR: No. So that's why I did not put that over there. Yes. And actually, this is sometimes referred to as being something like an incompressible fluid. Because if you kind of deliver hydrodynamics for something like water if you regard it as incompressible, the velocity field has the condition that the divergence of the velocity is 0. Here, we are looking at a 6N-dimensional mention velocity field that is composed of q alpha dot and p alpha dot. And this being 0 is really the same thing as the divergence in this 6N-dimensional space is 0. And that's a property of the Hamiltonian dynamics that one has. Yes? AUDIENCE: Could you briefly go over why you have to divide by the separation when you expand the times between the displacement? PROFESSOR: Why do I have to multiply by the separation? AUDIENCE: Divide by. PROFESSOR: Where do I divide? AUDIENCE: dq alpha by-- PROFESSOR: Oh, this. Why do I have to take a derivative. So I have two points here. All of my points are moving in time. 
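Here is a quick numerical sketch of this volume-preservation statement for a single degree of freedom: take the corners of a tiny square in the (q, p) plane, push each corner forward under the Hamiltonian flow, and compare areas. The pendulum Hamiltonian H = p^2/2 - cos q (mass, length and g set to 1), the starting point, and the integration time are all arbitrary illustrative choices; the RK4 integrator is accurate enough here that the residual mismatch is negligible.

import numpy as np

def flow(state):
    q, p = state
    return np.array([p, -np.sin(q)])          # dq/dt = dH/dp,  dp/dt = -dH/dq

def evolve(state, t_final, dt=1e-3):
    for _ in range(int(round(t_final / dt))):  # fixed-step RK4 integration
        k1 = flow(state)
        k2 = flow(state + 0.5 * dt * k1)
        k3 = flow(state + 0.5 * dt * k2)
        k4 = flow(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return state

def shoelace(pts):                             # area of a simple polygon
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

side = 1e-3
corners = np.array([[1.0, 0.5], [1.0 + side, 0.5],
                    [1.0 + side, 0.5 + side], [1.0, 0.5 + side]])
mapped = np.array([evolve(c, t_final=5.0) for c in corners])
print(shoelace(corners), shoelace(mapped))     # the square is sheared, but its area is preserved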
So if these things were moving with uniform velocity, one second later, this would have moved here, this would have moved the same distance, so that the separation between them would have been maintained if they were moving with the same velocity. So if you are following somebody and you are moving with the same velocity as them, thus, your separation does not change. But if one of you is going faster than the other one, then the difference in velocity will determine how you separate. And what is the difference in velocity? The difference in velocity depends, in this case, to how far apart the points are. So the difference between velocity here and velocity here is the derivative of velocity as a function of this coordinate. Derivative of velocity as a function of that coordinate multiplied by the separation. OK? So what does this incompressibility condition mean? It means that however many points I had over here, they end up in a box that has exactly the same volume, which means that the density is going to be the same around here and around this new point. So essentially, what we have is that the rho at the new point, p prime, q prime, and time, t, plus dt, is the same thing as the rho at the old point, p, q, at time, t. Again, this is the incompressibility condition. Now we do mathematics. So let's write it again. So I've said, in other words, that rho p, q, t is the same as the rho at the new point. What's the momentum at the new point? It is p plus. For each component, it is p alpha plus p alpha dot. Let's put it this way. p plus p dot dt, q plus q dot dt, and t plus dt. That is, if I look at the new location compared to the old location, the time changed, the position changed, the momentum changed. They all changed-- in each arguments changed infinitesimally by an amount that is proportional to dt. And so what I can do is I can expand this function to order of dt. So I have rho at the original point. So this is all mathematics. I just look at variation with respect to each one of these arguments. So I have a sum over alpha, p alpha dot d rho by dp alpha plus q alpha dot d rho by dq alpha plus the explicit derivative, d rho by dt. This entirety is going to be multiplied by dt. And then, in principle, the expansion would have higher order terms. OK? Now, of course, the first term vanishes. It is the same on both times. So the thing that I will have to set to 0 is this entity over here. Now, quite generally, if you have a function of p, q, and t, you evaluate it at the old point and then evaluate at the new point. One can define what is called a total derivative. And just like here, the total derivative will come from variations of all of these arguments. We'll have a partial derivative with respect to time and partial derivatives with respect to any of the other arguments. So I wrote this to sort of make a distinction between the symbol that is commonly used, sometimes d by dt, which is straight, sometimes Df by Dt. And this is either a total derivative or a streamline derivative. That is, you are taking derivatives as you are moving along with the flow. And that is to be distinguished from these partial derivatives, which is really sitting at some point in space and following how, from one time instant to another time instant, let's say the density changes. So Df by Dt with a partial really means sit at the same point. Whereas this big Df by Dt means, go along with the flow and look at the changes. 
Now, what we have established here is that for the density, the density has some special character because of this Liouville's theorem, that the streamlined derivative is 0. So what we have is that d rho by dt is 0. And this d rho by dt I can also write down as d rho by dt plus sum over all 6N directions, d rho by dp alpha. But then I substitute for p alpha dot from here. p dot is minus dH by dq. And then I add d rho by dq alpha. q alpha dot is dH by dp r. OK? So then I can take this combination with the minus sign to the other side and write it as d rho by dt is something that I will call the Poisson bracket of H and rho, where, quite generally, if I have two functions in phase space that is defending on B and q, this scalary derivative the Poisson bracket is defined as the sum over all 6N possible variation. The first one with respect to q, the second one with respect to p. And then the whole thing with the opposite sign. dA, dP, dB, dq. So this is the Poisson bracket. And again, from the definition, you should be able to see immediately that Poisson bracket of A and B is minus the Poisson bracket of B and A. OK? Again, we ask the question that in general, I can construct in principle a rho of p, q, let's say, for an equilibrium ensemble. But then I did something, like I removed a partition in the middle of the gas, and the gas is expanding. And then presumably, this becomes a function of time. And since I know exactly how each one of the particles, and hence each one of the micro states is evolving as a function of time, I should be able to tell how this density in phase space is changing. So this perspective is, again, this perspective of looking at all of these bees that are buzzing around in this 6N-dimensional space, and asking the question, if I look at the particular point in this 6N-dimensional space, what is the density of bees? And the answer is that it is given by the Poisson bracket of the Hamiltonian that governs the evolution of each micro state and the density function. All right. So what does it mean? What can we do with this? Let's play around with it and look at some consequences. But before that, does anybody have any questions? OK. All right. We had something that I just erased. That is, if I have a function of p and q, let's say it's not a function of time. Let's say it's the kinetic energy for this system where, at t equals to 0, I remove the partition, and the particles are expanding. And let's say the other place you have a potential, so your kinetic energy on average is going to change. You want to know what's happening to that. So you calculate at each instant of time an ensemble average of the kinetic energy or any other quantity that is interesting to you. And your prescription for calculating an ensemble average is that you integrate against the density the function that you are proposing to look at. Now, in principle, we said that there could be situations where this is dependent on time, in which case, your average will also depend on time. And maybe you want to know how this time dependence occurs. How does the kinetic energy of a gas that is expanding into some potential change on average? OK. So let's take a look. This is a function of time, because, as we said, these go inside the average. So really, the only explicit variable that we have here is time. And you can ask, what is the time dependence of this quantity? OK? So the time dependence is obtained by doing this, because essentially, you would be adding things at different points in p, q. 
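As a small symbolic check of two facts used repeatedly from here on, the antisymmetry of the bracket and the stationarity of any density that depends on phase space only through H, one can let a computer algebra system do the derivatives. The one-particle Hamiltonian and the particular function of H chosen below are arbitrary illustrations.

import sympy as sp

q, p = sp.symbols('q p', real=True)

def poisson(A, B):
    # {A, B} = dA/dq * dB/dp - dA/dp * dB/dq  (single degree of freedom)
    return sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)

H = p**2 / 2 + q**4 / 4            # an arbitrary one-particle Hamiltonian
rho = sp.exp(-3 * H) + H**2        # an arbitrary function of H alone

print(sp.simplify(poisson(H, rho)))                      # 0: any rho(H) is stationary
print(sp.simplify(poisson(H, rho) + poisson(rho, H)))    # 0: {A, B} = -{B, A}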
And at each point, there is a time dependence. And you take the derivative in time with respect to the contribution of that point. So we get something like this. Now you say, OK, I know what d rho by dt is. So this is my integration over all of the phase space. d rho by dt is this Poisson bracket of H and rho. And then I have O. OK? Let's write this explicitly. This is an integral over a whole bunch of coordinates and momenta. There is, for the Poisson bracket, a sum over derivatives. So I have a sum over alpha-- dH by dq alpha, d rho by dp alpha minus dH by dp alpha, d rho by dq alpha. And that Poisson bracket in its entirety then gets multiplied by this function of phase space. OK. Now, one of the mathematical manipulations that we will do a lot in this class. And that's why I do this particular step, although it's not really necessary to the logical progression that I'm following, is to remind you how you would do an integration by parts when you're faced with something like this. An integration by parts is applicable when you have variables that you are integrating that are also appearing as derivatives. And whenever you are integrating Poisson brackets, you will have derivatives for the Poisson bracket. And the integration would allow you to use integration by parts. And in particular, what I would like to do is to remove the derivative that acts on the densities. So I'm going to essentially rewrite that as minus an integral that involves-- again. I don't want to keep rewriting that thing. I want to basically take the density out and then have the derivative, which is this d by dp in the first term and d by dq in the second term, act on everything else. So in the first case, d by dp alpha will act on O and dH by dq alpha. And in the second case, d by dq alpha will act on O and dH by dp alpha. Again, there is a sum over alpha that is implicit. OK? Again, there is a minus sign. So every time you do this procedure, there is this. But every time, you also have to worry about surface terms. So on the surface, you would potentially have to evaluate things that involve rho, O, and these d by d derivatives. But let's say we are integrating over momentum from minus infinity to infinity. Then the density evaluated at infinity momenta would be 0. So practicality, in all cases that I can think of, you don't have to worry about the boundary terms. So then when you look at these kinds of terms, this d by dp alpha can either act on O. So I will get dO by dp alpha, dH by dq alpha. Or it can act on dH by d alpha. So I will get plus O d2 H, dp alpha dq alpha. And similarly, in this term, either I will have dO by dq alpha, dH by dp alpha, or O d2 H, dq alpha dp alpha. Once more, the second derivative terms of the Hamiltonian, the order is not important. And what is left here is this set of objects, which is none other than the Poisson bracket. So I can rewrite the whole thing as d by dt of the expectation value of this quantity, which potentially is a function of time because of the time dependence of my density is the same thing as minus an integration over the entire phase space of rho against this entity, which is none other than the Poisson bracket of H with O. And this integration over density of this entire space is just our definition of the expectation value. So we get that the time derivative of any quantity is related to the average of its Poisson bracket with the Hamiltonian, which is the quantity that is really governing time dependences. Yes? 
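The chain of manipulations just performed can be summarized in one line (boundary terms dropped, as argued above):

$$\frac{\mathrm{d}\langle \mathcal{O} \rangle}{\mathrm{d} t} = \int \mathrm{d}\Gamma\, \frac{\partial \rho}{\partial t}\, \mathcal{O} = \int \mathrm{d}\Gamma\, \{H, \rho\}\, \mathcal{O} = -\int \mathrm{d}\Gamma\, \rho\, \{H, \mathcal{O}\} = \langle \{\mathcal{O}, H\} \rangle .$$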
AUDIENCE: Could you explain again why the time derivative when N is the integral, it's rho as a partial derivative? PROFESSOR: OK. So suppose I'm doing a two-dimensional integral over p and q. So I have some contribution from each point in this p and q. And so my integral is an integral dpdq, something evaluated at each point in p, q that could potentially depend on time. Imagine that I discretize this. So I really-- if you are more comfortable, you can think of this as a limit of a sum. So this is my integral. If I'm interested in the time dependence of this quantity-- and I really depends only on time, because I integrated over p and q. So if I'm interested in something that is a sum of various terms, each term is a function of time. Where do I put the time dependence? For each term in this sum, I look at how it depends on time. I don't care on its points to the left and to the right. OK? Because the big D by Dt involves moving with the streamline. I'm not doing any moving with the streamline. I'm looking at each point in this two-dimensional space. Each point gives a contribution at that point that is time-dependent. And I take the derivative with respect to time at that point. Yes? AUDIENCE: Couldn't you say that you have function O as just some function of p and q, and its time derivative would be Poisson bracket? PROFESSOR: Yes. AUDIENCE: And does the average of the time derivative would be the average of Poisson bracket, and you don't have to go through all the-- PROFESSOR: No. But you can see the sign doesn't work. AUDIENCE: How come? PROFESSOR: [LAUGHS] Because of all of these manipulations, et cetera. So the statement that you made is manifestly incorrect. You can't say that the time dependence of this thing is the-- whatever you were saying. [LAUGHS] AUDIENCE: [INAUDIBLE]. PROFESSOR: OK. Let's see what you are saying. AUDIENCE: So Poisson bracket only counts for in place for averages, right? PROFESSOR: OK. So what do we have here? We have that dp by dt is the Poisson bracket of H and rho. OK? And we have that O is an integral of rho O. Now, from where do you conclude from this set of results that d O by dt is the average of a Poisson bracket that involves O and H, irrespective of what we do with the sign? AUDIENCE: Or if you look not at the average failure of O but at the value of O and point, and take-- I guess it would be streamline derivative of it. So that's assuming that you're just like assigning value of O to each point, and making power changes with time as this point moves across the phase space. PROFESSOR: OK. But you still have to do some bit of derivatives, et cetera, because-- AUDIENCE: But if you know that the volume of the in phase space is conserved, then we basically don't care much that the function O is any much different from probability density. PROFESSOR: OK. If I understand correctly, this is what you are saying. Is that for each representative point, I have an O alpha, which is a function of time. And then you want to say that the average of O is the same thing as the sum over alpha of O alpha of t's divided by N, something like this. AUDIENCE: Eh. Uh, I want to first calculate what does time derivative of O? O remains in a function of time and q and p. So I can calculate-- PROFESSOR: Yes. So this O alpha is a function of AUDIENCE: So if I said-- PROFESSOR: Yes. OK. Fine. AUDIENCE: O is a function of q and p and t, and I take a streamline derivative of it. So filter it with respect to t. And then I average that thing over phase space. 
And then I should get the same version-- PROFESSOR: You should. AUDIENCE: --perfectly. But-- PROFESSOR: Very quickly, I don't think so. Because you are already explaining things a bit longer than I think I went through my derivation. But that's fine. [LAUGHS] AUDIENCE: Is there any special-- PROFESSOR: But I agree in spirit, yes. That each one of these will go along its streamline, and you can calculate the change for each one of them. And then you have to do an average of this variety. Yes. AUDIENCE: [INAUDIBLE] when you talk about time derivative of probability density, it's Poisson bracket of H and rho. But when you talk about time derivative of average, you have to add the minus sign. PROFESSOR: And if you do this correctly here, you should get the same result. AUDIENCE: Oh, OK. PROFESSOR: Yes. AUDIENCE: Well, along that line, though, are you using the fact that phase space volume is incompressible to then argue that the total time derivative of the ensemble average is the same as the ensemble average of the total time derivative of O, or not? PROFESSOR: Could you repeat that? Mathematically, you want me to show that the time derivative of what quantity? AUDIENCE: Of the average of O. PROFESSOR: Of the average of O. Yes. AUDIENCE: Is it in any way related to the average of dO over dt? PROFESSOR: No, it's not. Because O-- I mean, so what do you mean by that? You have to be careful, because the way that I'm defining this here, O is a function of p and q. And what you want to do is to write something that is a sum over all representative points, divided by the total number, some kind of an average like this. And then I can define a time derivative here. Is that what you are-- AUDIENCE: Well, I mean, I was thinking that even if you start out with your observable being defined for every point in phase space, then if you were to, before doing any ensemble averaging, if you were to take the total time derivative of that, then you would be accounted for p dot and q dot as well, right? And then if you were to take the ensemble average of that quantity, could you arrive at the same result for that? PROFESSOR: I'm pretty sure that if you do things consistently, yes. That is, what we have done is essentially we started with a collection of trajectories in phase space and recast the result in terms of density and variables that are defined only as positions in phase space. The two descriptions completely are equivalent. And as long as one doesn't make a mistake, one can get one or the other. This is actually a kind of a well-known thing in hydrodynamics, because typically, you write down, in hydrodynamics, equations for density and velocity at each point in phase space. But there is an alternative description which we can say that there's essentially particles that are flowing. And particle that was here at this location is now somewhere else at some later time. And people have tried hard. And there is a consistent definition of hydrodynamics that follows the second perspective. But I haven't seen it as being practical. So I'm sure that everything that you guys say is correct. But from the experience of what I know in hydrodynamics, I think this is the more practical description that people have been using. OK? So where were we? OK. So back to our buzzing bees. We now have a way of looking at how densities and various quantities that you can calculate, like ensemble averages, are changing as a function of time. But the question that I had before is, what about equilibrium? 
Because the thermodynamic definition of equilibrium, and my whole ensemble idea, was that essentially, I have all of these boxes and pressure, volume, everything that I can think of, as long as I'm not doing something that's like opening the box, is perfectly independent of time. So how can I ensure that various things that I calculate are independent of time? Clearly, I can do that by having this density not really depend on time. OK? Now, of course, each representative point is moving around. Each micro state is moving around. Each bee is moving around. But I want the density that is characteristic of equilibrium be something that does not change in time. It's 0. And so if I posit that this particular function of p and q is something that is not changing as a function of time, I have to require that the Poisson bracket of that function of p and q with the Hamiltonian, which is another function of p and q, is 0. OK? So in principle, I have to go back to this equation over here, which is a partial differential equation in 6N-dimensional space and solve with equal to 0. Of course, rather than doing that, we will guess the answer. And the guess is clear from here, because all I need to do is to make rho equilibrium depend on coordinates in phase space through some functional dependence on Hamiltonian, which depends on the coordinates in phase space. OK? Why does this work? Because then when I take the Poisson bracket of, let's say, this function of H with H, what do I have to do? I have to do a sum over alpha d rho with respect to, let's say-- actually, let's write it in this way. I will have the dH by dp alpha from here. I have to multiply by d rho by dq alpha. But rho is a function of only H. So I have to take a derivative of rho with respect to its argument H. And I'll call that rho prime. And then the derivative of the argument, which is H, with respect to q alpha. That would be the first term. The next term would be, with the minus sign, from the H here, dH by dq alpha, the derivative of rho with respect to p alpha. But rho only depends on H, so I will get the derivative of rho with respect to its one argument. The derivative of that argument with respect to p alpha. OK? So you can see that up to the order of the terms that are multiplying here, this is 0. OK? So any function I choose of H, in principle, satisfies this. And this is what we will use consistently all the time in statistical mechanics, in depending on the ensemble that we have. Like you probably know that when we are in the micro canonical ensemble, we look at-- in the micro canonical ensemble, we'd say what the energy is. And then we say that the density is a delta function-- essentially zero. Except places, it's the surface that corresponds to having the right energy. So you sort of construct in phase space the surface that has the right energy. So it's a function of these four. So this is the micro canonical. When you are in the canonical, I use a rho this is proportional to e to the minus beta H and other functional features. So it's, again, the same idea. OK? That's almost but not entirely true, because sometimes there are also other conserved quantities. Let's say that, for example, we have a collection of particles in space in a cavity that has the shape of a sphere. Because of the symmetry of the problem, angular momentum is going to be a conserved quantity. Angular momentum you can also write as some complicated function of p and q. For example, p cross q summed over all of the particles. 
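In equations, the guess works because of the chain rule,

$$\{H, \rho(H)\} = \sum_\alpha \left[ \frac{\partial H}{\partial q_\alpha}\, \rho'(H)\, \frac{\partial H}{\partial p_\alpha} - \frac{\partial H}{\partial p_\alpha}\, \rho'(H)\, \frac{\partial H}{\partial q_\alpha} \right] = 0,$$

and the two standard ensembles mentioned here correspond to the choices

$$\rho_{\text{microcanonical}}(\mathbf{p},\mathbf{q}) \propto \delta\big(H(\mathbf{p},\mathbf{q}) - E\big), \qquad \rho_{\text{canonical}}(\mathbf{p},\mathbf{q}) \propto e^{-\beta H(\mathbf{p},\mathbf{q})}.$$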
But it could be some other conserved quantity. So what does this conservation law mean? It means that if you evaluate L for some time, it is going to be the same L for the coordinates and momenta at some other time. Or in other words, dL by dt, which you obtained by summing over all coordinates dL by dp alpha, p alpha dot, plus dL by dq alpha q alpha dot. Essentially, taking time derivatives of all of the arguments. And I did not put any explicit time dependence here. And this is again sum over alpha dL by dp alpha. p alpha dot is minus dH by dq alpha. And dL by dq alpha, q alpha dot is dH by dp alpha. So you are seeing that this is the same thing as the Poisson bracket of L and H. So essentially, conserved quantities which are essentially functions of coordinates and momenta that you calculate that don't change as a function of time are also quantities that have zero Poisson brackets with the Hamiltonian. So if I have a conserved quantity, then I have a more general solution. To my d rho by dt equals to 0 requirement. I could make a rho equilibrium which is a function of H of p and q, as well as, say, L of p and q. And when you go through the Poisson bracket process, you will either be taking derivatives with respect to the first argument here. So you would get rho prime with respect to the first argument. And then you would get the Poisson bracket of H and H. Or you would be getting derivatives with respect to the second argument. And then you would be getting the Poisson bracket of L and H. And both of them are 0. By definition, L is a conserved quantity. So any solution that's a function of Hamiltonian, the energy of the system is conserved as a function of time, as well as any other conserved quantities, such as angular momentum, et cetera, is certainly a valid thing. So indeed, when I drew here, in the micro canonical ensemble, a surface that corresponds to a constant energy, well, if I am in a spherical cavity, only part of that surface that corresponds to the right angular momentum is going to be accessible. So essentially, what I know is that if I have conserved quantities, my trajectories will explore the subspace that is consistent with those conservation laws. And this statement here is that ultimately, those spaces that correspond to the appropriate conservation law are equally populated. Rho is constant around. So in some sense, we started with the definition of rho by putting these points around and calculating probability that way, which was my objective definition of probability. And through this Liouville theorem, we have arrived at something that is more consistent with the subjective assignment of probability. That is, the only thing that I know, forgetting about the dynamics, is that there are some conserved quantities, such as H, angular momentum, et cetera. And I say that any point in phase space that does not violate those constants in this, say, micro canonical ensemble would be equally populated. There was a question somewhere. Yes? AUDIENCE: So I almost feel like the statement that the rho has to not change in time is too strong, because if you go over to the equation that says the rate of change is observable is equal to the integral that was with a Poisson bracket of rho and H, then it means that for any observable, it's constant in time, right? PROFESSOR: Yes. AUDIENCE: So rho of q means any observable we can think of, its function of p and q is constant? PROFESSOR: Yep. Yep. 
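A small symbolic check of this statement, that a conserved quantity has vanishing Poisson bracket with the Hamiltonian: below, the z component of angular momentum for a single particle in a two-dimensional central potential. The generic potential V(r) and the two-dimensional setup are illustrative choices, not taken from the lecture.

import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y', real=True)
m = sp.symbols('m', positive=True)
V = sp.Function('V')

def poisson(A, B, coords, momenta):
    return sum(sp.diff(A, qi) * sp.diff(B, pi) - sp.diff(A, pi) * sp.diff(B, qi)
               for qi, pi in zip(coords, momenta))

r = sp.sqrt(x**2 + y**2)
H = (px**2 + py**2) / (2 * m) + V(r)   # central potential: rotationally symmetric
Lz = x * py - y * px                   # angular momentum about the z axis

print(sp.simplify(poisson(Lz, H, (x, y), (px, py))))   # 0: L_z is conserved, {L, H} = 0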
Because-- and that's the best thing that we can think of in terms of-- because if there is some observable that is time-dependent-- let's say 99 observables are time-independent, but one is time-dependent, and you can measure that, would you say your system is in equilibrium? Probably not. OK? AUDIENCE: It seemed like the same method that you did to show that the density is a function of [INAUDIBLE], or [INAUDIBLE] that it's the function of any observable that's a function of q. Right? PROFESSOR: Observables? No. I mean, here, if this answer is 0, it states something about this quantity averaged. So if this quantity does not change as a function of time, it is not a statement that H and 0 is 0. A statement that H and O is 0 is different from its ensemble average being 0. What you can show-- and I think I have a problem set for that-- is that if this statement is correct for every O that the average is 0, then your rho has to satisfy this theorem equals to-- Poisson bracket of rho and H is equal to 0. OK. So now the big question is the following. We arrived that the way of thinking about equilibrium in a system of particles and things that are this many-to-one mapping, et cetera, in terms of the densities-- we arrived that the definition of what the density is going to be in equilibrium. But the thermodynamic statement is much, much more severe. The statement, again, is that if I have a box and I open the door of the box, the gas expands to fill the empty space or the other part of the box. And it will do so all the time. Yet the equations of motion that we have over here are time reversal invariant. And we did not manage to remove that. We can show that this Liouville equation, et cetera, is also time reversal invariant. So for every case, if you succeed to show that there is a density that is in half of the box and it expands to fill the entire box, there will be a density that presumably goes the other way. Because that will be also a solution of this equation. So when we sort of go back from the statement of what is the equilibrium solution and ask, do I know that I will eventually reach this equilibrium solution as a function of time, we have not shown that. And we will attempt to do so next time. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 1_Thermodynamics_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: This is 8.333 Statistical Mechanics. And I'll start by telling you a little bit about the syllabus before going through the structure of the course. So what I have written here is a, kind of, rough definition of statistical mechanics from my perspective. And the syllabus is a guide to how we are going to approach this object. So let's take a look at the syllabus over there. So the first thing that you're going to focus on is, what is it that you're trying to describe? And these are the equilibrium properties that are described best through what I've been doing section one, which has to do with thermodynamics. You'll start today with thermodynamics. Basically, it's a phenomenological approach, so you essentially look at something, as kind of a black box, without knowing what the ingredients are, and try to give some kind of description of how it's function and properties change. And these can be captured, for the case of thermal properties of matter through the laws of thermodynamics, which we will set out in this first section, which will roughly take us the first four lectures of the course. Then I said that statistical mechanics is a probabilistic approach. So we need to establish what the language of probability is. And that can be the topic for the second two and half lectures of the course. It is something that is less physics-y, but since the topic itself has to deal with probabilities, it is very important, from my perspective, to set out the language and the properties of systems that are described probabilistically. Separately, we'll devote a couple of lectures to do so. And in particular, it is very important that the laws of probability, kind of, simplify when you're dealing with a large number of variables, as captured, for example, by what we call the central limit theorem. So you can see that the third element of this course, which is the law of large numbers, is inherent also to what simplification you will see in the section on probability. And feeds back very much into how statistical mechanics is developed. But then we said large number of degrees of freedom. So what are these degrees of freedom? Well, this is now taking a different perspective. For us in thermodynamics, when you look at the system as a black box, and try to develop laws based on observations, we say that well, from the perspective of physics, we know that this box contains atoms and molecules. And these atoms and molecules are following very specific laws, either from Newtonian mechanics or quantum mechanics. And so if we know everything about how atoms and molecules behave, then we should be able to derive how large collections of them behave. And get the laws of thermodynamics as a consequence of these microscopic degrees of freedom and their dynamics. And so that's what we will discuss in the third part of the course that is devoted to kinetic theory. We will see that even at that stage, it is beneficial to, rather than follow individual particles in a system, to adopt a probabilistic approach, and think about densities, and how those densities evolve according to Liouville's Theorem. 
And what we will try to also establish is a very distinct difference that exists between thermodynamics, and where things are irreversible and going one direction, and Newtonian, or quantum mechanics, where things are reversible in time. And we'll see that really it's a matter of adapting the right perspective in order to see that these two ways of looking at the same system are not in contradiction. So having established these elements, we will then finally be in the place where we can discuss statistical mechanics in terms of some postulates about how probabilities behave for systems that are in equilibrium. And how based on those postulates, we can then derive all the laws of thermodynamics and all the properties of thermodynamics systems. That they're ordained while observations and phenomenological theories before. Now initially, in section four, we will do that in the context of classical systems-- description of particles following classical laws of motion. And, again, as a first simplification, we will typically deal with non-interacting systems, such as ideal gas. And make sure that we understand the properties of this important fundamental system from all possible perspectives. Then in section five, we will go on to more realistic systems where there are interactions among these particles. And there are two ways to then deal with interactions. You can either go by the way of perturbation theory. We can start with ideal system and add a little bit of interaction, and see how that changes things. And we develop some elements of graphical perturbation theories in this context. Or, you can take another perspective, and say that because of the presence of interactions, the system really adopts a totally different type of behavior. And there's a perspective known as mean field theory that allows you to do that. Then see how the same system can be present in different phases of matter, such as liquids and gas, and how this mean field type of prescription allows you to discuss the transitions between the different types of behavior. Eventually, you will go on, towards the last quarter of the course, rather than the classical description of matter to a quantum description of the microscopic degrees of freedom. And we will see how the differences and similarities between quantum statistical mechanics, classical statistical mechanics, emerge. And just, historically, of course, radius macroscopic properties of the matter, the black body laws, or heat capacities, have been very important in showing the limitations of classical description of matter, and the need to have something else, such as the quantum description. We will not spend too much time, more than three lectures, on the sort of principles of quantum statistical mechanics. The place where quantum statistical mechanics shows its power is in dealing with identical particles, which classically, really are kind of not a very well-defined concept, but quantum-mechanically, they are very precisely defined, what identical particles mean. And there are two classes-- fermions and bosons-- and how even if there's no interaction between them, quantum statistics leads to unusual behavior for quantum systems of identical particles, very distinct for fermions and for bosons. So that's a rough syllables of how the course will be arranged over the next 25, 26 lectures. Any questions about what we're going to cover? OK, then let's go here. So I will be teaching the course. My research is in condensed matter theory and statistical physics. 
So this is a subject that I like very much. And I hope to impart some of that love of statistical physics to you. And why it is an interesting topic. Lectures and recitations will be conducted in this room, Monday, Wednesday, Friday. And you ask, well, what does it mean that both lectures and recitations are here? Well, for that you will have to consult the timetable. And that's probably the most important part of this web page. And it tells you, for example, that the first five events for the course are all going to be lectures Monday, Wednesday, Friday of next week. And the first recitation will come up on Monday, September the 16th. And the reason for that is that the first problem set will be due on the 18th. And I will arrange for you to have six recitations on the six events that are before the due dates of those problem sets. Also indicated here is the due dates, naturally, of the problem sets. And although I had indicated that this will be handed out tomorrow, the first problem set is already available on the web, so you can start going to take a look at that. And eventually, also on the web page, will be posted the solutions. And the first one will be posted here. Of course, it's not available yet. Surprisingly. Once the due date has passed on the date that is indicated, the solutions will be posted. Also indicated here is that there will be various tests. The first test will fall on October 2. And another time and recitations will take place are prior to the tests. So there is actually three tests, and there will be three citations that will take place before that. And, ultimately, at the end, there will be a final exam. It's date I don't know yet. So I just randomly put it on the Monday of the week where the final exams will be held. And once the actual date is announced by the registrar, I will make sure that I put the correct date and place in this place. OK, so that's the arrangement of the various lectures and recitations. In addition to me, the teaching staff consists of Max [? Imachaov ?] and Anton Goloborodko. Anton is here, sitting at that extreme corner. Max, I believe, is now in Paris. OK, both of them work on biological systems that use a lot of statistical physics content to them. So maybe they will tell you some interesting problems related to that in the recitations. At this point in time, they have both set their office hours to be Thursday, 3:00 to 5:00, in their lab, which is close to where the medical facilities are. If you find that inconvenient, you could potentially change that, or you could get in touch with either the TAs, or myself, when you want to have specific time to meet us. Otherwise, I have indicated my own availability to be the half hours typically after lectures on Monday, Wednesdays, and Fridays. One other set of important things to note is how the course is organized. So I already mentioned to you what the syllabus of the course is. I indicated where and when the lectures and recitations are going to take place. This is the web page that I have been surfing through. And all of the material will be posted through the web page, so that's where you have to go for problem sets, solutions, et cetera. Also, grades. And, in particular, I have my own system of posting the grades, for which I need a pseudonym from each one of you. So if you could all go through this checking online, indicate your name, your email address, and choose a pseudonym, which I emphasize has to be different from your real name. 
And if it is your real name, I have to randomly come up with something like "forgot to put pseudonym" or something. I cannot have your real name followed by the grades. OK? I'll discuss anonymous comments, et cetera, later on. As you will see, I will hand out, through the web page, extensive lecture notes covering all of the material that I talk about. So in principle, you don't need any textbooks. You can refer to the notes and what you write in the lectures. But certainly, I-- some people like to have a book sitting on their bookshelf. So I have indicated a set of books that you can put on your bookshelf. And hopefully consult for various topics at different stages. And I will, through the problem sets, indicate what are good useful chapters or parts of these books to take a look at. Now how is the grade for the course constructed? An important part of it is through this six problem sets that we mentioned. So each one of them will count for 5%, for a total of 30% going towards the contribution of the problem sets. I have no problem with you forming study groups, as long as each person, at the end, writes their own solution. And I know that if you look at around sufficiently, you can find solutions from previous years, et cetera, but you will really be cheating yourself. And I really bring your attention to the code of honor that is part of the MIT integrity handbook. We have indicated, through the schedule page, the timeline for this six problem sets are due. They are typically due at 5:00 PM on the date that is indicated on that page, and also on the problem set. And the physics department will set up a Dropbox, so you can put the problem set there, or you can bring it to the lecture on the date that it is due. That's also acceptable. There is a grey area of about a day or so sometime, between when the problem set is due and when the solutions are posted. If problem sets are handed in during that gray area, they will count towards final 50%, rather than full towards the grade. Unless you sort of write to me a good excuse that I can give you an extension. Now every now and then, people encounter difficulties, some particular week you are overwhelmed, or whatever, and you can't do the particular problem set, and they ask me for an excuse of some form. And rather than doing that, I have the following metric. That is, each one of these problem sets you will find that there's a set of problems that are indicated as optional. You can do those problems. And they will be graded like all the other problems. And in case, at some later time, you didn't hand in some problem set, or you miss half of the problem set, et cetera, what you did on these optional problems can be used to make up your grade, pushing it, eventually, up to the 30% mark. If you don't do any of the optional problems, you just do the required problem, you will correctly you will reach the 30% mark. If you do every single problem, including optional ones, you will not get more than 30%, so the 30% is upper-bound. So there's that. Then you have the three tests that will be taking place during the lecture time, 2:30 to 4:00, here. Each one of them will count 15%, so that's another 45%. And the remaining 25% will be the final exam that will be scheduled in the finals week. So basically, that's the way that the grades are made up. And the usual definition of what grades mean-- typically, we have been quite generous. I have to also indicate that things will not be graded on a curve. So that's a MIT policy. 
And there are some links here to places that you can go to if you encounter difficulties during the semester. So any questions about the organization of the course? OK, so let's see what else we have. OK, course outline and schedule, we already discussed. They're likely to be something that are unexpected, or some things that have to be changed. Every now and then there is going to be a hurricane almost with probability, close to one. We will have a hurricane sometime during the next month or so. We may have to postpone a particular lectures accordingly. And then the information about that will be posted here. Currently, the only announcement is what I had indicated to you before. Please check in online indicating that you are taking this course, and what your pseudonym is going to be. OK? I give you also the option, I would certainly welcome any questions that you may have for me here. Please feel free to interrupt. But sometimes people, later on, encounter questions. And for whatever reason, it may be question related to the material of the course. It may be related to when various things are due, or it may be there is some wrong notation in the problem set or whatever, you can certainly anonymously send this information to me. And I will try to respond. And anonymous responses will be posted and displayed web page here. Currently there is none, of course. And, finally, something that-- OK, so there's a web page where the problems will be posted. And I want to emphasize that the web page where the solutions are posted, you may see that you cannot get to it. And the reason would be that you don't have an MIT certificate. So MIT certificates are necessary to reach the solution page. And also they are necessary to reach the page that is devoted to tests. And actually there is something about the way that I do these three in-class tests that is polarizing. And some people very much dislike it. But that's the way it is, so let me tell you how it's going to be conducted. So you will have this one and a half hour [INAUDIBLE] test. And I can tell you, that the problems from the test will be out of this collection that I already posted here. So there is a collection of problems that is posted on this page. And furthermore, the solutions are posted. So there's a version of this that is with solution. So the problems will be taken from this, as well as the problem sets that you have already encountered-- [INAUDIBLE] solution is posted. So if you are familiar with this material, it should be no problem. And that's the way the first three tests this will go. The final will be, essentially, a collection of new problems that are variants of things that you've seen, but will not be identical to those. OK, so where is my cursor? And finally, as I indicated, the grades will be posted according to your pseudonym. So as time goes on, this table will be completed. And the only other thing to note is that there's actually going to be lecture notes for the various materials, starting from the first lecture, that will be devoted to thermodynamics. Any questions? OK. So let me copy that first sentence, and we will go on and talk about thermodynamics. The phenomenological description of equilibrium properties of microscopic systems. And get rid of this. One thing that I should have emphasized when I was doing the syllabus is that I expect that most of you have seen thermodynamics, have done in a certain amount of statistical mechanics, et cetera. 
So the idea here is really to bring the diversity of backgrounds that you have, for our graduate students, and also I know that our students from other departments, more or less in line with each other. So a lot of these things I expect to be, kind of, review materials, or things that you have seen. That's one reason that we kind of go through them rapidly. And hopefully, however, there will be some kind of logical systematic way of thinking about the entirety of them that would be useful to you. And in particular, thermodynamics, you say, is an old subject, and if you're ultimately going to derive the laws of thermodynamics from some more precise microscopic description, why should we go through this exercise? The reason is that there is really a beautiful example of how you can look at the system as a black box, and gradually, based on observation, build a consistent mathematical framework to describe its properties. And it is useful in various branches of science and physics. And kind of a more 20th century example that I can think of is, Landau's approach to superconductivity and superfluidity, where without knowing the microscopic origin of that, based on kind of phenomenology, you could write down very precise description of the kinds of things that superconductors and superfluids could manifest. So let's sort of put yourselves, put ourselves, in the perspective of how this science of thermodynamics was developed. And this is at the time where Newtonian mechanics had shown its power. It can describe orbits of things going around the sun, and all kinds of other things. But that description, clearly, does not apply to very simple things like how you heat up a pan of water. So there's some elements, including thermal properties, that are missing from that description. And you would like to complete that theory, or develop a theory, that is able to describe also heat and thermal properties. So how do you go about that, given that your perspective is the Newtonian prescription? So first thing to sort of a, kind of, parse among all of these elements, is system. So when describing Newtonian mechanics, you sort of idealize, certainly you realize that Newtonian mechanics does not describe things that we see in everyday world. You kind of think about point particle and how a point particle would move in free space. And so let's try to do a similar thing for our kinds of systems. And the thing that is causing us some difficulty is this issue of heat. And so what you can do is you can potentially isolate your system thermally by, what I would call, adiabatic laws. Basically say that there's these things, such as heat, that goes into the system that causes difficulty for me. So let's imagine in the same that I'm thinking of the point particle, that whatever I have is isolated from the rest of the universe, in some kind of box that does not allow heat transfer. This is to be opposed with walls that we would like to eventually look at, which do allow heat transfer. Let me choose a different color. Let's say green. So ultimately, I want to allow the exchange of whatever this heat is in thermal properties to go and take place with my system. OK? Now, this is basically isolation. The next element is to wait for your system to come to equilibrium. be You realize that when you, for example, start with something that is like this, you change one of the walls to allow heat to go into it. Then the system undergoes some changes. 
Properties that you're measuring are not well-defined over some period where these changes taking place, but if you wait sufficiently, then they relax to some new values. And then you can start making measurements. So this is when properties don't change. And the key here is observation time. This is part of the phenomenology, because it is not precise. I can't tell you how long you have to wait. It depends on the system under consideration. And some systems come to equilibrium easily, some take a long time. So what are the properties that you can measure? Once things have settled down and no longer change with time, you can start to measure various properties. The ones that are very easy for you to identify are things that are associated with mechanical work, or mechanical properties. And, for example, if you have a box that contains a gas, you can immediately see well, what's the volume of the gas? You can calculate what pressure it is exerting on its environment. So this is for a gas. You could have, for example, a wire. If you have a wire, you could calculate, rather than its volume, its length and the force with which you are pulling it. It could be something like a magnet. And you could put some kind of a magnetic field on it, and figure out what the magnetization is. And this list of mechanical properties goes on. But you know that that's not the end of the story. You kind of expect that there are additional things that have to do with thermal aspects that you haven't taken into account. And as of yet, you don't quite know what they are. And you have to gradually build upon those properties. So you have a system. These are kind of the analogs of the coordinates and potentially velocities that you would have for any Newtonian particles. A way of describing your idealized system. And then you want to find rules by which these coordinates are coevolving, or doing things together. And so for that you rely on observations, and construct laws of thermodynamics. All right so this is the general approach. And once you follow this, let's say, what's the first thing that you encounter? You encounter what is encoded to the zeroth law. What's the zeroth law? The zeroth law is the following statement, if two systems-- let's call them A and B-- are separately in equilibrium with C-- with a third system-- then they are in equilibrium with each other. This is sort of this statement that the property of equilibrium has the character of transitivity. So what that means, pictorially, is something like this. Suppose I have my two boxes, A and B. And the third system that we're calling C. And we've established that A and B are separately in thermal equilibrium with C, which means that we have allowed exchange of heat to take place between A and C, between B and C, but currently, we assume nothing-- or we assume that B and C are not connected to each other. The statement of the law is that if I were to replace this red with green, so that I have also exchange that is going on between A and B, then nothing happens. A and B are already in equilibrium, and the fact that you open the possibility of exchange of heat between them does not change things. And again, this is this consequence of transitivity. And one of its kind of important implications ultimately is that we are allowed now based on this to add one more coordinate to this description that you have. That coordinate is the analog of temperature. So this transitivity really implies the existence of some kind of empirical temperature. 
And you may say well, this is such an obvious thing, there should be transitivity. Well, I want to usually give people examples that transitivity is not a universal property, by as follows. Suppose that within this room, there is A who wants to go on a date with C, and B who wants to go on a date with C. I'm pretty sure that it's not going to be the property that A wants to go through the date with B. It's 20%. All right. So let's see how we can ensure this. So we said that if some system has reached equilibrium, it has some set of coordinates. Let's call them A1, A2, et cetera. There's some number of them. We don't know. Similarly, here, we have C1, C2, et cetera. And for B, we have B1, B2. Now what does it mean that I have equilibrium of A and B? The implication is that if I have a system by itself, I have a bunch of possible coordinates. I can be anywhere in this coordinate space. And if I separately have system C, I have another bunch of coordinates. And I can be anywhere in this coordinate space. But if I force A and B to come together and exchange heat and reaches equilibrium, that is a constraint, which means that this set of coordinates of the two cannot be independently varied. There has to be some functional relationship between them, which we can, for example, cast into this form. So equilibrium, one constraint, one kind of mathematical relation. Similarly, equilibrium of-- this was A and C, B and C-- would give us some other function BC of coordinates of B and coordinates of C equal to 0. OK? So there is one thing that I can do. I can take this expression and recast it, and write it as let's say, the first coordinate that describes C is some other function, which I will call big F AC of coordinates of A. And all coordinates of C, except the first one that I have removed. Yes? AUDIENCE: When you put that function F AC or all the coordinates of A, and all the coordinates of C being zero, is that [? polynomy ?] always going to be true for the first law, or do you just give an example? PROFESSOR: It will be always true that I have some set of coordinates for 1, some set of coordinates with 2, and equilibrium for the two sets of coordinates is going to be expressible in terms of some function that could be arbitrarily complicated. It may be that I can't even write this, but I have to graph it, or some kind of place in the coordinates space. So what I mean is the following, you can imagine some higher dimensional space that is spanned by As and the Cs. And each point, where the things are in equilibrium, will be some-- you can put a cross in this coordinate space. And you can span the various places where equilibration takes place. And you will have a surface in this space. And that surface potential you can describe mathematically like this. OK? And similarly, I can do the same thing. And pick out coordinate C1 here, and write it as F BC of A1, A2, and C2, C3-- sorry, this is B1, B2. Actually, this brings me, this question, to maybe one point that I should make. That sometimes during this course, I will do things, and maybe I will even say that I do things that are physically rigorous, but maybe not so mathematically. And one example of this is precisely this statement. That is, if you give a mathematician and there is a function like this. And then you pick one, C1, and you write it as a function of all the others, they say, oh, how do know that that's even possible, that this exists? And generally, it isn't. 
A simple example would be A squared plus C squared equals to zero, then C's multiple valued. So the reason this is physically correct is because if we set up this situation, and really these very physical quantities, I know that I dialed my C1 to this number here, and all of the other things adjusted. So this is kind of physically OK, although mathematically, you would have to potentially do a lot of handwaving to arrive at this stage. OK. So now if I have to put all of those things together, the fact that I have equilibrium of A and B, plus equilibrium of B and C, then implies that there is eliminating C1 between these two, some function that depends on the coordinates of A and coordinates of C, starting from the second one, equal to some other function. And these functions could be very different, coordinates of B and coordinates of C starting from the second one. Yes? AUDIENCE: Do you need A and C on the left there? PROFESSOR: A and C, and B and C. Thank you. OK? So this, everything that we have worked out here, really concerns putting the first part of this equation in mathematical form. But this statement-- in mathematical form-- but given the first part of this statement, that is, the second part of the statement, that is that I know that if I remove that red and make it green, so that heat can exchange, for the same values of A and B, A and B are in equilibrium, so I know that there is a functional form. So this is equilibrium of A and B, implies that there is this function that we were looking at before, that relates coordinates of A and B that constrains the equilibrium that should exist between A and B. OK. So the first part of the statement of the zeroth law, and the second part of the statement of the zeroth law can be written mathematically in these two forms. And the nice part about the first part is that it says that the equilibrium constraint that I have between A's and B's can mathematically be kind of spread out into some function on the left that only depends on the coordinates of A, and some function on the right that only depends on coordinates of B. So that's important, because it says that ultimately the equilibration between two objects can be cast mathematically as having some kind of a function-- and we don't know anything about the form of that function-- that only depends on the coordinates of A. And equilibration implies that there exists some other function that only depends on the coordinates of the other one. And in equilibrium, those two functional forms have to be the same. Now, there's two ways of getting to this statement from the two things that I have within up there. One of them is to choose some particular reference system for C. Let's say your C is seawater at some particular set of conditions. And so then, these are really constants that are appropriate to see water. And then you have chosen a function that depends on variables of A, of course of some function of B. Or you can say, well, I can replace this seawater by something else. And irrespective of what I choose, A and B, there by our definition in equilibrium with each other, no matter what I did with C. Or some various range of C things that I can do, maintaining this equilibrium between A and B. So in that perspective, the C variables are dummy coordinates. So you should be able to cancel them out from the two sides of the equation, and get some kind of a form like this. 
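A compact restatement of the argument above, written as a sketch (the symbols Θ_A, Θ_B are labels chosen here for illustration, not necessarily the lecturer's):

```latex
% Zeroth law, in the notation sketched above (amsmath assumed)
\begin{gather*}
f_{AC}(A_1,A_2,\dots;\,C_1,C_2,\dots)=0, \qquad
f_{BC}(B_1,B_2,\dots;\,C_1,C_2,\dots)=0 \\
\Rightarrow\quad C_1 = F_{AC}(A_1,A_2,\dots;\,C_2,C_3,\dots)
              = F_{BC}(B_1,B_2,\dots;\,C_2,C_3,\dots), \\
\text{and since the dummy $C$-coordinates drop out of the $A$--$B$ equilibrium condition,} \\
\Theta_A(A_1,A_2,\dots) = \Theta_B(B_1,B_2,\dots) \equiv \theta
\quad \text{(empirical temperature; $\theta=\mathrm{const}$ defines an isotherm).}
\end{gather*}
```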
Either way, what that really means is that if I list all of the coordinates of, say, A, and I put it, say, in equilibrium with a bath that is at some particular temperature, the coordinates of A and B are constrained to lie in some particular surface that would correspond to that particular theta. And if I have another system for B, the isotherm could have completely different for that data. But any time A and B are in equilibrium, they would be lying on the isotherm that would correspond to the same fate. Now, again, a mechanical version of this, that is certainly hopefully demystifies any mystification that may remain, is to think a scale. You have something A on this scale. And you have something C on this scale. And the scale is balanced. You replace A with B, and B and C are in balance, then you know that A and B are in balance with each other. And that implies that, indeed, there is a property of the objects that you put on the balance. You can either think of it as mass, or more appropriately, the gravitational force that they experience, that they need. The thing is balanced. The forces are equal. So it's essentially the same thing. Now having established this, then you want to go and figure out what the formula is that relates the property that needs to be balanced, which is maybe the mass or the gravitational force in terms of density, volume, et cetera. So that's what you would like to do here. We would like to be able to relate this temperature function to all the other coordinates of the system, as we're going to. Any questions? Yes? AUDIENCE: So how did you deduce that there is the isotherm? Is it coming from the second conclusion? PROFESSOR: OK. So what we have said is that when two objects are in equilibrium, this function is the same between. So let's say that we pick some kind of a vat-- like, it could be a lake. And we know that the lake is so big that if you put something in equilibrium with that, it's not extracting too much heat or whatever from the lake can change its temperature function. So we put our system in equilibrium with the lake, which is at some particular fixed value of theta. And we don't know what theta is, but it's a constant. So if I were to fiddle around with the system that I put in the lake-- I change its volume, I change its length, I do something-- and it stays in equilibrium with the length, there is some function of the coordinates of that system that is equal to theta. So again, in general, I can make a diagram that has various coordinates of the system, A1, A2, A3. And for every combination that is that this theta, I will put a point here. And in principle, I can vary these coordinates. And this amounts to one constraint in however many dimensional space I have. So if we span some kind of a surface-- so if you're in three dimension, there would be a two dimensional surface. If you're in two dimension, there would be a line that would correspond to this constraint. Presumably, if I change the lake with something else, so that theta changes, I will be prescribing some other curve and some other surface in this. Now these are surfaces in coordinate space of A. In order to be in equilibrium with an entity at the fixed value of theta, they prescribe some particular surface in the entire coordinate space and they're called isotherms. OK? Actually, let's state that a little bit further, because you would like to give a number to temperature, so many degrees Celsius, or Fahrenheit, or whatever. So how do you do that? 
And one way to do that is to use what is called the ideal gas temperature space-- the ideal gas scale. So you need some property at this stage in order to construct a temperature scale. And it turns out that a gas is a system that we keep coming back to again and again. So as I go through the various laws of thermodynamics, I will mention something special that happens to this law for the case of this ideal gas. And actually, right now, define also what I mean by the ideal gas. So we said that a gas, in general, you can define through coordinates P and V. So if I put this gas in a piston, and I submerge this piston, let's say in a lake, so that it is always at whatever temperature this lake is. Then I can change the volume of this, and measure what the pressure is, or change the pressure, and figure out what the volume is. And I find out that there is a surface in this space where this curve that corresponds to being equilibrium with this [? leaves-- ?] this is the measure of the isotherm. Now the ideal gas law is that when I go to the limit there, V goes to infinity, or P goes to zero. So that the whole thing becomes very dilute. No matter what gas you put here-- whether it's argon, oxygen, krypton, whatever-- you find that in this limit, this shape of this curve is special in the sense that if you were to increase the volume by a factor of two, the pressure will go down by a factor of two, such that the product PV is a constant. And again, this is only true in the limit where either V goes to infinity or P goes to 0. And this is the property that all gases have. So you say OK, I will use that. Maybe I will define what the value of this product is to be the temperature. So if I were to replace this bath with a bath that was hotter than this product for the same amount of gas, for the same wire, would be different. And I would get a different constant. You maybe still want a number. So what you say is that the temperature of the system in degrees Kelvin is 273 times 16. The limit of PV, as V goes to infinity of your system, divided by the limit of the same thing at the triple point of water, ice, steam. So what does that mean? So you have this thing by which you want to measure temperature. You put it in contact with the system-- could be a bath of water, could be anything, yes? You go through this exercise and you calculate what this product is. So you have this product. Then what you do is you take your system, and you put it in a case that there are icebergs, and there's water, and there will be some steam that will naturally evaporate. So you calculate the same product of PV in this system that is the triple point of ice water, et cetera. So you've measured one product appropriate to your system, one product appropriate to this reference point that people have set, and then the ratio of those things is going to give you the temperature of the system that you want to measure. This is clearly a very convoluted way of doing things, but it's a kind of rigorous definition of what the ideal gas temperature scale is. And it depends on this particular property of the diluted gases that the production of PV is a constant. And again, this number is set by definition to be the temperature of the triple point of the ice water gas. OK. Other questions? All right. So this is now time to go through the first law. Now I'll write again this statement, and then we'll start to discuss what it really means. 
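Before moving on to the first law, the ideal gas scale just described can be summarized as a sketch ("273 times 16" in the spoken text is the triple-point value 273.16 K; the limit can equivalently be taken as V goes to infinity):

```latex
% Ideal gas temperature scale, as defined above
\[
T(\text{system}) \;=\; 273.16\ \mathrm{K}\;\times\;
\frac{\displaystyle\lim_{P\to 0}\,(PV)_{\text{system}}}
     {\displaystyle\lim_{P\to 0}\,(PV)_{\text{triple point of ice--water--steam}}}
\]
```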
So if the state of an otherwise adiabatically isolated system is changed by work, the amount of work is only function of initial and final points. OK. So let's parse what that means and think about some particular example. So let's imagine that we have isolated some system. So that it's not completely boring, let's imagine that maybe it's a gas. So it has P and V as some set of coordinates. Let's say that we put some kind of a spring or wire in it, so we can pull on it. And we can ask how much we pulled, and what is the length of this system. Maybe we even put a magnet in it, so we have magnetization that we can measure if we were to pass some kind of a current and exert some kind of magnetic field. So there's a whole bunch of coordinates that I'm completely familiar with from doing my mechanics courses and electromagnetic courses. So I know various ways to do mechanical work on this system. So the system is otherwise isolated, because I don't really know how to handle this concept of heat yet, but I certainly have no problems with mechanical work. And so what I do is, I imagine that it is initially isolated. What I do is therefore, I have some-- in this case, six dimensional coordinate space. I'm will only draw two out of the six. And I start some initial point, let's call it I. And then I start doing various types of things to this. I could, for example, first pull on this, so that the length changes, changes the current, put pressure so that the volume changes, et cetera. At the end of this story, I'm at some other point that I will call F. Now I could have performed this change from the initial to the final state through one set of changes taking place one after the other. But maybe I will change that, and I will perform things in a different way. So there's path number 1. Then there's path number 2. And there's huge number of different paths that I can, in principle, take in order to change between the initial and final points by playing around with the mechanical coordinates that describe the system. I always ensure that, initially, I was in equilibrium, so I could know exactly what the value of these parameters are. And finally, I wait until I have reached equilibrium. So again, I know what this things are. And I know what mechanical work is. And I can calculate along each one of these paths, the net amount of work. The work is delivered in different ways-- through the magnetic field, through the pulling of the spring, to the hydrostatic pressure, et cetera-- but ultimately, when I add up all of the increments of the work, I will find that all of them will give you the same delta W, irrespective of the path. OK. Now this reminds me of the following, that if I'm rolling a ball on top of a hill. And there is no friction. The amount of work that I do in order to take it from here to here, irrespective of the path that I take on the hill, it really is a function of the difference in potential energy between the final and initial points. So it's the same type of thing. Rather than moving these coordinates on a hill, I am moving them in this set of parameters that thermodynamically describe the system, but I see the same thing that I would see in the absence of friction for rolling a ball of the hill. And immediately, I would deduce here that there is this potential energy, and the amount of work that I have to do to roll this off the hill is the difference between the potential energy between the two points. 
So here, similarly, I would say that this delta W-- I define it to be the difference between some function that is called the internal energy-- that depends on the final set of coordinates. And there's a whole bunch of them. Think of them in some pictorial context. Minus what you have initially. So in the same sense that the zeroth law allowed me to construct some function of coordinates that was relevant to thermal equilibrium, the first law allows me to define another function of coordinates, which is this internal energy. Of course, the internal energy is the remnant of the energy that we know in mechanical systems to be conserved quantity. And this is the statement of fact. So far, nothing surprising. Now the real content of the first law is when we violate this condition. So essentially, what we do is we replace the adiabatic walls that were surrounding our very same system with walls that allow the exchange of heat. And I do exactly the same set of changes. So maybe in one case I stretch this, and then I change the volume, et cetera. I do exactly the same set of changes that I did in this case, I try to repeat them in the presence of diathermic walls, go from the same initial state to the same final state. So the initial state is the same. The final state, I postulate to be the same. And what is observed is that, in this case, the diathermic walls-- which allow heat exchange-- that the amount of work that you have to do is not equal to the change in internal energy. OK? Now you really believe that energy is a good quantity. And so at this stage, you make a postulate, if you like, it's the part of the corollary of the first law that allows you to define exactly what heat is. So you're gradually defining all of the things that were missing in the original formulation. You define this heat input to the system to be the difference in energy that you expected, minus the amount of work that you did in the presence of these walls. OK? Yes? AUDIENCE: If we need the first law to the point of heat-- PROFESSOR: Yes. AUDIENCE: --how did we define adiabatic used in-- PROFESSOR: Right. AUDIENCE: --first law and the zeroth law? PROFESSOR: As I said, it's an idealization. So it's, in the same sense that you can say well, how would you define Newtonian mechanics that force is proportional to mass times acceleration, what is the experimental evidence for that? You can only do that really when you go to vacuum. So what you can do is you can gradually immerse your particle into more and more dilute systems, and see what is the limiting behavior in that sense. You can try to do something similar here. You can imagine that you put your system in some kind of a glass, double glass, container, and you gradually pump out all of the gas this is between the two of them. So that, ultimately, you arrive at vacuum. You also have to mirror things, so there is no radiation exchange, et cetera. So gradually, you can try to experimentally reach that idealization and see what happens. But essentially, it is certainly correct that any statement that I make about adiabatic walls is an isolation. But it's an isolation in the same sense that when you think about the point particle in Newton's laws. OK? Let's go a little bit further with this. So we have that in differential form, if I go from one point in coordinate space that describes the system in equilibrium, where energy's defined to a nearby point, I can calculate what the value of the change in this energy function is. 
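A hedged summary of the two definitions just introduced, in the informal notation used above (E for the internal energy, X for the set of coordinates):

```latex
% First law, as sketched above
\begin{gather*}
\Delta W\big|_{\text{adiabatic}} \;=\; E(X_F) - E(X_I)
\quad\text{(path-independent; defines the internal energy } E\text{)} \\
Q \;\equiv\; \Delta E - \Delta W
\quad\text{(heat input, once diathermal walls allow } \Delta W \neq \Delta E\text{).}
\end{gather*}
```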
And I can say that there is a quantity, dE, that depends on a whole bunch of coordinates that define the system. And what the first law says is that if you try to operationally make this change from one point to another point, you have to supply work through the system, or you have to supply heat through the system. And we write them in this form. And what this D and d bar-- and we will encounter this many times in future also and define them better-- is that E is a function of state. It depends on where you are in this parameter space. So in the same sense that maybe you have a function of x and y, could be like x squared plus 3x y, I can define what dE is in terms of the x and y. So that's where I have this quantity. But dW and dQ depend on precisely how this system was made to go from here to here. And you can sort of go between how much contribution to dE comes from here or from there, by certainly say, changing the properties of the walls from being adiabatic to being diathermal, et cetera. So these quantities really, as opposed to this quantity that depends on stage, these quantities depend on path, the conditions by which you implement a particular change. Now there is a desire, and it's very important for us to actually construct what this function is. You want to, sort of, know what the energy function for, let's say, a mechanical system is. In a potential, you want to know what the form of E is and then you can do a lot of things with it. So how do we construct that for our thermodynamics system? Again, we can idealize things and go to processes that are so-called, quasi-static. Which effectively means slow, or slow enough, to maintain equilibrium. And the general idea is, suppose I wanted to calculate what the potential energy of a spring is, or what the potential energy of a particle rolling on a hill is, well, one way that I could do that is I could, let's say, pull on this sufficiently slowly, so that as I'm pulling this by a certain amount, the spring does not start to vibrate. And so the force that I'm exerting externally, is really the force that is internal to the spring. If I really push it to rapidly, the spring will start to oscillate. There's really no relation now between the external force that, let's say, is in uniform value and the internal force that is oscillating. I don't want to do that because when that happens I don't know where I am in this coordinate space. I want to do things sufficiently slowly so that I go from here to here here-- every time I am on this plane that defines my properties in equilibrium. If I do that, then I can calculate the amount of work, and for the case of the spring, it would be the force times extension. And so we generalize that, and we say that if I have multiple ways of doing work on the system, there is the analog of the change in length of the spring-- that is the displacement of the spring, if you like-- multiplied by some generalized force that is conjugate to that. And indeed, mechanically, you would define the conjugate variables by differentiation. So if you, for example, know the potential energy of the spring as a function of its length, you take a derivative and you know what the force is. So this is essentially writing that relationship for the case of the spring, generalize to multiple coordinates. And we can make a table of what these displacements are and the corresponding coordinates for the types of systems that we are likely to encounter. So what is x? What is j? 
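Before the list of displacement/force pairs that answers this, here is a small numerical sketch of the distinction between d and d-bar drawn above. The two-dimensional "coordinates" (x, y) and the particular differential forms are purely illustrative, not from the lecture: an exact form such as y dx + x dy = d(xy) integrates to the same value along any path, while a form such as y dx alone does not.

```python
import numpy as np

# Illustrative only: mimic the "function of state" vs "path-dependent" distinction.
def line_integral(form, path, n=100_000):
    """Integrate form(x, y) = (coeff_dx, coeff_dy) along path(t) for t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    x, y = path(t)
    dx, dy = np.gradient(x), np.gradient(y)   # increments along the discretized path
    fx, fy = form(x, y)
    return float(np.sum(fx * dx + fy * dy))

exact_form   = lambda x, y: (y, x)                 # y dx + x dy = d(xy), exact
inexact_form = lambda x, y: (y, np.zeros_like(x))  # y dx only, path-dependent

path_a = lambda t: (t, t**2)    # from (0, 0) to (1, 1) along y = x^2
path_b = lambda t: (t**2, t)    # from (0, 0) to (1, 1) along x = y^2

for label, form in [("exact  d(xy):", exact_form), ("inexact y dx:", inexact_form)]:
    print(label,
          round(line_integral(form, path_a), 3),
          round(line_integral(form, path_b), 3))
# exact d(xy): ~1.0 on both paths; inexact y dx: ~0.333 vs ~0.667
```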
And the first thing that we mentioned was the case of the wire. And for the wire we can-- the displacement of the length is important, and the conjugate variable is the force with which you are pulling on this. In the first problem set, you will deal-- in the first test preparation, you will deal with the case of film. And for the film, what you do is you change the area, or if you have a balloon, you can blow on the balloon, and the surface area of the balloon changes. And there's a corresponding conjugate variable to that which is called the surface tension. This is essentially the same thing, going from one dimension to two dimension. And if I were to go one dimension higher, in the case of the gas, I have the volume of the box. And I went through this just for the notation that the hydrostatic pressure of the work is defined to be minus p w-- minus PBV. Sorry. pW is minus PVV, as opposed to, say, the force of the spring, that is FDL. It's just, again, matter of definition and how we define the sign of the pressure. And for the case of the magnet that we also briefly mentioned, we have here MDB. The one thing that you can see is the trend that all of these quantities, if you make the size of your system twice as big, these quantities will get proportionately bigger. So they're called extensive. Whereas the force and all the other quantities are intensive. So what we've established is that if we focus only on these types of transformations, that don't include heat, we can relate the change in energy directly to dW. And the dW, we can write as the sum over i Ji dxi. Now in order to really get dE, in general, I have to add to this dQ. And so really the question that we have is, is there some kind of analog of dW that we can write down for dQ? And if you think about it, if you have two springs that are in equilibrium, so that the thing does not go one way or the other, the force exerted from one is the same as the force exerted on the other. So in equilibrium, mechanical equilibrium forces, pressures, et cetera, are generally the same. And we've established that when you are in thermal equilibrium, temperatures are the same. So we have a very good guess that we should really have temperature appearing here. And then the question that we'll sort of build on is what is the analog of the quantity that we have to put here. Let me finish by telling you one other story that is related to the ideal gas. I said that for every one of these laws, we can kind of come up with a peculiar feature that is unique to the case of the ideal gas. And there is a property related to its energy that we will shortly explore. But let me first say what the consequence of these kinds of relations for measurable quantities that we can already try to deduce are. So one thing that you can relate to heat, and properties of particular material, is the heat capacity. So what you can do is you can take your material and put some amount of heat into it. And ask, what is the corresponding change in temperature? That would be the heat capacity. Now this bar that we have over here, tells me that this quantity-- which we will denote on C-- will depend on the path through which the heat is added to the system. Because we've established that, depending on how you add heat to the system, the change that you have in the coordinate space is going to be distinct potentially. 
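For reference, the displacement/force pairs listed earlier in this passage, gathered in one place. The magnet entry is garbled in the transcription ("MDB"); the pairing of magnetization with the magnetic field shown here, and its ordering and sign, are an assumption about the intended convention rather than a reading of the blackboard:

```latex
% Conjugate pairs (extensive displacement x, intensive force J), as listed above
\[
\begin{array}{lll}
\text{system} & \text{displacement } x & \text{quasi-static work } \bar{d}W \\
\text{wire}   & \text{length } L       & F\,dL \\
\text{film}   & \text{area } A         & \sigma\,dA \\
\text{gas}    & \text{volume } V       & -P\,dV \\
\text{magnet} & \text{magnetization } M & B\,dM \ \ \text{(convention assumed)}
\end{array}
\]
```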
Actually, we establish the other way that for a given change in the coordinate system, the amount of the Q depends on path, so they're kind of equivalent statements. So if I'm dealing with a gas, I can add the heat to it, at least among many other possibilities, in two distinct ways. I can do this at constant volume or at constant pressure. So if I think about the coordinate space of the gas, which is PV, and I start from some particular point, I can either go along a path like this, or I can go along the path like this, depending on which one of these quantities I keep fixed. So then in one case what I need is the change in Q at constant v, dT. In the other case, this change in Q, a constant P, dT. Now a consequence of the first law is that I know that dQ's are related to dE minus dW. And dW for the gas is minus PdV. So I can write it in this fashion. Divided by dT, and in one case, done at constant V, and in the other case, done at constant p. The distinction between these two paths immediately becomes clear, because along the paths where the volume is kept constant, there is no mechanical work that is done along the this path, yes? And so this contribution is 0. And you have the result that this is going to be related to the change in energy with temperature at constant V, whereas here you have the change in energy, in temperature at constant P, plus P dV by dT at constant P. So there is some additional manipulations of derivatives that is involved, but rather than looking at that in the general case, I will follow the consequence of that for a particular example, which is ideal gas expansion. So as this is an observation called Joule's experiment. I take a gas that is adiabatically isolated from its environment. I connect it, also maintaining adiabatic isolation to another chamber. And initially all of the gas is here, and this chamber is empty. And then I remove this, and the gas goes and occupies both chambers. Observation, there is some temperature initially in this system that I can measure. Finally, I can measure the temperature of this system. And I find that the two temperatures are the same. In this example, since the whole thing was adiabatically isolated, that the Q is 0. There is no mechanical work that done on the system, so delta W is also 0. And since delta is the same, I conclude that delta E is the same for this two cases. Now, in principle, E is a function of pressure and volume. And pressure and volume certainly changed very much by going from that place to another place. OK, so let's follow that. So E we said is a function of pressure and volume. Now since I know that for the ideal gas, the product of pressure and volume is temperature, I can certainly exchange one of these variables for temperature. So I can write, let's say, pressure to be proportional to temperature over volume. And then rewrite this as a function of temperature and volume. Now I know that in this process the volume changed, but the temperature did not change. And therefore, the internal energy that did not change can only be a function of temperature. So this Joule expansion experiment immediately tells me that while internal energy, in principle, is a function of P and V, it is really a function of the product of P and V, because the product of P and V, we established to be proportional to temperature. OK? So now if I go and look at these expressions, I can see that these, dE by dT, and irrespective of V and P is the same thing, because E only depends on temperature. 
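The two path-dependent heat capacities and the conclusion of the Joule experiment, collected as a sketch in the notation used above:

```latex
% Heat capacities along the two paths, and the Joule free-expansion conclusion
\begin{gather*}
C_V = \frac{\bar{d}Q}{dT}\bigg|_V = \frac{\partial E}{\partial T}\bigg|_V,
\qquad
C_P = \frac{\bar{d}Q}{dT}\bigg|_P
    = \frac{\partial E}{\partial T}\bigg|_P
      + P\,\frac{\partial V}{\partial T}\bigg|_P \\
\text{Joule expansion: } \Delta Q = \Delta W = 0
\;\Rightarrow\; \Delta E = 0 \text{ while } T \text{ is unchanged}
\;\Rightarrow\; E_{\text{ideal gas}} = E(T)\ \text{only.}
\end{gather*}
```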
And since V is-- we know that PV is proportional to temperature, a constant P, dV by dT is the same as V over T. And so if I look at the difference between those two expressions, what I find is that this part cancels. This part gives me the value of this product, PV over T, which we said is a constant. And it is certainly depends on the amount of gas that you have. And so you can pick a particular amount of gas. You can experimentally verify this phenomenon, that the difference of heat capacity along these two paths is a constant. That constant is the same for the same amount of gas for different types of argon, krypton, et cetera. And since it is proportional, eventually to the amount of matter, we will ultimately see that it can be set to be the number of particles making up the gas in some constant of proportionality that we will identify later in statistical physics is the Boltzmann parameter, which is 1.43 times 10 to the minus 23 or whatever it is. All of this depends partly on the definition that you made of temperature. What we will do next time is to review all of this, because I've went through them a little bit more rapidly, and try to identify what the conjugate variable is that we have to put for temperature, so that we can write the form of dE in a more symmetric fashion. Thank you. |
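As a closing numerical sketch of the result just derived, C_P minus C_V equals PV/T equals N k_B for an ideal gas. The assumptions below are not in the lecture: a monatomic form E(T) = (3/2) N k_B T (the derivation only needs E to depend on T alone), a hypothetical particle number N, and the accepted value of Boltzmann's constant, about 1.381e-23 J/K (the "1.43" above appears to be a mis-transcription).

```python
# Numerical check of C_P - C_V = P V / T = N k_B for an ideal gas (sketch only).
k_B = 1.380649e-23   # J/K
N   = 1.0e23         # hypothetical amount of gas

def energy(T):
    """Assumed monatomic internal energy; depends on T only, as in Joule's experiment."""
    return 1.5 * N * k_B * T

def C_V(T, dT=1e-3):
    # Constant volume: no mechanical work, so C_V = dE/dT at fixed V.
    return (energy(T + dT) - energy(T - dT)) / (2.0 * dT)

def C_P(T, P, dT=1e-3):
    # Constant pressure: C_P = dE/dT + P * dV/dT, with V = N k_B T / P.
    dE_dT = (energy(T + dT) - energy(T - dT)) / (2.0 * dT)
    dV_dT = N * k_B / P
    return dE_dT + P * dV_dT

T, P = 300.0, 1.0e5
print(C_P(T, P) - C_V(T), N * k_B)   # both ~ 1.381 J/K, i.e. N k_B
```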
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 5_Probability_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: We established that, essentially, what we want to do is to describe the properties of a system that is in equilibrium. And a system in equilibrium is characterized by a certain number of parameters. We discussed displacement and forces that are used for mechanical properties. We described how when systems are in thermal equilibrium, the exchange of heat requires that there is temperature that will be the same between them. So that was where the Zeroth Law came and told us that there is another function of state. Then, we saw that, from the First Law, there was energy, which is another important function of state. And from the Second Law, we arrived at entropy. And then by manipulating these, we generated a whole set of other functions, free energy, enthalpy, Gibbs free energy, the grand potential, and the list goes on. And when the system is in equilibrium, it has a well-defined values of these quantities. You go from one equilibrium to another equilibrium, and these quantities change. But of course, we saw that the number of degrees of freedom that you have to describe the system is indicated through looking at the changes in energy, which if you were only doing mechanical work, you would write as sum over all possible ways of introducing mechanical work into the system. Then, we saw that it was actually useful to separate out the chemical work. So we could also write this as a sum of an alpha chemical potential number of particles. But there was also ways of changing the energy of the system through addition of heat. And so ultimately, we saw that if there were n ways of doing chemical and mechanical work, and one way of introducing heat into the system, essentially n plus 1 variables are sufficient to determine where you are in this phase space. Once you have n plus 1 of that list, you can input, in principle, determine others as long as you have not chosen things that are really dependent on each other. So you have to choose independent ones, and we had some discussion of how that comes into play. So I said that today we will briefly conclude with the last, or the Third Law. This is the statement about trying to calculate the behavior of entropy as a function of temperature. And in principle, you can imagine as a function of some coordinate of your system-- capital X could indicate pressure, volume, anything. You calculate that at some particular value of temperature, T, T the difference in entropy that you would have between two points parametrized by X1 and X2. And in principle, what you need to do is to find some kind of a path for changing parameters from X1 to X2 and calculate, in a reversible process, how much heat you have to put into the system. Let's say at this fixed temperature, T, divide by T. T is not changing along the process from say X1 to X2. And this would be a difference between the entropy that you would have between these two quantities, between these two points. You could, in principle, then repeat this process at some lower temperature and keep going all the way down to 0 temperature. 
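The quantity being compared at each temperature, written out as a sketch (d-bar Q_rev denotes heat added reversibly along the path, with T held fixed, as described above):

```latex
% Entropy difference probed at a fixed temperature T
\[
\Delta S(T;\,X_1 \to X_2) \;=\; S(T,X_2) - S(T,X_1)
\;=\; \int_{X_1}^{X_2} \frac{\bar{d}Q_{\mathrm{rev}}}{T}
\qquad (T \text{ held fixed along the reversible path}).
\]
```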
What Nernst observed was that as he went through this procedure to lower and lower temperatures, this difference-- Let's call it delta s of T going from X1 to X2 goes to 0. So it looks like, certainly at this temperature, there is a change in entropy going from one to another. There's also a change. This change gets smaller and smaller as if, when you get to 0 temperature, the value of your entropy is independent of X. Whatever X you choose, you'll have the same value of entropy. Now, that led to, after a while, to a more ambitious version statement of the Third Law that I will write down, which is that the entropy of all substances at the zero of thermodynamic temperature is the same and can be set to 0. Same universal constant, set to 0. It's, in principle, through these integration from one point to another point, the only thing that you can calculate is the difference between entropies. And essentially, this suggests that the difference between entropies goes to 0, but let's be more ambitious and say that even if you look at different substances and you go to 0 temperature, all of them have a unique value. And so there's more evidence for being able to do this for different substances via what is called allotropic state. So for example, some materials can exist potentially in two different crystalline states that are called allotropes, for example, sulfur as a function of temperature. If you lower it's temperature very slowly, it stays in some foreign all the way down to 0 temperature. So if you change its temperature rapidly, it stays in one form all the way to 0 temperature in crystalline structure that is called monoclinic. If you cool it very, very slowly, there is a temperature around 40 degrees Celsius at which it makes a transition to a different crystal structure. That is rhombohedral. And the thing that I am plotting here, as a function of temperature, is the heat capacity. And so if you are, let's say, around room temperature, in principle you can say there's two different forms of sulfur. One of them is truly stable, and the other is metastable. That is, in principle, if you rate what sufficiently is of the order of [? centuries ?], you can get the transition from this form to the stable form. But for our purposes, at room temperature, you would say that at the scale of times that I'm observing things, there are these 2 possible states that are both equilibrium states of the same substance. Now using these two equilibrium states, I can start to test this Nernst theorem generalized to different substances. If you, again, regard these two different things as different substances. You could say that if I want to calculate the entropy just slightly above the transition, I can come from two paths. I can either come from path number one. Along path number one, I would say that the entropy at this Tc plus is obtained by integrating degree heat capacity, so integral dT Cx of T divided by T. This combination is none other than dQ. Basically, the combination of heat capacity dT is the amount of heat that you have to put the substance to change its temperature. And you do this all the way from 0 to Tc plus. Let's say we go along this path that corresponds to this monoclinic way. And I'm using this Cm that corresponds to this as opposed to 0 that corresponds to this. Another thing that I can do-- and I made a mistake because what I really need to do is to, in principle, add to this some entropy that I would assign to this green state at 0 because this is the difference. 
So this is the entropy that I would assign to the monoclinic state at T close to 0. Going along the orange path, I would say that S evaluated at Tc plus is obtained by integrating from 0. Let's say to Tc minus dT, the heat capacity of this rhombic phase. But when I get to just below the transition, I want to go to just above the transition. I have to actually be put in certain amount of latent heat. So here I have to add latent heat L, always at the temperatures Tc, to gradually make the substance transition from one to the other. So I have to add here L of Tc. This would be the integration of dQ, but then I would have to add the entropy that I would assign to the orange state at 0 temperature. So this is something that you can do experimentally. You can evaluate at these integrals, and what you'll find is that these two things are the same. So this is yet another justification of this entropy being independent of where you start at 0 temperature. Again at this point, if you like, you can by [INAUDIBLE] state that this is 0 for everything will start with 0. So this is a supposed new law of thermodynamics. Is it useful? What can we deduce from that? So let's look at the consequences. First thing is so what I have established is that the limit as T goes to 0 of S, irrespective of whatever set of parameters I have-- so I pick T as one of my n plus one coordinates, and I put some other bunch of coordinates here. I take the limit of this going to 0. This becomes 0. So that means, almost by construction, that if I take the derivative of S with respect to any of these coordinates-- if I take then the limit as T goes to 0, this would be fixed T. This is 0. Fine. So basically, this is another way of stating that entropy differences go through 0. But it does have a consequence because one thing that you will frequently measure are quantities, such as extensivity. What do I mean by that? Let's pick a displacement. Could be the length of a wire. Could be the volume of a gas. And we can ask if I were to change temperature, how does that quantity change? So these are quantities typically called alpha. Actually, usually you would also divide by x to make them intensive because otherwise x being extensive, the whole quantity would have been extensive. Let's say we do this at fixed corresponding displacement. So something that is very relevant is you take the volume of gas who changes temperature at fixed pressure, and the volume of the gas will shrink or expand according to this extensive. Now, this can be related to this through Maxwell relationship. So let's see what I have to do. I have that dE is something like Jdx plus, according to what I have over there, TdS. I want to be able to write a Maxwell relation that relates a derivative of x. So I want to make x into a first derivative. So I look at E minus Jx. And this Jdx becomes minus xdJ. But I want to take a derivative of x with respect not s, but with respect to T. So I'll do that. This becomes a minus SdT. So now, I immediately see that I will have a Maxwell relation that says dx by dT at constant J is the same thing as dS by dJ at constant T. So this is the same thing by the Maxwell relation as dS by dJ at constant T. All right? This is one of these quantities, therefore, as T goes 0, this goes to 0. And therefore, the expansivity should go to 0. So any quantity that measures expansion, contraction, or some other change as a function of temperature, according to this law, as you go through 0 temperature, should go to 0. 
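The two paths and the Maxwell-relation step described here can be summarized as follows (a sketch; subscripts m and r denote the monoclinic and rhombic forms):

```latex
% Two ways of reaching the entropy just above the transition temperature T_c:
S(T_c^+) = S_m(0) + \int_0^{T_c^+} \frac{C_m(T)}{T}\,dT
         = S_r(0) + \int_0^{T_c^-} \frac{C_r(T)}{T}\,dT + \frac{L(T_c)}{T_c} ,
% and the Maxwell relation behind the vanishing expansivity:
\left.\frac{\partial x}{\partial T}\right|_J
  = \left.\frac{\partial S}{\partial J}\right|_T
  \;\longrightarrow\; 0 \quad\text{as } T \to 0 .
```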
There's one other quantity that also goes to 0, and that's the heat capacity. So if I want to calculate the difference between entropy at some temperature T and some temperature at 0 along some particular path corresponding to some constant x for example, you would say that what I need to do is to integrate from 0 to T the heat that I have to put into the system at constant x. And so if I do that slowly enough, this heat I can write as CxdT. Cx, potentially, is a function of T. Actually, since I'm indicating T as the other point of integration, let me call the variable of integration T prime. So I take a path in which I change temperature. I calculate the heat capacity at constant x. Integrate it. Multiply by dT to convert it to T, and get the result. So all of these results that they have been formulating suggest that the result that you would get as a function of T, for entropy, is something that as T goes to 0, approaches 0. So it should be a perfectly nice, well-defined value at any finite temperature. Now, if you integrate a constant divided by T, divided by dT, then essentially the constant would give you a logarithm. And the logarithm would blow up as we go to 0 temperature. So the only way that this integral does not blow up on you-- so this is finite only if the limit as T goes to 0 of the heat capacities should also go to 0. So any heat capacity should also essentially vanish as you go to lower and lower temperature. This is something that you will see many, many times when you look at different heat capacities in the rest of the course. There is one other aspect of this that I will not really explain, but you can go and look at the notes or elsewhere, which is that another consequence is unattainability of T equals to 0 by any finite set of operations. Essentially, if you want to get to 0 temperature, you'll have to do something that cools you step by step. And the steps become smaller and smaller, and you have to repeat that many times. But that is another consequence. We'll leave that for the time being. I would like to, however, end by discussing some distinctions that are between these different laws. So if you think about whatever could be the microscopic origin, after all, I have emphasized that thermodynamics is a set of rules that you look at substances as black boxes and you try to deduce a certain number of things based on observations, such as what Nernst did over here. But you say, these black boxes, I know what is inside them in principle. It's composed of atoms, molecules, light, quark, whatever the microscope theory is that you want to assign to the components of that box. And I know the dynamics that governs these microscopic degrees of freedom. I should be able to get the laws of thermodynamics starting from the microscopic laws. Eventually, we will do that, and as we do that, we will find the origin of these different laws. Now, you won't be surprised that the First Law is intimately connected to the fact that any microscopic set of rules that you write down embodies the conservation of energy. And all you have to make sure is to understand precisely what heat is as a form of energy. And then if we regard heat as another form of energy, another component, it's really the conservation law that we have. Then, you have the Zeroth Law and the Second Law. The Zeroth Law and Second Law have to do with equilibrium and being able to go in some particular direction. 
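Going back to the heat-capacity argument above, the same point in equation form (a sketch; c denotes a hypothetical nonzero limiting value of the heat capacity):

```latex
S(T, x) - S(0, x) = \int_0^T \frac{C_x(T')}{T'}\,dT' ,
\qquad
C_x(T' \to 0) \to c \neq 0
\;\;\Rightarrow\;\;
\int_0 \frac{c}{T'}\,dT' \sim c \ln T' \;\text{ diverges} ,
```

so a finite entropy at finite temperature forces the heat capacity to vanish as T goes to 0.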
And that always runs a fall of the microscopic laws of motion that are typically things that are time reversible where as the Zeroth Law and Second Law are not. And what we will see later on, through statistical mechanics, is that the origin of these laws is that we are dealing with large numbers of degrees of freedom. And once we adapt the proper perspective to looking at properties of large numbers of degrees of freedom, which will be a start to do the elements of that [? prescription ?] today, the Zeroth Law and Second Law emerge. Now the Third Law, you all know that once we go through this process, eventually for example, we get things for the description of entropy, which is related to some number of states that the system has indicated by g. And if you then want to have S going through 0, you would require that g goes to something that is order of 1-- of 1 if you like-- as T goes to 0. And typically, you would say that systems adopt their ground state, lowest energy state, at 0 temperature. And so this is somewhat a statement about the uniqueness of the state of all possible systems at low temperature. Now, if you think about the gas in this room, and let's imagine that the particles of this gas either don't interact, which is maybe a little bit unrealistic, but maybe repel each other. So let's say you have a bunch of particles that just repel each other. Then, there is really no reason why, as I go to lower and lower temperatures, the number of configurations of the molecules should decrease. All configurations that I draw that they don't overlap have roughly the same energy. And indeed, if I look at say any one of these properties, like the expansivity of a gas at constant pressure which is given in fact with a minus sign. dV by dT at constant pressure would be the analog of one of these extensivities. If I use the Ideal Gas Law-- So for ideal gas, we've seen that PV is proportional to let's say some temperature. Then, dV by dT at constant pressure is none other than V over T. So this would give me 1 over V, V over T. Probably don't need it on this. This is going to give me 1 over T. So not only doesn't it go to 0 at 0 temperature, if the Ideal Gas Law was satisfied, the extensivity would actually diverge at 0 temperature as different as you want. So clearly the Ideal Gas Law, if it was applicable all the way down to 0 temperature, would violate the Third Law of thermodynamics. Again, not surprising given that I have told you that a gas of classical particles with repulsion has many states. Now, we will see later on in the course that once we include quantum mechanics, then as you go to 0 temperature, these particles will have a unique state. If they are bosons, they will be together in one wave function. If they are fermions, they will arrange themselves appropriately so that, because of quantum mechanics, all of these laws would certainly breakdown at T equals to 0. You will get 0 entropy, and you would get consistency with all of these things. So somehow, the nature of the Third Law is different from the other laws because its validity rests on being able to be living in a world where quantum mechanics applies. So in principle, you could have imagined some other universe where h-bar equals to 0, and then the Third Law of thermodynamics would not hold there whereas the Zeroth Law and Second Law would hold. Yes? AUDIENCE: Are there any known exceptions to the Third Law? Are we going to [? account for them? ?] 
PROFESSOR: For equilibrium-- So this is actually an interesting question. What do I know about-- classically, I can certainly come up with lots of examples that violate. So your question then amounts if I say that quantum mechanics is necessary, do I know that the ground state of a quantum mechanical system is unique. And I don't know of a proof of that for interacting system. I don't know of a case that's violated, but as far as I know, there is no proof that I give you an interacting Hamiltonian for a quantum system, and there's a unique ground state. And I should say that there'd be no-- and I'm sure you know of cases where the ground state is not unique like a ferromagnet. But the point is not that g should be exactly one, but that the limit of log g divided by the number of degrees of freedom that you have should go to 0 as n goes to infinity. So something like a ferromagnet may have many ground states, but the number of ground states is not proportional to the number of sites, the number of spins, and this entity will go to 0. So all the cases that we know, the ground state is either unique or is order of one. But I don't know a theorem that says that should be the case. So this is the last thing that I wanted to say about thermodynamics. Are there any questions in general? So I laid out the necessity of having some kind of a description of microscopic degrees of freedom that ultimately will allow us to prove the laws of thermodynamics. And that will come through statistical mechanics, which as the name implies, has to have certain amount of statistic characters to it. What does that mean? It means that you have to abandon a description of motion that is fully deterministic for one that is based on probability. Now, I could have told you first the degrees of freedom and what is the description that we need for them to be probabilistic, but I find it more useful to first lay out what the language of probability is that we will be using and then bring in the description of the microscopic degrees of freedom within this language. So if we go first with definitions-- and you could, for example, go to the branch of mathematics that deals with probability, and you will encounter something like this that what probability describes is a random variable. Let's call it X, which has a number of possible outcomes, which we put together into a set of outcomes, S. And this set can be discrete as would be the case if you were tossing a coin, and the outcomes would be either a head or a tail, or we were throwing a dice, and the outcomes would be the faces 1 through 6. And we will encounter mostly actually cases where S is continuous. Like for example, if I want to describe the velocity of a gas particle in this room, I need to specify the three components of velocity that can be anywhere, let's say, in the range of real numbers. And again, mathematicians would say that to each event, which is a subset of possible outcomes, is assigned a value which we must satisfy the following properties. First thing is the probability of anything is a positive number. And so this is positivity. The second thing is additivity. That is the probability of two events, A or B, is the sum total of the probabilities if A and B are disjoint or distinct. And finally, there's a normalization. That if you're event is that something should happen the entire set, the probability that you assign to that is 1. So these are formal statements. And if you are a mathematician, you start from there, and you prove theorems. 
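Collected in one place, the three defining properties just listed read:

```latex
% Probability axioms for events A, B that are subsets of the outcome set S:
\text{(positivity)}\quad p(A) \ge 0, \qquad
\text{(additivity)}\quad p(A \cup B) = p(A) + p(B) \;\text{ if } A \cap B = \varnothing, \qquad
\text{(normalization)}\quad p(S) = 1 .
```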
But from our perspective, the first question to ask is how to determine this quantity probability that something should happen. If it is useful and I want to do something real world about it, I should be able to measure it or assign values to it. And very roughly again, in theory, we can assign probabilities two different ways. One way is called objective. And from the perspective of us as physicists corresponds to what would be an experimental procedure. And if it is assigning p of e as the frequency of outcomes in large number of trials, i.e. you would say that the probability that event A is obtained is the number of times you would get outcome A divided by the total number of trials as n goes to infinity. So for example, if you want to assign a probability that when you throw a dice that face 1 comes up, what you could do is you could make a table of the number of times 1 shows up divided by the number of times you throw the dice. Maybe you throw it 100 times, and you get 15. You throw it 200 times, and you get-- that is probably too much. Let's say 15-- you get 35. And you do it 300 times, and you get something close to 48. The ratio of these things, as the number gets larger and larger, hopefully will converge to something that you would call the probability. Now, it turns out that in statistical physics, we will assign things through a totally different procedure which is subjective. If you like, it's more theoretical, which is based on uncertainty among all outcomes. Because if I were to subjectively assign to throwing the dice and coming up with value of 1, I would say, well, there's six possible faces for the dice. I don't know anything about this dice being loaded, so I will say they are all equally alike. Now, that may or may not be a correct assumption. You could test it. You could maybe throw it many times. You will find that either the dice is loaded or not loaded and this is correct or not. But you begin by making this assumption. And this is actually, we will see later on, exactly the type of assumption that you would be making in statistical physics. Any question about these definitions? So let's again proceed slowly to get some definitions established by looking at one random variable. So this is the next section on one random variable. And I will assume that I'll look at the case of the continuous random variable. So x can be any real number minus infinity to infinity. Now, a number of definitions. I will use the term Cumulative-- make sure I'll use the-- Cumulative Probability Function, CPF, that for this one random variable, I will indicate by capital P of x. And the meaning of this is that capital P of x is the probability of outcome less than x. So generically, we say that x can take all values along the real line. And there is this function that I want to plot that I will call big P of x Now big P of x is a probability, therefore, it has to be positive according to the first item that we have over there. And it will be less than 1 because the net probability for everything toward here is equal to 1. So asymptotically, where I go all the way to infinity, the probability that I will get some number along the line-- I have to get something, so it should automatically go to 1 here. And every element of probability is positive, so it's a function that should gradually go down. And presumably, it will behave something like this generically. 
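A minimal numerical sketch of the frequency definition given above, assuming a fair six-sided die; the counts quoted in the lecture's table are only illustrative:

```python
import random

random.seed(0)

# Estimate P(face == 1) as (number of 1s) / (number of throws), for growing numbers of throws.
for n_trials in (100, 1000, 10000, 100000):
    n_ones = sum(1 for _ in range(n_trials) if random.randint(1, 6) == 1)
    print(n_trials, n_ones / n_trials)  # should drift toward 1/6 ~ 0.167 as n_trials grows
```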
Once we have the Cumulative Probability Function, we can immediately construct the Probability Density Function, PDF, which is the derivative of the above. P of x is the derivative of big P of x with respect to the x. And so if I just take here the curve that I have above and take its derivative, the derivative will look something like this. Essentially, clearly by the definition of the derivative, this quantity is therefore ability of outcome in the interval x to x plus dx divided by the size of the interval dx. couple of things to remind you of, one of them is that the Cumulative Probability is a probability. It's a dimensionless number between 0 and 1. Probability Density is obtained by taking a derivative, so it has dimensions that are inverse of whatever this x is. So if I change my variable from meters to centimeters, let's say, the value of this function would change by a factor of 100. And secondly, while the Probability Density is positive, its value is not bounded. It can be anywhere that you like. One other, again, minor definition is expectation value. So I can pick some function of x. This could be x itself. It could be x squared. It could be sine x, x cubed minus x squared. The expectation value of this is defined by integrating the Probability Density against the value of the function. So essentially, what that says is that if I pick some function of x-- function can be positive, negative, et cetera. So maybe I have a function such as this-- then the value of x is random. If x is in this interval, this would be the corresponding contribution to f of x. And I have to look at all possible values of x. Question? Now, very associated to this is a change of variables. You would say that if x is random, then f of x is random. So if I ask you what is the value of x squared, and for one random variable, I get this. The value of x squared would be this. If I get this, the value of x squared would be something else. So if x is random, f of x is itself a random variable. So f of x is a random variable, and you can ask what is the probability, let's say, the Probability Density Function that I would associate with the value of this. Let's say what's the probability that I will find it in the interval between small f and small f plus df. This will be Pf f of f. You would say that the probability that I would find the value of the function that is in this interval corresponds to finding a value of x that is in this interval. So what I can do, the probability that I find the value of f in this interval, according to what I have here, is the Probability Density multiplied by df. Is there a question? No. So the probability that I'm in this interval translates to the probability that I'm in this interval. So that's probability p of x dx. But that's boring. I want to look at the situation maybe where the function is something like this. Then, you say that f is in this interval provided that x is either here or here or here. So what I really need to do is to solve f of x equals to f for x. And maybe there will be solutions that will be x1, x2, x3, et cetera. And what I need to do is to sum over the contributions of all of those solutions. So here, it's three solutions. Then, you would say the Probability Density is the sum over i P of xi, the xi by df, which is really the slopes. The slopes translate the size of this interval to the size of that interval. You can see that here, the slope is very sharp. The size of this interval is small. 
It could be wider accordingly, so I need to multiply by dxi by df. So I have to multiply by dx by df evaluated at xi. That's essentially the value of the derivative of f. Now, sometimes, it is easy to forget these things that I write over here. And you would say, well obviously, the probability of something that is positive. But without being careful, it is easy to violate such basic condition. And I violated it here. Anybody see where I violated it. Yeah, the slope here is positive. The slope here is positive. The slope here is negative. So I am subtracting a probability here. So what I really should do-- it really doesn't matter whether the slope is this way or that way. I will pick up the same interval, so make sure you don't forget the absolute values that go accordingly. So this is the standard way that you would make change of variables. Yes? AUDIENCE: Sorry. In the center of that board, on the second line, it says Pf. Is that an x or a times? PROFESSOR: In the center of this board? This one? AUDIENCE: Yeah. PROFESSOR: So the value of the function is a random variable, right? It can come up to be here. It can come up to be here. And so there is, as any other one parameter random variable, a Probability Density associated with that. That Probability Density I have called P of f to indicate that it is the variable f that I'm considering as opposed to what I wrote originally that was associated with the value of x. AUDIENCE: But what you have written on the left-hand side, it looks like your x [? is random. ?] PROFESSOR: Oh, this was supposed to be a multiplication sign, so sorry. AUDIENCE: Thank you. PROFESSOR: Thank you. Yes? AUDIENCE: CP-- that function, is this [INAUDIBLE]? PROFESSOR: Yes. So you're asking whether this-- so I constructed something, and my statement is that the integral from minus infinity to infinity df Pf of f better be one which is the normalization. So if you're asking about this, essentially, you would say the integral dx p of x is the integral dx dP by dx, right? That was the definition p of x. And the integral of the derivative is the value of the function evaluated at its two extremes. And this is one minus 0. So by construction, it is, of course, normalized in this fashion. Is that what you were asking? AUDIENCE: I was asking about the first possibility of cumulative probability function. PROFESSOR: So the cumulative probability, its constraint is that the limit as its variable goes to infinity, it should go to 1. That's the normalization. The normalization here is that the probability of the entire set is 1. Cumulative adds the probabilities to be anywhere up to point x. So I have achieved being anywhere on the line by going through this point. But certainly, the integral of P of xdx is not equal to 1 if that's what you're asking. The integral of small p of x is 1. Yes? AUDIENCE: Are we assuming the function is invertible? PROFESSOR: Well, rigorously speaking, this function is not invertible because for a value of f, there are three possible values of x. So it's not a function, but you can certainly solve for f of x equals to f to find particular values. So again, maybe it is useful to work through one example of this. So let's say that you have a probability that is of the form e to the minus lambda absolute value of x. So as a function of x, the Probability Density falls off exponentially on both sides. And again, I have to ensure that when I integrate this from 0 to infinity, I will get one. 
The integral from 0 to infinity is 1 over lambda, from minus infinity to zero by symmetry is 1 over lambda. So it's really I have to divide by 2 lambda-- to lambda over 2. Sorry. Now, suppose I change variables to F, which is x squared. So I want to know what the probability is for a particular value of x squared that I will call f. So then what I have to do is to solve this. And this will give me x is minus plus square root of small f. If I ask for what f of-- for what value of x, x squared equals to f, then I have these two solutions. So according to the formula that I have, I have to, first of all, evaluate this at these two possible routes. In both cases, I will get minus lambda square root of f. Because of the absolute value, both of them will give you the same thing. And then I have to look at this derivative. So if I look at this, I can see that df by dx equals to 2x. The locations that I have to evaluate are at plus minus square root of f. So the value of the slope is minus plus to square root of f. And according to that formula, what I have to do is to put the inverse of that. So I have to put for one solution, 1 over 2 square root of f. For the other one, I have to put 1 over minus 2 square root of f, which would be a disaster if I didn't convert this to an absolute value. And if I did convert that to an absolute value, what I would get is lambda over 2 square root of f e to the minus lambda root f. It is important to note that this solution will only exist only if f is positive. And there's no solution if f is negative, which means that if I wanted to plot a Probability Density for this function f, which is x squared as a function of f, it will only have values for positive values of x squared. There's nothing for negative values. For positive values, I have this function that's exponentially decays. Yet at f equals to 0 diverges. One reason I chose that example is to emphasize that these Probability Density functions can even go all the way infinity. The requirement, however, is that you should be able to integrate across the infinity because integrating across the infinity should give you a finite number less than 1. And so the type of divergence that you could have is limited. 1 over square root of f is fine. 1/f is not accepted. Yes? AUDIENCE: I have a doubt about [? the postulate. ?] It says that if you raise the value of f slowly, you will eventually get to-- yeah, that point right there. So if the prescription that we have of summing over the different roots, at some point, the roots, they converge. PROFESSOR: Yes. AUDIENCE: So at some point, we stop summing over 2 and we start summing over 1. It just seems a little bit strange. PROFESSOR: Yeah. If you are up here, you have only one term in the sum. If you are down here, you have three terms. And that's really just the property of the curve that I have drawn. And so over here, I have only one root. Over here, I have three roots. And this is not surprising. There are many situations in mathematics or physics where you encounter situations where, as you change some parameters, new solutions, new roots, appear. And so if this was really some kind of a physical system, you would probably encounter some kind of a singularity of phase transitions at this point. Yes? AUDIENCE: But how does the equation deal with that when [INAUDIBLE]? PROFESSOR: Let's see. So if I am approaching that point, what I find is that the f by the x goes to 0. So the x by df has some kind of infinity or singularity, so we have to deal with that. 
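Returning to the worked example, a quick Monte Carlo check of the change-of-variables result p_f(f) = lambda/(2 sqrt(f)) exp(-lambda sqrt(f)), with the illustrative choice lambda = 1 (not a value from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0

# Draw x from p(x) = (lam/2) exp(-lam |x|), i.e. a Laplace density with scale 1/lam, and set f = x^2.
x = rng.laplace(loc=0.0, scale=1.0 / lam, size=1_000_000)
f = x**2

# Compare a histogram of f against the derived density p_f(f) = lam/(2 sqrt(f)) exp(-lam sqrt(f)).
edges = np.linspace(0.1, 4.0, 21)
hist, _ = np.histogram(f, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
formula = lam / (2.0 * np.sqrt(centers)) * np.exp(-lam * np.sqrt(centers))

for c, h, p in zip(centers, hist, formula):
    print(f"f = {c:4.2f}   histogram = {h:.3f}   formula = {p:.3f}")
```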
If you want, we can choose a particular form of that function and see what happens. But actually, we have that already over here because the function f that I plotted for you as a function of x has this behavior that, for some range of f, you have two solutions. So for negative values of f, I have no solution. So this curve, after having rotated, is precisely an example of what is happening here. And you see what the consequence of that is. The consequence of that is that as I approach here and the two solutions merge, I have the singularity that is ultimately manifested in here. So in principle, yes. When you make these changes of variables and you have functions that have multiple solution behavior like that, you have to worry about this. Let me go down here. One other definition that, again, you've probably seen, before we go through something that I hope you haven't seen, moment. A form of this expectation value-- actually, here we did with x squared, but in general, we can calculate the expectation value of x to the m. And sometimes, that is called mth moment is the integral 0 to infinity dx x to the m p of x. Now, I expect that after this point, you would have seen everything. But next one maybe half of you have seen. And the next item, which we will use a lot, is the characteristic function. So given that I have some probability distribution p of x, I can calculate various expectation values. I calculate the expectation value of e to the minus ikx. This is, by definition that you have, I have to integrate over the domain of x-- let's say from minus infinity to infinity-- p of x against e to the minus ikx. And you say, well, what's special about that? I know that to be the Fourier transform of p of x. And it is true. And you also know how to invert the Fourier transform. That is if you know the characteristic function, which is another name for the Fourier transform of a probability distribution, you would get the p of x back by the integral over k divided by 2pi, the way that I chose the things, into the ikx p tilde of k. Basically, this is the standard relationship between these objects. So this is just a Fourrier transform. Now, something that appears a lot in statistical calculations and implicit in lots of things that we do in statistical mechanics is a generating function. I can take the characteristic function p tilde of k. It's a function of this Fourrier variable, k. And I can do an expansion in that. I can do the expansion inside the expectation value because e to the minus ikx I can write as a sum over n running from 0 to infinity minus ik to the power of m divided by n factorial x to the nth. This is the expansion of the exponential. The variable here is x, so I can take everything else outside. And what I see is that if I make an expansion of the characteristic function, the coefficient of k to the n up to some trivial factor of n factorial will give me the nth moment. That is once you have calculated the Fourrier transform, or the characteristic function, you can expand it. And you can, from out of that expansion, you can extract all the moments essentially. So this is expansion generates for you the moments, hence the generating function. You could even do something like this. You could multiply e to the ikx0 for some x0 p tilde of k. And that would be the expectation value of e to the ikx minus x0. And you can expand that, and you would generate all of the moments not around the origin, but around the point x0. 
So simple manipulations of the characteristic function can shift and give you other set of moments around different points. So the Fourier transform, or characteristic function, is the generator of moments. An even more important property is possessed by the cumulant generating function. So you have the characteristic function, the Fourier transform. You take its log, so another function of k. You start expanding this function in covers of k. Add the coefficients of that, you call cumulants. So I essentially repeated the definition that I had up there. I took a log, and all I did is I put this subscript c to go from moments to cumulants. And also, I have to start the series from 1 as opposed to 0. And essentially, I can find the relationship between cumulants and moments by writing this as a log of the characteristic function, which is 1 plus some n plus 1 to infinity of minus ik to the n over n factorial, the nth moments. So inside the log, I have the moments. Outside the log, I have the cumulants. And if I have a log of 1 plus epsilon, I can use the expansion of this as epsilon minus epsilon squared over 2 epsilon cubed over 3 minus epsilon to the fourth over 4, et cetera. And this will enable me to then match powers of minus ik on the left and powers of minus ik on the right. You can see that the first thing that I will find is that the expectation value of x-- the first power, the first term that I have here is minus ik to the mean. Take the log, I will get that. So essentially, what I get is that the first cumulant on the left is the first moment that I will get from the expansion on the right. And this is, of course, called the mean of the distribution. The second cumulant, I will have two contributions, one from epsilon, the other from minus epsilon squared over 2. And If you go through that, you will get that it is expectation value of x squared minus the average of x, the mean squared, which is none other than the expectation value of x around the mean squared, which is clearly a positive quantity. And this is the variance. And you can keep going. The third cumulant is x cubed minus 3 average of x squared average of x plus 2 average of x itself cubed. It is called the skewness. I don't write the formula for the next one which is called a [? cortosis ?]. And you keep going and so forth. So it turns out that this hierarchy of cumulants, essentially, is a hierarchy of the most important things that you can know about a random variable. So if I tell you that the outcome of some experiment is some number x, distribute it somehow-- I guess the first thing that you would like to know is whether the typical values that you get are of the order of 1, are of the order of million, whatever. So somehow, the mean is something that tells you something that is most important is zeroth order thing that you want to know about the variable. But the next thing that you might want to know is, well, what's the spread? How far does this thing go? And then the variance will tell you something about the spread. So the next thing that you want to do is maybe if given the spread, am I more likely to get things that are on one side or things that are on the other side. So the measure of its asymmetry, right versus left, is provided by the third cumulant, which is the skewness and so forth. So typically, the very first few members of this hierarchy of cumulants tells you the most important information that you need about the probability. 
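As a concrete illustration of this hierarchy, the exponential example p(x) = (lambda/2) exp(-lambda|x|) used earlier can be worked out in closed form (a sketch, not carried out in the lecture):

```latex
\tilde p(k) = \big\langle e^{-ikx} \big\rangle
  = \int_{-\infty}^{\infty}\! dx\, \frac{\lambda}{2}\, e^{-\lambda|x|}\, e^{-ikx}
  = \frac{\lambda^{2}}{\lambda^{2}+k^{2}} ,
\qquad
\ln \tilde p(k) = -\ln\!\Big(1+\frac{k^{2}}{\lambda^{2}}\Big)
  = -\frac{k^{2}}{\lambda^{2}} + \frac{k^{4}}{2\lambda^{4}} - \cdots ,
```

so the mean and the skewness vanish by symmetry, the variance is 2 over lambda squared, and the fourth cumulant is 12 over lambda to the fourth.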
Now, I will mention to you, and I guess we probably will deal with it more next time around, the result that is in some sense the backbone or granddaddy of all graphical expansions that are carrying [INAUDIBLE]. And that's a relationship between the moments and cumulants that I will express graphically. So this is graphical representation of moments in terms of cumulants. Essentially, what I'm saying is that you can go through the procedure as I outlined. And if you want to calculate minus ik to the fifth power so that you find the description of the fifth cumulant in terms of the moment, you'll have to do a lot of work in expanding the log and powers of this object and making sure that you don't make any mistakes in the coefficient. There is a way to circumvent that graphically and get the relationship. So how do we do that? You'll represent nth cumulant as let's say a bag of endpoints. So let's say this entity will represent the third cumulant. It's a bag with three points. This-- one, two, three, four, five, six-- will represent the sixth cumulant. Then, the nth moment is some of all ways of distributing end points amongst bags. So what do I mean? So I want to calculate the first moment x. That would correspond to one point. And really, there's only one diagram I can put the bag around it or not that would correspond to this. And that corresponds to the first cumulant, basically rewriting what I had before. If I want to look at the second moment, the second moment I need two points. The two points I can either put in the same bag or I can put into two separate bags. And the first one corresponds to calculating the second cumulant. The second term corresponds to two ways in which their first cumulant has appeared, so I have to squared x. if I want to calculate the third moment, I need three dots. The three dots I can either put in one bag or I can take one of them out and keep two of them in a bag. And here I had the choice of three things that I could've pulled out. Or, I could have all of them in individual bags of their own. And mathematically, the first term corresponds to x cubed c. The third term corresponds to three versions of the variance times the mean. And the last term is just the mean cubed. And you can massage this expression to see that I get the expression that I have for the skewness. I didn't offhand remember the relationship that I have to write down for the fourth cumulant. But I can graphically, immediately get the relationship for the fourth moment in terms of the fourth cumulant which is this entity. Or, four ways that I can take one of the back and maintain three in the bag, three ways in which I have two bags of two, six ways in which I can have a bag of two and two things that are individually apart, and one way in which there are four things that are independent of each other. And this becomes x to the fourth cumulant, the fourth cumulant, 4 times the third cumulant times the mean, 3 times the square of the variance, 6 times the variance multiplied by the mean squared, and the mean raised to the fourth power. And you can keep going. AUDIENCE: Is the variance not squared in the third term? PROFESSOR: Did I forget that? Yes, thank you. All right. So the proof of this is really just the two-line algebra exponentiating these expressions that we have over here. But it's much nicer to represent that graphically. And so now you can go between things very easily. 
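A small symbolic check of the fourth-moment relation just read off from the diagrams; a sketch using sympy, with c1 through c4 standing for the first four cumulants:

```python
import sympy as sp

k = sp.symbols('k')
c1, c2, c3, c4 = sp.symbols('c1 c2 c3 c4')

# log p~(k) = sum_n (-i k)^n <x^n>_c / n!; exponentiate and expand to read off moments.
log_p = sum(cn * (-sp.I * k)**n / sp.factorial(n)
            for n, cn in enumerate([c1, c2, c3, c4], start=1))
p_tilde = sp.series(sp.exp(log_p), k, 0, 5).removeO()

# The nth moment is the coefficient of k^n times n! / (-i)^n.
m4 = sp.expand(p_tilde.coeff(k, 4) * sp.factorial(4) / (-sp.I)**4)
print(m4)  # expected (up to ordering): c4 + 4*c1*c3 + 3*c2**2 + 6*c1**2*c2 + c1**4
```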
And what I will show next time is how, using this machinery, you can calculate any moment of a Gaussian, for example, in just a matter of seconds as opposed to having to do integrations and things like that. So what we will do next time is apply this machinery to various probability distributions, such as the Gaussian, that we are likely to encounter again and again. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 6_Probability_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: We'll begin this time by looking at some probability distribution that you should be familiar with from this perspective, starting with a Gaussian distribution for one variable. We're focused on a variable that takes real values in the interval minus infinity to infinity and the Gaussian has the form exponential that is centered around some value, let's call it lambda, and has fluctuations around this value parameterized by sigma. And the integral of this p over the interval should be normalized to unity, giving you this hopefully very family of form. Now, if you want to characterize the characteristic function, all we need to do is to Fourier transform this. So I have the integral dx e the minus ikx. So this-- let's remind you alternatively was the expectation value of e to the minus ikx. minus x minus lambda squared over 2 sigma squared, which is the probably distribution. And you should know what the answer to that is, but I will remind. You can change variables to x minus lambda [INAUDIBLE] y. So from here we will get the factor of 2 to the minus ik lambda. You have then the integral over y minus y squared over 2 sigma squared. And then what we need to do is to complete this square over here. And you can do that, essentially, by adding and subtracting a minus k squared sigma squared over 2. So that if I change variable to y plus ik sigma squared, let's call that z, then I have outside the integral e to the minus ik lambda minus k squared sigma squared over 2. And the remainder I can write as a full square. And this is just a normalized Gaussian integral that comes to 1. So as you well know, a Fourier transform of a Gaussian is itself a Gaussian, and that's what we've established. E to the minus ik lambda minus k squared sigma squared over 2m. And if I haven't made a mistake when I said k equals to 0, the answer should be 1 because k equals to 0 expectation value of 1 just amounts to normalization. Now what we said was that a more interesting function is obtained by taking the log of this. So from here we go to the log of [INAUDIBLE] of k. Log of [INAUDIBLE] of k is very simple for the Gaussian. And what we had said was that by definition this log of the characteristic function generates cumulants through the series minus ik to the power of n over n factorial, the end cumulant. So looking at this, we can immediately see that the Guassian is characterized by a first cumulant, which is the coefficient of minus ik. It's lambda. It is characterized by a second cumulant, which is the coefficient of minus ik squared. This, you know, is the variance. And we can explicitly see that the coefficient of minus ik squared over 2 factorial is simply sigma squared. So this is reputation. But one thing that is interesting is that our series has now terminates, which means that if I were to look at the third cumulant, if I were to look at the fourth cumulant, and so forth, for the Guassian, they're all 0. So the Gaussian is the distribution that is completely characterized just by its first and second cumulants, all the rest being 0. So, now our last time we developed some kind of a graphical method. 
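Before returning to the graphical bookkeeping, a quick numerical sanity check of the Gaussian characteristic function derived above, with illustrative values lambda = 1.5, sigma = 0.7, k = 0.9 (not values from the lecture):

```python
import numpy as np

lam, sigma, k = 1.5, 0.7, 0.9

# Evaluate <exp(-i k x)> for the Gaussian p(x) by a direct numerical (Riemann) sum.
x = np.linspace(lam - 10 * sigma, lam + 10 * sigma, 200001)
p = np.exp(-(x - lam) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
numeric = np.sum(p * np.exp(-1j * k * x)) * (x[1] - x[0])

# Closed form obtained by completing the square.
closed = np.exp(-1j * k * lam - k**2 * sigma**2 / 2)
print(numeric, closed)  # the two complex numbers should agree to many digits
```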
We said that I can graphically describe the first cumulant as a bag with one point in it. Second cumulant with something that has two points in it. A third cumulant with three three, fourth cumulant with four and four points and so forth. This is just rewriting. Now, the interesting thing was that we said that the various moments we could express graphically. So that, for example, the second moment is either this or this, which then graphically is the same thing as lambda squared plus sigma squared because this is indicated by sigma squared. Now, x cubed you would say is either three things by themselves or put two of them together and then one separate. And this I could do in three different ways. And in general, for a general distribution, I would have had another term, which is a triangle. But the triangle is 0. So for the Gaussian, this terminates here. I have lambda cubed plus 3 lambda sigma squared. If I want to calculate x to the fourth, maybe the old way of doing it would have been too multiply the Gaussian distribution against x to the fourth and try to do the integration. And you would ultimately be able to do that rearranging things and looking at the various powers of the Gaussian integrated from minus infinity to infinity. But you can do it graphically. You can say, OK. It's either this or I can have-- well, I cannot put one aside and three together, because that doesn't exist. I could have two together and two not together. And this I can do in six different ways. You can convince yourself of that. What I could do two pairs, which I can do three different ways because I can either do one, two; one, three; one, four and then the other is satisfied. So this is lambda to the fourth plus 6 lambda squared sigma squared plus 3 sigma to the fourth. And you can keep going and doing different things. OK. Question? Yeah? AUDIENCE: Is the second-- [INAUDIBLE]. PROFESSOR: There? AUDIENCE: Because-- so you said that the second cumulative-- PROFESSOR: Oh. AUDIENCE: --x sqaured. Yes. PROFESSOR: Yes. So that's the wrong-- AUDIENCE: [INAUDIBLE]. PROFESSOR: The coefficient of k squared is the second cumulant. The additional 2 was a mistake. OK. Anything else? All right. Let's take a look at couple of other distributions, this time discrete. So the binomial distribution is repeat a binary random variable. And what does this mean? It means two outcomes that's binary, let's call them A and B. And if I have a coin that's head or tails, it's binary. Two possibilities. And I can assign probabilities to the two outcomes to PA and PB, which has to be 1 minus PA. And the question is if you repeat this binary random variables n times, what is the probability of NA outcomes of A? And I forget to say something important, so I write it in red. This should be independent. That is the outcome of a coin toss at, say, the fifth time should not influence the sixth time and future times. OK. So this is easy. The probability to have NA occurrences of A in N trials. So it has to be index yn. Is that within the n times that I tossed, A came up NA times. So it has to be proportional to the probability of A independently multiplied by itself NA times. But if I have exactly NA occurrences of A, all the other times I had B occurring. So I have the probability of B for the remainder, which is N minus NA. Now this is the probability for a specific occurrence, like the first NA times that I threw the coin I will get A. The remaining times I would get B. 
But the order is not important and the number of ways that I can shuffle the order and have a total of NA out of N times is the binomial factor. Fine. Again, well known. Let's look at its characteristic function. So p tilde, which is now a function of k, is expectation value of e to the minus ik NA, which means that I have to weigh e minus ik NA against the probability of occurrences of NA times, which is this binomial factor PA to the power of NA, PB to the power of N minus NA. And of course, I have to sum over all possible values of NA that go all the way from 0 to N. So what we have here is something which is this combination, PA e to the minus ik raised to the power of NA. PB raised to the complement N minus NA multiplied by the binomial factor summed over all possible values. So this is just the definition of the binomial expansion of PA e to the minus ik plus PB raised to the power of N. And again, let's check. If I set k equals to 0, I have PA plus PB, which is 1 raised to the power of N. So things are OK. So this is the characteristic function. At this stage, the only thing that I will note about this is that if I look at the characteristic function I will get N times. So this is-- actually, let's make sure that we maintain the index N. So this is the characteristic function appropriate to N trials. And what I get is that up to factor of N, I will get the characteristic function that would be appropriate to one trial. So what that means if I were to look at powers of k, the expectation value of some cumulant, if I go to repeat things N times-- so this carries an index N. It is going to be simply N times what I would have had in a single trial. So for a single trial, you really have two outcomes-- 0 or 1 occurrences of this object. So for a binary variable, you can really easily compute these quantities and then you can calculate the corresponding ones for N trials simply multiplying by N. And we will see that this is characteristic, essentially, of anything that is repeated N times, not just the binomial. So this form that you have N independent objects, you would get N times what you would have for one object is generally valid and actually something that we will build a lot of statistical mechanics on because we are interested in the [INAUDIBLE]. So we will see that shortly. But rather than following this, let's look at a third distribution that is closely related, which is the Poisson. And the question that we are asking is-- we have an interval. And the question is, what is the probability of m events in an interval from 0 to T? And I kind of expressed it this way because prototypical Poisson distribution is, let's say, the radioactivity. And you can be waiting for some time interval from 0 to 1 minute and asking within that time interval, what's the probability that you will see m radioactive decay events? So what is the probability if two things happen? One is that the probability of 1 and only 1 event in interval dt is alpha dt as dt goes to 0. OK? So basically if you look at this over 1 minute, the chances are that you will see so many events, so many radioactivities. If you shorten the interval, the chances that you would see events would become less and less. If you make your event infinitesimal, most of the time nothing would happen with very small probability that vanishes. As the size of the interval goes to 0, you will see 1 event. So this is one condition. And the second condition is events in different intervals are independent. 
And since I wrote independent in red up there, let me write it in red here because it sort of harks back to the same condition. And so this is the question. What is this probability? And the to get the answer, what we do is to subdivide our big interval into N, which is big T divided by the small dt subintervals. So basically, originally let's say on the time axis, we were covering a distance that went from 0 to big T and we were asking what happens here. So what we are doing now is we are sort of dividing this interval to lots of subintervals, the size of each one of them being dt. And therefore, the total number is big T over dt. And ultimately, clearly, I want to sit dt going to 0 so that this condition is satisfied. So also because of the second condition, each one of these will independently tell me whether or not I have an event. And so if I want to count the total number of events, I have to add things that are occurring in different intervals. And we can see that this problem now became identical to that problem because each one of these intervals has two possible outcomes-- nothing happens with probability 1 minus alpha dt, something happens with probability alpha dt. So no event is probability 1 minus alpha dt. One event means probability alpha dt. So this is a binomial process. So we can calculate, for example, the characteristic function. And I will indicate that we are looking at some interval of size T and parameterized by this straight alpha, we'll see that only the product will occur. So this is this before Fourier variable. We said that it's a binary, so it is one of the probabilities plus e to the minus ik plus the other probability raised to the power of N. Now we just substitute the probabilities that we have over here. So the probability of not having an event is 1 minus alpha dt. The probability of having an event is alpha dt. So alpha dt is going to appear here as e to the minus ik minus 1. So alpha st, e to the minus ik, is from here. From PB I will get 1 minus alpha dt. And I bunched together the two terms that are proportional to alpha dt. And then I have to raise to the power of N, which is T divided by dt. And this whole prescription is valid in the limit where dt is going to 0. So what you have is 1 plus an infinitesimal raised to a huge power. And this limiting procedure is equivalent to taking the exponential. So basically this is the same thing as exponential of what is here multiplied by what is here. The dt's cancel each other out and the answer is alpha T e to the minus ik minus 1. So the characteristic function for this process that we described is simply given by this form. You say, wait. I didn't ask for the characteristic function. I wanted the probability. Well, I say, OK. Characteristic function is simply the Fourier transform. So let me Fourier transform back, and I would say that the probability along not the Fourier axis but the actual axis is open by the inverse Fourier process. So I have to do an integral dk over 2 pi e to the ikx times the characteristic function. And the characteristic function is e to minus-- what was it? e to the alpha T. E to the minus ik k minus 1. Well, there is an equal to minus alpha T that I can simply take outside the integration. I have the integration over k e to that ikx. And then what I will do is I have this factor of e to the something in the-- e to the alpha T to the minus ik. I will use the expansion of the exponential. So the expansion of the exponential is a sum over m running from 0 to infinity. 
The exponent raised to the m-th power. So I have alpha T raised to the m-th power, e to the minus ik raised to the m-th power divided m factor here. So now I reorder the sum and the integration. The sum is over m, the integration is over k. I can reorder them. So on the things that go outside I have a sum over m running from 0 to infinity e to the minus alpha T, alpha T to the power of m divided by m factorial. Then I have the integral over k over 2 pi e to the ik. Well, I had the x here and I have e to the minus ikm here. So I have x minus m. And then I say, OK, this is an integral that I recognize. The integral of e to the ik times something is simply a delta function. So this whole thing is a delta function that's says, oh, x has to be an integer. Because I kind of did something that maybe, in retrospect, you would have said why are you doing this. Because along how many times things have occurred, they have either occurred 0 times, 1 times, 2 decays, 3 decays. I don't have 2.5 decays. So I treated x as a continuous variable, but the mathematics was really clever enough to say that, no. The only places that you can have are really integer values. And the probability that you have some particular value integer m is simply what we have over here, e to the minus alpha T alpha T to the power of m divided by m factorial, which is the Poisson distribution. OK. But fine. So this is the Poisson distribution, but really we go through the root of the characteristic function in order to use this machinery that we developed earlier for cumulants et cetera. So let's look at the cumulant generating function. So I have to take the log of the function that I had calculated there. It is nicely in the exponential, so I get alpha T e to the minus ik minus 1. So now I can make an expansion of this in powers of k so I can expand the exponential. The first term vanishes because this starts with one. So really I have alpha T sum running from 1 to infinity or N running from 1 to infinity minus ik to the power of n over n factorial. So my task for identifying the cumulants is to look at the expansion of this log and read off powers of minus ikn to the power of n factorial. So what do we see? We see that the first cumulant of the Poisson is alpha T, but all the coefficients are the same thing. The expectation value-- sorry. The second cumulant is alpha T. The third cumulant, the fourth cumulant, all the other cumulants are also alpha T. So the average number of decays that you see in the interval is simply alpha T. But there are fluctuations, and if somebody should, for example, ask you what's the average number cubed of events, you would say, OK. I'm going to use the relationship between moments and cumulants. I can either have three first objects or I can put one of them separate in three different factions. But this is a case where the triangle is allowed, so diagrammatically all three are possible. And so the answer for the first term is alpha T cubed. For the second term, it is a factor of 3. Both this variance and the mean give me a factor of alpha T, so I will get alpha T squared. And the third term, which is the third cumulant, is also alpha T. So the answer is simply of this form. Again, m is an integer. Alpha T is dimensionless. So there is no dimension problem by it having different powers. OK. Any questions? All right. So that's what I wanted to say about one variable. Now let's go and look at corresponding definitions when you have multiple variables. 
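A small simulation of the subdivision construction used to derive this distribution, with illustrative values alpha = 2, T = 3, dt = 0.01 (not values from the lecture):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)
alpha, T, dt = 2.0, 3.0, 0.01
n_sub = int(T / dt)   # number of subintervals; each fires independently with probability alpha*dt

events = rng.random((20_000, n_sub)) < alpha * dt   # Bernoulli outcome for every subinterval
m = events.sum(axis=1)                              # total number of events in [0, T] per realization

for count in range(10):
    empirical = float(np.mean(m == count))
    poisson = exp(-alpha * T) * (alpha * T) ** count / factorial(count)
    print(count, round(empirical, 4), round(poisson, 4))  # agreement up to small-dt and sampling error
```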
So for many random variables, the set of possible outcomes, let's say, has variables x1, x2. Let's be precise. Let's end it a xn. And if these are distributed, each one of them continuously over the interval, to each point we can characterize some kind of a probability density. So this entity is called the joint probability density function. And its definition would be to look at probability of outcome in some interval that is between, say, x1, x1 plus dx1 in one, x2, x2 plus dx2 in the second variable. xn plus dxn in the last variable. So you sort of look at the particular point in this multi-dimensional space that you are interested. You build a little cube around it. You ask, what's the probability to being that cube? And then you divide by the volume of that cube. So dx1, dx2, dxn, which is the same thing that you would be doing in constructing any density and by ultimately taking the limit that all of the x's go to 0. All right. So this is the joint probability distribution. You can construct, now, the joint characteristic function. Now how do you do that? Well, again, just like you would do for your transform with multiple variables. So you would go for each variable to a conjugate variable. So x1 would go to k1. x2 would go to k2. xn would go to kn. And this would mathematically amount to calculating the expectation value of e to the minus i k1x1 x1, k2x2 and so forth, which you would obtain by integrating over all of these variables, e to the minus ik alpha x alpha, against the probability of x1 through xn. Question? AUDIENCE: It's a bit hard to read. It's getting really small. PROFESSOR: OK. [LAUGHTER] PROFESSOR: But it's just multi-dimensional integral. OK? All right. So this is, as the case of one, I think the problem is not the size but the angle I see. I can't do much for that. You have to move to the center. OK. So what we can look at now is joint moment. So you can-- when we had one variable, we could look at something like the expectation value of x to the m. That would be the m-th moment. But if you have two variable, we can raise x1 to some other, x2 to another power, and actually xn to another power. So this is a joint moment. Now the thing is, that the same way that moments for one variable could be generated by expanding the characteristic function, if I were to expand this function in powers of k, you can see that in meeting the expectation value, I will get various powers of x1 to some power, x2 to some power, et cetera. So by appropriate expansion of that function, I can generate all of-- read off all of these moments. Now, a more common way of generating the Taylor series expansion is through derivatives. So what I can do is I can take a derivative with respect to, say, ik1. If I take a derivative with respect to ik1 here, what happens is I will bring down a factor of minus x alpha. So actually let me put the minus so it becomes a factor of x alpha. And if I integrate x alpha against this, I will be generating the expectation value of x alpha provided that ultimately I set all of the k's to 0. So I will calculate the derivative of this function with respect to all of these arguments. At the end of the day, I will set k equals to 0. That will give me the expectation value of x1. But I don't want x1, I want x1 raised to the power of m1. So I do this. Each time I take a derivative with respect to minus ik, I will bring down the factor of the corresponding x. And I can do this with multiple different things. 
So d by d of minus ik2 raised to the power of m2, and so on up to d by d of minus ikn, the whole thing raised to the power of mn. So I can either take this function of multiple variables-- k1 through kn-- and expand it and read off the appropriate powers of k1, k2. Or I can say that the terms in this expansion are generated through taking appropriate derivatives. Yes? AUDIENCE: Is there any reason why you're choosing to take a derivative with respect to ikj instead of simply putting the i in the numerator? Or are there-- are there things that I'm not-- PROFESSOR: No. No. There is no reason. So you're saying why didn't I write this as this like this? i divided? AUDIENCE: Yeah. PROFESSOR: I think I just visually thought that it was kind of nicer that way. But it's exactly the same thing. Yes. OK. All right. Now the interesting object, of course, to us is more the joint cumulants. So how do we generate joint cumulants? Well previously, essentially we had a bunch of objects for one variable that was some moment. And in order to make them cumulants, we just put a sub C here. So we do that and we are done. But what operationally happened was that we did the expansion not for the characteristic function but for the log of the characteristic function. So all I need to do is to apply precisely this set of derivatives not to the joint characteristic function but to the log of the joint characteristic function. And at the end, set all of the k's to 0. OK? So by looking at these two definitions and the expansion of the log, for example, you can calculate various things. Like, for example, x1 x2 with a C is the expectation value of x1 x2-- this joint moment-- minus the expectation of x1 times the expectation of x2, just as you would have thought. It is the appropriate generalization of the variance. And this is the covariance. And you can construct appropriate extensions. OK. Now we made a lot of use of the relationship between moments and cumulants. We just-- so the idea, really, was that the essence of a probability distribution is characterized in the cumulants. Moments kind of depend on how you look at things. The essence is in the cumulants, but sometimes the moments are more usefully computed, and there was a relationship between moments and cumulants. We can generalize that graphical relation to the case of joint moments and joint cumulants. So the graphical relation applies as long as points are labeled by the appropriate or corresponding variable. So suppose I wanted to calculate some kind of a moment that is x1 squared times, let's say, x2, x3. That may generate for me many diagrams, so let's stop with x1 squared x2. So what I can do is I can have points that I label 1, 1, and 2. And have them separate from each other. Or I can start pairing them together. So one possibility is that I put the 1's together and the 2 stays separate. Another possibility is that I can group the 1 and the 2 together. And then the other 1 stays separate. But I had a choice of two ways to do this, so this diagram comes with an overall factor of 2. And then there's the possibility to put all of them in the same bag. And so mathematically, that means that this particular joint moment is obtained by taking the average of x1, squared, times the average of x2, which is the first term. The second term is the variance of x1, multiplied by the mean of x2. The third term is twice the covariance of x1 and x2 times the mean of x1. And the final term is just the third cumulant. So again, you would need to compute these, presumably, from the log of the characteristic function and then you would be done.
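Here is a minimal numerical sketch of that bookkeeping for the joint moment of x1 squared times x2. The pair of variables below is an arbitrary non-Gaussian construction, chosen only so that the means, the covariance, and the third joint cumulant are all nonzero; the third cumulant is estimated as a third central moment, which coincides with it at this order.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical non-Gaussian pair: both variables share a Gaussian piece z and a
# squared-Gaussian piece w**2, so mean, covariance and third cumulant are nonzero.
z = rng.standard_normal(400_000)
w = rng.standard_normal(400_000)
x1 = 0.5 + z + 0.3 * w**2
x2 = 1.0 + z + 0.2 * w**2

lam1, lam2 = x1.mean(), x2.mean()                # first cumulants (means)
C11 = np.mean((x1 - lam1) ** 2)                  # <x1 x1>_c  (variance)
C12 = np.mean((x1 - lam1) * (x2 - lam2))         # <x1 x2>_c  (covariance)
k112 = np.mean((x1 - lam1) ** 2 * (x2 - lam2))   # third joint cumulant
                                                 # (equals this central moment at third order)
lhs = np.mean(x1**2 * x2)                        # the joint moment <x1^2 x2>
rhs = lam1**2 * lam2 + C11 * lam2 + 2 * C12 * lam1 + k112
print(lhs, rhs)                                  # agree up to sampling error
```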
Couple of other definitions. One of them is an unconditional probability. So very soon we will be talking about, say, probabilities appropriate to the gas in this room. And the particles in the gas in this room will be characterized where they are, some position vector q, and how fast they are moving, some momentum vector p. And there would be some kind of a probability density associated with finding a particle with some momentum at some location in space. But sometimes I say, well, I really don't care about where the particles are, I just want to know how fast they are moving. So what I really care is the probability that I have a particle moving with some momentum p, irrespective of where it is. Then all I need to do is to integrate over the position the joint probability distribution. And the check that this is correct is that if I first do not integrate this over p, this would be integrated over the entire space and the joint probabilities appropriately normalized so that the joint integration will give me one. So this is a correct normalized probability. And more generally, if I'm interested in, say, a bunch of coordinates x1 through xs, out of a larger list of coordinates that spans x1 through xs all the way to something else, all I need to do to get unconditional probability is to integrate over the variables that I'm not interested. Again, check is that it's a good properly normalized. Now, this is to be contrasted with the conditional probability. The conditional probability, let's say we would be interested in calculating the pressure that is exerted on the board. The pressure is exerted by the particles that impinge on the board and then go away, so I'm interested in the momentum of particles right at the board, not anywhere else in space. So if I'm interested in the momentum of particles at the particular location, which could in principle depend on location-- so now q is a parameter p is the variable, but the probability distribution could depend on q. How do we obtain this? This, again. is going to be proportional to the probability that I will find a particle both at this location with momentum p. So I need to have that. But it's not exactly that there's a normalization involved. And the way to get normalization is to note that if I integrate this probability over its variable p but not over the parameter q, the answer should be 1. So this is going to be, if I apply it to the right-hand side, the integral over p of p of p and q, which we recognize as an example of an unconditional probability to find something at position 1. So the normalization is going to be this so that the ratio is 1. So most generally, we find that the probability to have some subset of variables, given that the location of the other variables in the list are somewhat fixed, is given by the joint probability of all of the variables x1 through xn divided by the unconditional probability that that applies to the parameters of our fixed. And this is called Bayes' theorem. By the way, if variables are independent, which actually does apply to the case of the particles in this room as far as their momentum and position is concerned, then the joint probability is going to be the product of one that is appropriate to the position and one that is appropriate to the momentum. And if you have this independence, then what you'll find is that there is no difference between conditional and unconditional probabilities. 
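The normalization logic can be made concrete with a small grid-based sketch; the correlated Gaussian below is just an arbitrary stand-in for the position/momentum density being discussed, and the grid ranges and the coupling 1.2 are made-up numbers.

```python
import numpy as np

# Made-up joint density p(q, p) on a grid, standing in for position and momentum.
q = np.linspace(-4, 4, 201)
p = np.linspace(-4, 4, 201)
dq, dp = q[1] - q[0], p[1] - p[0]
Q, P = np.meshgrid(q, p, indexing="ij")
joint = np.exp(-(Q**2 + P**2 - 1.2 * Q * P) / 2)
joint /= joint.sum() * dq * dp          # normalize the joint PDF

# Unconditional probability for position: integrate the joint over p.
p_of_q = joint.sum(axis=1) * dp

# Conditional probability for momentum at one fixed position q (Bayes):
iq = 120                                # an arbitrary grid index for the fixed q
cond = joint[iq, :] / p_of_q[iq]

print(cond.sum() * dp)                  # ~1 for every fixed q: properly normalized
print(p_of_q.sum() * dq)                # ~1: the unconditional PDF is normalized too
```

For every fixed q the conditional distribution integrates to 1, which is exactly the role of the unconditional p(q) in the denominator.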
And when you go through this procedure, you will find that all the joint cumulants-- but not the joint moments, naturally-- all the joint cumulants will be 0. OK. Any questions? Yes? AUDIENCE: Could you explain how the condition of p-- PROFESSOR: How this was obtained? Or the one above? AUDIENCE: Yeah. The condition you applied that the integral is 1. PROFESSOR: OK. So first of all, what I want to look at is the probability that is appropriate to one random variable at the fixed value of all the other random variables. Like you say, in general I should specify the probability as a function of momentum and position throughout space. But I'm really interested only at this point. I don't really care about other points. However, the answer may depend whether I'm looking at here or I'm looking at here. So the answer for the probability of momentum is parametrized by q. On the other hand, I say that I know the probability over the entire space to be a disposition with the momentum p as given by this joint probability. But if I just set that equal to this, the answer is not correct because the way that this quantity is normalized is if I first integrate over all possible values of its variable, p. The answer should be 1, irrespective of what q is. So I can define a conditional probability for momentum here, a conditional probability for momentum there. In both cases the momentum would be the variable it. And integrating over all possible values of momentum should give me one for a properly normalized probability distribution. AUDIENCE: [INAUDIBLE]. PROFESSOR: Given that q is something. So q could be some-- now here q can be regarded as some parameter. So the condition is that this integration should give me 1. I said that on physical grounds, I expect this conditional probability to be the joint probability up to some normalization that I don't know. OK. So what is that normalization? The whole answer should be 1. What I have to do is an integration over momentum of the joint probability. I have said that an integration over some set of variables of a joint probability will give me the unconditional probability for all the others. So integrating over all momentum of this joint probability will give me the unconditional probability for position. So the normalization of 1 is the unconditional probability for position divided by n. So n-- this has to be this. And in general, it would have to be this in order to ensure that if I integrate over this first set of variables of the joint probability distribution which would give me the unconditional, cancels the unconditional in the denominator to give me 1. Other questions? OK. So I'm going to erase this last board to be underneath that top board in looking at the joint Gaussian distribution. So that was the Gaussian, and we want to look at the joint Gaussian. So we want to generalize the formula that we have over there for one variable to multiple variables. So what I have there initially is a factor, which is exponential of minus 1/2, x minus lambda squared. I can write this x minus lambda squared as x minus lambda x minus lambda. And then put the variance. Let's call it is 1 over sigma rather than a small sigma squared or something like this. Actually, let me just write it as 1 over sigma squared for the time being. And then the normalization was 1 over root 2 pi sigma squared. But you say, well, I have multiple variables, so maybe this is what I would give for my B variable. And then I would sum over all N, running from 1 to N. 
So this is essentially the form that I would have for an independent Gaussian variables. And then I would have to multiply here factors of 2 pi sigma squared, so I would have 2 pi to the N over 2. And I would have product of-- actually, let's write it as 2 pi to the N square root. I would have the product of sigma i squared. But that's just too limiting a form. The most general form that these quadratic will allow me to have also cross terms where it is not only the diagonal terms x1 and x1 that's are multiplying each other, but x2 and x3, et cetera. So I would have a sum over both m and n running from 1 to n. And then to coefficient here, rather than just being a number, would be the variables that would be like a matrix. Because for each pair m and n, I would have some number. And I will call them the inverse of some matrix C. And if you, again, think of the problem as a matrix, if I have the diagonal matrix, then the product of elements along the diagonal is the same thing as the determinant. If I were to rotate the matrix to have off diagonal elements, the determinant will always be there. So this is really the determinant of C that will appear here. Yes? AUDIENCE: So are you inverting the individual elements of C or are you inverting the matrix C and taking its elements? PROFESSOR: Actually a very good point. I really wanted to write it as the inverse of the matrix and then peak the mn [INAUDIBLE]. So we imagine that we have the matrix. And these are the elements of some-- so I could have called this whatever I want. So I could have called the coefficients of x and n. I have chosen to regard them as the inverse of some other matrix C. And the reason for that becomes shortly clear, because the covariances will be related to the inverse of this matrix. And hence, that's the appropriate way to look at it. AUDIENCE: Can [INAUDIBLE] what C means up there? PROFESSOR: OK. So let's forget about this lambdas. So I would have in general for two variables some coefficient for x1 squared, some coefficient for x2 squared, and some coefficient for x1, x2. So I could call this a11. I could call this a22. I could call this 2a12. Or actually I could, if I wanted, just write it as a12 plus a21 x2x1 or do a1 to an a21 would be the same. So what I could then regard this is as x1 2. The matrix a11, a12, a21, a22, x1, x2. So this is exactly the same as that. All right? So these objects here are the elements of this matrix C inverse. So I could call this x1, x2 some matrix A x1 x2. That A is 2 by 2 matrix. The name I have given to that 2 by 2 matrix in C inverse. Yes? AUDIENCE: The matrix is required to be symmetric though, isn't it? PROFESSOR: The matrix is required to be symmetric for any quadrant form. Yes. So when I wrote it initially, I wrote as 2 a12. And then I said, well, I can also write it this fashion provided the two of them are the same. Yes? AUDIENCE: How did you know the determinant of C belonged there? PROFESSOR: Pardon? AUDIENCE: How did you know that the determinant of C [INAUDIBLE]? PROFESSOR: OK. How do I know the determinant of C? Let's say I give you this form. And then I don't know what the normalization is. What I can do is I can do a change of variables from x1 x2 to something like y1 y2 such that when I look at y1 and y2, the matrix becomes diagonal. So I can rotate the matrix. So any matrix I can imagine that I will find some U such that A U U dagger is this diagonal matrix lambda. Now under these procedures, one thing that does not change is the determinant. 
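Before finishing that point about the eigenvalues, here is a brute-force check that it is indeed the determinant of C that fixes the normalization; the 2-by-2 matrix below is an arbitrary positive-definite choice, not anything from the lecture.

```python
import numpy as np

# Arbitrary 2x2 symmetric, positive-definite matrix C (illustration only).
C = np.array([[2.0, 0.6],
              [0.6, 1.0]])
Cinv = np.linalg.inv(C)

# Brute-force the normalization integral of exp(-1/2 x^T C^{-1} x) on a grid.
x = np.linspace(-10, 10, 801)
X1, X2 = np.meshgrid(x, x, indexing="ij")
quad = Cinv[0, 0] * X1**2 + 2 * Cinv[0, 1] * X1 * X2 + Cinv[1, 1] * X2**2
integral = np.exp(-0.5 * quad).sum() * (x[1] - x[0]) ** 2

print(integral)                                        # numerical value of the integral
print(np.sqrt((2 * np.pi) ** 2 * np.linalg.det(C)))    # (2 pi)^(N/2) sqrt(det C), N = 2
```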
It's always the product of the eigenvalues. The way that I set up the problem, I said that if I hadn't made the problem to have cross terms, I knew the answers to be the product of that eigenvalues. So if you like, I can start from there and then do a rotation and have the more general form. The answer would stay as the determinant. Yes? AUDIENCE: The matrix should be positive as well or no? PROFESSOR: The matrix should be positive definite in order for the probability to be well-defined and exist, yes. OK. So if you like, by stating that this is a probability, I have imposed a number of conditions such as symmetry, as well as positivity. Yes. OK. But this is just linear algebra. I will assume that you know linear algebra. OK. So this property normalized Gaussian joint probability. We are interested in the characteristic function. So what we are interested is the joint Gaussian characteristic. And so again we saw the procedure was that I have to do the Fourier transform. So I have to take this probability that I have over there and do an integration product, say, alpha running from 1 to N dx alpha of e to the minus ik alpha x alpha. This product exists for all values. Then I have to multiply with this probability that I have up there, which would appear here. OK. Now, again maybe an easy way to imagine is what I was saying to previously. Let's imagine that I have rotated into a basis where everything is diagonal. Then in the rotated basis, all you need to do is to essentially do product of characteristic functions such as what we have over here. So the corresponding product to this first term would be exponential of minus i sum over N running from 1 to N k alpha lambda alpha. kn lambda n. I guess I'm using n as the variable here. And as long as things would be diagonal, the next ordered term would be a sum over alpha kn squared the corresponding eigenvalue inverted. So remember that in the diagonal form, each one of these sigmas would appear as the diagonal. If I do my rotation, essentially this term would not be affected. The next term would give me minus 1/2 sum over m and n rather than just having k1 squared k2 squared, et cetera. Just like here, I would have km kn. What happened previously was that each eigenvalue would get inverted. If you think about rotating a matrix, all of its eigenvalues are inverted, you are really rotating the inverse matrix. So this here would be the inverse of whatever matrix I have here. So this would be C mn. So I did will leave you to do the corresponding linear algebra here, but the answer is correct. So the answer is that the generator of cumulants for a joint Gaussian distribution has a form which has a bunch of the linear terms-- kn lambda n. And a bunch of second order terms, so we will have minus 1/2 sum over m and n km kn times some coefficient. And the series terminates here. So for the joint Gaussian, you have first cumulant. So the expectation value of nm cumulant is the same thing as lambda m. You have covariances or second cumulants xm, xn, C is Cmn. And in particular, the diagonal elements would correspond to the variances. And all the higher orders are 0 because there's no further term in the expansion. So for example, if I were to calculate this thing that I have on the board here for the case of a Gaussian, for the case of the Gaussian, I would not have this third term. So the answer that I would write down for the case of the third term would be something that didn't have this. 
And in the way that we have written things, the answer for x1 squared x2 would be just lambda 1 squared lambda 2, plus sigma 1 squared-- or let's call it C11-- times lambda 2, plus 2 lambda 1 C12. And that's it. So there is something that follows from this that is used a lot in field theory. And it's called Wick's theorem. So that's just a particular case of this, but let's state it anyway. So for Gaussian distributed variables of 0 mean, the following condition applies. I can take the first variable raised to power n1, the second variable to n2, the last variable to some other nN and look at a joint expectation value such as this. And this is 0 if the sum over alpha of n alpha is odd, and is equal to the sum over all pairwise contractions if the sum over alpha of n alpha is even. So actually, I have right here an example of this. If I have a Gaussian variable-- jointly distributed Gaussian variables where the means are all 0-- so if I say that lambda 1 and lambda 2 are 0, then this is an odd power, x1 squared x2. Because of the symmetry it has to be 0, but you explicitly see that every term that I have will be multiplying some power of [INAUDIBLE]. Whereas if, rather than this, I was looking at something like x1 squared x2 x3 where the net power is even, then I could sort of imagine putting them into these kinds of diagrams. Or alternatively, I can imagine pairing these things in all possible ways. So one pairing would be this with this, this with this, which would have given me x1 squared with a C times x2 x3 with a C. Another pairing would have been x1 with x2. And then, naturally, x1 with x3. So I would have gotten x1 with x2 covariance, x1 with x3 covariance. But I could have connected the first x1 to x2 or the second x1 to x2. So this comes with a factor of 2. And so the answer here would be C11 C23 plus 2 C12 C13. Yes? AUDIENCE: In your writing of x1 to the n1 [INAUDIBLE]. It should be the cumulant, right? Or is it the moment? PROFESSOR: This is the moment. AUDIENCE: OK. PROFESSOR: The contractions are the covariances. AUDIENCE: OK. PROFESSOR: So the point is that the Gaussian distribution is completely characterized in terms of its covariances. Once you know the covariances, essentially you know everything. And in particular, you may be interested in some particular combination of x's. And then you get to express that in terms of all possible pairwise contractions, which are the covariances. And essentially, in all of field theory, you expand around some kind of a Gaussian background or Gaussian zeroth-order result. And then in your perturbation theory you need various powers of your field or some combination of powers, and you express them through these kinds of relationships. Any questions? OK. This is fine. Let's get rid of this. OK. Now there is one result that all of statistical mechanics hangs on. So I expect that as I get old and I get infirm or whatever and my memory vanishes, the last thing that I will remember before I die would be the central limit theorem. And why is this important is because you end up in statistical physics adding lots of things. So really, the question that you have or you should be asking is this: thermodynamics is a very precise thing. It says that heat goes from the higher temperature to lower temperature. It doesn't say it does that 50% of the time or 95% of the time. It's a definite statement. If I am telling you that ultimately I'm going to express everything in terms of probabilities, how does that jibe? The reason that it jibes is because of this theorem.
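Before picking the central limit theorem back up, here is a minimal numerical check of the two Gaussian statements above-- that the cumulants stop at second order, and that an even moment is a sum of pairwise contractions. The covariance matrix below is an arbitrary positive-definite choice, used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Arbitrary positive-definite covariance matrix for a zero-mean Gaussian triple.
C = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.5, 0.5],
              [0.2, 0.5, 2.0]])
x = rng.multivariate_normal(np.zeros(3), C, size=1_000_000)
x1, x2, x3 = x[:, 0], x[:, 1], x[:, 2]

# Second cumulants reproduce C; a representative fourth cumulant is ~ 0.
print(np.cov(x.T))
print(np.mean(x1**4) - 3 * np.mean(x1**2) ** 2)

# Wick: <x1^2 x2 x3> = C11*C23 + 2*C12*C13 (sum of all pairwise contractions).
print(np.mean(x1**2 * x2 * x3), C[0, 0] * C[1, 2] + 2 * C[0, 1] * C[0, 2])

# An odd moment, <x1^2 x2>, vanishes for zero mean.
print(np.mean(x1**2 * x2))
```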
It's because in order to go from the probabilistic description, you will be dealing with so many different-- so many large number of variables-- that probabilistic statements actually become precise deterministic statements. And that's captured by this theorem, which says that let's look at the sum of N random variables. And I will indicate the sum by x and my random variables as small x's. And let's say that this is, for the individual set of things that I'm adding up together, some kind of a joint probability distribution out of which I take these random variables. So each instance of this sum is selected from this joint PDF, so x itself is a random variable because of possible choices of different xi from this probability distribution. So what I'm interested is what is the probability for the sum? So what is the p that determines this sum? I will go by the root of these characteristic functions. I will say, OK, what's the expectation value of-- well, let's-- what's the Fourier transform of this probability distribution? If we transform, by definition it is the expectation of e to the minus ik this big X, which is the sum over all off the small x's. Do I have that definition somewhere? I erased it. Basically, what is this? If this k was, in fact, different k's-- if I had a k1 multiplying x1, k2 multiplying x2, that would be the definition of the joint characteristics function for this joint probability distribution. So what this is is you take the joint characteristic function, which depends on k1 k2, all the way to kn. And you set all of them to be the same. So take the joint characteristic function depends on N Fourier variables. Put all of them the same k and you have that for the sum. So I can certainly do that by adding a log here. Nothing has changed. I know that the log is the generator of the cumulants. So this is a sum over, let's say, n running from 1 to infinity minus ik to the power of n over n factorial, the joint cumulant of the sum. So what is the expansion that I would have for log of the joint characteristic function? Well, typically I would say have at the lowest order k1 times the mean of the first variable, k2 times the mean of the second variable. But all of them are the same. So the first order, I would get minus i the same k sum over n of the first cumulant of the N-th variable. Typically, this second order term, I would have all kinds of products. I would have k1 k3 k2 k4, as well as k1 squared. But now all of them become the same, and so what I will have is a minus ik squared. But then I have all possible pairings mn of xm xn cumulants. AUDIENCE: Question. PROFESSOR: Yes? AUDIENCE: [INAUDIBLE] expression you probably should use different indices when you're summing over elements of Taylor serious and when you're summing over your [INAUDIBLE] random variables. Just-- it gets confusing when both indexes are n. PROFESSOR: This here, you want me to right here, say, i? AUDIENCE: Yeah. PROFESSOR: OK. And here I can write i and j. So I think there's still a 2 factorial. And then there's higher orders. Essentially then, matching the coefficients of minus ik from the left minus ik from the right will enable me to calculate relationships between cumulants of the sum and cumulants of the individual variables. This first one of them is not particularly surprising. You would say that the mean of the sum is sum of the means of the individual variables. The second statement is that the variance of the sum really involves a pair, i and j, running from 1 to N. 
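A minimal numerical sketch of those first two statements, using an arbitrary construction in which the x_i share a common Gaussian piece so that their covariances are nonzero:

```python
import numpy as np

rng = np.random.default_rng(4)
N, samples = 5, 200_000
# Correlated variables: a shared Gaussian piece plus independent noise, drawn
# many times so the cumulants of the sum X = x_1 + ... + x_N can be estimated.
common = rng.standard_normal((samples, 1))
xi = 0.7 * common + rng.standard_normal((samples, N))   # each column is one x_i
X = xi.sum(axis=1)

# First cumulant of the sum = sum of the individual means.
print(X.mean(), xi.mean(axis=0).sum())

# Second cumulant of the sum = sum over all pairs (i, j) of the covariances.
cov = np.cov(xi.T)
print(X.var(), cov.sum())
```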
So if these variable were independent, you would be just adding the variances. Since they are potentially dependent, you have to also keep track of covariances. And this kind of summation extends to higher and higher cumulants, essentially including more and more powers of cumulants that you would put on that side. And what we do with that, I guess we'll start next time around. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 19_Interacting_Particles_Part_5.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Begin with a new topic, which is breakdown of classical statistical mechanics. So we developed a formalism to probabilistically describe collections of large particles. And once we have that, from that formalism calculate properties of matter that have to do with heat, temperature, et cetera, and things coming to equilibrium. So question is, is this formalism always successful? And by the time you come to the end of the 19th Century, there were several things that were hanging around that had to do with thermal properties of the matter where this formalism was having difficulties. And the difficulties ultimately pointed out to emergence of quantum mechanics. So essentially, understanding the relationship between thermodynamics, statistical mechanics, and properties of matter was very important to development of quantum mechanics. And in particular, I will mention three difficulties. The most important one that really originally set the first stone for quantum mechanics is the spectrum of black body radiation. And it's basically the observation that you heat something. And when it becomes hot, it starts to radiate. And typically, the color of the radiation that you get is a function of temperature, but does not depend on the properties of the material that you are heating. So that has to do with heat. And you should be able to explain that using statistical mechanics. Another thing that we have already mentioned has to do with the third law of thermodynamics. And let's say the heat capacity of materials such as solids. We mentioned this Nernst theorem that was the third law of thermodynamics based on observation. Consequence of it was that heat capacity of most things that you can measure go to 0 as you go to 0 temperature. We should be able to explain that again, based on the phenomena of statistical-- the phenomenology of thermodynamics and the rules of statistical mechanics. Now, a third thing that is less often mentioned but is also important has to do with heat capacity of the atomic gases such as the air in this room, which is composed of, say, oxygen and nitrogen that are diatomic gases. So probably, historically they were answered and discussed and resolving the order that I have drawn for you. But I will go backwards. so we will first talk about this one, then about number two-- heat capacity of solids-- and number three about black body radiation. OK. Part of the reason is that throughout the course, we have been using our understanding of the gas as the sort of measure of how well we understand thermal properties of the matter. And so let's stick with the gas and ask, what do I know about the heat capacity of the gas in this room? So let's think about heat capacity of dilute diatomic gas. It is a gas that is sufficiently dilute that it is practically having ideal gas law. So PV is roughly proportional to temperature. But rather than thinking about its pressure, I want to make sure I understand something about the heat capacity, another quantity that I can measure. So what's going on here? I have, let's say, a box. 
And within this box, we have a whole bunch of these diatomic molecules. Let's stick to the canonical ensemble. So I tell you the volume of this gas, the number of diatomic molecules, and the temperature. And in this formalism, I would calculate the partition function. Out of that, I should be able to calculate the energy, heat capacity, et cetera. So what do I have to do? I have to integrate over all possible coordinates that occur in this system. To all intents and purposes, the different molecules are identical. So I divide by the phase space that is assigned to each one of them. And I said it is dilute enough that for all intents and purposes, the pressure is proportional to temperature. And that occurs, I know, when I can ignore the interactions between particles. So if I can ignore the interactions between particles, then the partition function for the entire system would be the product of the partition functions that I would write for the individual molecules, or one of them raised to the N power. So what's the Z1 that I have to calculate? Z1 is obtained by integrating over the coordinates and momenta of a single diatomic particle. So I have a factor of d cubed p. I have a factor of d cubed q. But I have two particles, so I have d cubed p1, d cubed q1, d cubed p2, d cubed q2, and I have six pairs of coordinates and momenta. So I divide by h cubed for each of them-- h to the sixth in all. I have e to the minus beta times the energy of this system, which is p1 squared over 2m plus p2 squared over 2m. And some potential of interaction that is responsible for bringing and binding these things together. So there is some V that is a function of q1 and q2 that binds the two particles together and does not allow them to become separate. All right, so what do we do here? We realize immediately that for one of these particles, there is a center of mass that can go all over the place. So we change coordinates to, let's say, Q, which is q1 plus q2 over 2. And corresponding to the center of mass position, there is also a center of mass momentum, P, which is p1 plus p2. But when I make the change of variables from these coordinates to these coordinates, what I will get is that I will have a simple integral over the relative coordinates. So I have d cubed Q d cubed big P over h cubed. And the only thing that I have over there is e to the minus beta P squared divided by 2 big M. Big M being the sum total of the two masses. If the two masses are identical, it would be 2M. Otherwise, it would be M1 plus M2. And then I have an integration over the relative coordinate-- let's call that q-- and the relative momentum p, divided by h cubed, e to the minus beta p squared over 2 times the reduced mass here. And then, the potential, which is only a function of the relative coordinate. The point is that what I have done is I have separated out these 6 degrees of freedom that make up-- or actually, the 3 degrees of freedom and their conjugate momenta that make up a single molecule into some degrees of freedom that correspond to the center of mass and some degrees of freedom that correspond to the relative motion. Furthermore, for the relative motion I expect that the form of this potential as a function of the separation has a minimum. Basically, the particles at 0 temperature would be sitting where this minimum is. So essentially, the shape of this diatomic molecule would be something like this if I find its minimum energy configuration. But then, I can allow it to move with respect to, say, the minimum energy. Let's say it occurs at some distance d.
It can oscillate around this minimum value. If it oscillates around this minimum value, it basically will explore the bottom of this potential. So I can basically think of this center of mass contribution to the partition function. And this contribution has a part that comes from these oscillations around the center of mass. Let's call that u. Then, there is the corresponding momentum. I don't know, let's call it pi. I divide by h and I have e to the minus beta pi squared over 2 mu. And then I have minus beta. Well, to the lowest order, I have v of d, which is a constant. And then I have some frequency, some curvature at the bottom of this potential that I choose to write as mu omega squared over 2 multiplying by u squared. Essentially, what I want to do is to say that really, there is a vibrational degree of freedom and there is a harmonic oscillator that describes that. The frequency of that is related to the curvature that I have at the bottom of this potential. So this degree of freedom corresponds to vibrations. But that's not the end of this story because here I had three q's. One of them became the amplitude of this oscillation. So basically, the relative coordinate is a vector. One degree of freedom corresponds to stretching, but there are two other components of it. those two other components correspond to essentially keeping the length of this fixed but moving in the other directions. What do they correspond to? They correspond to rotations. So then there is essentially another partition function that I want right here. That corresponds to the rotational degrees of freedom. Now, the rotational degrees of freedom have a momentum contribution because this p is also three components. One component went in to the vibrations. There are two more components that really combine to tell you about the angular momentum and the energy that is proportional to the square of the angular momentum. But there is no restoring force for them. There is no corresponding term that is like this. So maybe I will just write that as an integral over angles that I can rotate this thing. An integral over the two components of the angular momentum divided by h squared. There's actually two angles. And the contribution is e to the minus beta angular momentum squared over 2I. So I wrote the entire thing. So essentially, all I have done is I have taken the Hamiltonian that corresponds to two particles that are bound together and broken it into three pieces corresponding to the center of mass, to the vibrations, and to the rotations. Now, the thing is that if I now ask, what is the energy that I would get for this one particle-- I guess I'll call this Z1-- what is the contribution of the one particular to the energy of the entire system? I have minus the log Z1 with respect to beta. That's the usual formula to calculate energies. So I go and look at this entire thing. And where do the beta dependencies come from? Well, let's see. So my Z1 has a part that comes from this center of mass. It gives me a V. We expect that. And then from the integration over the momenta, I will get something like 2 pi m over beta h squared to the 3/2 power. From the vibrations-- OK, what do I have? I have e to the minus beta V of d, which is the constant. We really don't care. But there are these two components that give me root 2 pi mu divided by beta. There is a corresponding thing that comes from the variance that goes with this object, which is square root of 2 pi divided by beta mu omega squared. 
The entire thing has a factor of 1/h. So this is the vibrations. And for the rotations, what do I get? I will get a 4 pi from integrating over all orientations. Divided by h squared. I have essentially the two components of angular momentum. So I get essentially, the square of 2 pi I divided by beta. So this is rotations. And this is center of mass. We can see that if I take that formula, take its log divide by-- take a derivative with respect to beta. First of all, I will get this constant that is the energy of the bond state at 0 temperature. But the more interesting things are the things that I take from the derivatives of the various factors of beta. Essentially, for each factor of beta in the denominator, log Z will have a minus log of beta. I take a derivative, I will get a factor of 1 over beta. So from here, I will get 3/2 1 over beta, which is 3/2 kT. So this is the center of mass. From here, I have two factors of beta to the 1/2. So they combine to give me one factor of kT. This is for vibrations. And similarly, I have two factors of beta to the 1/2, which correspond to 1 kT for rotations. So then I say that the heat capacity at constant volume is simply-- per particle is related to d e1 by dT. And I see that that amounts to kb times 3/2 plus 1 plus 1, or I should get 7/2 kb. Per particle, which says that if you go and calculate the heat capacity of the gas in this room, divide by the number of molecules that we have-- doesn't matter whether they are oxygens or nitrogen. They would basically give the same contribution because you can see that the masses and all the other properties of the molecule do not appear in the heat capacity. That as a function of temperature, I should get a value of 7/2. So basically, C in units of kb. So I divide by kb. And my predictions is that I should see 7/2. So you go and do a measurement and what do you get? What you get is actually 5/2. So something is not quite right. We are not getting the 7/2 that we predicted. Except that I really mentioned that you are getting this measurement when you do measurements at room temperature, you get this value. So when we measure the heat capacity of the gas in this room, we will get 5/2. But if we heat it up, by the time we get to temperatures of a few thousand degrees Kelvin. So if you heat the room by a factor of 5 to 10, you will actually get the value of 7/2. And if you cool it, by the time you get to the order of 10 degrees or fewer, then you will find that the heat capacity actually goes even further. It goes all the way to 3/2. And the 3/2 is the thing that you would have predicted for a gas that had monatomic particles, no internal structure. Because then the only thing that you would have gotten is the center of mass contribution. So it seems like by going to low temperatures, you somehow freeze the degrees of freedom that correspond to vibrations and rotations of the gas. And by going to really high temperatures, you are able to liberate all of these degrees of freedom and store energy in them. Heat capacity is the measure of the ability to store heat and energy into these molecules. So what is happening? Well, by 1905, Planck Had already proposed that there is some underlying quantization for heat that you have in the black body case. And in 1905, Einstein said, well, maybe we should think about the vibrational degrees of the molecule also as being similarly quantized. So quantize vibrations. It's totally a phenomenological statement. We have to justify it later. 
But the statement is that for the case where classically we had a harmonic oscillator. And let's say in this case we would have said that its energy depends on its momentum and its position or displacement-- I guess I called it u-- through a formula such as this. Certainly, you can pick lots of values of u and p that are compatible with any value of the energy that you choose. But to get the black body spectrum to work, Planck had proposed that really what you should do is rather than thinking of this harmonic oscillator as being able to take all possible values, that somehow the values of energy that it can take are quantized. And furthermore, he had proposed that they are proportional to the frequency involved. And how did he guess that? Ultimately, it was related to what I said about black body radiation. That as you heat up the body, you will find that there's a light that comes out and the frequency of that light is somehow related to temperature and nothing else. And based on that, he had proposed that frequencies should come up in certain packages that are proportional to-- the energies of the particular frequencies should come in packages that are proportional to that frequency. So there is an integer here n that tells you about the number of these packets. And not that it really matters for what we are doing now, but just to be consistent with what we currently know with quantum mechanics, let me add the 0 point energy of the harmonic oscillator here. So then, to calculate the contribution of a system in which energy is in quantized packages, you would say, OK, I will calculate a Z1 for these vibrational levels, assuming this quantization of energy. And so that says that the possible states of my harmonic oscillator have energies that are in these units h bar omega n plus 1/2. And if I still continue to believe statistical mechanics, I would say that at a temperature t, the probability that I will be in a state that is characterized by integer n is e to the minus beta times the energy that corresponds to that integer n. And then I can go and sum over all possible energies and that would be the normalization of the probability that I'm in one of these states. So this is e to the minus beta h bar omega over 2 from the ground state contribution. The rest of it is simply a geometric series. Geometric series, we can sum very easily to get 1 minus e to the minus beta h bar omega. And the interesting thing-- or a few interesting things about this expression is that if I evaluate this in the limit of low temperatures. Well, actually, let's go first to the high temperature where beta goes to 0. So t goes to become large, beta goes to 0. Numerator goes to 1, denominator I can expand the exponential. And to lowest order, I will get 1 over beta h bar omega. Now, compare this result with the classical result that we have over here for the vibration. Contribution of a harmonic oscillator to the partition function. You can see that the mu's cancel out. I will get 1 over beta. I will get h divided by 2 pi. So if I call h divided by 2 pi to be h bar, then I will get exactly this limit. So somehow this constant that we had introduced that had dimensions of action made to make our calculations of partition function to be dimensionless will be related to this h bar that quantizes the energy levels through the usual formula of h being h bar-- h bar being h over 2 pi. So basically, this quantization of energy clearly does not affect the high temperature limit. 
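A short numerical check of that high-temperature limit (theta_v = 1000 K below is only an order-of-magnitude placeholder, not a value from the lecture): the quantized partition function approaches kT over h bar omega, i.e. T over theta_v, once T is well above theta_v.

```python
import numpy as np

def z1_vib(T, theta_v):
    """Partition function of one quantized oscillator, theta_v = hbar*omega/k_B:
    Z = exp(-theta/(2T)) / (1 - exp(-theta/T))."""
    x = theta_v / T
    return np.exp(-x / 2) / (1 - np.exp(-x))

theta_v = 1.0e3   # an order-of-magnitude vibrational temperature, for illustration
for T in [300, 1000, 3000, 10000, 30000]:
    print(T, z1_vib(T, theta_v), T / theta_v)   # last column: classical 1/(beta hbar omega)
# At high T the quantum sum approaches kT/(hbar omega); at low T it is dominated
# by the ground state factor exp(-theta_v / (2 T)).
```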
This oscillator at high temperature behaves exactly like what we had calculated classically. Yes? AUDIENCE: Is it h equals h bar over 2 pi? Or is it the other way, based on your definitions above? PROFESSOR: Thank you. Good. All right? So this is, I guess, the corresponding formula. Now, when you go to low temperature, what do you get? You essentially get the first few terms in the series. Because at the lowest temperature you get the term that corresponds to n equals to 0, and then you will get corrections from subsequent terms. Now, what this does is that it affects the heat capacity profoundly. So let's see how that happens. So the contribution of one degree of freedom to the energy in this quantized fashion is minus d log Z by d beta. So if I just take the log of this expression, the log of this expression will give this factor of minus beta h bar omega over 2 from the numerator. The derivative of that will give you this ground state energy, which is always there. And then you'll have to take the derivative of the log of what is coming out here. Taking a derivative with respect to beta will always pick out a factor of h bar omega. Indeed, it will pick out a factor of h bar omega e to the minus beta h bar omega. And then in the denominator, because I took the log, I will get this expression back. So again, in this expression, if I take the limit where beta goes to 0, what do I get? I will get this h bar omega over 2. It's always there. Expanding these results here, I will have a beta h bar omega. It will cancel this and it will give me a 1 over beta. I will get this kT that I had before. Indeed, if I am correct to the right order, I will just simply get 1 over beta. Whereas, if I go to large beta, what I get is this h bar omega over 2 plus a correction from here, which is h bar omega e to the minus beta h bar omega. And that will be reflected in the heat capacity, which is dE by dT. This h bar omega over 2 does not contribute to the heat capacity, not surprisingly. From here, I have to take derivatives with respect to temperature. They appear in the combination h bar omega over kT. So what happens is I will get something that is of the order of h bar omega. And then from here, I will get another h bar omega divided by kb T squared. I will write it in this fashion and put the kb out here. And then the rest of these objects will give me a contribution that is e to the minus h bar omega over kT divided by 1 minus e to the minus h bar omega over kT, squared. The important thing is the following-- if I plot the heat capacity that I get from one of these oscillators-- and the natural units of all heat capacities are kb, essentially. Energy divided by temperature, as kb has those units. At high temperatures, what I can see is that the energy is proportional to kT. So the heat capacity of the vibrational degree of freedom will be, in these units, going to 1. At low temperatures, however, it becomes this exponentially hard problem to create excitations. Because of that, you will get a contribution that as T goes to 0 will exponentially go to 0. So the shape of the heat capacity that you would get will be something like this. The natural way to draw this figure is that I have already made the vertical axis dimensionless. So it goes between 0 and 1. I can make the horizontal axis dimensionless by introducing a theta of vibrations, so that all of the exponential terms are of the form e to the minus this theta of vibrations over T, which means that this theta of vibration is h bar omega over kb.
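Putting the two limits together, the heat capacity of one quantized oscillator has the standard closed form C over k_B = (theta/T)^2 e^(theta/T) / (e^(theta/T) - 1)^2. Here is a small sketch evaluating it, again with theta_v = 1000 K used only as an arbitrary placeholder:

```python
import numpy as np

def c_vib_over_kB(T, theta_v):
    """Heat capacity of one quantized oscillator, in units of k_B:
    (theta/T)^2 * exp(theta/T) / (exp(theta/T) - 1)^2."""
    x = theta_v / T
    return x**2 * np.exp(x) / np.expm1(x) ** 2

theta_v = 1.0e3   # vibrational temperature, only an order-of-magnitude placeholder
for T in [10, 100, 300, 1000, 3000, 10000]:
    print(T, c_vib_over_kB(T, theta_v))
# Essentially zero at room temperature, and it approaches 1 (the classical kT
# result per oscillator) only once T is well above theta_v.
```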
That is, you tell me what the frequency of your oscillator is. I can calculate the corresponding temperature, theta. And then the heat capacity of a harmonic oscillator is this universal function: presumably at some value of T over theta that is of the order of 1, it switches from being of the order of 1 to going exponentially to 0. So basically, the dependence down here to leading order is e to the minus theta vibration over T. OK. So you say, OK, Planck has given us some estimate of what this h bar is based on looking at the spectrum of black body radiation. We can, more or less, estimate the typical energies of interactions of molecules. And from that, we can estimate what this frequency of vibration is. So we should be able to get an order of magnitude estimate of what this theta v is. And what you find is that theta v is of the order of 10 to the 3 degrees Kelvin. It depends, of course, on what gas you are looking at, et cetera. But as an order of magnitude, it is something like that. So we can now transport this curve that we have over here and more or less get this first part of the curve that we have over here. So essentially, in this picture what we have is that there are no vibrations. The vibrations have been frozen out. And here you have vibrations. Of course, in all of the cases, you have the kinetic energy of the center of mass. And presumably since we are getting the right answer at very high temperatures now, we also have the rotations. And it makes sense that essentially what happened as we go to very low temperatures is that the rotations are also frozen out. Now, that's part of the story-- actually, you would think that among all of the examples that I gave you, this last one should be the simplest thing because it's really a two-body problem. Whereas for solids, you have many things. For radiation, you have to think about the electromagnetic waves, et cetera. So you would think that, historically, this would be the one that is resolved first. And indeed, as I said, in 1905 Einstein figured out something about this. But this part dealing with the rotational degrees of freedom and quantizing them appropriately had to really wait until you had developed quantum mechanics beyond the statement that harmonic oscillators are quantized in energy. You had to know something more. So since in retrospect we do know something more, let's finish and give that answer before going on to something else. OK? So the next part of the story of the diatomic gas is quantizing rotations. So currently what I have is that there is an energy classically for rotations that is simply the kinetic energy of the rotational degrees of freedom. So there is an angular momentum L, and then there's L squared over 2I. It looks pretty much like P squared over 2M, except that the degrees of freedom for translational motion are positions. They can be all over the place. Whereas the degrees of freedom that you have to think of in terms of rotations are angles that go between 0 and 2 pi, or on the surface of a sphere, et cetera. So once we figure out how to do quantum mechanics, we find that the allowed values of this are of the form h bar squared over 2I times l, l plus 1, where l now is the number that gives you the discrete values that are possible for the square of the angular momentum. So you say OK, let's calculate a Z for the rotational degrees of freedom assuming this kind of quantization. So what I have to do, like I did for the harmonic oscillator, is I sum over all possible values of l that are allowed.
The weight of each is e to the minus beta h bar squared over 2I times l, l plus 1. Except that there is one other thing, which is that these different values of l have degeneracy that is 2l plus 1. And so you have to multiply by the corresponding degeneracy. So what am I doing over here? I have to do a sum over different values of l, contributions that are really the probability that I am in these different values of the index l-- 0, 1, 2, 3, 4, 5, 6. And I have to add all of these contributions. Now, the first thing that I will do is I ask whether the limit of high temperatures that I had calculated before is correctly reproduced or not. So I have to go to the limit where temperature is high or beta goes to 0. If beta goes to 0, you can see that going from one l to another l, you are only changing this exponent by a small amount. So what does that mean? It means that the values of what I am summing over, from one point to the next, are not really that different. And I can think of a continuous curve that goes through all of these points. So if I do that, then I can essentially replace the sum with an integral. In fact, you can systematically calculate corrections to replacing the sum with an integral mathematically, and you have a problem set that shows you how to do that. But now what I can do is I can call this combination l, l plus 1, x. And then dx will simply be 2l plus 1 dl. So essentially, the degeneracy works out precisely so that when I go to the continuum limit, whatever quantization I had for these angular momenta corresponds to the weight or measure that I would have in stepping around the l-directions. And then, this is something that I can easily do. It's just an integral dx e to the minus alpha x. The answer is going to be 1 over alpha, so the answer to this is simply 2I over beta h bar squared. So this is the classical limit of the expression that we had over here. Let's go and see what we had when we did things classically. So when we did things classically, I had two factors of h, and 2 pi's and a 4 pi. If I put them together, I can write the whole thing as 2I divided by beta h bar squared. And you can see that this is exactly what we have over there. So once more, properly accounting for the phase space measure-- products of p and q made dimensionless by this quantity h-- is equivalent to the high temperature limit that you would get in quantum mechanics where things are discretized. Yes. AUDIENCE: When you're talking about the quantum interpretations, then h bar is the precise value of Planck's constant, which can be an experimental measure. PROFESSOR: Right. AUDIENCE: But when you're talking about the classical derivations, h is just some factor that we introduced of the correct dimension. PROFESSOR: That's correct. AUDIENCE: So if you're comparing the limits of large temperatures, how can you be sure that the h bar in the two places means the same thing? PROFESSOR: So far, I haven't told you anything to justify that. So when we were doing things classically, we said that just to make things dimensionless, let's introduce this quantity that we call h. Now, I have shown you two examples where if you do things quantum mechanically properly and take the limit of going to high temperatures, you will see what the h is that you would get-- because the quantum mechanical partition functions are dimensionless quantities, right? So these are dimensionless quantities. The classical integrals have to be made dimensionless by something. And they're made dimensionless not by Boltzmann's constant, but by a Planck's constant, h bar.
And we can see that as long as we are consistent with this measure of phase space, the same constant shows up both for the case of the vibrations, for the case of the rotations. And very soon, we will see that it will also arise in the case of the center of mass. And so there is certainly something in the transcriptions that we ultimately will make between quantum mechanics and classical mechanics that must account for this. And somehow in the limit where quantum mechanics is dealing with large energies, it is indistinguishable from classical mechanics. And quantum partition functions are-- all of the countings that we do in quantum mechanics are kind of unambiguous because we are dealing with discrete levels. So if you remember the original part of the difficulty was that we could define things like entropy only properly when we had discrete levels. If we had a continuum probability distribution and if we made a change of variable, then the entropy was changed. But in quantum mechanics, we don't have that problem. We have discretized values for the different states. Probabilities will be-- once we deal with them appropriately be discretized. And all of the things here are dimensionless. And somehow they reproduce the correct classical dynamics. Quantum mechanics goes to classical mechanics in the appropriate high-energy limit. And what we find is that what happens is that this shows up. If you like, another way of achieving-- why is there this correspondence? In classical statistical mechanics, I emphasize that I should really write h in units of p and q. And it was only when I calculated partition functions in coordinates p and q that were canonically conjugate that I was getting results that were meaningful. One way of constructing quantum mechanics is that you take the Hamiltonian and you change these into operators. And you have to impose these kinds of commutation relations. So you can see that somehow the same prescription in terms of phase space appears both in statistical mechanics, in calculating measures of partition function, in quantum mechanics. And not surprisingly, you have introduced in quantum mechanics some unit for phase space p, q. It shows up in classical mechanics as the quantity [INAUDIBLE]. But there is, indeed, a little bit more work than I have shown you here that one can do. Once we have developed the appropriate formalism for quantum statistical mechanics, which is this [INAUDIBLE] performed and appropriate quantities defined for partition functions, et cetera, in quantum statistical mechanics that we will do in a couple of lectures. Then if you take the limit h bar goes to 0, you should get the classical integration over phase space with this factor of h showing up. But right now, we are just giving you some heuristic response. If I go, however, in the other limit, where beta is much larger than 1, what do I get? Basically, then all of the weight is going to be in the lowest energy level, 0, 1. And then the rest of them will be exponentially small. I cannot replace the sum with an integral, so basically I will get a contribution that starts with 1 for l equals to 0. And then I will get 3e to the minus beta h bar squared divided by 2I. l being 1, this will give me 1 times 2. So I will have a 2 here. And then, higher-order terms. So once you have the partition function, you go through the same procedure as we described before. You calculate the energy, which is d log Z by d beta. What do you get? 
Again, in the high temperature limit you will get the same answer as before. So as beta goes to 0, you will get kT. If you go to the low temperature limit-- well, let's be more precise. What do I mean by low temperatures? Beta larger than what? Clearly, the combination that is appearing everywhere is this beta h bar squared over 2I, which gets its units of 1 over temperature from beta. So I can introduce a theta for rotations to make this manifestly dimensionless. So the theta that goes with rotations is h bar squared over 2I kB. And so what I mean by going to the low temperatures is that I go for temperatures that are much less than the theta of these rotations. And then what happens is that essentially this state will occur with exponentially small probability and will contribute to the energy an amount that is of the order of h bar squared over 2I times 2. That's the energy of the l equals to 1 state. There are three of them, and they occur with probability e to the minus theta of rotation divided by T, times a factor of 2. All of those factors are not particularly important. Really, the only thing that is important is that if I look now at the rotational heat capacity, which again should properly have units of kB, as a function of temperature. Well, temperature I have to make dimensionless by dividing by this rotational temperature. I say that at high temperature, I get the classical result back. So basically, I will get to 1 at high temperatures. At low temperatures, again I have this situation that there is a gap in the allowed energies. So there is the lowest energy, which is 0. The next one, the first type of rotational mode that is allowed has a finite energy that is larger than that by an amount that is of the order of h bar squared over I. And if I am at these temperatures that are less than this theta of rotation, I simply don't have enough energy from thermal fluctuations to get to that level. So the occupation of that level will be exponentially small. And so I will have a curve that will, in fact, look something like this. So again, you basically go over at a temperature of the order of 1 from heat capacity that is order of 1 to heat capacity that is exponentially small when you get to temperatures that are lower than this rotational temperature. AUDIENCE: Is that over-shooting, or is that-- PROFESSOR: Yes. So you have a problem set where you calculate the next correction. So there is the summation replacing the sum with an integral. This gives you this to the first order, and then there's a correction. And you will show that the correction is such that the approach to one for the case of the rotational heat capacity is actually from above. Whereas, for the vibrational heat capacity, it is from below. So there is, indeed, a small bump. OK? So you can ask, well, I know the typical size of one of these oxygen molecules. I know the mass. I can figure out what the moment of inertia I is. I put it over here and I figure out what the theta of rotation is. And you find that, again, as a matter of order of magnitudes, theta of rotations is of the order of 10 degrees K. So this kind of accounts for why, when you go to sufficiently low temperatures and look at the heat capacity of the gas in this room, we see that essentially the rotational degrees of freedom are also frozen out. OK. So now let's go to the second item that we have, which is the heat capacity of the solid. So what do I mean? So this is item 2, heat capacity of solid.
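The rotational heat capacity curve just described, including the small overshoot raised in the question, can be sketched numerically from the partition function. A rough Python sketch, in units where kB and theta of rotation are set to 1 (illustrative assumptions), using finite differences on log Z:

import numpy as np

theta_rot = 1.0                      # rotational temperature hbar^2 / (2 I kB), used as the unit of T

def log_z(T, l_max=500):
    l = np.arange(l_max + 1)
    return np.log(np.sum((2 * l + 1) * np.exp(-theta_rot * l * (l + 1) / T)))

def energy(T, dT=1e-4):
    # <E> = kB T^2 d(log Z)/dT, with kB = 1
    return T**2 * (log_z(T + dT) - log_z(T - dT)) / (2 * dT)

def heat_capacity(T, dT=1e-4):
    return (energy(T + dT) - energy(T - dT)) / (2 * dT)

for T in [0.2, 0.5, 0.8, 1.0, 2.0, 5.0]:
    # exponentially small at low T, slightly above 1 around T of order theta_rot, then approaching 1
    print(T, heat_capacity(T))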
And you measure heat capacities for some solid as a function of temperature. And what you find is that the heat capacity has a behavior such as this. So it seems to vanish as you go to lower and lower temperatures. So what's going on here? Again, Einstein looked at this and said, well, it's another case of the story of vibrations and some things that we have looked at here. And in fact, I really don't have to do any calculation. I'll do the following. Let's imagine that this is what we have for the solid. It's some regular arrangement of atoms or molecules. And presumably, this is the situation that I have at 0 temperature. Everybody is sitting nicely where they should be to minimize the energy. If I go to finite temperature, then these atoms and molecules start to vibrate. And he said, well, basically, I can estimate the frequencies of vibrations. And what I will do is I will say that each atom is in a cage formed by its neighbors. That is, this particular atom here, if it wants to move, it finds that its distance to the neighbors has been changed. And if I imagine that there are kind of springs that are connecting this atom only to its neighbors, moving around there will be some kind of a restoring force. So it's like it is sitting in some kind of a harmonic potential. And if it tries to move, it will experience this restoring force. And so it will have some kind of a frequency. So each atom vibrates at some frequency. Let's call it omega E. Now, in principle, in this picture if this cage is not exactly symmetric, you may imagine that oscillations in the three different directions could give you different frequencies. But let's ignore that and let's imagine that the frequency is the same in all of these directions. So what have we done? We have reduced the problem of the excitation energy that you can put in the atoms of the solid to be the same now as 3N harmonic oscillators of frequency omega. Why 3N? Because each atom essentially sees a restoring force in three directions. And forgetting about boundary effects, it's basically three per particle. So you would have said that the heat capacity that I would calculate per particle in units of kB should essentially be exactly what we have over here, except that I multiply by 3 because each particle has 3 possible degrees of freedom. So all I need to do is to take that green curve and multiply it by a factor of 3. And indeed, the limiting value that you get over here is 3. Except that if I just take that green curve and superpose it on this, what I will get is something like this. So this is 3 times harmonic oscillator. What I mean by that is I try to sort of do my best to match the temperature at which you go from one to the other. But then what I find is that as we had established before, the green curve goes to 0 exponentially. So the decay goes as e to the minus some theta associated with this frequency-- let's call it theta Einstein-- divided by T. And so the prediction of this model is that the heat capacities should vanish very rapidly with this exponential form. Whereas, what is actually observed in the experiment is that it is going to 0 proportional to T cubed, which is a much slower type of decay. OK? AUDIENCE: That's negative [INAUDIBLE]? PROFESSOR: As T goes to 0, the heat capacity goes to 0. T to the third power. So it's the limit-- did I make a mistake somewhere else? All right. So what's happening here? OK, so what's happening is the following.
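The heat capacity of this Einstein picture of 3N identical oscillators has a standard closed form, and a small Python sketch, in units where kB and theta Einstein are 1 (illustrative assumptions), shows how fast the exponential low-temperature drop is compared with any T cubed behavior:

import numpy as np

def c_einstein(T, theta_E):
    # heat capacity per atom in units of kB: three oscillators, each contributing
    # (theta_E/T)^2 e^(theta_E/T) / (e^(theta_E/T) - 1)^2
    x = theta_E / T
    return 3 * x**2 * np.exp(x) / (np.exp(x) - 1)**2

theta_E = 1.0                        # Einstein temperature, used as the unit of temperature
for T in [3.0, 1.0, 0.3, 0.1, 0.05]:
    # the third column (T cubed, unnormalized) is only there to compare shapes of the decay
    print(T, c_einstein(T, theta_E), T**3)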
In some average sense, it is correct that if you try to oscillate some atom in the crystal, it's going to have some characteristic restoring force. The characteristic restoring force will give you some corresponding typical scale for the frequencies of the vibrations. Yes? AUDIENCE: Is this the historical progression? PROFESSOR: Yes. AUDIENCE: I mean, it seems interesting that they would know that-- like this cage hypothesis is very good, considering where a quantum [INAUDIBLE] exists. I don't understand how that's the logic based-- if what we know is the top board over there, the logical progression is that you would have-- I don't know. PROFESSOR: No. At that time, the proposal was that essentially if you have oscillator of frequency omega, its energy is quantized in multiples of omega. So that's really the only aspect of quantum mechanics. So I actually jumped the historical development where I gave you the rotational degrees of freedom. So as I said, historically this was resolved last in this part because they didn't know what to do with rotations. But now I'm saying that you know about rotations, you know that the heat capacity goes to 0. You say, well, solid is composed. The way that you put heat into the system, enhance its heat capacity, is because there is kinetic energy that you put in the atoms of the solid. And as you try to put kinetic energy, there is this cage model and there's restoring force. The thing that is wrong about this model is that, basically, if you ask how easy it is to give energy to the system, if rather than having one frequency you have multiple frequencies, then at low temperatures you would put energy in the lower frequency. Because the typical scale we saw for connecting temperature and frequency, they are kind of proportional to each other. So if you want to go to low temperature, you are bound to excite things that have lower frequency. So the thing is that it is true that there is a typical frequency. But the typical frequency becomes less and less important as you go to low temperature. The issue is, what are the lowest frequencies of excitation? And basically, the correct picture of excitations of the solid is that you bang on something and you generate these sound waves. So what you have is that oscillations or vibrations of solid are characterized by wavelength and wave number k, 2 pi over lambda. So if I really take a better model of the solid in which I have springs that connect all of these things together and ask, what are the normal modes of vibration? I find that the normal modes can be characterized by some wave number k. As I said, it's the inverse of the wavelength. And frequency depends on wave number. In a manner that when you go to 0k, frequency goes to 0. And why is that? Essentially, what I'm saying is that if you look at particles that are along a line and may be connected by springs. So a kind of one-dimensional version of a solid. Then, the normal modes are characterized by distortions that have some particular wavelength. And in the limit where the wavelength goes to 0, essentially-- Sorry, in the limit where the wavelength goes to infinity or k goes to 0, it looks like I am taking all of the particles and translating them together. And if I take the entire solid here and translate it, there is no restoring force. So omega has to go to 0 as your k goes to 0, or wavelength goes to infinity. And there is a symmetry between k and minus k, in fact, that forces the restoring force to be proportional to k squared. 
And when you take the square root of that, you get the frequency. You always get a linear behavior as k goes to 0. So essentially, that's the observation that whatever you do with your solid, no matter how complicated, you have sound modes. And sound modes are things that happen in the limit where you have long wavelengths and there is a relationship between omega and k through some kind of velocity of sound. Now, to be precise there are really three types of sound waves. If I choose the direction k along which I want to create an oscillation, the distortions can be either along that direction or perpendicular to that. They can either be longitudinal or transverse. So there could be one or two other branches. So there could, in principle, be different straight lines as k goes to 0. And the other thing is that there is a shortest wavelength that you can think about. So if these particles are a distance a apart, there is no sense in going to wave numbers that are larger than pi over a. So you have some limit to these curves. And indeed, when you approach the boundary, this linear dependence can shift and change in all kinds of possible ways. And calculating the frequencies inside one of these units, which is called a Brillouin zone, is a nice thing to do using the methods of solid state physics. And you've probably seen that. And there is a whole spectrum of frequencies as a function of wave number that correctly characterizes the solid. So it may be that somewhere in the middle of this spectrum is a typical frequency omega E. But the point is that as you go to lower and lower temperatures, because of these factors of e to the minus beta h bar omega, you can see that as you go to lower and lower temperature, the only things that get excited are omegas that are also going to 0 proportionately to kT. So I can draw a line here that corresponds to frequencies that are of the order of kT over h bar. All of the harmonic oscillators that have these larger frequencies that occur at short wavelengths are unimportant. They're kind of frozen, just like the vibrations of the oxygen molecules in this room are frozen. You cannot put energy in them. They don't contribute to heat capacity. But all of these long wavelength modes down here have frequencies that go to 0. Their excitation probability is large. And it is, indeed, these long wavelength modes that are easy to excite and contribute to the heat capacity. I'll do maybe the precise calculation next time, but even within this picture we can figure out why the answer should be proportional to T cubed. So what I need to do, rather than counting all harmonic oscillators-- the factor of 3N-- I have to count how many oscillators have frequencies that are less than this kT over h bar. So I claim that the number of modes with frequency less than kT over h bar goes like kT over h bar v, cubed, times the volume V, where v is the sound velocity. Essentially, what I have to do is to do a summation over all k that is less than some k max. This k max is set by the condition that v k max is of the order of kT over h bar. So this k max is of the order of kT over h bar v. So actually, to be more precise, I have to put the sound velocity v in the expression over there. So I have to count all of the modes. Now, the separation between these modes-- if you have a box of size L-- is 2 pi over L. So maybe we will discuss that later on. But the summations over k you will always replace with integrations over k times the density of states, which is V divided by (2 pi) cubed. So this has to go between 0 and k max.
And so this is proportional to V k max cubed, which is what I wrote over there. So as I go to lower and lower temperature, there are fewer and fewer oscillators. The number of those oscillators grows like T cubed. Each one of those oscillators is fully excited, has energy of the order of kT, and contributes 1 unit to the heat capacity. Since the number of oscillators goes to 0 as T cubed, the heat capacity that they contribute also goes to 0 as T cubed. So you don't really need to know-- this is actually an interesting thing to ponder. So rather than doing the calculations, maybe just think about this. That somehow the solid could be arbitrarily complicated. So it could be composed of molecules that have some particular shape. They are forming some strange lattice of some form, et cetera. And given the complicated nature of the molecules, the spectrum that you have for potential frequencies that a solid can take, because of all of the different vibrations, et cetera, could be arbitrarily complicated. You can have kinds of oscillations such as the ones that I have indicated. However, if you go to low temperature, you are only interested in vibrations that are very low in frequency. Vibrations that are very low in frequency must correspond to deformations that are very long wavelength. And when you are looking at things that are long wavelength, this is, again, another thing that is statistical in character. That is, you are here looking at things that span thousands of atoms or molecules. And as you go to lower and lower temperature, they span more and more atoms and molecules. And so again, some kind of averaging is taking place. All of the details, et cetera, wash out. You really see some global characteristic. The global characteristic that you see is set by this symmetry. Just the fact that when I go to exactly k equals to 0, I am translating. I have 0 frequency. So when I'm doing something that is long wavelength, the frequency should somehow go to 0 in proportion to the corresponding wave number. So that's just a statement of continuity if you like. Once I have made that statement, then it's just a calculation of how many modes are possible. The number of modes will be proportional to T cubed. And I will get this T cubed law irrespective of how complicated the solid is. All of the solids will have the same T cubed behavior. The place where they cross over from the classical behavior to this quantum behavior will depend on the details of the solid, et cetera. But the low temperature law, this T cubed law, is something that is universal. OK, so next time around, we will do this calculation in more detail, and then see also its connection to blackbody radiation. |
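Both pieces of this argument, the linear small-k dispersion of sound modes and the T cubed mode counting, can be sketched in a few lines of Python. This is a rough illustration only: a one-dimensional chain with assumed spring constant K, mass m, and spacing a stands in for the real solid, and the counting ignores the three polarizations and all numerical prefactors.

import numpy as np

# Part 1: dispersion of a one-dimensional chain of masses connected by springs
K, m, a = 1.0, 1.0, 1.0                      # illustrative spring constant, mass, spacing

def omega(k):
    # normal-mode frequency; close to v*k at small k, bending over near the zone edge pi/a
    return 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))

v = a * np.sqrt(K / m)                       # sound velocity = slope of the dispersion as k -> 0
for k in [0.01, 0.1, 1.0, np.pi / a]:
    print(k, omega(k), v * k)

# Part 2: counting the modes with omega = v k below kT/hbar, density of states V/(2 pi)^3
kB = hbar = 1.0
V = 1.0e6                                    # illustrative volume

def n_excited(T):
    k_max = kB * T / (hbar * v)
    return V / (2 * np.pi)**3 * (4 * np.pi / 3) * k_max**3

for T in [1.0, 2.0, 4.0]:
    # each excited mode contributes about kB, so the heat capacity scales like T^3:
    # doubling T multiplies the count by 8
    print(T, n_excited(T))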
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 15_Interacting_Particles_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So we said that the task of statistical mechanics is to assign probabilities to different microstates given that we have knowledge of the macrostate. And the most kind of simple logical place to start was in the microcanonical ensemble where the macrostate that we specified was one in which there was no exchange of work or heat with the surroundings so that the energy was constant, and the parameters, such as x and N, that account for chemical and mechanical work were fixed also. Then this assignment was, we said, like counting how many faces the dice has, and saying all of them are equally likely. So we would say that the probability here of a microstate is of the form 0 or 1 depending on whether the energy of that microstate is or is not the right energy that is listed over here. And as any probability, it has to be normalized. So we had this 1 over omega. And we also had a rule for converting probabilities to entropies, which in dimensionless form-- that is, if we divide by kB-- was simply minus the expectation value of the log of the probability, this probability being uniformly 1 over omega. This simply gave us log of omega E, x, N. Once we had the entropy as a function of E, x, N, we also identified the derivatives of this quantity in equilibrium. And the partial derivative with respect to energy was identified as 1 over T. So for systems that were in equilibrium with each other, this derivative dS by dE had to be the same for all of them. And because of mechanical stability, we could identify the next derivative with respect to x. That was minus J/T. And with respect to N by identifying a corresponding thing, we would get this. So from here, we can proceed and calculate thermodynamic properties using these microscopic rules, as well as probabilities in the entire space of microstates that you have. Now, the next thing that we did was to go and look at the different ensemble, the canonical ensemble, in which we said that, again, thermodynamically, I choose a different set of variables. For example, I can replace the energy with temperature. And indeed, the characteristic of the canonical ensemble is that rather than specifying the energy, you specify the temperature. But there's still no chemical or mechanical work. So the other two parameters are kept fixed also. And then the statement that we had was that by putting a system in contact with a huge reservoir, we could ensure that it is maintained at some temperature. And since system and reservoir were jointly microcanonical, we could use the probabilities that we had microcanonically. We integrated over the degrees of freedom of the reservoir that we didn't care about. And we ended up with the probability of a microstate in the canonical ensemble, which was related to the energy of that microstate by the form exponential of minus beta H, where we introduced beta to be 1 over kT. And this probability, again, had to be normalized. So the normalization we called Z.
And Z is attained by summing over all of the microstates e to the minus beta H of the microstate, where this could be an integration over the entire phase space if you're dealing with continuous variables. OK, now the question was, thermodynamically, these quantities T and E are completely exchangeable ways of identifying the same equilibrium state, whereas what I have done now is I have told you what the temperature is. But the energy of the system is a random variable. And so I can ask, what is the probability that I look at my system and I find a particular energy E? So how can I get that? Well, that probability, first of all, has to come from a microstate that has the right energy. And so I will get e to the minus beta E divided by Z, which is the probability of getting that microstate that has the right energy. But I only said something about the energy. And there are a huge number of microstates, as we've seen, that have the same energy. So I could have picked from any one of those microstates, their number being omega of E, the omega that we had identified before. So since omega can be written as the exponential of S over kB, this you can also think of as being proportional to the exponential of minus beta times E minus T S, where S is the entropy that I would get for that energy according to the formula above, divided by Z. Now, we said that the quantity that I'm looking at here in the numerator has a dependence on the size of the system that is extensive. Both the entropy and energy we expect to be growing proportionately to the size of the system. And hence, this exponent also grows proportionately to the size of the system. So I expect that if I plot this probability as a function of energy, this probability to get energy E, it would be one of those functions. It's certainly positive. It's a probability. It has an exponential dependence. So there's a part that maybe from the density of states grows exponentially. e to the minus beta E will exponentially kill it. And maybe there is, because of this competition, one or potentially more maxima. But if there are more maxima locally, I don't really care. Because one will be exponentially larger than the other. So presumably, there is some location corresponding to some E star where the probability is maximized. Well, how should I characterize the energy of the system? Should I pick E star, or given this probability, should I look at maybe the mean value, the average of H? How can I get the average of H? Well, what I need to do is to sum over all the microstates H of the microstate e to the minus beta H divided by sum over microstates e to the minus beta H, which is the normalization. The denominator is of course the partition function. The numerator can be obtained by taking a derivative of this with respect to beta, up to a minus sign. So what this is is minus d log Z by d beta. So if I have calculated Z, I can potentially go through this procedure and maybe calculate where the mean value is. And is that a better representation of the energy of the system than the most likely value, which was E star? Well, let's see how much the energy fluctuates. Basically, we saw that if I repeat this procedure many times, I can see that the n-th moment is minus 1 to the n, times 1 over Z, times the n-th derivative of Z with respect to beta. And hence the partition function, by expanding it in powers of beta, will generate for me higher and higher moments. And we therefore concluded that the cumulants would be obtained by the same procedure, except that I will replace this by log Z.
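This statement, that derivatives of log Z with respect to beta generate the cumulants of the energy, is easy to check numerically on a toy system. A minimal Python sketch with a few arbitrary, made-up energy levels (the levels and the value of beta are assumptions for illustration):

import numpy as np

E_levels = np.array([0.0, 1.0, 2.5, 4.0])        # arbitrary illustrative energy levels

def log_z(beta):
    return np.log(np.sum(np.exp(-beta * E_levels)))

beta, d = 1.3, 1e-4
mean_E = -(log_z(beta + d) - log_z(beta - d)) / (2 * d)                 # first cumulant: -d(log Z)/d(beta)
var_E = (log_z(beta + d) - 2 * log_z(beta) + log_z(beta - d)) / d**2    # second cumulant: d^2(log Z)/d(beta)^2

# direct check from the Boltzmann weights
p = np.exp(-beta * E_levels)
p /= p.sum()
print(mean_E, np.sum(p * E_levels))
print(var_E, np.sum(p * E_levels**2) - np.sum(p * E_levels)**2)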
So this will be the n-th derivative of log Z with respect to beta. So the variance H squared c is a second derivative of log Z. The first derivative gives me the first cumulant, which was the mean. So this is going to give me d by d beta with a minus sign of the expectation value of H, which is the same thing as kBT squared, because the derivative of 1 over kT is 1 over T squared times derivative with respect to T. And it goes to the other side, to the numerator. And so then I have the derivative of this object with respect to temperature, which is something like a heat capacity. But most importantly, the statement is that all of these quantities are things that are of the order of the size of the system. And just like we did in the central limit theorem, with the addition of random variables, you have a situation that is very much like that. The distribution, in some sense, is going to converge more and more towards the Gaussian dominated by the first and second cumulant. And in fact, even the second cumulant we see is of the order of square root of n. And hence the fluctuations between these two quantities, which are each one of them order of n, is only of the order of square root of n. And when the limit of n goes to infinity, we can ignore any such difference. We can essentially identify either one of them with the energy of the system thermodynamically. And then this quantity, if I identify this with energy of the system, is simply the usual heat capacity of constant x. So the scale of the fluctuations over here is set by square root of kBT squared the heat capacity. By the way, which is also the reason that heat capacities must be positive. Because statistically, the variances are certainly positive quantities. So you have a constraint that we had seen before on the sign emerging in its statistical interpretation. What I did over here was to identify an entropy associated with a particular form of the probability. What happens if I look at an entropy with this probability that I have over here? OK, so this is a probability phase space. What is its entropy? So S over k is expectation value of log p. And what is log of p? Well, there is log of this minus beta H. So what I will have, because of the change in sign-- beta expectation value of H. And then I have minus log Z here. The sign changes, and I will get plus log Z. So yeah, that's correct. If I were to rearrange this, what do I get? I can take this minus-- let's see. Yeah, OK, so if I take this to the other side, then I will get that log Z equals-- this goes to the other side-- minus beta. Expectation value of H we are calling E. And then I have multiplying this by beta. The kB's disappear, and I will get minus TS. So we see that I can identify log Z with the combination E minus TS, which is the Helmholtz free energy. So log Z, in the same way that the normalization here that was omega, gave us the entropy. The normalization of the probability in the canonical ensemble, which is also called the partition function if I take its log, will give me the free energy. And I can then go and compute various thermodynamic quantities based on that. There's an alternative way of getting the same result, which is to note that actually the same quantity is appearing over here, right? And really, I should be evaluating the probability here at the maximum. Maximum is the energy of the system. So if you like, I can call this variable over here epsilon. This is the probability of epsilon. 
And it is only when I'm at the maximum that I can replace this epsilon with E. Then since with almost probability 1, I'm going to see this state and none of the other states, because of this exponential dependence. When this expression is evaluated at this energy, it should give me probability 1. And so you can see again that Z has to be-- so basically what I'm saying is that this quantity has to be 1 so that you get this relationship back from that perspective also. AUDIENCE: Question. PROFESSOR: Yes. AUDIENCE: So I agree that if in order to plug in d, mean energy, into that expression, you would get a probability of 1 at that point. But because even though the other energies are exponentially less probable, they strictly speaking aren't 0 probability, are they? PROFESSOR: No. AUDIENCE: So how does this get normalized? How does this probability expression get normalized? PROFESSOR: OK, so what are we doing? So Z, I can also write it as the normalization of the energy. So rather than picking the one energy that maximizes things, you say, you should really do this, right? Now, I will evaluate this by the saddle point method. The saddle point method says, pick the maximum of this. So I have e to the minus beta F evaluated at the maximum. And then I have to integrate over variations around that maximum. Variations around that maximum I have to expand this to second order. And if I really correctly expand this to second order, I will get delta E squared divided by this 2kTCx, because we already established what this variance is. I can do this Gaussian integration. I get e to the minus beta F of e star. And then the variance of this object is going to give me root of 2 pi kTC of x-- T-square. So what do I mean when I say that this quantity is Z? Really, the thing that we are calculating always in order to make computation is something like a free energy, which is log Z. So when I take the log, log of Z is going to be this minus beta F star S. But you're right. There was a weight. All of those things are contributing. How much are they contributing? 1/2 log of 2 pi kBT squared C. Now, in the limit of large N, this is order of N. This is order of log N. And I [INAUDIBLE]. So it's the same saddle point. The idea of saddle point was that there is a weight. But you can ignore it. There maybe another maximum here. You say, what about that? Well, that will be exponentially small. So everything I keep emphasizing only works because of this N goes to infinity limit. And it's magical, you see? You can replace this sum with just one, the maximum. And everything is fine and consistent. Yes. AUDIENCE: Not to belabor this point, but if you have an expected return for catastrophe where this outlier causes an event that brings the system down, couldn't that chase this limit in the sense that as that goes to 0, that still goes to infinity, and thus you're-- you understand what I'm saying. If an outlier causes this simulation-- that's my word-- causes this system to crumble, then-- so is there a paradox there? PROFESSOR: No, there is here the possibility of a catastrophe in the sense that all the oxygen in this room could go over there, and you and I will suffocate after a few minutes. It's possible, that's true. You just have to wait many, many ages of the universe for that to happen. Yes. AUDIENCE: So when you're integrating, that introduces units into the problem. So we have to divide by something to [INAUDIBLE]. 
PROFESSOR: Yes, and when we are doing this for the case of the ideal gas, I will be very careful to do that. But it turns out that those issues, as far as various derivatives are concerned, will not make too much difference. But when we are looking at the specific example, such as the ideal gas, I will be careful about that. So we saw how this transition occurs. But when we were looking at the case of thermodynamical descriptions, we looked at a couple of other microstates. One of them was the Gibbs canonical. And what we did was we said, well, let's allow now some work to take place on the system, mechanical work. And rather than saying that, say, the displacement x is fixed, I allow it to vary. But I will say what the corresponding force is. But let's keep the number of particles fixed. So essentially, the picture that we have is that somehow my system is parametrized by some quantity x. And I'm maintaining the system at fixed value of some force J. So x is allowed potentially to find the value that is consistent with this particular J that I impose on the system. And again, I have put the whole system in contact with the reservoir temperature T so that if I now say that I really maintain the system at variable x, but fix J, I have to put this spring over there. And then this spring plus the system is jointly in a canonical perspective. And what that really means is that you have to keep track of the energy of the microstate, as well as the energy that you extract from the spring, which is something like J dot x. And since the joint system is in the canonical state, it would say that the probability for the joint system, which has T and J and N specified, but I don't know the microstate, and I don't know the actual displacement x, this joint probability is canonical. And so it is proportional to e to the minus beta H of the microstate plus this contribution Jx that I'm getting from the other degree of freedom, which is the spring, which keeps the whole thing at fixed J. And I have to divide by some normalization that in addition includes integration over x. And this will depend on T, J, N. So in this system, both x and the energy of the system are variables. They can potentially change. What I can now do is to either characterize like I did over here what the probability of x is, or go by this other route and calculate what the entropy of this probability is, which would be minus the log of the corresponding probability. And what is the log in this case? I will get beta expectation value of H. And I will get minus beta J expectation value of x. And then I will get plus log of this Z tilde. So rearranging things, what I get is that log of this Z tilde is minus beta. I will call this E like I did over there. Minus J-- I would call this the actual thermodynamic displacement x. And from down here, the other side, I will get a factor of TS. So this should remind you that we had called a Gibbs free energy the combination of E minus TS minus Jx. And the natural variables for this G were indeed, once we looked at the variation DE, T. They were J, and N that we did not do anything with. And to the question of, what is the displacement, since it's a random variable now, I can again try to go to the same procedure as I did over here. I can calculate the expectation value of x by noting that this exponent here has a factor of beta Jx. So if I take a derivative with respect to beta J, I will bring down a factor of x. 
So if I take the derivative of log of this Z tilde with respect to beta J, I would generate what the mean value is. And actually, maybe here let me show you that dG was indeed what I expected. So dG would be dE. dE has a TdS. This will make that into a minus SdT. dE has a Jdx. This will make it minus xdJ. And then I have mu dN. So thermodynamically, you would have said that x is obtained as a derivative of G with respect to J. Now, log Z is something that is like beta G. At fixed temperature, I can remove these betas. What do I get? This is the same thing as dG by dJ. And I seem to have lost a sign somewhere. OK, log Z was minus G. So there's a minus here. Everything is consistent. So you can play around with things in multiple ways, convince yourself, again, that in this ensemble, I have fixed the force, x's variable. But just like the energy here was well defined up to something that was square root of N, this x is well defined up to something that is of the order square root of N. Because again, you can second look at the variance. It will be related to two derivatives of log G, which would be related to one derivative of x. And ultimately, that will give you something like, again, kBT squared d of x by dJ. Yes. AUDIENCE: Can you mention again, regarding the probability distribution? What was the idea? PROFESSOR: How did I get this probability? AUDIENCE: Yes. PROFESSOR: OK, so I said that canonically, I was looking at a system where x was fixed. But now I have told you that J-- I know what J is, what the force is. And so how can I make sure that the system is maintained at a fixed J? I go back and say, how did I ensure that my microcanonical system was at a fixed temperature? I put it in contact with a huge bath that had the right temperature. So here, what I will do is I will connect the wall of my system to a huge spring that will maintain a particle [? effect ?] force J on the system. When we do it for the case of the gas, I will imagine that I have a box. I have a piston on top that can move up and down. I put a weight on top of it. And that weight will ensure that the pressure is at some particular value inside the gas. But then the piston can slide up and down. So a particular state of the system I have to specify where the piston is, how big x is, and what is the microstate of the system. So the variables that I'm not sure of are mu and x. Now, once I have said what mu and x is, and I've put the system in contact with a bath at temperature T, I can say, OK, the whole thing is canonical, the energy of the entire system composed of the energy of this, which is H of mu, and the energy of the spring, which is Jx. And so the canonical probability for the joint system is composed of the net energy of the two. And I have to normalize it. Yes. AUDIENCE: Are you taking x squared to be a scalar, so that way the dx over dJ is like a divergence? Or are you taking it to be a covariance tensor? PROFESSOR: Here, I have assumed that it is a scalar. But we can certainly do-- if you want to do it with a vector, then I can say something about positivity. So I really wanted this to be a variance and positive. And so if you like, then it would be the diagonal terms of whatever compressibility you have. So you certainly can generalize this expression to be a vector. You can have xI xJ, and you would have d of xI with respect to JJ or something like this. But here, I really wanted the scalar case. Everything I did was scalar manipulation. And this is now the variance of something and is positive. 
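A tiny numerical illustration of this identity, that the mean displacement is the derivative of log Z tilde with respect to beta J, for an assumed toy system with a single harmonic coordinate (all values below are made up for illustration and are not from the lecture):

import numpy as np

# a single "spring" coordinate x with assumed internal energy H(x) = x^2/2,
# held at fixed force J and temperature 1/beta; x is discretized on a grid for the numerics
x = np.linspace(-10.0, 10.0, 4001)
H = 0.5 * x**2

def log_z_tilde(beta, J):
    return np.log(np.sum(np.exp(beta * (J * x - H))))

beta, J, d = 2.0, 0.7, 1e-5
# <x> from the derivative of log Z~ with respect to (beta J)
mean_x_from_derivative = (log_z_tilde(beta, J + d) - log_z_tilde(beta, J - d)) / (2 * beta * d)

# <x> computed directly from the probability weights
w = np.exp(beta * (J * x - H))
p = w / w.sum()
print(mean_x_from_derivative, np.sum(p * x))     # both close to J, the Hooke's-law displacement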
OK, there was one other ensemble, which was the grand canonical. So here, I went from the energy to temperature. I kept N fixed. I didn't make it into a chemical potential. But I can do that. Rather than having fixed number, I can have fixed chemical potential. But then I can't allow the other variable to be J. It has to be x. Because as we have discussed many times, at least one of them has to be extensive. And then you can follow this procedure that you had here. Chemical work is just an analog of mechanical work mathematically. So now I would have the probability that I have specified this set of variables. But now I don't know what my microstate is. Actually, let me put mu s. Because I introduced a chemical potential mu. So mu sub s is the microstate of the system. I don't know how many particles are in that microstate. The definition of mu s would potentially cover N. Or I can write it explicitly. And then, what I will have is e to the beta mu N minus beta H of the microstate's energy divided by the normalization, which is the grand partition function Q. So this was the Gibbs partition function. This is a function of T, x at the chemical potential. And the analog of all of these expressions here would exist. So the average number in the system, which now we can use as the thermodynamic number, as we've seen, would be d of log of Q with respect to d beta mu. And you can look at the variances, et cetera. Yes. AUDIENCE: So do you define, then, what the microstate of the system really is if the particle number is [INAUDIBLE]? PROFESSOR: OK, so let's say I have five particles or six particles in the system. It would be, in the first case, the momentum coordinates of five particles. In the next case, in the momentum, I have coordinates of six particles. So these are spaces, as it has changed, where the dimensionality of the phase space is also being modified. And again, for the ideal gas, we will explicitly calculate that. And again, you can identify what the log of this grand partition is going to be. It is going to be minus beta E minus TS. Rather than minus Jx, it will be minus mu N. And this combination is what we call the grand potential g, which is E minus TS minus mu N. You can do thermodynamics. You can do probability. Essentially, in the large end limit, and only in the large end limit, there's a consistent identification of most likely states according to the statistical description and thermodynamic parameters. Yes. AUDIENCE: Can one build a more general ensemble or description where, say, J is not a fixed number, but I know how it varies? I know that it's subject to-- say the spring is not exactly a Hookean spring. It's not linear. The proportion is not linear. PROFESSOR: I think you are then describing a [? composing ?] system, right? Because it may be that your system itself has some nonlinear properties for its force as a function of displacement. And most generally, it will. Very soon, we will look at the pressure of a gas, which will be some complicated function of its density-- does not necessarily have to be linear. So the system itself could have all kinds of nonlinear dependencies. But you calculate the corresponding force through this procedure. And it will vary in some nonlinear function of density. If you're asking, can I do something in which I couple this to another system, which has some nonlinear properties, then I would say that really what you're doing is you're putting two systems together. 
And you should be doing separate thermodynamical calculations for each system, and then put the constraint that the J of this system should equal the J of the other system. AUDIENCE: I don't think that's quite what I'm getting at. I'm thinking here, the J is not given by another thermodynamical system. We're just applying it-- like we're not applying thermodynamics to whatever is asserting to J. PROFESSOR: No, I'm trying to mimic thermodynamics. So in thermodynamics, I have a way of describing equilibrium systems in terms of a certain set of variables. And given that set of variables, there are conjugate variables. So I'm constructing something that is analogous to thermodynamics. It may be that you want to do something else. And my perspective is that the best way to do something else is to sort of imagine different systems that are in contact with each other. Each one of them you do thermodynamics, and then you put equilibrium between them. If I want to solve some complicated system mechanically, then I sort of break it down into the forces that are acting on one, forces that are acting on the other, and then how this one is responding, how that one is responding. I don't see any advantage to having a more complicated mechanical description of an individual system. AUDIENCE: It's not like in reality. It's hard to obtain situations where the external force is fixed. Really, if we're doing an experiment here, the pressure in the atmosphere is a fixed number. But maybe in other circumstances-- PROFESSOR: Yes, so if you are in another circumstance, like you have one gas that is expanding in a nonlinear medium, then what I would do is I would calculate the pressure of gas as a function of its volume, et cetera. I would calculate the response of that medium as a function of stress, or whatever else I'm exerting on that. And I say that I am constrained to move around the trajectory where this force equals this force. So now let's carry out this procedure for the ideal gas. And the microscopic description of the ideal gas was that I have to sum over-- its Hamiltonian is composed of the kinetic energies of the particles. So I have their momentum. Plus the potential that we said I'm going to assume describes box of volume mu. And it is ideal, because we don't have interactions among particles. We will put that shortly for the next version. Now, given this, I can go through the various prescriptions. I can go microcanonically and say that I know the energy volume and the number of particles. And in this prescription, we said that the probability is either 0 or 1 divided by omega. And this is if qi not in box and sum over i Pi squared over 2m not equal to E and one otherwise. AUDIENCE: Is there any reason why we don't use a direct delta there? PROFESSOR: I don't know how to use the direct delta for reading a box. For the energy, the reason I don't like to do that is because direct deltas have dimensions of 1 over whatever thing is inside. Yeah, you could do that. It's just I prefer not to do it. But you certainly could do that. It is certainly something that is 0 or 1 over omega on that surface. The S for this, what was it? It was obtained by integrating over the entirety of the 6N dimensional phase space. From the Q integrations, we got V to the N. From the momentum integrations, we got the surface area of this hypersphere, which was 2 pi to the 3N over 2, because it was 3N dimensional. And this was the solid angle. 
And then we got the radius, which is the square root of 2mE, raised to the power of the number of dimensions minus 1. Then we did something else. We said that the phase space is more appropriately divided by N factorial for identical particles. Here, I introduced some factor of h to dimensionalize the integrations over PQ space so that this quantity now did not carry any dimensions. And oops, this was not S. This was omega. And S/k, which was log of omega-- we took the log of that expression. We got N log of V. From the log of N factorial, Stirling's approximation gave us a factor of N over e. All of the other factors were proportional to 3N over 2. So the N is out front. I will have a factor of 3 over 2 here. I have 2 pi mE divided by 3N over 2. And then there's an e from Stirling's approximation. And that was the entropy of the ideal gas. Yes. AUDIENCE: Does that N factorial change when describing the system with distinct particles? PROFESSOR: Yes, that's right. AUDIENCE: So that definition changes? PROFESSOR: Yes, so this factor of-- if I have a mixture of gas, and one of them are one kind, and two of them are another kind, I would be dividing by N1 factorial, N2 factorial, and not by N1 plus N2 factorial. AUDIENCE: So that is the correct definition for all the cases? PROFESSOR: That's the definition for phase space of identical particles. So if all of them are identical particles, that is what I have written. If all of them are distinct, there's no such factor. If half of them are of one type and identical, and half of them are of another type and identical, then I will have N over 2 factorial, N over 2 factorial. AUDIENCE: Question. PROFESSOR: Yeah. AUDIENCE: So when you're writing the formula for number of microstates, just a question of dimensions. You write V to the N. It gives you something that has dimensions of coordinate to the power 3N, times something of dimensions of [INAUDIBLE] to the power 3N minus 1, divided by h to the 3N. So overall, this gives you 1 over momentum. PROFESSOR: OK, let's put a delta E over here-- actually, not a delta E, but a delta R. OK, so now it's fine. What I really want to make sure of is that the extensive part is correctly taken into account. I may be missing a factor of dimension that is of the order of 1 every now and then. And then you can put some [INAUDIBLE] to correct it. And it's, again, the same story of the orange. It's all in the skin, including-- so the volume is the same thing as the surface area, which is essentially what I'm-- OK? So once you have this, you can calculate various quantities. Again, dS is dE over T plus PdV over T minus mu dN over T. So you can immediately identify, for example, that 1 over T, the derivative with respect to energy, would give me a factor of 3N over 2E. If I take a derivative with respect to volume, P over T, the derivative of this object with respect to volume, will give me N over V. And mu over T is-- oops, actually, in all of these cases, I forgot the kB. So I have to restore it. And here I would get minus kB log of-- the N out front I can throw out when I take the derivative-- V over N, times 4 pi m E over 3N. And I forgot the h's. So the h's would appear as an h squared there, with the whole combination raised to the 3/2 power. We also said that I can look at the probability of a single particle having momentum P1. And we showed that essentially I have to integrate over everything else that I'm not interested in, such as the coordinate of particle number one. The normalization was omega E, V, N.
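These derivative identifications are easy to check numerically once the entropy is written out. A minimal Python sketch in arbitrary units, assuming the standard Sackur-Tetrode form that the expression above reduces to (the values of N, V, E are illustrative):

import numpy as np

kB = h = m = 1.0                       # arbitrary units; only the derivative structure is being checked

def S(E, V, N):
    # S = N kB [ log(V/N) + 3/2 log(4 pi m E / (3 N h^2)) + 5/2 ]
    return N * kB * (np.log(V / N) + 1.5 * np.log(4 * np.pi * m * E / (3 * N * h**2)) + 2.5)

N, V, E, d = 100.0, 50.0, 80.0, 1e-5
one_over_T = (S(E + d, V, N) - S(E - d, V, N)) / (2 * d)     # dS/dE = 1/T
P_over_T = (S(E, V + d, N) - S(E, V - d, N)) / (2 * d)       # dS/dV = P/T
print(one_over_T, 1.5 * N * kB / E)                          # so E = 3/2 N kB T
print(P_over_T, N * kB / V)                                  # so P V = N kB T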
And integrating over particles number two to N, where the energy that is left to them is E minus the kinetic energy of the one particle, gave me this expression. And essentially, we found that this was proportional to 1 minus P1 squared over 2mE, raised to a power that was very close to 3N over 2, plus or minus something, and that this was proportional, therefore, to the exponential of minus P1 squared over 2m times 3N over 2E. And 3N over 2E from here we see is the same thing as 1 over kT. So we can, with this prescription, calculate all of the properties of the ideal gas, including the probability to see one particle with some momentum. Now, if I do the same thing in the canonical form, in which the macrostate that I'm looking at is temperature, volume, number of particles, then the probability of a microstate, given that I've specified now the temperature, is proportional to the exponential of minus beta times the energy of that microstate, which is e to the minus beta sum over i Pi squared over 2m. Of course I would have to have all of the qi's in the box. Certainly they cannot go outside the box. And this has to be normalized. Now we can see that in this canonical ensemble, the result that we had to do a couple of lines of algebra to get, which is that the momentum of a particle is Gaussian distributed, is automatically satisfied. And in this ensemble, each one of the momenta you can see is independently distributed according to this probability distribution. So some things clearly emerge much more easily in this perspective. And if I were to look at the normalization Z, the normalization Z I obtained by integrating over the entirety of the phase space. So I have to do the integration over d cubed Pi d cubed qi. Since this is the phase space of identical particles, we said we have to normalize it by N factorial. And I had this factor of h to the 3N to make things dimensionless. I have to exponentiate this energy sum over i Pi squared over 2m and just ensure that the qi are inside the box. So if I integrate the qi, what do I get? I will get V per particle. So I have V to the N divided by N factorial. So that's the volume contribution. And then what are the P integrations? Each one of the P integrations is an independent Gaussian. In fact, each component is independent. So I have 3N Gaussian integrations. And I can do Gaussian integrations. I will get root 2 pi m times the inverse of beta, which is kT, per Gaussian integration. And there are 3N of them. And, oh, I had the factor of h to the 3N. I will put it here. So I chose these h's in order to make this phase space dimensionless. So the Z that I have now is dimensionless. So the dimensions of V must be made up by dimensions of all of these things that are left. So I'm going to make that explicitly clear by writing it as 1 over N factorial V over some characteristic volume raised to the N-th power. The characteristic volume comes entirely from these factors. So I have introduced lambda of T, which is h over root 2 pi m kT, which is the thermal de Broglie wavelength. At this stage, this h is just anything to make things dimensionally work. When we do quantum mechanics, we will see that this length scale has a very important physical meaning. As long as the separation of particles on average is larger than this, you can ignore quantum mechanics. When it becomes less than this, you have to include quantum factors. So then what do we have? We have that the free energy is minus kT log Z. So F is minus kT log of this partition function.
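For a sense of scale of this thermal wavelength, a quick Python estimate for a nitrogen-like molecule at room temperature and roughly atmospheric pressure (the molecular mass and the conditions are illustrative assumptions):

import numpy as np

h = 6.626e-34                        # Planck's constant, J s
kB = 1.381e-23                       # Boltzmann's constant, J/K
m = 28 * 1.66e-27                    # rough mass of an N2 molecule, kg
T = 300.0                            # room temperature, K

lam = h / np.sqrt(2 * np.pi * m * kB * T)      # thermal de Broglie wavelength, about 2e-11 m
n = 1.0e5 / (kB * T)                           # number density at about 1 atm, from P = n kB T
print(lam, n ** (-1.0 / 3.0), n * lam**3)      # spacing is a few nanometers, so n lambda^3 << 1

Since the wavelength is far smaller than the typical separation between molecules, quantum effects are negligible and the classical treatment is justified.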
Log of that quantity will give me N log V over lambda cubed. Stirling's formula, log of N factorial, will give me N log N over e. So that's the free energy. Once I have the free energy, I can calculate, let's say, the volume. Let's see, dF, which is d of E minus TS, is minus SdT minus PdV, because work of the gas we had identified as minus PdV. I have mu dN. What do we have, therefore? We have that, for example, P is minus dF by dV at constant T and N. So I have to go and look at this expression where the V up here, it just appears in log V. So the answer is going to be NkT over V. I can calculate the chemical potential. Mu will be dF by dN at constant T and V, so just take the derivative with respect to N. So what do I get? I will get minus kT log of V over N lambda cubed. And this E will disappear when I take the derivative with respected to that log inside. So that's the formula for my mu. I won't calculate entropy. But I will calculate the energy, noting that in this ensemble, a nice way of calculating energy is minus d log Z by d beta. So this is minus d log Z by d beta. And my Z, you can see, has a bunch of things-- V, N, et cetera. But if I focus on temperature, which is the inverse beta, you can see it appears with a factor of 3N over 2. So this is beta to the minus 3N over 2. So this is going to be 3N over 2 derivative of log beta with respect to beta, which is 1 over beta, which is the same thing as 3 over 2 NkT. So what if I wanted to maintain the system at some fixed temperature, but rather than telling you what the volume of the box is, I will tell you what its pressure is and how many particles I have? How can I ensure that I have a particular pressure? You can imagine that this is the box that contains my gas. And I put a weight on top of some kind of a piston that can move over the gas. So then you would say that the net energy that I have to look at is the kinetic energy of the gas particles here plus the potential energy of this weight that is going up and down. If you like, that potential energy is going to be mass times delta H. Delta H times area gives you volume. Mass times G divided by area will give you pressure. So the combination of those two is the same thing as pressure times volume. So this is going to be the same thing as minus sum over i Pi squared over 2m. For all of the gas particles for this additional weight that is going up and down, it will give me a contribution that is minus beta PV. And so this is the probability in this state. It is going to be the same as this provided that I divide by some Gibbs partition function. So this is the probability of the microstate. So what is the Gibbs partition function? Well, what is the normalization? I now have one additional variable, which is where this piston is located in order to ensure that it is at the right pressure. So I have to integrate also over the additional volume. This additional factor only depends on PV. And then I have to integrate given that I have some particular V that I then have to integrate over all of the microstates that are confined within this volume. Their momenta and their coordinates-- what is that? I just calculated that. That's the partition function as a function of T, V, and N. So for a fixed V, I already did the integration over all microscopic degrees of freedom. I have one more integration to do over the volume. And that's it. So if you like, this is like doing a Laplace transform. 
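Before moving on, the canonical ideal gas results just derived can be checked by differentiating log Z numerically. A minimal Python sketch in arbitrary units, with illustrative values of T, V, and N:

import numpy as np
from math import lgamma

kB = h = m = 1.0                      # arbitrary units

def log_Z(T, V, N):
    lam3 = (h / np.sqrt(2 * np.pi * m * kB * T))**3       # thermal wavelength cubed
    return N * np.log(V / lam3) - lgamma(N + 1)           # log of V^N / (N! lambda^(3N))

T, V, N, d = 2.0, 10.0, 50.0, 1e-5
beta = 1.0 / (kB * T)
P = kB * T * (log_Z(T, V + d, N) - log_Z(T, V - d, N)) / (2 * d)                  # P = kT d(log Z)/dV
E = -(log_Z(1.0 / (beta + d), V, N) - log_Z(1.0 / (beta - d), V, N)) / (2 * d)    # E = -d(log Z)/d(beta)
print(P, N * kB * T / V)              # ideal gas law
print(E, 1.5 * N * kB * T)            # 3/2 N kB T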
To go from Z of T, V, and N, to this Z tilde of T, P, and N is making some kind of a Laplace transformation from one variable to another variable. And now I know actually what my answer was for the partition function. It was 1 over N factorial V over lambda cubed raised to the power of N. So I have 1 over N factorial lambda to the power of 3N. And then I have to do one of these integrals that we have seen many times, the integral of V to the N against an exponential. That's something that actually we used in order to define N factorial, except that I have to dimensionalize this V. So I will get a factor of beta P to the power of N plus 1. The N factorials cancel. And the answer is beta P to the power of N plus 1 divided by lambda cubed to the power of N. So my Gibbs free energy, which is minus kT log of the Gibbs partition function, is going to be minus NkT log of this object. I have ignored the difference between N and N plus 1. And what I will get here is the combination beta P lambda cubed. Yes. AUDIENCE: Is your beta P to the N plus 1 in the numerator or the denominator? PROFESSOR: Thank you, it should be in the denominator, which means that-- OK, this one is correct. I guess this one I was going by dimensions. Because beta PV is dimensionless. All right, yes. AUDIENCE: One other thing, it seems like there's a dimensional mismatch in your expression for Z. Because you have an extra factor of beta P-- PROFESSOR: Exactly right, because this object is a probability density that involves a factor of volume. And as a probability density, the dimension of this will carry an extra factor of volume. So if I really wanted to make this quantity dimensionless also, I would need to divide by something that has some dimension of volume. But alternatively, I can recognize that indeed this is a probability density in volume. So it will have the dimensions of volume. And again, as I said, what I'm really always careful to make sure that is dimensionless is the thing that is proportional to N. If there's a log of a single dimension out here, typically we don't have to worry about it. But if you think about its origin, the origin is indeed that this quantity is a probability density. But it will, believe me, not change anything in your life to ignore that. All right, so once we have this G, then we recognize-- again, hopefully I didn't make any mistakes. G is E plus PV minus TS. So dG should be minus SdT plus VdP plus mu dN. So that, for example, in this ensemble I can ask-- well, I told you what the pressure is. What's the volume? Volume is going to be obtained as dG by dP at constant temperature and number. So these two are constant. Log P, its derivative is going to give me NkT over P. So again, I get another form of the ideal gas equation of state. I can ask, what's the chemical potential? It is going to be dG by dN at constant T and P. So I go and look at the N dependence. And I notice that there's just an N dependence out front. So what I will get is kT log of beta P lambda cubed. And if you like, you can check that, say, this expression for the chemical potential and-- did we derive it somewhere else? Yes, we derived it over here. This expression for the chemical potential are identical once you take advantage of the ideal gas equation of state to convert the V over N in that expression to beta P. And finally, we have the grand canonical. Let's do that also. So now we are going to look at an ensemble where I tell you what the temperature is and the chemical potential. 
But I have to tell you what the volume is. And then the statement is that the probability of a particular microstate that I will now indicate mu s force system-- not to be confused with the chemical potential-- is proportional to e to the beta mu N minus beta H of the microstate energy. And the normalization is this Q, which will be a grand partition function that is function of T, V, and mu. How is this probability normalized? Well, I'm spanning over a space of microstates that have indefinite number. Their number runs, presumably, all the way from 0 to infinity. And I have to multiply each particular segment that has N of these present with e to the beta mu N. Now, once I have that segment, what I need to do is to sum over all coordinates and momenta as appropriate to a system of N particles. And that, once more, is my partition function Z of T, V, and N. And so since I know what that expression is, I can substitute it in some e to the beta mu N. And Z is 1 over N factorial V over lambda cubed raised to the power of N. Now fortunately, that's a sum that I recognize. It is 1 over N factorial something raised to the N-th power summed over all N, which is the summation for the exponential. So this is the exponential of e to the beta mu V over lambda cubed. So once I have this, I can construct my G, which is minus kT log of Q, which is minus kT e to the beta mu V divided by lambda cubed. Now, note that in all of the other expressions that I had all of these logs of something, they were extensive. And the extensivity was ensured by having results that were ultimately proportional to N for these logs. Let's say I have here, I have an N here. For the s, I have an N there. Previously I had also an N here. Now, in this ensemble, I don't have N. Extensivity is insured by this thing being proportional to G. Now also remember that G was E minus TS minus mu N. But we had another result for extensivity, that for extensive systems, E was TS plus mu N minus PV. So this is in fact because of extensivity, we had expected it to be proportional to the volume. And so this combination should end up being in fact the pressure. I can see what the pressure is in different ways. I can, for example, look at what this dG is. It is minus SdT. It is minus PdV minus Nd mu. I could, for example, identify the pressure by taking a derivative of this with respect to volume. But it is proportional to volume. So I again get that this combination really should be pressure. You say, I don't recognize that as a pressure. You say, well, it's because the formula for pressure that we have been using is always in terms of N. So let's check that. So what is N? I can get N from minus dG by d mu. And what happens if I do that? When I take a derivative with respect to mu, I will bring down a factor of beta. Beta will kill the kT. I will get e to the beta mu V over lambda cubed, which by the way is also these expressions that we had previously for the relationship between mu and N over V lambda cubed if I just take the log of this expression. And then if I substitute e to the beta mu V over lambda cubed to BN, you can see that I have the thing that I was calling pressure is indeed NkT over V. So everything is consistent with the ideal gas law. The chemical potential comes out consistently-- extensivity, everything is correctly identified. Maybe one more thing that I note here is that, for this ideal gas and this particular form that I have for this object-- so let's maybe do something here. 
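The sum that was just recognized as an exponential can also be checked directly with numbers. A minimal sketch, with arbitrary made-up values of beta*mu and V/lambda^3:

```python
import math

# Check (made-up numbers) that the grand partition sum
#   Q = sum_N e^{beta*mu*N} (1/N!) (V/lambda^3)^N
# really sums to exp( e^{beta*mu} V/lambda^3 ), and that the mean particle
# number is <N> = e^{beta*mu} V/lambda^3.
beta_mu = -1.2                       # beta*mu, arbitrary
V_over_lam3 = 50.0                   # V/lambda^3, arbitrary

arg = math.exp(beta_mu) * V_over_lam3         # e^{beta*mu} V/lambda^3, about 15.06
Q, N_sum, term = 0.0, 0.0, 1.0                # term = arg^N / N!, starting at N = 0
for N in range(0, 121):
    Q += term
    N_sum += N * term
    term *= arg / (N + 1)

print(Q, math.exp(arg))                        # the series matches exp(arg)
print(N_sum / Q, arg)                          # <N> matches e^{beta*mu} V/lambda^3
```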
Note that the N appears in an exponential with e to the beta mu. So another way that I could have gotten my N would have been d log Q with respect to beta mu. And again, my log Q is simply V over lambda cubed e to the beta mu. And you can check that if I do that, I will get this formula that I had for N. Well, the thing is that I can get various cumulants of this object by continuing to take derivatives. So I take m derivatives of log of Q with respect to beta mu. So I have to keep taking derivatives of the exponential. And as long as I keep taking the derivative of the exponential, I will get the exponential back. So all of these things are really the same thing. So all cumulants of the number fluctuations of the gas are really the same thing as a number. Can you remember what that says? What's the distribution? AUDIENCE: [INAUDIBLE] PROFESSOR: Poisson, very good. The distribution where all of the cumulants were the same is Poisson distribution. Essentially it says that if I take a box, or if I just look at that imaginary volume in this room, and count the number of particles, as long as it is almost identical, the distribution of the number of particles within the volume is Poisson. You know all of the fluctuations, et cetera. Yes. AUDIENCE: So this expression, you have N equals e to the beta mu V divided by lambda to the third. Considering that expression, could you then say that the exponential quantity is proportional to the phase space density [INAUDIBLE]? PROFESSOR: Let's rearrange this. Beta mu is log of N over V lambda cubed. This is a single particle density. So beta mu, or the chemical potential up to a factor of kT, is the log of how many particles fit within one de Broglie volume. And this expression is in fact correct only in the limit where this is small. And as we shall see later on, there will be quantum mechanical corrections when this combination is large. So this combination is very important in identifying when things become quantum mechanic. All right, so let's just give a preamble of what we will be doing next time. I should have erased the [INAUDIBLE]. So we want to now do interacting systems. So this one example of the ideal gas I did for you to [INAUDIBLE] in all possible ensembles. And I could do that, because it was a collection of non-interacting degrees of freedom. As soon as I have interactions among my huge number of degrees of freedom, the story changes. So let's, for example, look at a generalization of what I had written before-- a one particle description which if I stop here gives me an ideal system, and potentially some complicated interaction among all of these coordinates. This could be a pairwise interaction. It could have three particles. It could potentially-- at this stage I want to write down the most general form. I want to see what I can learn about the properties of this modified version, or non-ideal gas, and the ensemble that I will choose initially. Microcanonical is typically difficult. I will go and do things canonically, which is somewhat easier. And later on, maybe even grand canonical is easier. So what do I have to do? I would say that the partition function is obtained by integrating over the entirety of this phase space-- product d cubed Pi d cubed qi. I will normalize things by N factorial, dimensionalize them by h to the 3N. And I have exponential of minus beta sum over i Pi squared over 2m. And then I have the exponential of minus U. Now, this time around, the P integrals are [INAUDIBLE]. I can do them immediately. 
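The statement that all cumulants of the particle number are equal, in other words that the grand canonical N of the ideal gas is Poisson distributed, can be illustrated numerically; the Poisson parameter e^(beta mu) V/lambda^3 below is just a made-up number.

```python
import math

# Sketch (made-up parameter) of the Poisson statement above: in the grand
# canonical ideal gas, P(N) = e^{-a} a^N / N!  with  a = e^{beta*mu} V/lambda^3,
# so the mean, the variance, and the third cumulant of N are all equal to a.
a = 15.06                                # e^{beta*mu} V/lambda^3, arbitrary

probs, term = [], math.exp(-a)           # P(0) = e^{-a}
for N in range(0, 121):
    probs.append(term)
    term *= a / (N + 1)

mean = sum(N * p for N, p in enumerate(probs))
var  = sum((N - mean) ** 2 * p for N, p in enumerate(probs))
c3   = sum((N - mean) ** 3 * p for N, p in enumerate(probs))
print(mean, var, c3)                     # all three are about 15.06
```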
Because typically the momenta don't interact with each other. And practicality, no matter how complicated a system of interactions is, you will be able to integrate over the momentum degrees of freedom. And what you get, you will get this factor of 1 over lambda to the power of 3N from the 3N momentum integrations. The thing that is hard-- there will be this factor of 1 over N factorial-- is the integration over all of the coordinates, d cubed qi of this factor of e to the minus beta U. I gave you the most general possible U. There's no way that I can do this integration. What I will do is I will divide each one of these integrations over coordinate of particle by its volume. I will therefore have V to the N here. And V to the N divided by lambda to the 3N N factorial is none other than the partition function that we had calculated before for the ideal gas. And I will call Z0 for the ideal gas. And I claim that this object I can interpret as a kind of average of a function e to the beta U defined over the phase space of all of these particles where the probability to find each particle is uniform in the space of the box, let's say. So what this says is for the 0-th order case, for the ideal case that we have discussed, once I have set the box, the particle can be anywhere in the box uniformly. For that uniform description probability, calculate what the average of this quantity is. So what we have is that Z is in fact Z0, that average that I can expand. And what we will be doing henceforth is a perturbation theory in powers of U. Because I know how to do things for the case of U equals 0. And then I hope to calculate things in various powers of U. So I will do that expansion. And then I say, no really, what I'm interested in is something like a free energy, which is log Z. And so for that, I will need log of Z0 plus log of that series. But the log of these kinds of series I know I can write as minus beta to the l over l factorial, replacing, when I go to the log, moments by corresponding cumulants. So this is called the cumulant expansion, which we will carry out next time around. Yes. AUDIENCE: In general, [INAUDIBLE]. PROFESSOR: For some cases, you can. For many cases, you find that that will give you wrong answers. Because the phase space around which you're expanding is so broad. It is not like a saddle point where you have one variable you are expanding in a huge number of variables. |
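As a toy illustration of the cumulant expansion just announced (this is not the interacting gas of the lecture): if the potential U is drawn from a Gaussian, only its first two cumulants are nonzero, so the log of the average of e^(-beta U) equals minus beta times the mean of U plus beta squared times the variance over 2 exactly, which a quick Monte Carlo average confirms.

```python
import numpy as np

# Toy check of  log < e^{-beta U} > = sum_l (-beta)^l / l! <U^l>_cumulant.
# With U Gaussian (an assumption made only for this illustration) the series
# truncates after the second cumulant.
rng = np.random.default_rng(0)
beta = 0.3
U = rng.normal(loc=2.0, scale=1.5, size=2_000_000)    # made-up "potential" samples

lhs = np.log(np.mean(np.exp(-beta * U)))               # direct average of e^{-beta U}
rhs = -beta * U.mean() + 0.5 * beta**2 * U.var()       # first two cumulants only
print(lhs, rhs)                                        # agree to roughly 1e-3
```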
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 20_Quantum_Statistical_Mechanics_Part_1.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Let's start. So last lecture, what we talked about was limitations of classical statistical mechanics, and what I will contrast with what I will talk about today, which is new version. The old version of quantum mechanics, which was based on the observation originally from Planck, and then expanded by Einstein, that for a harmonic oscillator, a frequency omega, the energies cannot take all values. But values that are multiples of the frequency of the oscillator and then some integer n. What we did with this observation to extract thermal properties was to simply say that we will construct a partition function for the harmonic oscillator by summing over all of the states of e to the minus beta e n according to the formula given above. Just thinking that these are allowed states of this oscillator-- and this you can very easily do. It starts with the first term and then it's a algebraic series, which will give you that formula. Now, if you are sitting at some temperature t, you say that the average energy that you have in your system, well, the formula that you have is minus the log z by the beta, which if I apply to the z that I have about, essentially weighs each one of these by these Boltzmann weights by the corresponding energy and sums them. What we do is we get, essentially, the contribution of the ground state. Actually, for all intents and purposes, we can ignore this, and, hence, this. But for completion, let's have them around. And then from what is in the denominator, if you take derivative with respect to beta, you will get a factor of h bar omega. And then this factor of 1 minus e to the minus beta h bar. Actually, we then additional e to the minus beta h bar omega in here. Now, the thing that we really compare to was what happens if we were to take one more derivative to see how much the heat capacity is that we have in the harmonic oscillator. So basically taking an average of the raw formula with respect to temperature, realizing that these betas are inverse temperatures. So derivatives with respect to t will be related to derivative with respect to beta, except that I've will get an additional factor of 1 over k bt squared. So the whole thing I could write that as kb and then I had the h bar omega over kt squared. And then from these factors, I had something like e to the minus e to the h bar omega over kt. E to the h bar omega over kt minus 1 squared. So if we were to plug this function, the heat capacity in its natural unit that are this kb, then as a function of temperature, we get behavior that we can actually express correctly in terms of the combination. You can see always we get temperature in units of kb over h bar omega. So I can really plug this in the form of, say, kt over h bar omega, which we call t to some characteristic temperature. And the behavior that we have is that close to 0 temperatures, you go to 0 exponentially, because of essentially the ratio of these exponentials. We leave one exponential in the denominator. 
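The oscillator heat capacity derived at the start of this lecture, C/k_B = x^2 e^x/(e^x - 1)^2 with x = h-bar omega/(k_B T), is easy to tabulate against the reduced temperature. A short sketch with my own variable names, writing theta for h-bar omega/k_B:

```python
import numpy as np

# Heat capacity of a single quantized oscillator, in units of k_B,
#   C/k_B = x^2 e^x / (e^x - 1)^2   with   x = theta / T,
# showing the exponential freeze-out below theta and saturation at 1 above it.
def c_over_kB(t_over_theta):
    x = 1.0 / t_over_theta
    return x**2 * np.exp(x) / np.expm1(x)**2

for t in [0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0]:
    print(f"T/theta = {t:4.2f}   C/k_B = {c_over_kB(t):.3e}")
# exponentially small for T well below theta, approaching 1 for T >> theta
```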
So the gaps that you have between n equals to 0 and n equals to 1 translates to behavior of that at low temperatures is exponentially decaying to leading order. Then, eventually, at high temperatures, you get the classical result where you saturate to 1. And so you will have a curve that has a shift from one behavior to another behavior. And the place where this transition occurs is when this combination is of the order of 1. I'm not saying it's precisely 1, but it's of the order of 1. So, basically, you have this kind of behavior. OK? So we use this curve to explain the heat capacity of diatomic gas, such as the gas in this room, and why at room temperature, we see a heat capacity in which it appears that the vibrational degrees of freedom are frozen, are not contributing anything. While at temperatures above the characteristic vibrational frequency, which for a gas is of the order of 10 to the 3 degrees k, you really get energy in the harmonic oscillator also in the vibrations. And the heat capacity jumps, because you have another way of storing energy. So the next thing that we asked was whether this describes also heat capacity of a solid. So basically, for the diatomic gas, you have two atoms that are bonded together into a molecule. And you consider the vibrations of that. You can regard the solid as a huge molecule with lots of atoms joined together. And they have vibrations. And if you think about all of those vibrations giving you something that is similar to this, you would conclude that the heat capacity of a solid should also have this kind of behavior. Whereas we noted that in actuality, the heat capacity of a solid vanishes much more slowly at low temperatures. And the dependence at low temperatures is proportional to t cubed. So at the end of last lecture, we gave an explanation for this, which I will repeat. Again, the picture is that the solid, like a huge molecule, has vibrational modes. But these vibrational modes cover a whole range of different frequencies. And so if you ask, what are the frequencies omega alpha of vibrations of a solid, the most natural way to characterize them is, in fact, in terms of a wave vector k that indicates a direction for the oscillatory wave that you set up in the material. And depending on k, you'll have different frequencies. And I said that, essentially, the longest wave length corresponding to k equals to 0 is taking the whole solid and translating it. Again, thinking back about the oxygen molecule, the oxygen molecule, you have two coordinates. It's the relative coordinate that has the vibration. And you have a center of mass coordinate that has no energy. If you make a molecule more and more complicated, you will have more modes, but you will always have the 0 mode that corresponds to the translation. And that carries over all the way to the solid. So there is a mode that corresponds to translations-- and, in fact, rotations-- that would carry no energy, and corresponds, therefore, to 0 frequency. And then if you start to make long wavelength oscillations, the frequency is going to be small. And, indeed, what we know is that we tap on the solid and you create sound waves, which means that the low-frequency long wavelength modes have a dispersion relation in which omega is proportional to k. We can write that as omega is v times k, where v is the velocity of the sound in the solid. Now, of course, the shortest wave length that you can have is related to the separation between the atoms in the solid. 
And so, basically, there's a limit to the range of k's that you can put in your system. And this linear behavior is going to get modified once you get towards the age of the solid. And the reason I have alpha here is because you can have different polarizations. There are three different possible polarizations. So in principle, you will have three of these curves in the system hard. And these curves could be very complicated when you get to the edge of [INAUDIBLE] zone and you have to solve a big dynamical matrix in order to extract what the frequencies are, if you want to have the complete spectrum. So the solid is a collection of these harmonic oscillators that are, in principle, very complicated. But we have the following. So I say, OK, I have all of these. And I want to calculate at a given temperature how much energy I have put in the solid. So this energy that I have put in the vibrations at some temperature t, assuming that these vibrations are really a collection of these oscillators. Well, what I have to do is to add up all of these terms. There's going to be adding up all of the h bar omega over 2s for all of these. OK? That will give me something that I will simply call e 0, because it doesn't depend on temperature. Presumably will exist at 0 temperature. And I can even fold into that whatever the value of the potential energy of the interactions between the particles is at 0 temperature. What I'm interested in really is the temperature dependence. So I basically take the formula that I have over there, and sum over all of these oscillators. These oscillators are characterized by polarization and by the wave vector k. And then I have, essentially, h bar omega alpha of k divided by e to the beta h bar omega alpha of k minus 1. So I have to apply that formula to this potentially very complicated set of frequencies. The thing is, that according to the picture that I have over here, to a 0 order approximation, you would say that the heat capacity is 1 if you are on this side, 0 if you're on that side. What distinguishes those two sides is whether the frequency in combination with temperature is less than or larger than 1. Basically, low frequencies would end up being here. High frequencies would end up being here. And would not contribute. So for a given temperature, there is some borderline. That borderline would correspond to kt over h bar. So let me draw where that borderline is. Kt over h bar. For a particular temperature, all of these modes are not really contributing. All of these modes are contributing. If my temperature is high enough, everything is contributing. And the total number of oscillators is 3 n. It's the number of atoms. So essentially, I will get 3 n times whatever formula I have over there. As a come further and further down, there's some kind of complicated behavior as I go through this spaghetti of modes. But when I get to low enough structures, then, again, things become simple, because I will only be sensitive to the modes that are described by this a omega being vk. OK? So if I'm interested in t going to 0, means less than some characteristic temperature that we have to define shortly. So let's say, replace this with t less than some theta d that I have to get for you shortly, then I will replace this with e 0 plus sum over alpha and k of h bar v alpha k e to the beta h bar e alpha k minus 1. OK? Now, for simplicity, essentially I have to do three different sums. All of them are the same up to having to use different values of v. 
Let's just for simplicity assume that all of the v alphas are the same v, so that I really have only one velocity. There's really no difficulty in generalizing this. So let's do this for simplicity of algebra. So if I do that, then the sum over alpha will simply give me a factor of 3. There are three possible polarizations, so I put a 3 there. And then I have to do the summation over k. Well, what does the summation over k mean? When I have a small molecule for the, let's say, three or four atoms, then I can enumerate what the different vibrational states are. As I go to a large solid, I essentially have modes that are at each value of k, but, in reality, they are discrete. They are very, very, very, very finely separated by a separation that is of the order of 2 pi over the size of the system. So to ensure that eventually when you count all of the modes that you have here, you, again, end up to have of the order of n states. So if that's the case, this sum, really I can replace with an integral, because going from one point to the next point does not make much difference. So I will have an integral over k. But I have to know how densely these things are. And in one direction it is 2 pi over l. So the density would be l over 2 pi. If I look at all three directions, I have to multiply all of them. So I will get v divided by 2 pi cubed. So this is the usual density of states. And you go to description in terms of wave numbers, or, later on, in terms of momentums. And what we have here is this integral h bar v k e to the beta h bar v k minus 1. OK? So let's simplify this a little bit further. I have e 0. I have 3v. The integrand only depends on the magnitude of k, so I can take advantage of that spherical symmetry and write this as 4 pi k squared v k divided by this 8 pi cubed. What I can do is I can also introduce a factor of beta here, multiplied by k t. Beta k t is 1. And if I call this combination to be x, then what I have is k b t x e to the x minus 1. Of course, k is simply related to x by k being x kt over h bar v. And so at the next level of approximation, this k squared v k I will write in terms of x squared v x. And so what do I have? I have e 0. I have 3v divided by 2 pi squared. Because of this factor of kt that I will take outside I have a kt. I have a k squared vk that I want to replace with x squared v x. And that will give me an additional factor of kv over h bar v cubed. And then I have an integral 0 to e 0 v x x cubed e to the x minus 1. Now, in principle, when I start with this integration, I have a finite range for k, which presumably would translate into a finite range for x. But in reality none of these modes is contributing, so I could extend the range of integration all the way to infinity, and make very small error at low temperatures. And the advantage of that is that then this becomes a definite integral. Something that you can look up in tables. And its value is in fact pi to the fourth over 15. So substituting that over there, what do we have? We have that the energy is e 0 plus 3 divided by 15, will give me 5, which turns the 2 into a 10. I have pi to the fourth divided by pi squared, so there's a pi squared that will survive out here. I have a kt. I have kt over h bar v cubed. And then I have a factor of volume. But volume is proportional to the number of particles that I have in the system times the size of my unit cell. Let's call that a cubed. So this I can write this as l a cubed. 
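The definite integral quoted here, the integral of x^3/(e^x - 1) from 0 to infinity being pi^4/15, about 6.494, is simple to confirm numerically; once it is in hand, E - E_0 proportional to T^4, and hence C proportional to T^3, follow just by differentiation.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the definite integral used in the lecture:
#   integral_0^infinity dx x^3 / (e^x - 1) = pi^4 / 15
# (the upper limit is cut off at 50; the neglected tail is of order e^{-50})
val, err = quad(lambda x: x**3 / np.expm1(x), 0, 50)
print(val, np.pi**4 / 15)      # both are about 6.49394
```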
Why do I do that is because when I then take the derivative, I'd like to write the heat capacity per particle. So, indeed, if I now take the derivative, which is de by dt, the answer will be proportional to n and kv. The number of particles and this k v, which is the function, the unit of heat capacities. The overall dependence is t to the fourth. So when I take derivatives, I will get 4t cubed. That 4 will change the 1 over 10 to 2 pi squared over 5. And then I have the combination kvt h bar v, and that factor of a raised to the third power. So the whole thing is proportional to t cubed. And the coefficient I will call theta d for [INAUDIBLE]. And theta d I have calculated to be h bar v over a h bar v a over k t. No, h bar v over a k t. So the heat capacity of the solid is going to be proportional, of course, to n k b. But most importantly, is proportional to t cubed. And t cubed just came from this argument that I need low omegas. And how many things I have at the omega. How many frequencies do I have that are vibrating? The number of those frequencies is essentially the size of a cube in k space. So it goes like this-- maximum k cubed in three dimensions. In two dimensions, it will be squared and all of that. So it's very easy to figure out from this dispersion relation what the low temperature behavior of the heat capacity has to be. And you will see that this is, in fact, predictive, in that later on in the course, we will come an example of where the heat capacity of a liquid, which was helium, was observed to have this t cubed behavior based on that Landau immediately postulated that there should be a phonon-like dispersion inside that superfluid. OK. So that's the story of the heat capacity of the solid. So we started with a molecule. We went from a molecule into an entire solid. The next step that what I'm going to do is I'm going to remove the solid and just keep the box. So essentially, they calculation that I did, if you think about it, corresponded to having some kind of a box, and having vibrational modes inside the box. But let's imagine that it is an empty box. But we know that even in empty space we have light. So within an empty box, we can still have modes of the electromagnetic field. Modes of electromagnetic field, just like the modes of the solid, we can characterize by the direction along which oscillations travel. And whereas for the atoms in the solid, they have displacement and the corresponding momentum for the electromagnetic field, you have the electric field. And its conjugate is the magnetic field. And these things will be oscillating to create for you a wave. Except that, whereas for the solid, for each atom we had three possible directions, and therefore we had three branches, for this, since e and b have to be orthogonal to k, you really have only two polarizations. But apart from that, the frequency spectrum is exactly the same as we would have for the solids at low temperature replacing to v that we have with the speed of light. And so you would say, OK. If I were to calculate the energy content that is inside the box, what I have to do is to sum over all of the modes and polarizations. Regarding each one of these as a harmonic oscillator, going through the system of quantizing according to this old quantum mechanics, the harmonic oscillators, I have to add up the energy content of each oscillator. And so what I have is this h bar omega of k. And then I have 1/2 plus 1 over e to the beta h bar omega of k minus 1. 
And then I can do exactly the kinds of things that I had before, replacing the sum over k with a v times an integral. So the whole thing would be, first of all, proportional to v, going from the sum over k to the integration over k. I would have to add all of these h bar omega over 2s. Has no temperature dependence, so let me just, again, call it some e 0. Actually, let's call it epsilon 0, because it's more like an energy density. And then I have the sum over all of the other modes. There's two polarisations. So as opposed to the three that I had before, I have two. I have, again, the integral over k of 4 pi k squared v k divided by 8 pi cubed, which is part of this density of state calculation. I have, again, a factor of h bar omega. Now, I realize that my omega is ck. So I simply write it as h bar ck. And then I have e to the beta h bar ck minus 1. So we will again allow this to go from 0 to infinity. And what do we get? We will get v epsilon 0 plus, well, the 8 and 8 cancel. I have pi over pi squared. So pi over pi cubed. So it will give me 1 over pi squared. I have one factor of kt. Again, when I introduce here a beta and then multiply by kt, so that this dimension, this combination appears. Then I have, if I were to change variable and call this the new variable, I have factor of k squared dk, which gives me, just as before over there, a factor of kt over h bar c cubed. And then I have this integral left, which is the 0 to infinity v x x cubed e to the x minus 1, which we stated is pi to the fourth over 15. So the part that is dependent on temperature, the energy content, just as in this case, scales as t to the fourth. There is one part that we have over here from all of the 0s, which is, in fact, an infinity. And maybe there is some degree of worry about that. We didn't have to worry about that infinity in this case, because the number of modes that we had was, in reality, finite. So once we were to add up properly all of these 0 point energies for this, we would have gotten a finite number. It would have been large, but it would have been finite. Whereas here, the difference is that there is really no upper cut-off. So this k here, for a solid, you have a minimum wavelength. You can't do things shorter than the separation of particles. But for light, you can have arbitrarily short wavelength, and that gives you this infinity over here. So typically, we ignore that. Maybe it is related to the cosmological constant, et cetera. But for our purposes, we are not going to focus on that at all. And the interesting part is this part, that proportional to t to the fourth. There are two SOP calculations to this that I will just give part of the answer, because another part of the answer is something that you do in problem sets. One of them is that what we have here is an energy density. It's proportional to volume. And we have seen that energy densities are related to pressures. So indeed, there is a corresponding pressure. That is, if you're at the temperature t, this collection of vibrating electromagnetic fields exerts a pressure on the walls of the container. This pressure is related to energy density. The factor of 1/3 comes because of the dispersion relation. And you can show that in one of the problem sets. You know that already. So that would say that you would have, essentially, something like some kind of p 0. And then something that is proportional to t to the fourth. So I guess the correspondent coefficient here would be p squared divided by 45 kt kt over h bar c cubed. 
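Putting numbers into the result just obtained, with the divergent zero-point piece dropped: the energy density is u = (pi^2/15)(kT)^4/(h-bar c)^3 and the radiation pressure is u/3. The two temperatures below are only illustrative choices.

```python
from scipy.constants import k, hbar, c, pi

# Photon-gas energy density and radiation pressure from the formulas above.
def energy_density(T):
    return pi**2 / 15 * (k * T)**4 / (hbar * c)**3      # J / m^3

for T in (300.0, 5800.0):
    u = energy_density(T)
    print(f"T = {T:6.0f} K   u = {u:.3e} J/m^3   P = u/3 = {u / 3:.3e} Pa")
# u / T^4 is about 7.57e-16 J m^-3 K^-4, the familiar radiation constant
```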
So there is radiation pressure that is proportional to temperature. The hotter you make this box, the more pressure it will get exerted from it. There is, of course, again this infinity that you may worry about. But here the problem is less serious, because you would say that in reality, if I have the wall of the box, it is going to get pressure from both sides. And if there's an infinite pressure from both sides, they will cancel. So you don't have to worry about that. But it turns out that, actually, you can measure the consequences of this pressure. And that occurs when rather than having one plate, you have two plates that there are some small separation apart. Then the modes of radiation that you can fit in here because of the quantizations that you have, are different from the modes that you can have out here. So that difference, even from the 0 point fluctuations-- the h bar omega over 2s-- will give you a pressure that pushes these plates together. That's called a Casimir force, or Casimir pressure. And that was predicted by Casimir in 1950s, and was measured experimentally roughly 10 years ago to high precision, matching the formula that we had. So sometimes, these infinities have consequences that you have to worry about. But that's also to indicate that there's kind of modern physics to this. But really it was the origin of quantum mechanics, because of the other aspect of the physics, which is imagine that again you have this box. I draw it now as an irregular box. And I open a hole of size a inside the box. And then the radiation that was inside at temperatures t will start to go out. So you have a hot box. You open a hole in it. And then the radiation starts to come out. And so what you will have is a flux of radiation. Flux means that this it energy that is escaping per unit area and per unit time. So there's a flux, which is per area per time. It turns out that that flux-- and this is another factor, this factor of 1/3 that I mentioned-- is related the energy density with a factor of 1 c over 4. Essentially, clearly the velocity with which energy escaping is proportional to c. So you will get more radiation flux the larger c. The answer has to be proportional to c. And it is what is inside that is escaping, so it has to be proportional to the energy density that you have inside, some kind of energy per unit volume. And the factor of 1/4 is one of these geometric factors. Essentially, there's two factors of cosine of theta. And you have to do an average of cosine squared theta. And that will give you the additional 1/4. OK? But rather than looking-- so this would tell you that there is an energy that is streaming out. That is, the net value is proportional to t to the fourth. But more interestingly, we can ask what is the flux per wavelength? And so for that, I can just go back to the formula before I integrated over k, and ask what is the energy density in each interval of k? And so what I have to do is to just go and look at the formula that I have prior to doing the integration over k. Multiply it by c over 4. What do I have? I have 8 pi divided by 8 pi cubed. I have a factor of k squared from the density of states. I have this factor of h bar c k divided by e to the beta h bar c k minus 1. So there's no analogue of this, because I am not doing the integration over k. So we can simplify some of these factors up front. But really, the story is how does this quantity look as the function of wave number, which is the inverse of wave length, if you like. 
And what we see is that when k goes to 0, essentially, this factor into the beta h bar ck I have to expand to lowest order. I will get beta h bar c k, because the 1 disappears. H bar ck is cancelled, so the answer is going to be proportional to inverse beta. It's going to be proportional to kt and k squared. So, essentially, the low k behavior part of this is proportional to k squared c, of course, and kt. However, when I go to the high k numbers, the exponential will kill things off. So the large k part of this is going to be exponentially small. And, actually, the curve will look something like this, therefore. It will have a maximum around the k, which presumably is of the order of kt over h bar c. So basically, the hotter you have, this will move to the right. The wavelengths will become shorter. And, essentially, that's the origin of the fact that can you heat some kind of material, it will start to emit radiation. And the radiation will be peaked at some frequency that is related to its temperature. Now, if we didn't have this quantization effect, if h bar went to 0, then what would happen is that this k squared kt would continue forever. OK? Essentially, you would have in each one of these modes of radiation, classically, you would put a kt of energy. And since you could have arbitrarily short wavelengths, you would have infinite energy at shorter and shorter wavelengths. And you would have this ultraviolet catastrophe. Of course, the shape of this curve was experimentally known towards the end of the 19th century. And so that was the basis of thinking about it, and fitting an exponential to the end, and eventually deducing that this quantization of the oscillators would potentially give you the reason for this to happen. Now, the way that I have described it, I focused on having a cavity and opening the cavity, and having the energy go out. Of course, the experiments for black body are not done on cavities. They're done on some piece of metal or some other thing that you heat up. And then you can look at the spectrum of the radiation. And so, again, there is some universality in this, that it is not so sensitive to the properties of the material, although there are some emissivity and other factors that multiply the final result. So the final result, in fact, would say that if I were to integrate over frequencies, the total radiation flux, which would be c over 4 times the energy density total, is going to be proportional to temperature to the fourth power. And this constant in front is the Stefan-Boltzmann, which has some particular value that you can look up, units of watts per area per degrees Kelvin. So this perspective is rather macroscopic. The radiated energy is proportional to the surface area. If you make things that are small, and the wavelengths that you're looking at over here become compatible to the size of the object, these formulas break down. And again, go forward about 150 years or so, there is ongoing research-- I guess more 200 years-- ongoing research on-- no, 100 and something-- ongoing research on how these classical laws of radiation are modified when you're dealing with objects that are small compared to the wavelengths that are emitted, etc. Any questions? So the next part of the story is why did you do all of this? It works, but what is the justification? In that I said there was the old quantum mechanics. But really, we want to have statements about quantum systems that are not harmonic oscillators. 
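The location of the spectral peak described above can be pinned down in dimensionless form. Writing x = h-bar c k/(k_B T), the energy density per wavenumber is proportional to x^3/(e^x - 1) times factors that do not depend on k, so the maximum sits at a fixed x* and the peak wavenumber grows linearly with temperature, which is Wien's displacement law. A minimal root-finding sketch:

```python
import numpy as np
from scipy.optimize import brentq

# The per-wavenumber spectrum is proportional to f(x) = x^3/(e^x - 1) with
# x = hbar*c*k/(k_B T); its maximum is at a fixed x*, independent of T.
def dlogf(x):
    return 3.0 / x - 1.0 / (1.0 - np.exp(-x))    # derivative of log f(x)

x_star = brentq(dlogf, 1.0, 5.0)
print(x_star)        # about 2.821, so k_peak = x* k_B T / (hbar c)
```

The low-k side of the curve grows like k squared times kT, the classical piece, while the exponential cutoff on the high-k side is exactly what removes the ultraviolet catastrophe discussed above.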
And we want to be able to understand actually what the underlying basis is in the same way that they understand how we were doing things for classical statistical mechanics. And so really, we want to look at how to make the transition from classical to quantum statistical mechanics. So for that, let's go and remind us. Actually, so basically the question is something like this-- what does this partition function mean? I'm calculating things as if I have these states that are the energy levels. And the probabilities are e to the minus beta epsilon n. What does that mean? Classically, we knew the Boltzmann rates had something to do with the probability of finding a particle with a particular position and momentum. So what is the analogous thing here? And you know that in new quantum mechanics, the interpretation of many things is probabilistic. And in statistical mechanics, even classically we had a probabilistic interpretation. So presumably, we want to build a probabilistic theory on top of another probabilistic theory. So how do we go about understanding precisely what is happening over here? So let's kind of remind ourselves of what we were doing in the original classical statistical mechanics, and try to see how we can make the corresponding calculations when things are quantum mechanical. So, essentially, the probabilistic sense that we had in classical statistical mechanics was to assign probabilities for micro states, given that we had some knowledge of the macro state. So the classical microstate mu was a point which was a collection of p's and q's in phase space. So what is a quantum microstate? OK. So here, I'm just going to jump several decades ahead, and just write the answer. And I'm going to do it in somewhat of a more axiomatic way, because it's not up to me to introduce quantum mechanics. I assume that you know it already. Just a perspective that I'm going to take. So the quantum microstate is a complex unit vector in Hilbert space. OK? So for any vector space, we can choose a set of unit vectors that form an orthonormal basis. And I'm going to use this bra-ket notation. And so our psi, which is a vector in this space, can be written in terms of its components by pointing to the different directions in this space, and components that I will indicate by this psi n. And these are complex. And I will use the notation that psi n is the complex conjugate of n psi. And the norm of this I'm going to indicate by psi psi, which is obtained by summing over all n psi psi n star, which is essentially the magnitude of n psi squared. And these are unit vectors. So all of these states are normalized such that psi psi is equal to 1. Yes. AUDIENCE: You're not allowing particle numbers to vary, are you? PROFESSOR: At this stage, no. Later on, when we do the grand canonical, we will change our Hilbert space. OK? So that's one concept. The other concept, classically, we measure things. So we have classical observable. And these are functions all of which depend on this p and q in phase space. So basically, there's the phase space. We can have some particular function, such as the kinetic energy-- sum over i pi squared over 2 n-- that's an example of an observable. Kinetic energy, potential energy, anything that we like, you can classically write a sum function that you want to evaluate in phase space, given that you are at some particular point in phase space, the state of your system, you can evaluate what that is. 
Now in quantum mechanics, observables are operators, or matrices, if you like, in this vector space. OK? So among the various observables, certainly, are things like the position and the momentum of the particle. So there are presumably matrices that correspond to position and momentum. And for that, we look at some other properties that this classical systems have. We had defined classically a Poisson bracket, which was a sum over all alphas d a by d q alpha d b by d p alpha minus the a by the p alpha b d by the q f. OK? And this is an operation that you would like to, and happens to, carry over in some sense into quantum mechanics. But one of the consequence of this is you can check if I pick a particular momentum, key, and a particular coordinate, q, and put it over here, most of the time I will get 0, unless the alphas match exactly the p q's that I have up there. And if you go through this whole thing, I will get something like that is like a delta i j. OK. So this structure somehow continues in quantum mechanics, in this sense that the matrices that correspond to p and q satisfy the condition that p i and q j, thinking of two matrices, and this is the commutator, so this is p i q j minus q j p i is h bar over i delta h. So once you have the matrices that correspond to p and q, you can take any function of p and q that you had over here, and then replace the p's and q's that appear in, let's say, a series expansion, or an expansion of this o in powers of p and q, with corresponding matrices p hat and q hat. And that way, you will construct a corresponding operator. There is one subtlety that you've probably encountered, in that there is some symmetrization that you have to do before you can make this replacement. OK. So what does it mean? In classical theory, if something is observable the answer that you get is a number. Right? You can calculate what the kinetic energy is. In quantum mechanics, what does it mean that observable is a matrix? The statement is that observables don't have definite values, but the expectation value of a particular observable o in some state pi is given by psi o psi. Essentially, you take the vector that correspond to the state, multiply the matrix on it, and then sandwich it with the conjugate of the vector, and that will give you your state. So in terms of elements of some particular basis, you would write this as a sum over n and m. Psi n n o m m psi. And in that particular basis, your operator would have these matrix elements. Now again, another property, if you're measuring something that is observable, is presumably you will get a number that is real. That is, you expect this to be the same thing as its complex conjugate. And if you follow this condition, you will see that that reality implies that n o m should m o n complex conjugate, which is typically written as the matrix being its Hermitian conjugate, or being Hermitian. So all observables in quantum mechanics would correspond to Hermitian operators or matrices. OK. There's one other piece, and then we can forget about axioms. We have a classical time evolution. We know that the particular point in the classical phase space changes as a function of time, such that q i dot is plus d h by d p i. P i dot is minus d h by d q i. By the way, both of these can be written as q i and h Poisson bracket and p i and h Poisson bracket. But there is a particular function observable, h, the Hamiltonian, that derives the classical evolution. 
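A finite-dimensional sketch of these statements, built from a truncated harmonic-oscillator basis that is entirely my own construction (units with h-bar = m = omega = 1): observables come out Hermitian, their expectation values in any unit vector come out real, and the canonical commutator [x, p] = i h-bar holds everywhere except in the bottom corner entry that the truncation necessarily spoils.

```python
import numpy as np

n = 20
a = np.diag(np.sqrt(np.arange(1, n)), k=1)         # annihilation operator
x = (a + a.conj().T) / np.sqrt(2)                   # position operator
p = 1j * (a.conj().T - a) / np.sqrt(2)              # momentum operator
H = p @ p / 2 + x @ x / 2                           # Hamiltonian p^2/2 + x^2/2

print(np.allclose(H, H.conj().T))                   # observable is Hermitian: True

rng = np.random.default_rng(1)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)                          # a unit vector in Hilbert space
print(np.vdot(psi, H @ psi))                        # expectation value, real up to roundoff

comm = x @ p - p @ x
print(np.allclose(comm[:-1, :-1], 1j * np.eye(n - 1)))   # [x, p] = i away from the corner
```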
And when we go to quantum evolution, this vector that we have in Hilbert space evolves according to i h bar d by dt of the vector psi is the matrix that we have acting on psi. OK. Fine. So these are the basics that we need. Now we can go and do statistical descriptions. So the main element that we had in constructing statistical descriptions was deal with a macrostate. We said that if I'm interested in thinking about the properties of one cubic meter of gas at standard temperature and pressure, I'm not thinking about a particular point in phase space, because different gases that have exactly the same macroscopic properties would correspond to many, many different possible points in this phase space that are changing as a function of time. So rather than thinking about a single microstate, we talked about an ensemble. And this ensemble had a whole bunch of possible microstates. In the simplest prescription, maybe we said they were all equally likely. But, actually, we could even assign some kind of probability to them. And we want to know what to do with this, because then what happened was from this description, we then constructed a density which was, again, some kind of a probability in phase space. And we looked at its time evolution. We looked at the averages and all kinds of things in terms of this density. So the question is, what happens to all of this when we go to quantum descriptions? OK. You can follow a lot of that. We can, again, take the example of the one cubic meter of gas at standard temperature and pressure. But the rather than describing the state of the system classically, I can try to describe it quantum mechanically. Presumably the quantum mechanical description at some limit becomes equivalent to the classical description. So I will have an ensemble of states. I don't know which one of them I am in. I have lots of boxes. They would correspond to different microstates, presumably. And this has, actually, a word that is used more in the quantum context. I guess one could use it in the classical context. It's called a mixed state. A pure state is one you know exactly. Mixed state is, well, like the gas I tell you. I tell you only the macroscopic information, you don't know much about microscopically what it is. If these are possibilities, and not knowing those possibilities, you can say that it's a mixture of all these states. OK. Now, what would I use, classically, a density for? What I could do is I could calculate the average of some observable, classically, in this ensemble. And what I would do is I would integrate over the entirety of the six n-dimensional phase space the o at some particular point in phase space and the density at that point in phase space. And this average I will indicate by a bar. So my bars stand for ensemble average, to make them distinct from these quantum averages that I will indicate with the Bra-Kets. OK? So let's try to do the analogue of that in quantum states. I would say that, OK, for a particular one of the members of this ensemble, I can calculate what the expectation value is. This is the expectation value that corresponds to this observable o, if I was in a pure state psi alpha. But I don't know that I am there. I have a probability, so I do a summation over all of these states. And I will call that expectation value ensemble average. So that's how things are defined. Let's look at this in some particular basis. I would write this as a sum over alpha m and n p alpha psi alpha m m o n n psi alpha. 
So, essentially, writing all of these psis in terms of their components, just as I had done above. OK. Now what I want to do is to reorder this. Do the summation over n and m first, the summation over alpha last. So what do I have? I have m o n. And then I have a sum over alpha of p alpha n psi alpha psi alpha n. So what I will do, this quantity that alpha is summed over-- so it depends on the two indices n and m. I can give it a name. I can call it n rho m. If I do that, then this o bar average becomes simply-- let's see. This summation over n gives me the matrix product o rho. And then summation over m gives me the trace of the product. So this is the trace of rho o. OK? So I constructed something that is kind of analogous to the classical use of the density in phase space. So you would multiply the density and the thing that you wanted to calculate the observable. And the ensemble average is obtained as a kind of summing over all possible what values of that product in phase space. So here, I'm doing something similar. I'm multiplying this o by some matrix row. So, again, this I can think of as having introduced a new matrix for an operator. And this is the density matrix. And if I basically ignore, or write it in basis-independent form, it is obtained by summing over all alphas the alphas, and, essentially, cutting off the n and m. I have the matrix that I would form out of state alpha by, essentially, taking the vector and its conjugate and multiplying rows and columns together to make a matrix. And then multiplying or averaging that matrix over all possible values of the ensemble-- elements of the ensemble-- would give me this density. So in the same way that any observable in classical mechanics goes over to an operator in quantum mechanics, we find that we have another function in phase space-- this density. This density goes over to the matrix or an operator that is given by this formula here. It is useful to enumerate some properties of the density matrix. First of all, the density matrix is positive definite. What does that mean? It means that if you take the density matrix, multiply it by any state on the right and the left to construct a number, this number will be positive, because if I apply it to the formula that I have over there, this is simply sum over alpha p alpha. Then I have phi psi psi alpha, and them psi alpha psi, which is its complex conjugate. So I get the norm of that product, which is positive. All of the p alphas are positive probabilities. So this is certainly something that is positive. We said that anything that makes sense in quantum mechanics should be Hermitian. And it is easy to check. That if I take this operator rho and do the complex conjugate, essentially what happens is that I have to take sum over alpha. Complex conjugate of p alpha is p alpha itself. Probabilities are real numbers. If I take psi alpha psi alpha and conjugate it, essentially I take this and put it here. And I take that and put it over there. And I get the same thing. So it's the same thing as rho. And, finally, there's a normalization. If, for my o over here in the last formula, I choose 1, then I get the expectation value of 1 has to be the trace of rho. And we can check that the trace of rho, essentially, is obtained by summing over all alpha p alpha, and the dot product of the two psi alphas. Since any state in quantum mechanics corresponds to a unit vector, this is 1. So I get a sum over alpha of p alphas. And these are probabilities assigned to the members of the ensemble. 
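A quick numerical sketch of the object just constructed, using a small made-up ensemble of random states and probabilities: build rho as the sum over alpha of p_alpha times the outer product of psi_alpha with itself, check that the trace of rho times O reproduces the ensemble average, and verify the properties listed, positivity, Hermiticity, and the trace condition.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_states = 6, 4
p = rng.random(n_states); p /= p.sum()             # ensemble probabilities, sum to 1

states = []
for _ in range(n_states):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    states.append(v / np.linalg.norm(v))            # normalized member states

rho = sum(pa * np.outer(v, v.conj()) for pa, v in zip(p, states))

O = rng.normal(size=(dim, dim)); O = O + O.T        # some Hermitian observable
ensemble_avg = sum(pa * np.vdot(v, O @ v).real for pa, v in zip(p, states))
print(np.trace(rho @ O).real, ensemble_avg)         # Tr(rho O) equals the ensemble average

print(np.allclose(rho, rho.conj().T))               # Hermitian: True
print(np.trace(rho).real)                           # 1, up to roundoff
print(np.linalg.eigvalsh(rho).min() >= -1e-12)      # positive semi-definite: True
```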
They have to add up to 1. And so this is like this. So the quantity that we were looking at, and built, essentially, all of our later classical statistical mechanics, on is this density. Density was a probability in phase space. Now, when you go to quantum mechanics, we don't have phase space. We have Hilbert space. We already have a probabilistic theory. Turns out that this function, which was the probability in phase space classically, gets promoted to this matrix, the density matrix, that has, once you take traces and do all kinds of things, the kinds of properties that you would expect the probability to have classically. But it's not really probability in the usual sense. It's a matrix. OK. There is one other thing element of this to go through, which is that classically, we said that, OK, I pick a set of states. They correspond to some density. But the microstates are changing as a function of time. So the density was changing as a function of time. And we had Liouville's theorem, which stated that d rho by d t was the Poisson bracket of the Hamiltonian with rho. So we can quantum mechanically ask, what happens to our density matrix? So we have a matrix rho. I can ask, what is the time derivative of that matrix? And, actually, I will insert the i h bar here, because I anticipate that, essentially, rho having that form, what I will have is sum over alpha. And then I have i h bar d by dt acting on these p alpha psi alpha psi f. So there rho is sum over alpha p alpha psi alpha psi alpha. Sum over alpha p alpha I can take outside. I h bar d by dt acts on these two psis that are appearing a complex conjugates. So it can either, d by dt, act on one or the other. So I can write this as sum over alpha p alpha i h bar d by dt acting on psi alpha psi alpha, or i, or psi alpha, and then i h bar d by dt acting on this psi alpha. Now, i h bar d by dt psi alpha, we said that, essentially, the quantum rule for time evolution is i h bar d by dt of the state will give you is governed by h times acting on psi alpha. If I were to take the complex conjugate of this expression, what I would get is minus i h bar d by dt acting on psi that is pointing the other way of our complex conjugation is h acting on the psi in the opposite way. So this thing is minus psi alpha with h f acting on it. OK. So then I can write the whole thing as h-- for the first term take the h out front. I have a sum over alpha p alpha psi alpha psi alpha, minus, from this complex conjugation-- here, h is completely to the right-- I have a sum over alpha p alpha psi alpha psi alpha. And then we have h. Now, these are again getting rho back. So what I have established is that i h bar, the time derivative of this density matrix, is simply the commutator of the operators h and o. So what we had up here was the classical Liouville theorem, relating the time derivative of the density in phase space to the Poisson bracket with h. What we have here is the quantum version, where the time derivative of this density matrix is the commutator of rho with h. Now we are done, because what did we use this Liouville for? We used it to deduce that if I have things that are not changing as a function of time, I have equilibrium systems, where the density is invariant. It's the same. Then rho of equilibrium not changing as a function of time can be achieved by simply making it a function of h. And, more precisely, h and conserved quantities that have 0 Poisson bracket with h. How can I make the quantum density matrix to be invariant of time? 
All I need to do is to ensure that the Poisson bracket of that density with the Hamiltonian is 0. Not the Poisson bracket, the commutator. Clearly, the commutator of h with itself is 0. Hh minus hh is 0. So this I can make a function of h, and any other kind of quantity also that has 0 commutator with h. So, essentially, the quantum version also applies. That is, the quantum version of rho equilibrium, I can make it by constructing something that depends on the Hilbert space through the dependence of the Hamiltonian and on the Hilbert space and any other conserved quantities that have 0 commutator also with the Hamiltonian. So now, what we will do next time is we can pick and choose whatever rho equilibrium we had before. Canonical e to the minus beta h. We make this matrix to be e to the minus beta h. Uniform anything, we can just carry over whatever functional dependence we had here to here. And we are ensure to have something that is quantum mechanically invariant. And we will then interpret what the various quantities calculated through that density matrix and the formulas that we described actually. OK? |
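As a preview of what carrying over the canonical weight means in practice, here is a small sketch (truncated oscillator basis, h-bar = omega = 1, an arbitrary beta): rho proportional to e^(-beta H) commutes with H, so it is stationary under the equation i h-bar d rho/dt = [H, rho] derived above, and it reproduces the occupation number 1/(e^(beta h-bar omega) - 1) used at the start of the lecture.

```python
import numpy as np
from scipy.linalg import expm

n_max, beta = 60, 0.7
n = np.arange(n_max)
H = np.diag(n + 0.5)                          # oscillator Hamiltonian, hbar*omega = 1

rho = expm(-beta * H)
rho /= np.trace(rho)                          # normalize so that Tr rho = 1

print(np.allclose(H @ rho - rho @ H, 0))      # [H, rho] = 0, so rho is stationary: True
n_avg = np.trace(rho @ np.diag(n)).real
print(n_avg, 1 / np.expm1(beta))              # both are about 0.986
```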
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 26_Ideal_Quantum_Gases_Part_5.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So last time we started talking about superfluid helium. And we said that the phase diagram of helium-4, the isotope that is a boson has the following interesting properties. First of all, helium stays a liquid all the way down to 0 temperature because of the combinations of its light mass and heat interactions. And secondly, that [INAUDIBLE] to cool down helium through this process of evaporated cooling, one immediately observes something interesting happening at temperatures below 2 degrees Kelvin, where it becomes this superfluid that has a number of interesting properties. And in particular as pertaining to viscosity, we made two observations. First of all, you can make these capillaries-- and I'll show you been movie in more detail later on where it flows through capillaries as if there is no resistance and there is nothing that sticks to the walls of the capillaries. It flows without viscosity, whereas there was this experiment of Andronikashvili, in which you had something that was oscillating and you were calculating how much of the helium was stuck to the plates of the container. And the result was something like this, that is there was a decrease in the amount of fluid that is stuck to the plates. But it doesn't go immediately down to 0. It has a kind of form such as this that I will draw more clearly now and explain. So what we did last time was to note that people observed that there were some similarities between this superfluid transition and Bose-Einstein condensation. But what I would like to highlight in the beginning of this lecture is that there are also very important differences. So let's think about these distinctions between Bose-Einstein condensate add the superfluid helium. One set of things we would like to take from the picture that I have over there, which is diffraction of the fluid that is stuck to the plates and in some sense behaves like a normal fluid. Now let me make the analogy to Bose-Einstein condensation. You know that in the Bose-Einstein condensation there was also this phenomenon that there was a separation into two parts of the total density. And be regarded as a function of temperature some part of the density as belonging to the normal state. So when you are above Tc of n, everything is essentially normal. And then what happens is that when you hit Tc of n you can no longer put all of the particles that you have in the excited states. So the fraction that goes in the excited states goes down and eventually goes to 0 at 0 temperature. And essentially, this would be the reverse of the curve that we have in that figure over there. Basically, there's a portion that would be the normal part that would be looking like this. Now, the way that we obtained this result was that basically there was a fraction that was in the normal state. The part that was excited was described by this simple formula that was g over lambda cubed [INAUDIBLE] of 3/2. So it went to 0 as T to the 3/2. So basically, the proportionality here is T to the 3/2. And then basically, the curve would come down here and go to 0 linearly. 
Now what is shown in the experiment is that the curve actually goes to 0 in a much more sharp fashion. And actually, when people try to fit a curve through this, the curve looks something like Tc minus T to the 2/3 over. But also it goes to 0 much more rapidly than the curve that we have for Bose-Einstein condensation. Indeed, it goes through 0 proportionately to T to the fourth. And so that's something that we need to understand and explain. Now all of the properties of the Bose-Einstein condensate was very easy to describe once we realized that all of the things that correspond to excitation, such as the energy, heat capacity, pressure, come from this fraction that is in the excited state. And we can calculate, say, the contribution to energy heat capacity, et cetera. And in particular, if we look at the behavior of the heat capacity as a function of temperature, for this Bose-Einstein condensate, the behavior that we had was that again simply at low temperatures it was going proportionately to T to the 3/2, because this was the number of excitations that you had. So these two T to the 3/2 are very much related to each other. And then this curve would basically go along its way until it hit Tc of n at some point. And we separately calculated the behavior coming from high temperatures. And the behavior from high temperatures would start with the classical result that the heat capacity is 3/2 per particle due to the kinetic energy that you can put in these things. And then it would rise, and it would then join this curve over here. Now when you look at the actual heat capacity, indeed the shape up the heat capacity is the thing that gives this transition the name of a lambda transition. It kind of looks like a lambda. And at Tc there are divergences approaching from the two sides that behave like the log. And again more importantly, what we find is that at 0 temperature, it doesn't go through 0, the heat capacity as T to the 3/2, but rather as T to the third power. So the red curve corresponds to superfluid, the green curve corresponds to Bose-Einstein condensate. And so they're clearly different from each other. So that's what we would like you to understand. Well the thing that is easiest to understand and figure out is the difference between these heat capacities. And the reason for that is that we had already seen a form that was of the heat capacity that behaved like T cubed. That was when we were looking at phonons in a solid. So let's remind you why was it that for the Bose-Einstein condensate we were getting this T to the 3/2 behavior? The reason for that was that the various excitations, I could plot as a function of k, or p, which is h bar k. They're very much related to each other. And for the Bose-Einstein condensate, the form was simply a parabola, which is this p squared over 2 mass of the helium. Let's say, assuming that what we are dealing with is non-interacting particles with mass of helium. And this parabolic curve essentially told us that various quantities behave as T to the 3/2. Roughly, the idea is that at some temperature that has energy of the order of kT, you figure out how far you have excited things. And since this form is a parabola, the typical p is going to scale like T to the 1/2. You have a volume in three dimensions in p space. If the radius goes like T to the 1/2, the volume goes like T to the 3/2. That's why you have all kinds of excitations such as this. 
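A small numerical check of the scaling argument just made, computing the energy density for a Bose occupation of a given dispersion and extracting the power of T (units with hbar = k_B = m = 1; only the exponent matters, and the cutoff and grid are arbitrary numerical choices).

import numpy as np

def energy_density(T, dispersion, pmax=20.0, n=400000):
    """E/V = integral d^3p/(2 pi)^3 of eps(p) / (exp(eps/T) - 1), units with hbar = k_B = 1, mu = 0."""
    p = np.linspace(1e-6, pmax, n)
    eps = dispersion(p)
    integrand = p**2 * eps / np.expm1(eps / T) / (2 * np.pi**2)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (p[1] - p[0])   # trapezoid sum

def energy_exponent(dispersion, T1=0.04, T2=0.06):
    """d ln E / d ln T; the heat capacity C = dE/dT then scales as T^(exponent - 1)."""
    return np.log(energy_density(T2, dispersion) / energy_density(T1, dispersion)) / np.log(T2 / T1)

print(round(energy_exponent(lambda p: p**2 / 2), 2))  # ~2.5, so C ~ T^(3/2) for the quadratic (ideal BEC) spectrum
print(round(energy_exponent(lambda p: p), 2))         # ~4.0, so C ~ T^3 for the linear (phonon) spectrum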
And the reason for the Bose-Einstein condensation was that you would start to fill out all of these excitations. And when you were adding all of the mean occupation numbers, the answer was not coming up all the way to the total number of particles. So then you have to put an excess at p close to 0, which corresponds to the ground state of this system. Now of course when you look at helium, helium molecules, helium atoms have the interactions between them that we discussed. In particular, you can't really put them on top of each other. There is a hard exclusion when you bring things close to each other. So the ground state of the system must look very different from the ground state of the Bose-Einstein condensate in which the particles freely occupy the entire box. So there is a very difficult story associated with figuring out what the ground state of this combination of interacting particles that make up liquid helium is. What is the behavior? What's the many-body wave function at 0 temperature? Now as we see here, in order to understand the heat capacity we really don't need to know what is happening at the ground state. What we need to know in order to find heat capacity is how to put more energy in the system above the ground state. So we need to know something about excitations. And so that's the perspective that Landau took. Landau said, well, this is the spectrum of excitations if you had [? plain ?] particles without any interactions. Let's imagine what happens if we gradually tune in the interactions, the particles start to repel each other, et cetera. This non-interacting ground state that we had in which the particles were uniformly distributed across the system will evolve into some complicated ground state I don't know. And then, presumably there would be a spectrum of excitations around that ground state. Now the excitations around the non-interacting ground state we can label by this momentum peak. And it kind of makes sense that we should be able to have a singular label for excitations around the ground state of these interacting particles. And this is where you sort of needed a little bit of Landau's type of insight. He said, well, presumably what you do when you have excitations of momentum p is to distort the wave function in a manner that is consistent with having these kind of excitations of momentum p. And he said, well we typically know that if you have a fluid or a solid and we want to impart some momentum above the ground state, if will go in the form of phonons. These are distortions in which the density will vary in some sinusoidal or cosine way across the system. So he said that maybe what is happening is that for these excitations you have to take what ever this interacting ground state is-- which we don't know and can't write that down-- but hope that excitations around it correspond to these distortions in density. And that by analogy with what happens for fluids, that the spectrum of excitations will then become a linear. You would have something like a sound wave that you would have in a liquid or a solid. So if you do this, if you have a linear spectrum then we can see what happen. For a particular energy of the order of kT, we will go here, occupied momenta that would be of the order of kT over H bar. The number of excitations would be something like this cubed, and so you would imagine that you would get a heat capacity that is proportional to this times kB. 
And if you do things correctly, like really for the case of phonons or photons, you can even figure out what the numerical prefactor is here. And there's a velocity here because this curve goes like H bar vP. So then you can compare what you have over here with the coefficient of the T cubed over here, and you could even figure out what this velocity is. And it turns out to be of the order of 240 meters per second. A typical sound wave that you would have in a fluid. So that's kind of a reasonable thing. Now of course, when you go to higher and higher momenta, it corresponds to essentially shorter and shorter wavelengths. You expect that when you get wavelengths that is of the order of the interatomic spacing, then the interactions become less and less important. You have particles rattling in a cage that is set up by everybody else. And then you should regain this kind of spectrum at high values of momentum. And so what Landau did was he basically joined these things together and posed that there is a spectrum such as this that has what is called a phonon part, which is this linear part where energy goes like H bar, like the velocity times the momentum. And it has a part that in the vicinity of this point, you can expand parabolically. And it's called rotons. There is a gap delta, and then H bar squared over 2, some effective mass, k minus k0 squared. This k0 turns out to be roughly of the order of the inverse [INAUDIBLE], two Angstrom inverse, between particles. This mu is of the order of mass of [INAUDIBLE]. So about 10 years or so after Landau, people are able to get to this whole spectrum of excitation through neutron scattering and other scattering types of experiments. And so this picture was confirmed. So, yes? AUDIENCE: What is a roton? [INAUDIBLE] PROFESSOR: Over here what you are seeing is essentially particles rattling in the cage. It is believed that what is happening here are collections of three or four atoms that are kind of rotating in a bigger cage. So something, the picture that people draw is three or four particles rotating around. Yes? AUDIENCE: Is there some [INAUDIBLE] curve where energy's decreasing [INAUDIBLE]? Does the transition between photon and roton [INAUDIBLE]? PROFESSOR: There is no thermodynamic or other rule that says that the energy should be one of [INAUDIBLE] momentum that I know. Yes? AUDIENCE: Is there an expression for that k0 in terms of temperature and other properties of the system? PROFESSOR: This curve of excitations is supposed to be property of the ground state. That is, you take this system in its ground state, and then you create an excitation that has some particular momentum and calculate what the energy of that is. Actually, this whole curve is phenomenological, because in order to get the excitations you better have an expression for what the ground state is. And so writing a kind of wave function that describes the ground state of this interacting system is a very difficult task. I think Feynman has some variational type of wave function that we can start and work with, and then calculate things approximately in terms of that. Yes? AUDIENCE: What inspired Landau to propose that there was a [INAUDIBLE]? PROFESSOR: Actually, it was not so much I think looking at this curve, but which I think if you want to match that and that, you have to have something like this. But this was really that the whole experimental version of the heat capacity, it didn't seem like this expression was sufficient. 
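The phonon-roton spectrum described above can be written down explicitly. In the sketch below, the velocity and k0 are the numbers quoted in the lecture; the roton gap (Delta/kB about 8.6 K) and the effective mass (about 0.16 helium masses) are rough literature values inserted only to make the example concrete, not values given in the lecture.

import numpy as np

hbar = 1.054571817e-34        # J s
kB = 1.380649e-23             # J/K
m_he = 6.6464731e-27          # kg, helium-4 mass
c = 240.0                     # m/s, phonon velocity quoted in the lecture
k0 = 1.9e10                   # 1/m, roton wavevector (~2 inverse Angstrom, as quoted)
delta = 8.6 * kB              # J, roton gap, approximate literature value
mu = 0.16 * m_he              # kg, roton effective mass, approximate literature value

def phonon_branch(k):
    return hbar * c * k                                   # epsilon = hbar c k, small k

def roton_branch(k):
    return delta + (hbar * (k - k0))**2 / (2 * mu)        # epsilon = Delta + hbar^2 (k - k0)^2 / (2 mu)

for k in [0.2e10, 0.5e10, 1.9e10, 2.2e10]:
    eps = phonon_branch(k) if k < 1.0e10 else roton_branch(k)
    print(f"k = {k:.1e} 1/m   epsilon/kB = {eps / kB:5.1f} K")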
And then there was some amount of excitation and energy at the temperatures that were experimentally accessible that heated at the presence of the rotons in the spectrum. Yes? AUDIENCE: So continuing [INAUDIBLE], if you raise the thermal energy kBT, [INAUDIBLE] level where you have multiple roots of this curve. PROFESSOR: Yes. AUDIENCE: So you will be able to excite some states and have some kind of gap, like a gap of momenta which are not. PROFESSOR: OK, so at any finite temperatures-- and I'll do the calculation for you shortly-- there is a finite probability for exciting all of these states. What you are saying is that when there is more occupation at this momentum compared to that, but much less compared to this. So that again does not violate any condition. So it is like, again, trying to shake this system of particles. Let's imagine that you have grains, and you are trying to shake them. And it may be that at some shaking frequencies, then there are things that are taking place at short distances in addition to some waves that you're generating. Yeah? AUDIENCE: I have a question about the methodology of getting this spectrum, because if we have a experimental result of the [INAUDIBLE] capacity, then if we assume there's a spectrum, there has to be this one, because it gives a one to one correspondence. So we can get the spectrum directly from the c. So-- PROFESSOR: I'm not sure, because in reality this is going to be spectrum in three-dimensional space. And there is certainly an expression that relates the heat capacity to the excitation spectrum. What I'm not sure is whether mathematically that expression uniquely invertible. It is given an epsilon-- c, you have a unique epsilon of p. Certainly, given an epsilon of p, you have a unique c. Yes? AUDIENCE: But if the excitation spectrum only depends on, let's say, k squared, not on the three-dimensional components of k, then maybe it's much easier to draw a one to one correspondence? PROFESSOR: I don't know, because you have a function of temperature and you want to convert it to a function of momentum that after some integrations will give you that function of temperature. I don't know the difficulty of mathematically doing that. I know that I can't off my head think of an inversion formula. It's not like the function that you're inversing. So the Landau spectrum can explain this part. It turns out that the Landau spectrum cannot explain this logarithmic divergence. Yes? AUDIENCE: Sorry, one more question about this. The allowed values of k, do they get modified, or are they thought to be the same? PROFESSOR: No. So basically at some point, I have to change perspective from a sum over k to an integral over k or an integral over p. The density of states in momentum is something that is kind of invariant. It is a very slight function of shape. So the periodic boundary conditions and the open boundary conditions, et cetera, give you something slightly different over here. But by the time you go to the continuum, it's a property of dimension only. It doesn't really depend on the underlying shape. AUDIENCE: So we still change sums for integrals with the same rules? PROFESSOR: Yes. It's a sort of general density of state property. So there's some nice formula that tells you what the density of state is for an arbitrary shape, and the leading term is always proportional to volume or area [INAUDIBLE] the density of state that you have been calculating. 
And then there are some leading terms that are proportional to if it is volume to area, or number of edges, et cetera. But those are kind of subleading the thermodynamic sense. So I guess Feynman did a lot of work on formalizing these ideas of Landau getting some idea of what the ground state does, is, and excitations that you can have about the spectrum. And so he was very happy at being able to explain this, including the nature of rotons, et cetera. And he was worried that somehow he couldn't get this logarithmic divergence. And that bothered him a little bit, but Onsager told him that that's really a much more fundamental property that depends on critical phenomena, and for resolving that issue, you have to come to 8.334. So we will not discuss that, nor will we discuss why this is Tc minus T to the 2/3 power and not a linear dependence. It again is one of these critical properties. But we should be able to explain this T to the fourth. And clearly, this T to the fourth is not as simple as saying this exponent changed from 3/2 to 3, this 3/2 should also change the 3. No, it went to T to the fourth, so what's going on over here? So last time at the end of the lecture I wrote a statement that the BEC is not superfluid. And what that really means is that it has too many excitations, low energy excitations. So imagine the following, that maybe we have a container-- I don't know, maybe we have a tube-- and we have our superfluid going through this with velocity v sub s. We want it to maintain that velocity without experiencing friction, which it seems to do in going through these capillaries. You don't have to push it, it seems to be going by itself. And so the question is, can any of these pictures that we drew for excitations be consistent with this? Now, why am I talking about excitations and consistency with superfluid? Because what can happen in principle is that within your system, you can spontaneously generate some excitation. This excitation will have some momentum p and some energy epsilon of p. And if you spontaneously can create these excitations that would take away energy from this kinetic energy of the flowing superfluid, gradually the superfluid will slow down. Its energy will be dissipated and the superfluid itself will heat up because you generated these excitations with it. So let's see what happens. If I were to create such an excitation, actually I have to worry about momentum conservation because I created something that carried momentum p. Now initially, let's say that this whole entity, all of the fluid that are superflowing with velocity Vs have mass M. So the initial momentum would be MVs. Now, I created some excitation that is carrying away some momentum p. So the only thing that can ensure this happens is that I have to slightly change the velocity of the fluid. Now this change in velocity is infinitesimal. It is Vs minus p divided by M. M is huge, so why bother thinking about this? Well, let's see what the change in energy is. Delta E. Let's say, well, you created this excitation so you have energy epsilon of p. But I say, in addition to that there is a change in the kinetic energy of the superfluid. I'm now moving at Vs prime squared, whereas initially when this excitation was not present, I was moving at Vs. And so what do you have here? We have epsilon of p, M over 2 Vs minus p over M, this infinitesimal change in velocity squared minus M over 2 Vs squared. 
We can see that the leading order of the kinetic energy goes away, but that there is a cross term here in which the M contribution-- the contribution of the mass-- goes away. And so the change in energy is actually something like this. So if I had a system that when stationary, the energy to create an excitation of momentum p was epsilon of p, when I put that in a frame that is moving with some velocity Vs, you have the ability to borrow some of that kinetic energy and the excitation energy goes down by this amount. So what happens if I take the Bose-Einstein type of excitation spectrum that is p squared over 2M and then subtract a v dot p from it? Essentially there is a linear subtraction going on, and I would get a curve such as this. So I probably exaggerated this by a lot. I shouldn't have subtracted so much. Let me actually not subject so much, because we don't want to go all the way in that range. But you can see that there is a range of momenta where you would spontaneously gain energy by creating excitations. If the spectrum was initially p squared over 2M, basically you just have too many low energy excitations. As soon as you start moving it, you will spontaneously excite these things. Even if you were initially at 0 temperature, these phonon excitations would be created spontaneously in your system. They would move all over the place. They would heat up your system. There is no way that you can pass the Bose-Einstein condensate-- actually, there's no way that you can even move it without losing energy. But you can see that this red curve does not have that difficulty. If I were to shift this curve by an amount that is linear, what do I get? I will get something like this. So the Landau spectrum is perfectly fine as far as excitations is concerned. At zero temperature, even if the whole fluid is moving then it cannot spontaneously create excitations, because you would increase energy of the system. So there's this difference. Ultimately, you would say that the first time you would get excitation is if you move it fast enough so that some portion of this curve goes to 0. And indeed, if you were to try to stir or move a superfluid fast enough, there's a velocity at which it breaks down, it stops being a superfluid. But it turns out that that velocity is much, much smaller than you would predict based on this roton spectrum going down. There are some other many body excitations that come before and cause the superfluid to lose energy and break down. But the genetic idea as to why a linear spectrum for k close to 0 is consist with superfluidity but the quadratic one is not remains correct. Now suppose I am in this situation. I have a moving super fluid, such as the one that I have described over here. The spectrum is going to be somewhat like this, but I'm not at 0 temperature. I want to try to describe this T to the fourth behavior, so I want to be at some finite temperature. So if I'm at some finite temperature, there is some probability to excite these different states, and the number that would correspond to some momentum p would be given by this general formula you have, 1 over Z inverse e to the beta epsilon of p minus 1. Furthermore, if I think that I am in the regime where the number of excitations is not important because of the same reason that I had for Bose-Einstein condensate, I would have this formula except that I would use epsilon of p that is appropriate to this system. Actually, what is it appropriate to this system is that my epsilon of p was velocity times p. 
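The Landau argument above amounts to minimizing epsilon(p)/p over the spectrum: if the minimum is zero, excitations can be created at any flow velocity. A minimal numerical sketch, reusing the same illustrative spectrum parameters as before (a crude patchwork of the two branches, good enough for locating the minimum of epsilon/p); as noted in the lecture, the resulting roton value still overestimates the velocity at which real superflow breaks down.

import numpy as np

hbar = 1.054571817e-34
kB = 1.380649e-23
m = 6.6464731e-27
c, k0, delta, mu = 240.0, 1.9e10, 8.6 * kB, 0.16 * m   # same illustrative parameters as above

k = np.linspace(1e8, 3.0e10, 200000)
p = hbar * k

eps_free = p**2 / (2 * m)                                       # ideal-gas (BEC) spectrum
eps_landau = np.minimum(hbar * c * k,                           # crude phonon/roton patchwork,
                        delta + (hbar * (k - k0))**2 / (2 * mu))  # only used to locate min(eps/p)

print("min eps/p, quadratic spectrum    ~", (eps_free / p).min(), "m/s  (tends to 0 as k -> 0: no superflow)")
print("min eps/p, phonon-roton spectrum ~", (eps_landau / p).min(), "m/s  (finite, set by the roton minimum)")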
But then I started to move with this superfluid velocity. Actually, maybe I'll call this c so that I a distinction between c, which is the linear spectrum here, and the superfluid velocity dot p. No, this is actually a vectorial product. And because of that, I only drew one part of this curve that corresponds to positive momentum. If I had gone to negative momentum, actually, this curve would have continued and whereas one branch the energy is reduced, if I go to minus p, the energy goes up. So whereas if the superfluid was not moving, I can generate as many excitations with momentum p as momentum minus p. Once the superfluid is moving, there is a difference. One of them has a v dot p. The other has minus v dot p. So because of that, there is a net momentum that is carried by these excitations. This net momentum is obtained by summing over all of these things, multiplying with appropriate momentum. So I have beta is CP minus v dot p minus 1. This is the momentum of the excitation. This is the net momentum of the system for one excitation. But then I have to sum over all possible P's. Sum over P's, as we've discussed, I can replace with an integral. And sum over k be replaced with V. Integral over K, K and P are simply related by a factor of h bar. So whereas before for k I had 2 pi cubed for v cubed, I have 2 pi h bar cubed. So this is what I have to calculate. Now what happens for small v? I can make an expansion in vs. The 0 order term in the expansion is what we would have for non-moving fluid. Momenta in the two directions are the same, so that contribution goes away. The first contribution that I'm going to get is going to come from expanding this to lowest order in P. So there is a P that is sitting out front. When I make the expansion, I will get a vs dot P times the derivative of the exponential function gives me a factor of beta e to the beta CP. And down here I will have e to the beta CP minus 1 squared. Now, in the problem set you have to actually evaluate this integral. It's not that difficult. It's related to Zeta functions. But what I'm really only interested in what is the temperature dependence? You can see that I can rescale this combination, call it x. Essentially what it says is that whenever I see a factor of p, replace it by Kt over Cx. And how many P's do I have? I have three, four, five. So I have five factors of P. So I will have five factors of Kt. One of them gets killed by the beta, so this whole thing is proportional to T to the fourth power. So what have we found? We have found that as this fluid is moving at some finite temperature T, it will generate these excitations. And these excitations are preferably along the direction of the momentum. And they correspond to an additional momentum of the fluid that is proportional to the volume. It's proportional to temperature to the fourth and something. And of course, proportional to the velocity. Now, we are used to thinking of the proportionality of momentum and velocity to be some kind of a mass. If I divide that mass by the volume, I have a density of these excitations. And what we have established is that the density of those excitations is proportional to t to the fourth. And what is happening in this Andronikashvili experiment is that as these plates are moving, by this mechanism the superfluid that is in contact with them will create excitations. And the momentum of those excitations would correspond to some kind of a density that vanishes as T to the fourth, again in agreement with what we've seen here. OK? 
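Carrying out the small-velocity expansion just described gives the standard phonon result for the normal-fluid density; the numerical prefactor below is the textbook value and is not needed for the T to the fourth scaling argument, which is all the lecture uses.

\vec{P} = V\int \frac{d^3p}{(2\pi\hbar)^3}\,\frac{\vec{p}}{e^{\beta(cp-\vec{v}_s\cdot\vec{p})}-1}
\;\simeq\; \rho_n V \vec{v}_s ,
\qquad
\rho_n = \frac{1}{3}\int \frac{d^3p}{(2\pi\hbar)^3}\, p^{2}\left(-\frac{\partial n}{\partial \varepsilon}\right)_{\varepsilon=cp}
= \frac{2\pi^{2}}{45}\,\frac{(k_B T)^{4}}{\hbar^{3} c^{5}} \;\propto\; T^{4},

where the angular average replaces p_\alpha p_\beta by (p^2/3)\,\delta_{\alpha\beta}, and the remaining radial integral reduces to \int_0^\infty dp\, p^{3}/(e^{\beta c p}-1) = (\pi^{4}/15)\,(k_B T/c)^{4}.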
Yes? AUDIENCE: So in an integral expression you have vs as part of a dot product. PROFESSOR: Yes. AUDIENCE: And then in the next line [INAUDIBLE]. So it's in that direction. PROFESSOR: So let's give these indices p in direction alpha. This is p in direction alpha. This is p in direction alpha. This is v in direction beta, p in direction beta, sum over beta. Now, I have to do an angular integration that is spherically symmetric. And then somewhere inside there it has a p alpha p beta. That angular integration will give me a p squared over three delta alpha beta, which then converts this v beta to a b alpha, which is in the direction of the momentum. Yes? AUDIENCE: Will you say once again what happened to the integral dimension? PROFESSOR: OK. So when we are in the Bose-Einstein condensate, as far as the excitations of concerned we have zero chemical potential. Whatever number of particle that we have in excess of what can be accommodated through the excitations we put together in the ground state. So if you like the ground state, the kp equals to 0 or k equals to 0, is a reservoir. You can add as many particles there or bring as many particles out of it as you like. So effectively, you have no conservation number and no need for a [? z. ?] Of course, that we only know for the case of the true Bose-Einstein condensate. We are kind of jumping and giving that concept relevance for the interacting superfluid. OK? Any other questions? So this is actually the last item I wanted to cover for going on the board. The rest of the hour, we have this movie that I had promised you. I will let that movie run. I also have all the connection of problem sets, and exams, and test that you have not picked up. So while the movie runs, you are welcome to sit and enjoy it. It's very nice. Or you can go and take your stuff and go your own way or do whatever you like. So let's go back. [VIDEO PLAYBACK] PROFESSOR: There will be more action. -We just made a transfer from liquid helium out of the storage tank into our own experimental equipment. It is a remarkable [INAUDIBLE]. It has two different and easily distinguishable liquid phases-- a warmer and a colder one. The warmer phase is called liquid helium I and the colder phase liquid helium II. The two stages are separated by a transition temperature, known as the lambda point. When liquid helium is pulled down through the lambda point, a transition from helium I to helium II is clearly visible. We will show it to you later in this film. The two liquids behave nothing like any other known liquid, although it could be said that helium I, the warmer phase, approximates the behavior of common liquids. But it is helium II, the colder phase, which is truly different. Because of this, it is called a superfluid. The temperatures involved when working with liquid helium are quite low. Helium boils at 4.2 degrees Kelvin under conditions of atmospheric pressure. And the lambda point lies at roughly 2.2 degrees. Note that this corresponds to minus 269 and minus 271 degrees centigrade. The properties of liquid helium that I have just been telling you about are characteristic of the heavy isotope if helium, helium-4. The element occurs in the form of two stable isotopes. [INAUDIBLE] The second and lighter one, helium-3, is very rare. Its abundance is only about 1 part of 10 million. Pure liquid helium-3 is the subject of intensive study at the present time, but so far no second superfluid liquid phase has been found to exist for helium-3. 
The low temperature at which we'll be working calls for well-insulated containers. The dewar meets our requirements. The word "dewar" is a scientific name given to a double-walled vessel with the space between the walls evacuated. When these dewars are made of glass, the surface of this inner space is usually filtered to cut down heat transfer by radiation. However, our dewars will have to be transparent so that we can look at what's going on inside. Now, liquid helium is commonly stored in double dewars. The design is quite simple, just put one inside the other like this. In the inner dewar, we put the liquid helium, and in the space between the inner and outer dewar, we maintain a supply of liquid air. Here is a double dewar exactly like the one we will be using in our demonstration experiment. The inner dewar is filled with liquid helium. The outer dewar contains liquid air. The normal boiling temperature of liquid air is about 80 degrees Kelvin, 75 or more degrees hotter than liquid helium. The purpose of the liquid air is twofold. First, we put the liquid air in the outer dewar well ahead of putting liquid helium in the inner dewar. In this way, the inner dewar is pre-cooled. Secondly, we maintain a supply of liquid air in the outer dewar because it provides an additional [INAUDIBLE] of insulation now that the liquid helium is in the inner dewar. The [INAUDIBLE] liquid air attests to the fact that it is absorbing some of the heat which enters the double dewar. Even with the boiling of the liquid air, the liquid helium is clearly visible. Later, we will use liquid air cooled below its boiling temperature to reduce or eliminate the air bubbles for better visibility. Now the liquid air is cooled down and we have eliminated boiling. The smaller bubbles of the boiling liquid helium are clearly visible. The cover over the inner dewar has a port, at present open. The liquid helium is at atmospheric pressure, so its temperature is 4.2 degrees Kelvin. In other words, what we have in here now is liquid helium one, the warmer of the two phases. Before we cool it down to take a look at the superfluid phase, I want to dwell greatly on the properties of helium I. I've told you before that even helium I is different from the normal liquids. The distance between neighboring atoms and this liquid is quite large. The atoms are not as closely packed as in the classical liquids. The reason for this is quantum mechanics. The zero point energy is relatively more important here than in any other liquid. As a consequence, liquid helium has a very low mass density, only about 13% the density of water, and a very low optical density. The index of refraction is quite close to 1. This makes its surface hard to see with the naked eye under ordinary lighting conditions. You are no doubt familiar with the fact that the helium atom has closed shell atomic structure. This explains why helium is a chemically inert element. It also accounts for the fact that the force of attraction between neighboring helium atoms, the so-called van der Waals force, is small. It takes little energy to pull two helium atoms apart, as for example in evaporation. This gives liquid helium a better small latent heat of vaporization. Only five calories are needed to evaporate one gram. Compare this with water, where evaporation requires between 500 and 600 calories per gram. 
The low van der Waals force combined with a large zero point energy also accounts for the fact that liquid helium does not freeze, cannot be solidified at ordinary pressure, no matter how far we cool it. However, liquid helium has been solidified at high pressure. The liquid helium in the dewar is at 4,2 degrees. We now want to cool it down to the lambda point and show you the transition to the [INAUDIBLE]. Our method will be cooling by evaporation using a vacuum pump. Now, the lambda point lies at 2.2 degrees, only 2 degrees colder than the [INAUDIBLE] temperature of the liquid. What's more, not very much heat has been removed from the liquid helium now in the dewar to bring it to the lambda point. It amounts to only about 250 calories. Nevertheless, don't get the idea that this cooling process is easy. On the contrary, it's quite difficult. More than 1/3 of the liquid helium now in the dewar has to be knocked away in vapor form before we can get what remains behind to the lambda point. That requires an awful lot of pumping and explains why we use this large and powerful vacuum pump over here. Even with this pump, the cooling process takes a considerable amount of time. Let me explain why it is so difficult to cool liquid helium to the lambda point. I have already mentioned that liquid helium has a remarkably small [INAUDIBLE] vaporization, only five calories per gram. At the same time, liquid helium at 4.2 degrees has a high specific heat, almost calorie per gram. Therefore, 1 gram of the vapor pumped away carries with it an amount of heat which can cool only 5 or 6 grams of liquid helium by 1 degree. That's not very much cooling. It is less by a factor of almost 100 than when we cool water by evaporation. The situation gets even worse as cooling progresses below 4.2 degrees because the specific heat of liquid helium rises astonishingly. As we approach 2,17 degrees, the lambda point. The heat of vaporization, on the other hand, remains roughly the same. So a given amount of vapor carried off produces less and less cooling as we approach 2.17 degrees, Our thermometer here is a low pressure gauge connected to the space above the liquid helium. The needle registers the pressure there. It is the saturated vapor pressure of liquid helium. The gauge is calibrated for the corresponding temperature. We call it a vapor pressure thermometer. As we approach 2.17 degrees, boiling becomes increasingly violent. Suddenly it stops. This was the transition. The liquid you now see is helium II. Even though evaporation does continue, there is no boiling. The normal liquids, such as the water in this basin, boil because of their relatively low heat conductivity. Before heat, [INAUDIBLE] at one point can be carried away to a cooler place in the liquid bubbles of the vapor form. Helium I behaves like a normal liquid in this respect. The absence of boiling in helium II reveals that this phase acts as if it had a large heat conductivity. As a matter of fact, as the liquid helium passed through the lambda point transition you just saw, its heat conductivity increased by the fantastic factor of one million. The heat conductivity of helium II is many times greater than in the metals silver and copper, which are among the best solid heat conductors. And yet here we deal with a liquid. For this alone, helium II deserves the name of superfluid. Actually, the way in which helium II transports such large quantities of heat so rapidly is totally different from the classical concept for heat conduction. 
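A rough back-of-the-envelope check of the earlier statement in this segment that more than a third of the liquid must be pumped away, using only the round numbers quoted in the narration (latent heat about 5 cal/g, specific heat of order 1 cal/(g K), cooling from 4.2 K to 2.17 K). Treating both as constants is a simplification, since, as the film notes, the specific heat actually rises near the lambda point.

import numpy as np

L = 5.0          # cal/g, latent heat of vaporization quoted in the film
c_p = 1.0        # cal/(g K), rough specific heat of liquid helium I quoted in the film
T_start, T_end = 4.2, 2.17

# Each gram of vapor removes L calories; cooling m grams by dT costs m*c_p*dT,
# so dm/m = -(c_p/L) dT and the remaining fraction is exp(-c_p*(T_start - T_end)/L).
remaining = np.exp(-c_p * (T_start - T_end) / L)
print(f"fraction evaporated ~ {1 - remaining:.2f}")   # ~0.33, consistent with 'more than 1/3'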
I'll come back to the subject later in connection with an experiment demonstrating the phenomenon of second sound in helium II. Remember that this great change in heat conductivity occurred at a single and fixed transition temperature, the lambda point. We do indeed deal with a change in phase, only here it is a change from one liquid to another liquid. As we told you before, the specific heat of liquid helium is very large as a lambda point. In fact, it behaves abnormally even below the lambda point and falls again very rapidly with the temperature. This discontinuity in specific heat is another reflection of the fact that we are dealing with a change in the phase of the substance. By the way, the curve resembles the Greek letter lambda. The transition temperature got its name from the shape of this curve. [INAUDIBLE] The next one has to do with the viscosity of liquid helium. When a normal liquid flows through a tube, it will resist the flow. In this experiment, we shall cause some glycerin to flow to a tube under its own weight. The top layer is colored glycerin. The liquid layer closest to the tube wall adheres to it. The layer next in from the one touching the wall flows by it and is retarded as it flows due to the interatomic, the van der Waals force of attraction. The second layer in turn drags on the third, and so on inward from the wall, producing fluid friction, or viscosity. The narrower the tube, the slower the liquid rate of flow through it under a given head of pressure. Here I have a beaker with an unglazed ceramic bottom of ultra-fine [INAUDIBLE]. Many capillary channels run through this ceramic disk. The diameter is quite small, about one micron which is 1/10,000 of a centimeter. There is liquid helium in the beaker. It is 4.2 degrees Kelvin, helium I, the normal phase. The capillaries in the disk are fine enough to prevent the liquid now in the beaker from flowing through under its on weight. Clearly, helium I is viscous. To be sure, its viscosity is very small. That's why we had to choose extremely fine capillaries to demonstrate it. Here you see the lambda point transition. The helium II all poured out. The rate of pouring would not be noticeably slower if the [INAUDIBLE] were made yet finer. We call this kind of flow a superflow. The temperature is now at 1.6 degrees. The superflow is even faster. The viscosity of helium II in this experiment is so small that it has not been possible to find a value for it. It is less than the experimental uncertainty incurred in attempts to measure it. We now believe that helium II, the superfluid, has zero viscosity, although we should be more precise here. We believe its viscosity is zero when observing capillary flow. Bear this statement in mind, for we will come up with a contradiction to it in the next experiment, where we will look for viscosity by a different method. There is a copper cylinder in the liquid helium, so mounted as we can turn it about a vertical axis. In order to turn it smoothly and with as little vibration as possible, we laid the cylinder into the [INAUDIBLE] of a simple induction motor energized from outside the dewar. The four horizontal coils you see provide the torque which turns the cylinder. The liquid helium is electrically non-conducting. The coil exerts no torque on it directly. Yet as we turn on our motor, the liquid layer bounding the cylinder is dragged along behind it. The boundary layer in turn drags on the next layer, and so on outward. 
Finally a circulation showing up in the helium due to its own viscosity and the wooden panels we [INAUDIBLE] is turned along. What we have just seen occurred in helium I, the normal phase at 4.2 degrees Kelvin. That is to say, this demonstration is consistent with our results for helium I by capillary flow. Helium I is viscous. Here you see the liquid cooled down and passing into the superfluid phase, helium II. Let's turn on the motor. The paddle wheel starts again. What does this mean? First of all, let me emphasize that, like helium I, helium II is also non-conducting in the electrical sense. In other words, the circulation in the experiment can only have been caused through viscous drag. So we conclude from the rotating cylinder observations that helium II is viscous and from the method of capillary flow that it has zero viscosity. Our experimentation has come up with a paradox. No normal classical liquid is known to behave so inconsistently, in capillary flow on the one hand and in bulk flow on the other. This state of affairs forces us to think of helium II, the superfluid, not as a single, but as a dual liquid. It appeared as if helium II had two separate and yet interpenetrating component liquids. We shall call one component normal. It is this component which we call responsible for the appearance of viscosity below the lambda point in the rotating cylinder experiment. The normal component, as the name suggests, behaves like a normal liquid, and therefore as viscosity. It is the one which the cylinder drags along as its turned. But the normal components cannot flow through the narrow channels of the ceramic disc because of its viscosity. The second component has zero viscosity, and it's called the superfluid component. We think that it does not participate at all in the rotating cylinder experiment below the lambda point. It stays at rest. On the other hand, it can flow through channels of one micron diameter with the greatest of ease and countering no resistance whatever because it has no viscosity. As we'll see later, this flow is not repeated even when the capillary diameters are made far smaller than one micron. This [INAUDIBLE] construction is called the two fluid model for liquid helium II. Whether it is correct or not depends on further tests comparing the theory based on this model with experimental results. We now go on to another phenomenon, the fountain effect. What you see here is a tube which narrows down and then opens into a bulb. A small piece of cotton is stuffed into the [INAUDIBLE] section between the tube and the bulb. And the bulb has been tightly packed with one of the finest powders available, [INAUDIBLE]. And second wad of cotton keeps the powder in the bulb. This powder presents extremely fine capillary channels. Their average diameter is a small fraction of 1 micron. This device has been placed in the dewar. The liquid helium is below the lambda point. We submerge the bulb, and then we'll send a beam of light from this lamp to a point near the top. You will see the light beam when the lamp is turned on. It focuses some heat in the form of infrared radiation on the point in question. The temperature will rise above the temperature of the rest of the apparatus. Let us turn it on. Liquid helium flows through the hole in the bottom of the bulb, through the fine powder, and rises above the level of liquid helium outside. The height to which it will go depends on the temperature increase produced bu the lamp focused on the bulb. 
We can very well ask, where does the mechanical energy come from that does the work necessary to pump the liquid above the ambient level? Before we attempt to discuss this question, there are two other facts that should be noted. The first is by now obvious. The upward flow through the bulb must clearly be a superfluid. Only the superfluid component of helium II could get through. The second fact is more significant. Let me explain it this way, the superfluid flows spontaneously from a to b,from a cooler to a warmer place. Point a is in the cold liquid, but b is being heated with infrared rays. The second law of thermodynamics positively says that heat cannot of itself flow from a point of lower to a point of higher temperature. What does this mean to us here, knowing as we do that the superfluid is flowing from a colder to a warmer spot? Simply this, it carries no heat, no thermal energy. Any internal energy [INAUDIBLE] is no longer thermally available. To say it precisely, it has zero entropy. We have discovered another remarkable property of helium II. Its superfluid component not only is friction free, it also contains no heat. The heat energy contained in helium II as a whole resides, all of it, in the normal component. We may, of course, add heat to the superfluid component, as we are doing when it passes the spot heated by the lamp. But in doing so, we are converting it into the normal component. Let me return briefly to a question posed earlier. Mechanical work is done in pumping the liquid above equilibrium level. Where does it come from? I cannot answer this question here in full, but it suffice to tell you that we are dealing here with a heat engine. The mechanical energy comes from the heat added at the light spot. An amusing demonstration of the same phenomenon again uses a bulb packed with rouge, but this one opens into a capillary. Light is beamed on a spot just below the capillary, and it produces a helium fountain. The phenomenon in this and the previous experiment has become known as the thermomechanical, or the fountain, effect. Below the lambda point, the superfluid component of liquid helium creeps up along the walls of its container in an extremely thin film. It is known as a Rollin film. This creeping film is a variety of superflow. It is difficult to make the film itself directly visible to you. To show it indirectly, we've put some liquid helium into a glass vessel. It is below the lambda point. There is no part porous bottom in this vessel. The film rises along the inside wall and comes down along the outside, collecting in drops at the bottom. The thickness of this creeping film is only a small fraction of 1 micron and of the order of 200 to 300 angstrom. Its speed, while small just below the lambda point, may reach a value as high as 35 centimeters per second at lower temperatures. Our next experiment deals with the phenomenon of second sound. We are all familiar with wave motion in elastic materials, be they solids, liquids, or gases. Elastic energy of deformation, carried away from its source in the form of waves with a characteristic speed, the speed of sound. Liquid helium is an elastic substance both above and below the lambda point. Both helium one and two support sound waves. Now helium II, the superfluid phase, also conducts heat in the form of waves. This remarkable property is shared by no other substance. For better or for worse, it has been called second sound. Normal heat conduction is a diffusion process. 
The rate of flow of heat is proportional to the temperature differences. But in helium II it is a wave process. Heat flows through helium II with a characteristic speed, the speed of second sound. We shall send small heat pulses into helium II from a heater. They will spread away from the heater uniformly, carrying the heat energy with them. The speed of second sound is small just below the lambda point. In the neighborhood of 1.6 degrees Kelvin, it reaches a value of roughly 20 meters per second, and it is in this range that we will run our demonstration. The experimental procedure is as follows. There are two disks in the liquid helium. They are carbon resistors with the carbon applied in thin layers on one side of each disk. In this way, good thermal contact is established between the resistor and the liquid helium. The following resistor will be used as a heater. Electric currents will be sent through it in pulses from this pulse generator by means of the cable you see here. The [INAUDIBLE] of the generator is also connected via a second cable to a dual-trace oscilloscope, where it will be recorded on the bottom [? trip. ?] In other words, it will record the heat pulse as it enters the liquid helium. The pulses have been turned on. They themselves trigger the horizontal sweep of [INAUDIBLE], which records [INAUDIBLE]. It is calibrated at 1 millisecond per unit on the scale. The pulses are 1 millisecond long. The pulses leave the heater at the bottom in the form of second sound and move up to where they strike the carbon resistor at the top. Being heat pulses, they greatly raise is temperature. The carbon resistor is quite sensitive to changes in temperature. It acts as a thermometer. So the heat pulse of second sound creates a pulse-like change in the resistance of the [INAUDIBLE] up here. It isn't hard to convert this resistance pulse into a [INAUDIBLE]. What we will do is to maintain a small DC current in the top resistor. It is supplied from a battery in this metal box. The box shields the circuits in order to reduce electronic noise. The voltage pulse is small. In this second box we have an amplifier. The amplified output is fed into the oscilloscope, where it will appear on the upper trace. The horizontal timescale on this trace is exactly the same as for the bottom trace. However, the upper trace records both exchanges as they occur in the top resistor, a detector of second sound. The temperature of the liquid is about 1.65 degrees Kelvin. The battery has been turned on, and now the amplifier. Among noise and other distortions in the upper trace, a clear-cut voltage pulse appears about 4 and 1/2 units to the right, 4 and 1/2 milliseconds later than the pulse entering the heater. This pulse in the upper trace is also about 1 millisecond long. It is the second sound as it arrives at the upper resistor. The upper trace also shows a strong voltage pulse at the left, simultaneous to the heater pulse. That's due to pick-up by electromagnetic waves with the heater acting as transmitter and the detector as receiver. We're moving the detector toward the heater. The pulse moves with it to the left. Notice the echos of second sound which appear on the upper trace while the detector is near the heater. They're caused by multiple reflections between the two resistors. A total of three echos is clearly visible. [END VIDEO PLAYBACK] PROFESSOR: OK, you can watch the rest of it at home. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 9_Kinetic_Theory_of_Gases_Part_3.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Let's start. So last time, we started with kinetic theory. And we will focus for gas systems mostly. And the question that we would like to think about and answer somehow is the following. You start with a gas that is initially confined to one chamber. And you can calculate all of its thermodynamic properties. You open at time 0 a hole, allowing the gas to escape into a second initially empty chamber. And after some time, the whole system will come to a new equilibrium position. It's a pretty reversible thing. You can do this experiment many, many times. And you will always get roughly the same amount of time for the situation to start from one equilibrium and reach another equilibrium. So how do we describe that? It's slightly beyond what we did in thermodynamics, because we want to go from one equilibrium state to another equilibrium state. Now, we said, OK, we know the equations of motion that governs the particles that are described by this. So here we can say that we have, let's say, N particles. They have their own momenta and coordinates. And we know that these momenta and coordinates evolve in time, governed by some Hamiltonian, which is a function of all of these momenta and coordinates. OK? Fine. How do we go from a situation which describes a whole bunch of things and coordinates that are changing with time to some microscopic description of macroscopic variables going from one equilibrium state to another? So our first attempt in that direction was to say, well, I could start with many, many, many examples of the same situation. Each one of them would correspond to a different trajectory of p's and q's. And so what we can do is to construct some kind of an ensemble average, or ensemble density, first, which is what we did. We can say that the very, very many examples of this situation that I can have will correspond to different points at some particular instant of time, occupying this 6N-dimensional phase space, out of which I can construct some kind of a density in phase space. But then I realize that since each one of these trajectories is evolving according to this Hamiltonian, this density could potentially be a function of time. And we described the equation for the evolution of that density with time. And we could write it in this form-- V rho by dt is the Poisson bracket of the Hamiltonian with rho. And this Poisson bracket was defined as a sum over all of your coordinates. Oops. We had d rho by d vector qi, dot product with dH by dpi minus the other [? way. ?] So there are essentially three N terms-- well, actually, six N terms, but three N pairs of terms in this sum. I can either use some index alpha running from one to three N, or indicate them as contributions of things that come from individual particles and then use this notation with three vectors. So this is essentially a combination of sum of three terms. OK? So I hope that somehow this equation can describe the evolution that goes on over here. And ultimately, when I wait sufficiently long time, I will reach a situation where d rho by dt does not change anymore. 
And I will find some density that is invariant on time. I'll call that rho equilibrium. And so we saw that we could have our rho equilibrium, which then should have zero Poisson bracket with H to be a function of H and any other conserved quantity. OK? So in principle, we sort of thought of a way of describing how the system will evolve in equilibrium-- towards an equilibrium. And indeed, we will find that in statistical mechanics later on, we are going to rely heavily on these descriptions of equilibrium densities, which are governed by only the Hamiltonian. And if there are any other conserved quantities, typically there are not. So this is the general form that we'll ultimately be using a lot. But we started with a description of evolution of these coordinates in time, which is time-reversible. And we hope to end up with a situation that is analogous to what we describe thermodynamically, we describe something that goes to some particular state, and basically stays there, as far as the macroscopic description is concerned. So did we somehow manage to just look at something else, which is this density, and achieve this transition from reversibility to irreversibility? And the answer is no. This equation-- also, if you find, indeed, a rho that goes from here to here, you can set t to minus t and get a rho that goes from back here to here. It has the same time-reversibility. So somehow, we have to find another solution. And the solution that we will gradually build upon relies more on physics rather than the rigorous mathematical nature of this thing. Physically, we know that this happens. All of us have seen this. So somehow, we should be able to use physical approximation and assumptions that are compatible with what is observed. And if, during that, we have to sort of make mathematical approximations, so be it. In fact, we have to make mathematical approximations, because otherwise, there is this very strong reversibility condition. So let's see how we are about to proceed, thinking more in terms of physics. Now, the information that I have over here, either in this description of ensemble or in this description in terms of trajectories, is just enormous. I can't think of any physical process which would need to keep track of the joined positions of coordinates and momenta of 10 to the 23 particles, and know them with infinite precision, et cetera. Things that I make observations with and I say, we see this, well, what do we see? We see that something has some kind of a density over here. It has some kind of pressure. Maybe there is, in the middle of this process, some flow of gas particles. We can talk about the velocity of those things. And let's say even when we are sort of being very non-equilibrium and thinking about velocity of those particles going from one side to the other side, we really don't care which particle among the 10 to the 23 is at some instant of time contributing to the velocity of the gas that is swishing past. So clearly, any physical observable that we make is something that does not require all of those degrees of freedom. So let's just construct some of those things that we may find useful and see how we would describe evolutions of them. I mean, the most useful thing, indeed, is the density. So what I could do is I can actually construct a one-particle density. So I want to look at some position in space at some time t and ask whether or not there are particles there. I don't care which one of the particles. 
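A small symbolic sketch of the statement that any density which depends on phase space only through H has vanishing Poisson bracket with H, here for a single one-dimensional particle (the 3N-dimensional case works the same way term by term). The harmonic potential and the use of sympy are illustrative choices only.

import sympy as sp

q, p, m, k, beta = sp.symbols('q p m k beta', positive=True)
H = p**2 / (2 * m) + k * q**2 / 2                 # a one-particle Hamiltonian as a stand-in

def poisson_bracket(A, B):
    """{A, B} = dA/dq dB/dp - dA/dp dB/dq for a single degree of freedom."""
    return sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)

rho_eq = sp.exp(-beta * H)                        # any function of H would do equally well

print(sp.simplify(poisson_bracket(H, rho_eq)))    # 0: rho_eq is stationary under Liouville evolution
print(sp.simplify(poisson_bracket(H, q * p)))     # nonzero: a generic density keeps evolving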
Actually, let's put in a little bit more information; also, keep track of whether, at some instant of time, I see at this location particles. And these particles, I will also ask which direction they are moving. Maybe I'm also going to think about the case where, in the intermediate, I am flowing from one side to another. And keeping track of both of these is important. I said I don't care which one of these particles is contributing. So that means that I have to sum over all particles and ask whether or not, at time t, there is a particle at location q with momentum p. So these delta functions are supposed to enforce that. And again, I'm not thinking about an individual trajectory, because I think, more or less, all of the trajectories are going to behave the same way. And so what I will do is I take an ensemble average of this quantity. OK? So what does that mean? To do an ensemble average, we said I have to integrate the density function, rho, which depends on all of the coordinates. So I have coordinate for particle one, coordinate-- particle and momentum, particle and momentum for particle two, for particle three, all the way to particle number N. And this is something that depends on time. That's where the time dependence comes from. And again, let's think about the time dependence where 0 is you lift the partition and you allow things to go from one to the other. So there is a non-equilibrium process that you are following. What I need to do is I have to multiply this density with the function that I have to consider the average. Well, the function being a delta function, it's very nice. The first term in the sum, we'll set simply q1 equals to q. And it will set p1 equals to p when I do the integration over p1 and q1. But then I'm left to have to do the integration over particle two, particle three, particle four, and so forth. OK? But this is only the first term in the sum. Then I have to write the sum for particle number two, particle number three, et cetera. But all of the particles, as far as I'm concerned, are behaving the same way. So I just need to multiply this by a factor of N. Now, this is something that actually we encountered before. Recall that this rho is a joint probability distribution. It's a joint probability distribution of all N particles, their locations, and momentum. And we gave a name to taking a joint probability density function, which this is, and integrating over some subset of variables. The answer is an unconditional probability distribution. So essentially, the end of this story is if I integrate over coordinates of two all the way to N, I will get an unconditional result pertaining to the first set of coordinate. So this is the same thing up to a factor of N-- the unconditional probability that I will call rho one. Actually, I should have called this f1. So this is something that is defined to be the one particle density. I don't know why this name stuck to it, because this one, that is up to a factor of N different from it, is our usual unconditional probability for one particle. So this is rho of p1 being p, q1 being q at time. OK? So essentially, what I've said is I have a joint probability. I'm really interested in one of the particles, so I just integrate over all the others. And this is kind of not very elegant, but that's somehow the way that it has appeared in the literature. The entity that I have on the right is the properly normalized probability. Once I multiply by N, it's a quantity this is called f1. 
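A sketch of how one would estimate this one-particle density numerically from an ensemble of microstates: sum the delta functions over particles by histogramming, average over ensemble members, and note that the result integrates to N rather than to 1, which is exactly the factor of N separating f1 from the unconditional probability. The Gaussian/uniform ensemble used to generate fake data is purely a placeholder.

import numpy as np

rng = np.random.default_rng(1)
n_members, N = 2000, 50            # ensemble members and particles per member (placeholders)

# Fake ensemble in one dimension: positions in a box, Maxwellian-looking momenta.
L, sigma_p = 1.0, 1.0
q = rng.uniform(0, L, size=(n_members, N))
p = rng.normal(0, sigma_p, size=(n_members, N))

# f1(q, p) = < sum_i delta(q - q_i) delta(p - p_i) >_ensemble,
# estimated by histogramming all particles of all members and dividing by the member count.
q_bins = np.linspace(0, L, 21)
p_bins = np.linspace(-4, 4, 41)
counts, _, _ = np.histogram2d(q.ravel(), p.ravel(), bins=[q_bins, p_bins])
f1 = counts / (n_members * np.diff(q_bins)[0] * np.diff(p_bins)[0])

print(f1.sum() * np.diff(q_bins)[0] * np.diff(p_bins)[0])   # ~N: f1 integrates to the particle number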
And it's called a density and used very much in the literature. OK? And that's the thing that I think will be useful. Essentially, all of the things that I know about observations of a system, such as its density, its velocity, et cetera, I should be able to get from this. Sometimes we need a little bit more information, so I allow for the possibility that I may also need to focus on a two-particle density, where I keep information pertaining to two particles at time t and I integrate over everybody else that I'm not interested in. So I introduce an integration over coordinates number two-- sorry, number three all the way to N to not have to repeat this combination all the time. The 6-dimensional phase space element for particle i I will indicate by dVi. So I take the full N particle density, if you like, integrate over everybody except 2. And up to a normalization of N times N minus 1, this is called a two-particle density. And again, absent this normalization, this is simply the unconditional probability that I would construct out of this joint probability [INAUDIBLE]. And just for some mathematical purposes, the generalization of this to s particles, we will call fs. So this depends on p1 through qs at time t. And this is going to be N factorial divided by N minus s factorial in general, times the joint unconditional probability that depends on s coordinates p1 through qs at time t. OK? You'll see why I need these higher ones although, in reality, all my interest is on this first one, because practically everything that I need to know about this initial experiment that I set up and how a gas expands I should be able to extract from this one-particle density. OK? So how do I calculate this? Well, what I would need to do is to look at the time variation of fs with t to calculate the time dependence of any one of these quantities. Ultimately, I want to have an equation for df1 by dt. But let's write the general one. So the general one is, up to, again, this N factorial over N minus s factorial, an integral over coordinates that I'm not interested in, of d rho by dt. OK? So I just take the time derivative inside the integral, because I know how d rho by dt evolves. And so this is going to be simply the Poisson bracket of H and rho. And I would proceed from there. Actually, to proceed further and in order to be able to say things that are related to physics, I need to say something about the Hamiltonian. I can't do so in the most general case. So let's write the kind of Hamiltonian that we are interested in and that describes the gas-- so the Hamiltonian for the gas in this room. One term is simply the kinetic energy. Another term is that gas particles are confined by some potential. The potential could be as easy as the walls of this room. Or it could be some more general potential that could include gravity, whatever else you want. So let's include that possibility. So these are so-called one-body terms, because they pertain to the coordinates of one particle. And then there are two-body terms. So for example, I could look at all pairs of particles. Let's say i not equal to j to avoid self-interaction. V of qi minus qj. So certainly, two particles in this gas in this room, when they get close enough, they certainly can't pass through each other. They each have a size. But even when they are a few times their sizes, they start to feel some interaction, which causes them to collide. Yes. AUDIENCE: The whole series of arguments that we're developing, do these only hold for time-independent potentials? PROFESSOR: Yes.
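For reference, the s-particle generalization and the Hamiltonian just described can be written as follows (a sketch using the lecture's symbols; the 1/2 in the pair sum avoids double counting, equivalently one can sum over i < j):

f_s(\vec{p}_1,\vec{q}_1,\ldots,\vec{p}_s,\vec{q}_s,t) = \frac{N!}{(N-s)!} \int \prod_{i=s+1}^{N} dV_i \; \rho = \frac{N!}{(N-s)!}\,\rho_s , \qquad
\mathcal{H} = \sum_{i=1}^{N} \left[ \frac{\vec{p}_i^{\,2}}{2m} + U(\vec{q}_i) \right] + \frac{1}{2} \sum_{i \neq j} \mathcal{V}(\vec{q}_i - \vec{q}_j) .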
Although, it is not that difficult to sort of go through all of these arguments and see where the corresponding modifications are going to be. But certainly, the very first thing that we are using, which is this one, we kind of implicitly assume the time-independent Hamiltonian. So you have to start changing things from that point. OK? So in principle, you could have three-body and higher body terms. But essentially, most of the relevant physics is captured by this. Even for things like plasmas, this would be a reasonably good approximation, with the Coulomb interaction appearing here. OK? Now, what I note is that what I have to calculate over here is a Poisson bracket. And then I have to integrate that Poisson bracket. Poisson bracket involves a whole bunch of derivatives over all set of quantities. And I realize that when I'm integrating over derivatives, there are simplifications that can take place, such as integration by parts. But that only will take place when the derivative is one of the variables that is being integrated. Now, this whole process has separated out arguments of rho, as far as this expression is concerned, into two sets-- one set, or the set that is appearing out here, the first s ones that don't undergo the integration, and then the remainder that do undergo the integration. So this is going to be relevant when we do our manipulations. And therefore, it is useful to rewrite this Hamiltonian in terms of three entities. The first one that I call H sub s-- so if you like, this is an end particle Hamiltonian. I can write an H sub s, which is just exactly the same thing, except that it applies to coordinates that I'm not integrating over. And to sort of make a distinction, I will label them by N. And it includes the interaction among those particles. And I can similarly write something that pertains to coordinates that I am integrating over. I will label them by j and k. So this is s plus 1 to N, pj squared over 2m plus u of qj plus 1/2 sum over j and k, V of qj minus qk. OK? So everything that involves one set of coordinates, everything that involves the other set of coordinates. So what is left are terms that [? copy ?] one set of coordinates to another. So N running from 1 to s, j running from s plus 1 to N, V between qm and qj. OK? So let me rewrite this equation rather than in terms of-- f in terms of the probabilities' rhos. So the difference only is that I don't have to include this factor out front. So I have dVi. I have the d rho by dt, which is the commutator of H with rho, which I have been writing as Hs plus HN minus s. I'm changing the way I write s. Sorry. Plus H prime and rho. OK? So there is a bit of mathematics to be performed to analyze this. There are three terms that I will label a, b, and c, which are the three Poisson brackets that I have to evaluate. So the contribution that I call a is the integral over coordinates s plus 1 to N, of the Poisson bracket of Hs with rho. Now, the Poisson bracket is given up here. It is a sum that involves N terms over all N particles. But since, if I'm evaluating it for Hs and Hs will only give nonzero derivatives with respect to the coordinates that are present in it, this sum of N terms actually becomes simply a sum of small N terms. So I will get a sum over N running from 1 to s. These are the only terms. And I will get the things that I have for d rho by dpN-- sorry, rho by dqN, dHs by dpN minus d rho by dpN, dHs by dqN. OK? Did I make a mistake? AUDIENCE: Isn't it the Poisson bracket of rho H, not H rho? 
PROFESSOR: Rho. Oh, yes. So I have to put a minus sign here. Right. Good. Thank you. OK. Now, note that these derivatives and operations that involve Hs involve coordinates that are not appearing in the integration process. So I can take these entities that do not depend on variables that are part of the integration outside the integration, which means that I can then write the result as being an exchange of the order of the Poisson bracket and the integration. So I would essentially have the integration only appear here, which is the same thing as Hs and rho s. This should be rho s. This is the definition of rho s, which is the same thing as rho. OK. What does it mean physically? So it's actually much easier to tell you what it means physically than to do the math. So what we saw was happening was that if I have a Hamiltonian that describes N particles, then for that Hamiltonian, the corresponding density satisfies this Liouville equation. D rho by dt is HN rho. And this was a consequence of this divergence-less character of the flow that we have in this space, that the equations that we write down over here for p dot and q dot in terms of H had this character that the divergence was 0. OK? Now, this is true if I have any number of particles. So if I focus simply on s of the particles, and they are governed by this Hamiltonian, and I don't have anything else in the universe, as far as this Hamiltonian is concerned, I should have the analog of a Liouville equation. So the term that I have obtained over there from this first term is simply stating that d rho s by dt, if there was no other interaction with anybody else, would simply satisfy the corresponding Liouville equation for s particles. And because of that, we also expect and anticipate-- and I'll show that mathematically-- that the next term in the series, that is the Poisson bracket of N minus s and rho, should be 0, because as far as this s particles that I'm focusing on and how they evolve, they really don't care about what all the other particles are doing if they are not [INAUDIBLE]. So anything interesting should ultimately come from this third term. But let's actually go and do the calculation for the second term to show that this anticipation that the answer should be 0 does hold up and why. So for the second term, I need to calculate a similar Poisson bracket, except that this second Poisson bracket involves H of N minus s. And H of N minus s, when I put in the full sum, will only get contribution from terms that start from s plus 1. So the same way that that started from N, this contribution starts from s plus 1 to N. And actually, I can just write the whole thing as above, d rho by d qj dotted by d HN minus s by dpj, plus d rho by-- no, this is the rho. d rho by d pj dotted by dHN minus s by dqj. So now I have a totally different situation from the previous case, because the previous case, the derivatives were over things I was not integrating. I could take outside the integral. Now all of the derivatives involve things that I'm integrating over. Now, when that happens, then you do integration by parts. So what you do is you take rho outside and let the derivative act on everything else. OK? So what do we end up with if we do integration by parts? I will get surface terms. Surface terms are essentially rho-evaluated when the coordinates are at infinity or at the edge of your space, where rho is 0. So there is no surface term. There is an overall change in sign, so I will get a product i running from s plus 1 to N, dVi. 
Now the rho comes outside. And the derivative acts on everything that is left. So the first term will give me a second derivative of HN minus s, with respect to p, with respect to q. And the second term will be essentially the opposite way of doing the derivative. And these two are, of course, the same. And the answer is 0. OK? So we do expect that the evolution of all the other particles should not affect the subset that we are looking at. And that's worn out also. So the only thing that potentially will be relevant and exciting is the last term, number c. So let's take a look at that. So here, I have to do an integration over variables that I am not interested. And then I need now, however, to do a full Poisson bracket of a whole bunch of terms, because now the terms that I'm looking at have coordinates from both sets. So I have to be a little bit careful. So let me just make sure that I follow the notes that I have here and don't make mistakes. OK. So this H prime involves two sums. So I will write the first sum, N running from 1 to s. And then I have the second sum, j running from s plus 1 to N. What do I need? I need the-- OK. Let's do it the following way. So what I have to do for the Poisson bracket is a sum that involves all coordinates. So let's just write this whole expression. But first, for coordinates 1 through s. So I have a sum N running from 1 to s. And then I will write the term that corresponds to coordinates s plus 1 to m. For the first set of coordinates, what do I have? I have d rho by dqn. And then I have d H prime by dpn. So I didn't write H prime explicitly. I'm just breaking the sum over here. And then I have sum j running from s plus 1 to N, d rho by dqj times d H prime by dpj. And again, my H prime is this entity over here that [? copies ?] coordinates from both sets. OK. First thing is I claim that one of these two sets of sums is 0. You tell me which. AUDIENCE: The first. PROFESSOR: Why first? AUDIENCE: Because H prime is independent of p [? dot. ?] PROFESSOR: That's true. OK. That's very good. And then it sort of brings up a very important question, which is, I forgot to write two more terms. [LAUGHTER] Running to s of d rho by dpn, d H prime by dqn minus sum j s plus 1 to N of d rho by dpj dot dH prime by dqj. So indeed, both answers now were correct. Somebody said that the first term is a 0, because H prime does not depend on pn. And somebody over here said that this term is 0. And maybe they can explain why. AUDIENCE: [INAUDIBLE]. PROFESSOR: Same reason as up here. That is, I can do integration by parts. AUDIENCE: [INAUDIBLE]. PROFESSOR: To get rid of this term plus this term together. So it's actually by itself is not 0. But if I do integration by parts, I will have-- actually, even by itself, it is 0, because I would have d by dqj, d by pj, H prime. And H prime, you cannot have a double derivative pj. So each one of them, actually, by itself is 0. But in general, they would also cancel each other through their single process. Yes. AUDIENCE: Do you have the sign of [? dH ?] of [? H prime? ?] PROFESSOR: Did I have the sign incorrect? Yes. Because for some reason or other, I keep reading from here, which is rho and H. So let's do this. OK? AUDIENCE: Excuse me. PROFESSOR: Yes? AUDIENCE: [INAUDIBLE]. PROFESSOR: OK. Yes, it is different. Yes. So what I said, if I had a more general Hamiltonian that also depended on momentum, then this term would, by itself not 0, but would cancel, be the corresponding term from here. 
But the way that I have for H prime, indeed, each term by itself would be 0. OK. So hopefully-- then what do we have? So actually, let's keep the sign correct and do this, because I need this right sign for the one term that is preserved. So what does that say? It is a sum, n running from 1 to s. OK? I have d H prime by dqn. And I have this integration. I have the integration i running from s plus 1 to N dV of i. I have d rho by dpn dot producted with d H prime by dqN. d H prime by dqn I can calculate from here, is a sum over terms j running from s plus 1 to N of V of qn minus qj. All right. AUDIENCE: Question. PROFESSOR: Yes. AUDIENCE: Why aren't you differentiating [? me ?] if you're differentiating H prime? PROFESSOR: d by dqj. All right? AUDIENCE: Where is the qn? PROFESSOR: d by dqn. Thank you. Right. Because always, pn of qn would go together. Thank you. OK. All right. So we have to slog through these derivations. And then I'll give you the physical meaning. So I can rearrange this. Let's see what's happening here. I have here a sum over particles that are not listed on the left-hand side. So when I wrote this d rho by dt, I had listed coordinates going from p1 through qs that were s coordinates that were listed. If you like, you can think of them as s particles. Now, this sum involves the remaining particles. What is this? Up to a sign. This is the force that is exerted by particle j from the list of particles that I'm not interested on one of the particles on the list that I am interested. OK? Now, I expect that at the end of the day, all of the particles that I am not interested I can treat equivalently, like everything that we had before, like how I got this factor of N or N minus 1 over there. I expect that all of these will give me the same result, which is proportional to the number of these particles, which is N minus s. OK? And then I can focus on just one of the terms in this sum. Let's say the term that corresponds to j, being s plus 1. Now, having done that, I have to be careful. I can do separately the integration over the volume of this one coordinate that I'm keeping, V of s plus 1. And what do I have here? I have the force that exerted on particle number N by the particle that is labelled s plus 1. And this force is dot producted with a gradient along the momentum in direction of particle N of its density. Actually, this is the density of all particles. This is the rho that corresponds to the joint. But I had here s plus 1 integrations. One of them I wrote down explicitly. All the others I do over here. Of the density. So basically, I change the order of the derivative and the integrations over the variables not involved in the remainder. And the reason I did that, of course, is that then this is my rho s plus 1. OK? So what we have at the end of the day is that if I take the time variation of an s particle density, I will get one term that I expected, which is if those s particles were interacting only with themselves, I would write the Liouville equation that would be appropriate to them. But because of the collisions that I can have with particles that are not over here, suddenly, the momenta that I'm looking at could change. And because of that, I have a correction term here that really describes the collisions. It says here that these s particles were following the trajectory that was governed by the Hamiltonian that was peculiar to the s particles. But suddenly, one of them had a collision with somebody else. So which one of them? Well, any one of them. 
So I could get a contribution from any one of the s particles that is listed over here, having a collision with somebody else. How do I describe the effect of that? I have to do an integration over where this new particle that I am colliding with could be. I have to specify both where the particle is that I am colliding with, as well as its momentum. So that's this. Then I need to know the force that this particle is exerting on me. So that's the V of qs plus 1 minus qN divided by dqN. This is the force that is exerted by this particle that I don't see on myself. Then I have to multiply this, or a dot product of this, with d by dpN, because what happens in the process, because of this force, the momentum of the N particle is changing. The variation of that is captured through looking at the density that has all of these particles in addition to this new particle that I am colliding with. But, of course, I am not really interested in the coordinate of this new particle, so I integrate over it. There are N minus s such particles. So I really have to put a factor of N minus s here for all of potential collisions. And so that's the equation. Again, it is more common, rather than to write the equation for rho, to write the equation for f. And the f's and the rhos where simply related by these factors of N factorial over N minus 1 s factorial. And the outcome of that is that the equation for f simply does not have this additional factor of N minus s, because that disappears in the ratio of rho of s plus 1 and rho s. And it becomes a sum over N running from 1 to s. Integral over coordinates and momenta of a particle s plus 1. The force exerted by particle s plus 1 on particle N used to vary the momentum of the N particle. And the whole thing would depend on the density that includes, in addition to the s particles that I had before, the new particle that I am colliding with. OK? So there is a set of equations that relates the different densities and how they evolve in time. The evolution of f1, which is the thing that I am interested, will have, on the right-hand side, something that involves f2. The evolution of f2 will involve f3. And this whole thing is called a BBGKY hierarchy, after people whose names I have in the notes. [SOFT LAUGHTER] But again, what have we learned beyond what we had in the original case? And originally, we had an equation that was governing a function in 6N-dimensional space, which we really don't need. So we tried our best to avoid that. We said that all of the physics is in one particle, maybe two particle densities. Let's calculate the evolution of one-particle and two-particle densities. Maybe they will tell us about this non-equilibrium situation that we set up. But we see that the time evolution of the first particle density requires two-particle density. Two-particle density requires three-particle densities. So we sort of made this ladder, which ultimately will [? terminate ?] at the Nth particle densities. And so we have not really gained much. So we have to now look at these equations a little bit more and try to inject more physics. So let's write down the first two terms explicitly. So what I will do is I will take this Poisson bracket of H and f, to the left-hand side, and use the Hamiltonian that we have over here to write the terms. So the equation that we have for f1-- and I'm going to write it as a whole bunch of derivatives acting on f1. f1 is a function of p1 q1 t. 
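Before the low orders are written out explicitly, here is the general member of the hierarchy just described, in schematic form (a sketch; H_s denotes the s-particle Hamiltonian containing the kinetic, one-body, and mutual two-body terms of the tracked particles, and the overall sign follows the convention d rho by dt = {H, rho}, so treat the signs as indicative rather than authoritative):

\frac{\partial f_s}{\partial t} - \{H_s, f_s\} = \sum_{n=1}^{s} \int dV_{s+1}\; \frac{\partial \mathcal{V}(\vec{q}_n - \vec{q}_{s+1})}{\partial \vec{q}_n} \cdot \frac{\partial f_{s+1}}{\partial \vec{p}_n} .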
And essentially, what Liouville's theorem says is that as you move along the trajectory, the total derivative is 0, because the expansion of the flows is incompressible. So what does that mean? It means that d by dt, which is this argument, plus q1 dot times d by dq1 plus p1 dot by d by dp1. So here, I should write p1 dot and q1 dot. In the absence of everything else is 0. Then, of course, for q1 dot, we use the momentum that we would get out of this. q1 dot is momentum divided by mass. So that's the velocity. And p1 dot, changing momentum, is the force, is minus dH by dq1. So this is minus d of this one particle potential divided by dq1 dotted by [? h ?] p1. So if you were asked to think about one particle in a box, then you know its equation of motion. If you have many, many realizations of that particle in a box, you can construct a density. Each one of the elements of the trajectory, you know how they evolve according to [? Newton's ?] equation. And you can see how the density would evolve. It would evolve according to this. I would have said, equal to 0. But I can't set it to 0 if I'm really thinking about a gas, because my particle can come and collide with a second particle in the gas. The second particle can be anywhere. And what it will do is that it will exert a force, which would be like this, on particle one. And this force will change the momentum. So my variation of the momentum will not come only from the external force, but also from the force that is coming from some other particle in the medium. So that's where this d by dp really gets not only the external force but also the force from somebody else. But then I need to know where this other particle is, given that I know where my first particle is. So I have to include here a two-particle density which depends on p1 as well as q2 at time t. OK. Fine. Now you say, OK, let's write down-- I need to know f2. Let's write down the equation for f2. So I will write it more rapidly. I have p1 over m, d by dq1. I have p2 over m, d by dq2. I will have dU by dq1, d by dp1. I will have dU by dq2, d by dp2. I will have also a term from the collision between q1 and q2. And once it will change the momentum of the first particle, but it will change the momentum of the second particle in the opposite direction. So I will put the two of them together. So this is all of the terms that I would get from H2, Poisson bracket with density acting on the two-particle density. And the answer would be 0 if the two particles were the only thing in the box. But there's also other particles. So there can be interactions and collisions with a third particle. And for that, I would need to know, let's actually try to simplify notation. This is the force that is exerted from two to one. So I will have here the force from three to one dotted by d by dp1. Right. And the force that is exerted from three to two dotted by d by dp2 acting on a three-particle density that involves everything up to three. And now let's write the third one. [LAUGHTER] So that, I will leave to next lecture. But anyway, so this is the structure. Now, this is the point at which we would like to inject some physics into the problem. So what we are going to do is to estimate the various terms that are appearing in this equation to see whether there is some approximation that we can make to make the equations more treatable and handle-able. All right? So let's try to look at the case of a gas-- let's say the gas in this room. 
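Before estimating the various terms, here are the two equations just written out, as a reference sketch (the s = 1 and s = 2 members of the hierarchy in the same notation; again the precise signs depend on the bracket convention):

\left[ \frac{\partial}{\partial t} + \frac{\vec{p}_1}{m} \cdot \frac{\partial}{\partial \vec{q}_1} - \frac{\partial U}{\partial \vec{q}_1} \cdot \frac{\partial}{\partial \vec{p}_1} \right] f_1 = \int dV_2\; \frac{\partial \mathcal{V}(\vec{q}_1 - \vec{q}_2)}{\partial \vec{q}_1} \cdot \frac{\partial f_2}{\partial \vec{p}_1} ,

\left[ \frac{\partial}{\partial t} + \sum_{n=1}^{2} \left( \frac{\vec{p}_n}{m} \cdot \frac{\partial}{\partial \vec{q}_n} - \frac{\partial U}{\partial \vec{q}_n} \cdot \frac{\partial}{\partial \vec{p}_n} \right) - \frac{\partial \mathcal{V}(\vec{q}_1 - \vec{q}_2)}{\partial \vec{q}_1} \cdot \left( \frac{\partial}{\partial \vec{p}_1} - \frac{\partial}{\partial \vec{p}_2} \right) \right] f_2 = \int dV_3 \left[ \frac{\partial \mathcal{V}(\vec{q}_1 - \vec{q}_3)}{\partial \vec{q}_1} \cdot \frac{\partial}{\partial \vec{p}_1} + \frac{\partial \mathcal{V}(\vec{q}_2 - \vec{q}_3)}{\partial \vec{q}_2} \cdot \frac{\partial}{\partial \vec{p}_2} \right] f_3 .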
A typical thing that is happening in the particles of the gas in this room is that they are zipping around. Their velocity is of the order-- again, just order of magnitude, hundreds of meters per second. OK? So we are going to, again, be very sort of limited in what we are trying to describe. There is this experiment. Gas expands into a chamber. In room temperature, typical velocities are of this order. Now we are going to use that to estimate the magnitude of the various terms that are appearing in this equation. Now, the whole thing about this equation is variation with time. So the entity that we are looking at in all of these brackets is this d by dt, which means that the various terms in this differential equation, apart from d by dt, have to have dimensions of inverse time. So we are going to try to characterize what those inverse times are. So what are the typical magnitudes of various terms? If I look at the first equation, I said what that first equation describes. That first equation describes for you a particle in a box. We've forgotten about everything else. So if I have a particle in a box, what is the characteristic time? It has to be set by the size of the box, given that I am moving with that velocity. So there is a timescale that I would call extrinsic in the sense that it is not really a property of the gas. It will be different if I make the box bigger. There's a timescale over which I would go from one side of the box to another side of the box. So this is kind of a timescale that is related to the term that knows something about the box, which is dU by dq, d by dp. I would say that if I were to assign some typical magnitude to this type of term, I would say that it is related to having to traverse a distance that is of the order of the size of the box, given the velocity that I have specified. This is an inverse timescale. Right? And let's sort of imagine that I have an-- and I will call this timescale 1 over tau U, because it is sort of determined by my external U. Let's say we have a typical size that is of the order of millimeter. If I make it larger, it will be larger. So actually, let's say we have 10 to the minus 3 meters. Actually, let's make it bigger. Let's make it of the order of 10 to the minus 1 meter. Kind of reasonable-sized box. Then you would say that this 1 over tau c is of the order of 10 to the 2 divided by 10 to the minus 1, which is of the order of 1,000. Basically, it takes a millisecond to traverse a box that is a fraction of a meter with these velocities. OK? You say fine. That is the kind of timescale that I have in the first equation that I have in my hierarchy. And that kind of term is certainly also present in the second equation for the hierarchy. If I have two particles, maybe these two particles are orbiting each other, et cetera. Still, their center of mass would move, typically, with this velocity. And it would take this amount of time to go across the size of the box. But there is another timescale inside there that I would call intrinsic, which involves dV/dq, d by dp. Now, if I was to see what the characteristic magnitude of this term is, it would have to be V divided by a lens scale that characterizes the potential. And the potential, let's say, is of the order of atomic size or molecular size. Let's call it d. So this is an atomic size-- or molecular size. More correctly, really, it's the range of the interaction that you have between particles. And typical values of these [? numbers ?] 
are of the order of 10 angstroms, or angstroms, or whatever. Let's say 10 to the minus 10 meters. Sorry. The first one I would like to call tau U. This second time, that I will call 1 over tau c for collisions, is going to be the ratio of 10 to the 2 to 10 to the minus 10. It's of the order of 10 to the 12 in inverse seconds. OK? So you can see that this term is much, much larger in magnitude than the term that was governing the first equation. OK? And roughly, what you expect in a situation such as this-- let's imagine, rather than shooting particles from here, you are shooting bullets. And then the bullets would come and basically have some kind of trajectory, et cetera. The characteristic time for a single one of them would be basically something that is related to the size of the box. How long does it take a bullet to go over the size of the box? But if two of these bullets happen to come together and collide, then there's a very short period of time over which they would go in different directions. And the momenta would get displaced from what they were before. And that timescale is of the order of this. But in the situation that I set up, this particular time is too rapid. There is another more important time, which is, how long do I have to wait for two of these particles, or two of these bullets, to come and hit each other? So it's not the duration of the collision that is relevant, but how long it would be for me to find another particle to collide with. And actually, that is what is governed by the terms that I have on the other side. Because the terms on the other side, what they say is I have to find another particle. So if I look at the terms that I have on the right-hand side and try to construct a characteristic time out of them, I have to compare the probability that I will have, or the density that I will have for s plus 1, integrated over some volume, over which the force between these particles is non-zero. And then in order to construct a timescale for it, I know that the d by dt on the left-hand side acts on fs. On the right-hand side, I have fs plus 1. So again, just if I want to construct dimensionally, it's a ratio that involves s plus 1 to s. OK? So I have to do an additional integration over a volume in phase space over which two particles can have substantial interactions. Because that's where this dV by dq would be non-zero, provided that there is a density for s plus 1 particle compared to s particles. If you think about it, that means that I have to look at the typical density of particles times d cubed for these additional operations multiplied by this collision time that I had before. OK? And this whole thing I will call 1 over tau collision. And another way of getting the same result is as follows. This is typically how you get collision times pictorially. You say that I have something that can interact over some characteristic size d. It moves in space with velocity v so that if I wait a time that I will call tau, within that time, I'm essentially sweeping a volume of space that has volume d squared v tau. So my cross section, if my dimension is d, is d squared. I sweep in the other direction by an amount v tau. And how many particles will I encounter? Well, if I know the density, which is the number of particles per unit volume, I have to multiply this by n. So how far do I have to go until I hit 1? I'll call that tau x. Then my formula for tau x would be 1 over nvd squared, which is exactly what I have here. 1 over tau x is nd squared v. OK?
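To make these orders of magnitude concrete, here is a small numerical sketch in Python. The input values (v of order 100 m/s, L of order 0.1 m, d of order an angstrom, and an air-like number density n) are the rough figures quoted in the lecture, assumed for illustration rather than measured:

# Order-of-magnitude estimates for the three timescales discussed in the lecture.
# All input numbers are rough, assumed values for a dilute gas at ambient conditions.
v = 1.0e2      # typical molecular speed, m/s
L = 1.0e-1     # linear size of the box, m
d = 1.0e-10    # range of the interparticle interaction (molecular size), m
n = 2.5e25     # number density of air at ambient conditions, particles per m^3 (approximate)

inv_tau_U = v / L            # extrinsic rate: traversing the box
inv_tau_c = v / d            # intrinsic rate: duration of a single collision
inv_tau_x = n * d**2 * v     # rate of finding another particle to collide with (1 / mean free time)
dilution  = n * d**3         # how many particles sit within one interaction volume

print(f"1/tau_U ~ {inv_tau_U:.1e} 1/s")    # about 1e3 1/s
print(f"1/tau_c ~ {inv_tau_c:.1e} 1/s")    # about 1e12 1/s
print(f"1/tau_x ~ {inv_tau_x:.1e} 1/s")    # about 2.5e7 1/s
print(f"n d^3   ~ {dilution:.1e} (dilute limit when much less than 1)")
# this gives n d^3 of order 1e-5; the lecture quotes about 1e-4, which a slightly larger d reproduces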
So in order to compare the terms that I have on the right-hand side with the terms on the left-hand side, I notice that I need to know something about nd cubed. So nd cubed tells you if I have a particle here and this particle has a range of interactions that I call d, how many other particles fall within that range of interaction? Now, for the gas in this room, the range of the interaction is of the order of the size of the molecule. It is very small. And the distance between molecules in this room is far apart. And indeed, you can estimate that for gas, nd cubed has to be of the order of 10 to the minus 4. And how do I know that? Because if I were to take all of the gas particles in this room and put them together so that they are touching, then I would have a liquid. And the density of, say, typical liquid is of the order of 10,000 times larger than the density of air. So basically, it has to be a number of this order. OK? So fine. Let's look at our equations. So what I find is that in this equation for f2, on the left-hand side, I have a term that has magnitude that is very large-- 1 over tau c. Whereas the term on the right-hand side, in terms of magnitude, is something like this. And in fact, this will be true for every equation in the hierarchy. So maybe if I am in the limit where nd cubed is much, much less than 1-- such as the gas in this room, which is called the dilute limit-- I can ignore the right-hand side. I can set the right-hand side to 0. OK? Now, I can't do that for the first equation, because the first equation is really the only equation in the hierarchy that does not have the collision term on the left-hand side. Right? And so for the first equation, I really need to keep that. And actually, it goes back to all of the story that we've had over here. Remember that we said this rho equilibrium has to be a function of H and conserved quantities. Suppose I go to my Hamiltonian and I ignore all of these interactions, which is what I would have done if I just look at the first term over here and set the collision term on the right-hand side to 0. What would happen then? Then clearly, for each one of the particles, I have-- let's say it's energy is going to be conserved. Maybe the magnitude of its momentum is going to be conserved in the appropriate geometry. And so there will be a huge number of individual conserved quantities that I would have to put over there. Indeed, if I sort of go back to the picture that I was drawing over here, if I ignore collisions between the particles, then the bullets that I send will always be following a trajectory such as this forever, because momentum will be conserved. You will always-- I mean, except up to reflection. And say the magnitude of velocity would be always following the same thing. OK? So however, if there is a collision between two of the particles-- so the particles that come in here, they have different velocities, they will hit each other. The moment they hit each other, they go off different directions. And after a certain number of hits, then I will lose all of the regularity of what I had in the beginning. And so essentially, this second term on the right-hand with the collisions is the thing that is necessary for me to ensure that my gas does come to equilibrium in the sense that their momenta get distributed and reversed. I really need to keep track of that. And also, you can see that the timescales for which this kind of equilibration takes place has to do with this collision time. 
But as far as this term is concerned, for the gas or for that system of bullets, it doesn't really matter. Because for this term to have been important, it would have been necessary for something interesting to physically occur should three particles come together simultaneously. And if I complete the-- say that never in the history of this system three particles will come together, they do come together in reality. It's not that big a difference. It's only a factor of 10 to the 4 difference between the right-hand side and the left-hand side. But still, even if they didn't, there was nothing about equilibration of the gas that would be missed by this. So it's a perfectly reasonable approximation and assumption, therefore, for us to drop this term. And we'll see that although that is physically motivated, it actually doesn't resolve this question of irreversibility yet, because that's also potentially a system that you could set up. You just eliminate all of the three-body interactions from the problem. Still, you could have a very reversible set of conditions and deterministic process that you could reverse in time. But still, it's sort of allows us to have something that is more manageable, which is what we will be looking at next. Before I go to what is next, I also mentioned that there is one other limit where one can do things, which is when, within the range of interaction of one particle, there are many other particles. So you are in the dense limit, nd cubed greater than 1. This does not happen for a liquid, because for a liquid, the range does not allow many particles to come within it. But it happens for a plasma where you have long-range Coulomb interaction. And within the range of Coulomb interaction, you could have many other interactions. And so that limit you will explore in the problem set leads to a different description of approach to equilibrium. It's called the Vlasov equation. What we are going to proceed with now will lead to something else, which is called-- in the dilute limit, it will get the Boltzmann equation. OK? So let's see what we have. So currently, we achieved something. We want to describe properties of a few particles in the system-- densities that describe only, say, one particle by itself if one was not enough. But I can terminate the equations. And with one-particle density and two-particle density, I should have an appropriate description of how the system evolves. Let's think about it one more time. So what is happening here? There is the one-particle description that tells you how the density for one particle, or an ensemble, the probability for one particle, its position and momentum evolves. But it requires knowledge of what would happen with a second particle present. But the equations that we have for the density that involves two particles is simply a description of things that you would do if you had deterministic trajectories. There is nothing else on the right-hand side. So basically, all you need to do is in order to determine this, is to have full knowledge of what happens if two particles come together, collide together, go away, all kinds of things. So if you have those trajectories for two particles, you can, in principle, build this density. It's still not an easy task. But in principle, one could do that. And so this is the description of f2. And we expect f2 to describe processes in which, over a very rapid timescale, say, momenta gets shifted from one direction to another direction. 
But then there is something about the overall behavior that should follow, more or less, f1. Again, what do I mean? What I mean is the following, that if I open this box, there is what you would observe. The density would kind of gush through here. And so you can have a description for how the density, let's say in coordinate space, would be evolving as a function of time. If I ask how does the two-particle description evolve, well, the two-particle description, part of it is what's the probability that I have a particle here and a particle there? And if the two particles, if the separations are far apart, you would be justified to say that that is roughly the product of the probabilities that I have something here and something there. When you become very close to each other, however, over the range of interactions and collisions, that will have to be modified, because at those separations, you would have to worry about the collisions, and the exchange of momenta, et cetera. So in that sense, part of f2 is simply following f1 slowly. And part of f2 captures all of the collisions that you have. In fact, that part of f2 that captures the collisions, we would like to simplify as much as possible. And that's the next task that we do. So what I need to do is to somehow express the f2 that appears in the first equation while solving this equation that is the second one. I will write the answer that we will eventually deal with and explain it next time. So the ultimate result would be that on the left-hand side, we will have the terms that we have currently, acting on f1. On the right-hand side, what we find is that I need to integrate over all momenta of a second particle and something that is like a distance to the target-- one term that is the flux of incoming particles. And then we would have f2 after collision minus f2 before collision. And this is really the Boltzmann equation after one more approximation, where we replace f2 with f1 times f1. But what all of that means symbolically and what it is we'll have to explain next time. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 10_Kinetic_Theory_of_Gases_Part_4.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Let's start. Are there any questions? We would like to have a perspective for this really common observation that if you have a gas that is initially in one half of a box, and the other half is empty, and some kind of a partition is removed so that the gas can expand, and it can flow, and eventually we will reach another equilibrium state where the gas occupies both chambers. How do we describe this observation? We can certainly characterize it thermodynamically. From the perspective of atoms and molecules, we said that if I want to describe the configuration of the gas before it starts, and also throughout the expansion, I would basically have to look at all sets of coordinates and momenta that make up this gas. There would be some point in this 6N-dimensional phase space that would correspond to where this gas was originally. We can certainly follow the dynamics of this point, but is that useful? Normally, I could start with billions of different types of boxes, or the same box in a different instance of time, and I would have totally different initial conditions. The initial conditions presumably can be characterized through a density in this phase space. You can look at some volume and see how it changes, and how many points you have there, and define this phase space density rho of all of the Q's and P's, and how it evolves as a function of time. One way of looking at how it evolves as a function of time is to look at this box and where this box will be in some other instance of time. Essentially then, we are following a kind of evolution that goes along this streamline. Basically, the derivative that we are going to look at involves changes both explicitly in the time variable, and also implicitly through the changes of all of the coordinates and momenta, according to the Hamiltonian that governs the system. I have to do, essentially, a sum over all coordinates. I would have the change in coordinate i, Qi dot, dotted with d rho by dQi. Then I would have Pi dot-- I guess these are all vectors-- dotted with d rho by dPi. There are 6N coordinates that implicitly depend on time. In principle, if I am following along the streamline, I have to look at all of these things. The characteristic of evolution, according to some Hamiltonian, was that this volume of phase space does not change. Secondly, we could characterize, once we wrote Qi dot as dH by dPi, and Pi dot as minus dH by dQi, that this combination of derivatives essentially could be captured, and be written as d rho by dt is the Poisson bracket of H and rho. One of the things, however, that we emphasize is that as far as evolution according to a Hamiltonian and this set of dynamics is concerned, the situation is completely reversible in time, so that at some intermediate point of the process, if I were to reverse all of the momenta, then the gas would basically come back to the initial position. That's true. There is nothing to do about it. That kind of seems to go against the intuition that we have from thermodynamics.
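Written out, the two statements being recalled here are (a sketch in the notation of the previous lectures; the bracket sign convention is chosen to match the statement that the derivative along the flow vanishes):

\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \sum_{i=1}^{N} \left( \dot{\vec{q}}_i \cdot \frac{\partial \rho}{\partial \vec{q}_i} + \dot{\vec{p}}_i \cdot \frac{\partial \rho}{\partial \vec{p}_i} \right) = 0 , \qquad \dot{\vec{q}}_i = \frac{\partial \mathcal{H}}{\partial \vec{p}_i}, \quad \dot{\vec{p}}_i = -\frac{\partial \mathcal{H}}{\partial \vec{q}_i},

equivalently \partial \rho / \partial t = \{\mathcal{H}, \rho\}; reversing t to minus t together with all \vec{p}_i to minus \vec{p}_i maps one solution into another, which is the time reversibility referred to here.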
We said, well, in practical situations, I really don't care about all the 6N pieces of information that are embedded currently in this full phase space density. If I'm really trying to physically describe this gas expanding, typically the things that I'm interested in are, at some intermediate time, whether the particles have reached this point or that point, and what is this streamline velocity that I'm seeing before the thing relaxes, presumably, eventually into zero velocity? There's a lot of things that I would need to characterize this relaxation process, but that is still much, much, much less than all of the information that is currently encoded in all of these 6N coordinates and momenta. We said that for things that I'm really interested in, what I could, for example, look at, is a density that involves only one particle. What I can do is to then integrate over all of the positions and momenta of particles that I'm not interested in. I'm sort of repeating this to introduce some notation so as not to repeat all of these integration variables, so I will call dVi the phase space contribution of particle i. What I may be interested in is that this is something that, if I integrate over P1 and Q1, it is clearly normalized to unity because my rho, by definition, was normalized to unity. Typically we may be interested in something else that I call f1 of P1, Q1, and t, which is simply N times this-- N times the integral, product over i from 2 to N, of dVi, of the full rho. Why we do that is because typically you are interested or used to calculating things in terms of a number density, like how many particles are within some small volume here, defining the density so that when I integrate over the entire volume of f1, I would get the total number of particles, for example. That's the kind of normalization that people have used for f. More generally, we also introduced fs, which depended on coordinates representing s sets of points, or s particles, if you like, that was normalized to be-- We said, OK, what I'm really interested in, in order to calculate the properties of the gas as it expands in terms of things that I'm able to measure, is f1. Let's write down the time evolution of f1. Actually, we said, let's write down the time evolution of fs, along with it. So there's the time evolution of fs. If I were to go along this stream, it would be d fs by dt, and then I would have contributions that would correspond to the changes in coordinates of these particles. In order to progress along this direction, we said, let's define the total Hamiltonian. It will have a simple form, and certainly for the gas, it would be a good representation. I have the kinetic energies of all of the particles. I have the box that confines the particles, or some other one particle potential, if you like, but I will write in this much. Then you have the interactions between all pairs of particles. Let's write it as sum over i, less than j, V of Qi minus Qj. This depends on the N sets of particle coordinates and momenta. Then we said that for purposes of manipulations that you have to deal with, since there are s coordinates that are appearing here whose time derivatives I have to look at, I'm going to simply rewrite this as the contribution that comes from those s particles, the contribution that comes from the remaining N minus s particles, and some kind of term that couples the two sets of particles.
This, actually, I didn't quite need here until the next stage because what I write here could, presumably, be sufficiently general, like we have here some n running from 1 to s. Let me be consistent with my S's. Then I have Qn, dot, dFs by dQn, plus Pn, dot, dFs by dPn. If I just look at the coordinates that appear here, and say, following this as they move in time, there is the explicit time dependence and all of the implicit time dependence, this would be the total derivative moving along the streamline. Qn dot I know is simply dH by dPn. dH by dPn I have from this formula over here. It is simply Pn divided by m. It's the velocity-- momentum divided by mass. This is the velocity of the particle. Pn dot, the rate of change of momentum, is the force that is acting on the particle. What I need to do is to take the derivatives of various terms here. So I have minus dU by dQn. What is this? This is essentially the force that the particle feels from the external potential. If you are in the box in this room, it is zero until you hit the edge of the box. I will call this Fn to represent the force from the external potential that is acting on the system. What else is there? I have the force that will come from the interaction with all the other guys. I will write here a sum over m, dV of Qm minus Qn, by dQn-- dU by dQm. I'm sorry. What is this? This is basically the sum of the forces that are exerted by the m particles on the n particle. Define it in this fashion. If this was the entire story, what I would have had here is a group of s particles that are dominated by their own dynamics. If there is no other particle involved, they basically have to satisfy the Liouville equation that I have written, now appropriate to s particles. Of course, we know that that's not the entire story because there are all these other terms involving the interactions with particles that I have not included. That's the whole essence of the story. Let's say I want to think about one or two particles. There is the interaction between the two particles, and they would be evolving according to some trajectories. But there are all of these other particles in the gas in this room that will collide with them. So those collisions are not something that we had in the Liouville equation, with everything considered. Here, I have to include the effect of all of those other particles. We saw that the way that it appears is that I have to imagine that there's another particle whose coordinates and momenta are captured through some volume for the s plus 1 particle. This s plus 1 particle can interact with any of the particles that are in the set that I have on the other side. There is an index that runs from 1 to s. What I would have here is the force that will come from this s plus 1 particle, acting on particle n. The same way that this force was driving the change of the momentum, this force will drive the change of the momentum of-- I guess I put an m here-- The thing that I have to put here is now a density that also keeps track of the probability to find the s plus 1 particle in the location in phase space that I need to integrate over. I have to integrate over all positions. One particle is moving along a straight line by itself, let's say. Then there are all of the other particles in the system. I have to ask, what is the probability that there is a second particle with some particular momentum and coordinate that I will be interacting with. This is the general setup of this BBGKY hierarchy of equations.
At this stage, we really have just rewritten what we had for the Liouville equation. We said, I'm really, really interested only in the one particle things, rho 1 and f1. Let's focus on that. Let's write those equations in more detail. In the first equation, I have that the explicit time dependence, plus the time dependence of the position coordinate, plus the time dependence of the momentum coordinate, which is driven by the external force, acting on this one particle density, which is dependent on p1, q1 at time t. On the right hand side of the equation, I need to worry about a second particle with momentum P2 at position Q2 that will, therefore, be able to exert a force. Once I know the position, I can calculate the force that particle exerts. What was my notation? The order was 2 and 1, dotted by d by dP1. I need now f2 of p1, q1, p2, q2 at time t. We say, well, this is unfortunate. I have to worry about dependence on f2, but maybe I can get away with things by estimating order of magnitudes of the various terms. What is the left hand side set of operations? The left hand side set of operations describes essentially one particle moving by itself. If that particle has to cross a distance of this order of L, and I tell you that the typical velocity of these particles is of the order of V, then that time scale is going to be of the order of L over V. The operations here will give me a V over L, which is what we call the inverse of tau U. This is a reasonably long macroscopic time. OK, that's fine. How big is the right hand side? We said that the right hand side has something to do with collisions. I have a particle in my system. Let's say that particle has some characteristic dimension that we call d. This particle is moving with velocity V. Alternatively, you can think of this particle as being stationary, and all the other particles are coming at it with some velocity V. If I say that the density of these particles is n, then the typical time for which, as I shoot these particles, they will hit this target is related to d squared and V-- the volume swept by the particle. Over a time tau x, I have to consider this d squared times V tau x. n V tau x d squared should be of the order of one. This gave us a formula for tau x. The inverse of tau x that controls what's happening on this side is n d squared V. Is the term on the right hand side more important, or the term on the left hand side? The term on the right hand side has to do with the two body term. There's a particle that is moving, and then there's another particle with a slightly different velocity that is behind it. In the absence of collisions, these particles would just go along a straight line. They would bounce off the walls, but the magnitude of their energy, and hence, velocity, would not change from these elastic collisions. But if the particles can catch up and interact, which is governed by this V of Q2 minus Q1 on the other side, then what happens is that the particles, when they interact, would collide and go different ways. Quickly, their velocities, and momenta, and everything would get mixed up. How rapidly that happens depends on this collision distance, which is much less than the size of the system, and, therefore, the term that you have on the right hand side in magnitude is much larger than what is happening on the left hand side. There is no way in order to describe the relaxation of the gas that I can neglect collisions between gas particles.
If I neglect collisions between gas particles, there is no reason why the kinetic energies of individual particles should change. They would stay the same forever. I have to keep this. Let's go and look at the second equation in the hierarchy. What do you have? You have d by dT, P1 over m d by d Q1, P2 over m, P d by d Q2. Then we have F1 d by d Q1, plus F2, d by d Q2 coming from the external potential. Then we have the force that the involves the collision between particles one and two. When I write down the Hamiltonian for two particles, there is going to be already for two particles and interactions between them. That's where the F1 2 comes from. F1 2 changes d by the momentum of particle one. I should write, it's 2 1 that changes momentum of particle two. But as 2 1 is simply minus F1 2, I can put the two of them together in this fashion. This acting on F2 is then equal to something like integral over V3, F3 1, d by dP1, plus F3 2, d by dP2. [INAUDIBLE] on F3 P1 and Q3 [INAUDIBLE]. Are we going to do this forever? Well, we said, let's take another look at the magnitude of the various terms. This term on the right hand side still involves a collision that involves a third particle. I have to find that third particle, so I need to have, essentially, a third particle within some characteristic volume, so I have something that is of that order. Whereas on the left hand side now, I have a term that from all perspectives, looks like the kinds of terms that I had before except that it involves the collision between two particles. What it describes is the duration that collision. We said this is of the order of 1 over tau c, which replaces the n over there with some characteristic dimension. Suddenly, this term is very big. We should be able to use that. There was a question. AUDIENCE: On the left hand side of both of your equations, for F1 and F2, shouldn't all the derivatives that are multiplied by your forces be derivatives of the effects of momentum? [INAUDIBLE] the coordinates? [INAUDIBLE] reasons? PROFESSOR: Let's go back here. I have a function that depends on P, Q, and t. Then there's the explicit time derivative, d by dt. Then there is the Q dot here, which will go by d by dQ. Then there's the P dot term that will go by d by dP. All of things have to be there. I should have derivatives in respect to momenta, and derivatives with respect to coordinate. Dimensions are, of course, important. Somewhat, what I write for this and for this should make up for that. As I have written it now, it's obvious, of course. This has dimensions of Q over T. The Q's cancel. I would have one over T. D over Dps cancel. I have 1 over P. Here, dimensionality is correct. I have to just make sure I haven't made a mistake. Q dot is a velocity. Velocity is momentum divided by mass. So that should dimensionally work out. P dot is a force. Everything here is force. In a reasonable coordinate-- AUDIENCE: [INAUDIBLE] PROFESSOR: What did I do here? I made mistakes? AUDIENCE: [INAUDIBLE] PROFESSOR: Why didn't you say that in a way that-- If I don't understand the question, please correct me before I spend another five minutes. Hopefully, this is now free of these deficiencies. This there is very big. Now, compared to the right hand side in fact, we said that the right hand side is smaller by a factor that measures how many particles are within an interaction volume. And for a typical gas, this would be a number that's of the order of 10 to the minus 4. 
Using 10 to the minus 4 being this small, we are going to set the right hand side to zero. Now, I don't have to write the equation for F2. I'll answer a question here that may arise, which is ultimately, we will do sufficient manipulations so that we end up with a particular equation, known as the Boltzmann Equation, that we will show does not obey the time reversibility that we wrote over here. Clearly, that is built in to the various approximations I make. The first question is, the approximation that I've made here, did I destroy this time reversibility? The answer is no. You can look at this set of equations, and do the manipulations necessary to see what happens if P goes to minus P. You will find that you will be able to reverse your trajectory without any problem. Yes? AUDIENCE: Given that it is only an interaction from our left side that's very big, that's the reason why we can ignore the stuff on the right. Why is it that we are then keeping all of the other terms that were even smaller before? PROFESSOR: I will ignore them. Sure. AUDIENCE: [LAUGHTER] PROFESSOR: There was the question of time reversibility. This term here has to do with three particles coming together, and how that would modify what we have for just two-body collisions. In principle, there is some probability to have three particles coming together and some combined interactions. You can imagine some fictitious model, which in addition to these two-body interactions, you cook up some body interaction so that it precisely cancels what would have happened when three particles come together. We can write a computer program in which we have two body conditions. But if three bodies come close enough to each other, they essentially become ghosts and pass through each other. That computer program would be fully reversible. That's why sort of dropping this there is not causing any problems at this point. What is it that you have included so far? What we have is a situation where the change in F1 is governed by a process in which I have a particle that I describe on the left hand side with momentum one, and it collides with some particle that I'm integrating over, but in some particular instance of integration, has momentum P2. Presumably they come close enough to each other so that afterwards, the momenta have changed over so that I have some P1 prime, and I have some P2 prime. We want to make sure that we characterize these correctly. There was a question about while this term is big, these kinds of terms are small. Why should I basically bother to keep them? It is reasonable. What we are following here are particles in my picture that were ejected by the first box, and they collide into each other, or they were colliding in the first box. As long as you are away from the [? vols ?] of the container, you really don't care about these terms. They don't really moved very rapidly. This is the process of collision of two particles, and it's also the same process that is described over here. Somehow, I should be able to simplify the collision process that is going on here with the knowledge that the evolution of two particles is now completely deterministic. This equation by itself says, take two particles as if they are the only thing in the universe, and they would follow some completely deterministic trajectory, that if you put lots of them together, is captured through this density. Let's see whether we can massage this equation to look like this equation. 
Well, the force term, we have, except that here we have dP by P1 here. We have d by dP 1 minus d by dP2. So let's do this. Minus d by dP2, acting on F2. Did I do something wrong? The answer is no, because I added the complete derivative over something that I'm integrating over. This is perfectly legitimate mathematics. This part now looks like this. I have to find what is the most important term that matches this. Again, let's think about this procedure. What I have to make sure of is what is the extent of the collision, and how important is the collision? If I have one particle moving here, and another particle off there, they will pass each other. Nothing interesting could happen. The important thing is how close they come together. It Is kind of important that I keep track of the relative coordinate, Q, which is Q2 minus Q1, as opposed to the center of mass coordinate, which is just Q1 plus Q2 over 2. That kind of also indicates maybe it's a good thing for me to look at this entire process in the center of mass frame. So this is the lab frame. If I were to look at this same picture in the center of mass frame, what would I have? In the center of mass frame, I would have the initial particle coming with P1 prime, P1 minus P center of mass. The other particle that you are interacting with comes with P2 minus P center of mass. I actually drew these vectors that are hopefully equal and opposite, because you know that in the center of mass, one of them, in fact, would be P1 minus P2 over 2. The other would be P2 minus P1 over 2. They would, indeed, in the center of mass be equal and opposite momenta. Along the direction of these objects, I can look at how close they come together. I can look at some coordinate that I will call A, which measures the separation between them at some instant of time. Then there's another pair of coordinates that I could put into a vector that tells me how head to head they are. If I think about they're being on the center of mass, two things that are approaching each other, they can either approach head on-- that would correspond to be equal to 0-- or they could be slightly off a head-on collision. There is a so-called impact parameter B, which is a measure of this addition fact. Why is that going to be relevant to us? Again, we said that there are parts of this expression that all of the order of this term, they're kind of not that important. If I think about the collision, and what the collision does, I will have forces that are significant when I am within this range of interactions, D. I really have to look at what happens when the two things come close to each other. It Is only when this relative parameter A has approached D that these particles will start to deviate from their straight line trajectory, and presumably go, to say in this case, P2 prime minus P center of mass. This one occurs [? and ?] will go, and eventually P1 prime minus P center of mass. These deviations will occur over a distance that is of the order of this collision and D. The important changes that occur in various densities, in various potentials, et cetera, are all taking place when this relative coordinate is small. Things become big when the relative coordinate is small. They are big as a function of the relative coordinate. In order to get big things, what I need to do is to replace these d by dQ's with the corresponding derivatives with respect to the center of mass. One of them would come be the minus sign. The other would come be the plus sign. 
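In terms of the coordinates introduced here (with the sign of the relative coordinate a matter of definition, as noted next), the change of variables is
\[
\vec q=\vec q_2-\vec q_1,\qquad \vec Q_{\rm cm}=\frac{\vec q_1+\vec q_2}{2},\qquad
\frac{\partial}{\partial \vec q_1}\simeq-\frac{\partial}{\partial \vec q},\quad
\frac{\partial}{\partial \vec q_2}\simeq+\frac{\partial}{\partial \vec q},
\]
so that, keeping only the rapid variations with the relative coordinate,
\[
\frac{\vec p_1}{m}\cdot\frac{\partial}{\partial \vec q_1}+\frac{\vec p_2}{m}\cdot\frac{\partial}{\partial \vec q_2}
\;\simeq\;\frac{\vec p_2-\vec p_1}{m}\cdot\frac{\partial}{\partial \vec q},
\]
and \(\vec q\) is further decomposed into the impact parameter \(\vec b\) and a length \(a\) along the direction of the relative momentum.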
It doesn't matter which is which. It depends on the definition, whether I make Q2 minus Q1, or Q1 minus Q2. We see that the big terms are the force that changes the momenta and the variations that you have over these relative coordinates. What I can do now is to replace this by equating the two big terms that I have over here. The two big terms are P2 minus P1 over m, dotted by d by dQ of F2. There is some other approximation that I did. As was told to me before, this is the biggest term, and there is the part of this that is big and compensates for that. But there are all these other bunches of terms. There's also this d by dt. What I have done over here is to look at this slightly coarser perspective on time. Increasing all the equations that I have over there tells me everything about particles approaching each other and going away. I can follow through the mechanics precisely everything that is happening, even in the vicinity of this collision. If I have two squishy balls, and I run my hand through them properly, I can see how the things get squished then released. There's a lot of information, but again, a lot of information that I don't really care to know as far as the properties of this gas expansion process is concerned. What you have done is to forget about the detailed variations in time and space that are taking place here. We're going to shortly make that even more explicit by noting the following. This integration over here is an integration over phase space of the second particle. I had written before d cubed, P2, d cubed, Q2, but I can change coordinates and look at the relative coordinate, Q, over here. What I'm asking is, I have one particle moving through the gas. What is the chance that the second particle comes with momentum P2, and the appropriate relative distance Q, and I integrate over both the P and the relative distance Q? This is the quantity that I have to integrate. Let's do one more calculation, and then we will try to give a physical perspective. In this picture of the center of mass, what did I do? I do replaced the coordinate, Q, with a part that was the impact parameter, which had two components, and a part that was the relative distance. What was this relative distance? The relative distance was measured along this line that was giving me the closest approach. What is the direction of this line? The direction of this line is P1 minus P2. This is P1 minus P2 over 2. It doesn't matter. The direction is P1 minus P2. What I'm doing here is I am taking the derivative precisely along this line of constant approach. I'm taking a derivative, and I'm integrating along that. If I were to rewrite the whole thing, what do I have? I have d by dt, plus P1 over m, d by dQ1, plus F1, d by dP1-- don't make a mistake-- acting on F1, P1, Q1, t. What do I have to write on the right hand side? I have an integral over the momentum of this particle with which I'm going to make a collision. I have an integral over the impact parameter that tells me the distance of closest approach. I have to do the magnitude of P2 minus P1 over n, which is really the magnitude of the relative velocity of the two particles. I can write it as P2 minus P1, or P1 minus P2. These are, of course, vectors. and I look at the modulus. I have the integral of the derivative. Very simply, I will write the answer as F2 that is evaluated at some large distance, plus infinity minus F2 evaluated at minus infinity. I have infinity. In principle, I have to integrate over F2 from minus infinity to plus infinity. 
But once I am beyond the range of where the interaction changes, then the two particles just move away forever. They will never see each other. Really, what I should write here is F2 of-- after the collision, I have P1 prime, P2 prime, at some Q plus, minus F2, P1, P2, at some position minus. What I need to do is to do the integration when I'm far away from the collision, or wait until I am far after the collision. Really, I have to just integrate slightly below, after, and before the collision occurs. In principle, if I just go a few d's in one direction or the other direction, this should be enough. Let's see physically what this describes. There is a connection between this and this thing that I had over here, in fact. This equation on the left hand side, if it was zero, it would describe one particle that is just moving by itself until it hits the wall, at which point it basically reverses its trajectory, and otherwise goes forward. But what you have on the right hand side says that suddenly there could be another particle with which I interact. Then I change my direction. I need to know the probability, given that I'm moving with velocity P1, that there is a second particle with P2 that comes close enough. There is this additional factor. From what does this additional factor come? It's the same factor that we have over here. It is, if you have a target of size d squared, and we have a set of bullets with a density of n, the number of collisions that I get depends both on density and how fast these things go. The time between collisions, if you like, is proportional to n, and it is also related to V. That's what this is. I need some kind of a time between the collisions that I make. I have already specified that I'm only interested in the set of particles that have momentum P2 for this particular [? point in ?] integration, and that they have this kind of area or cross section. So I replace this V squared and V with the relative coordinates. This is the corresponding thing to V squared, and this is really a two particle density. This is a subtraction. The addition is because it is true that I'm going with velocity P1, and practically, any collisions that are significant will move me off kilter. So there has to be a subtraction for the channel that was described by P1 because of this collision. This then, is the addition, because it says that it could be that there is no particle going in the horizontal direction. I was actually coming along the vertical direction. Because of the collision, I suddenly was shifted to move along this direction. The addition comes from having particles that would correspond to momenta that somehow, if I were in some sense to reverse this, and then put a minus sign, a reverse collision would create something that was along the direction of P1. Here I also made several approximations. I said, what is chief among them is that basically I ignored the details of the process that is taking place at scale the order of d, so I have thrown away some amount of detail and information. It is, again, legitimate to say, is this the stage at which you made an approximation so that the time reversibility was lost? The answer is still no. If you are careful enough with making precise definitions of what these Q's are before and after the collision, and follow what happens if you were to reverse everything, you'll find that the equations is fully reversible. Even at this stage, I have not made any transition. 
I have made approximations, but I haven't made anything time irreversible. That comes at the next stage, where we make the so-called assumption of molecular chaos. The assumption concerns the chance that I have a particle here and a particle there. You would say, it's the chance that I have one here times the chance that I have one there. You say that F2 of P1, Q1, P2, Q2, t is the same thing as the product of F1 of P1, Q1, t and F1 of P2, Q2, t. Of course, this assumption is generally violated. If I were to look at the probability that I have two particles as a function of, let's say, the relative separation, I certainly expect that if they are far away, the density should be the product of the one-particle densities. But you would say that if the two particles come to distances that are closer than the range of interaction d-- and let's say the interaction is highly repulsive, like a hard core-- then the probability should go to 0. Clearly, you can make this assumption, but only up to some degree. Part of the reason we went through this process was indeed to make sure that we are evaluating things at the locations where the particles are far away from each other. I said that the range of that integration over A would be someplace where they are far apart after the collision, and far apart before the collision. So you have an assumption like that, which is, in principle, something that I can insert into that. Having to make a distinction between the arguments that are appearing in this equation is kind of not so pleasant. What you are going to do is to make another assumption: make sure that everything is evaluated at the same point. What we will eventually have is the equation that d by dt, plus P1 over m d by dQ1, plus F1 dot d by dP1, acting on F1 on the left hand side, is equal, on the right hand side, to an integral over all collisions with a particle of momentum P2, approaching at all possible impact parameters, with the flux of the incoming particles that corresponds to that channel, which is proportional to the magnitude of V2 minus V1. Then here, we subtract the collision of the two particles. We write that as F1 of P1 at this location Q1 and time t, times F1 of P2 at the same location Q1 and time t. Then we add F1 of P1 prime, Q1, t, times F1 of P2 prime, Q1, t. In order to make the equation eventually manageable, what you did is to evaluate all of the coordinates that we have on the right hand side at the same location, which is the same Q1 that you specify on the left hand side. That immediately means that what you have done is to change the resolution with which you are looking at space. You have kind of washed out the difference between here and here. Your resolution has to put this whole area that is of the order of d squared, or d cubed in three dimensions, into one pixel. You have changed the resolution that you have. You are not looking at things at this fine a scale. You are losing additional information here through this change of the resolution in space. You have also lost some information in making the assumption that the two-particle densities are always simply the product of one-particle densities. Both of those things correspond to taking something that is very precise and deterministic, and making it kind of vague and a little undefined. It's not surprising then, that if you have in some sense changed the precision of your computer-- let's say, the one that is running the particles forward-- at some point, you've changed the resolution.
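Putting the molecular-chaos factorization into the collision term gives the closed equation described in this paragraph, the Boltzmann equation; in symbols, with p1', p2' the outgoing momenta determined by p1, p2 and the impact parameter b,
\[
f_2(\vec p_1,\vec q_1,\vec p_2,\vec q_2,t)\simeq f_1(\vec p_1,\vec q_1,t)\,f_1(\vec p_2,\vec q_2,t),
\]
\[
\left[\frac{\partial}{\partial t}+\frac{\vec p_1}{m}\cdot\frac{\partial}{\partial \vec q_1}+\vec F_1\cdot\frac{\partial}{\partial \vec p_1}\right]f_1
=\int d^3p_2\,d^2\vec b\;|\vec v_1-\vec v_2|\,
\Big[f_1(\vec p_1{}',\vec q_1,t)f_1(\vec p_2{}',\vec q_1,t)-f_1(\vec p_1,\vec q_1,t)f_1(\vec p_2,\vec q_1,t)\Big].
\]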
Then you can't really run backward. In fact, to sort of precisely be able to run the equations forward and backward, you would need to keep resolution at all levels. Here, we have sort of removed some amount of resolution. We have a very good guess that the equation that you have over here no longer respects time reversal inversions that you had originally posed. Our next task is to prove that you need this equation. It goes in one particular direction in time, and cannot be drawn backward, as opposed to all of the predecessors that I had written up to this point. Are there any questions? AUDIENCE: [INAUDIBLE] PROFESSOR: Yes, Q prime and Q1, not Q1 prime. There is no dash. AUDIENCE: Oh, I see. It is Q1. PROFESSOR: Yes, it is. Look at this equation. On the left hand side, what are the arguments? The arguments are P1 and Q1. What is it that I have on the other side? I still have P1 and Q1. I have introduced P1 and b, which is simply an impact parameter. What I will do is I will evaluate all of these things, always at the same location, Q1. Then I have P1 and P2. That's part of my story of the change in resolution. When I write here Q1, and you say Q1 prime, but what is Q1 prime? Is it Q1 plus b? Is it Q1 minus b? Something like this I'm going to ignore. It's also legitimate, and you should ask, what is P1 prime and Q2 prime? What are they? What I have to do, is I have to run on the computer or otherwise, the equations for what happens if I have P1 and P2 come together at an impact parameter that is set by me. I then integrate the equations, and I find that deterministically, that collision will lead to some P1 prime and P2 prime. P1 prime and P2 prime are some complicated functions of P1, P2, and b. Given that you know two particles are approaching each other at distance d with momenta P1 P2, in principle, you can integrate Newton's equations, and figure out with what momenta they end up. This equation, in fact, hides a very, very complicated function here, which describes P1 prime and P2 prime as a function of P1 and P2. If you really needed all of the details of that function, you would surely be in trouble. Fortunately, we don't. As we shall see shortly, you can kind of get a lot of mileage without knowing that. Yes, what is your question? AUDIENCE: There was an assumption that all the interactions between different molecules are central potentials [INAUDIBLE]. Does the force of the direction between two particles lie along the [INAUDIBLE]? PROFESSOR: For the things that I have written, yes it does. I should have been more precise. I should have put absolute value here. AUDIENCE: You have particles moving along one line towards each other, and b is some arbitrary vector. You have two directions, so you define a plane. Opposite direction particles stay at the same plane. Have you reduced-- PROFESSOR: Particles stay in the same plane? AUDIENCE: If the two particles were moving towards each other, and also you have in the integral your input parameter, which one is [INAUDIBLE]. There's two directions. All particles align, and all b's align. They form a plane. [? Opposite ?] direction particles [? stand ?] in the-- PROFESSOR: Yes, they stand in the same plane. AUDIENCE: My question is, what is [INAUDIBLE] use the integral on the right from a two-dimensional integral [? in v ?] into employing central symmetry? PROFESSOR: Yes, you could. You could, in principle, write this as b db, if you like, if that's what you want. 
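As one concrete illustration of that "complicated function" (for the special case of equal-mass elastic hard spheres, not the general potential assumed in the lecture), the outgoing momenta are, with \(\hat n\) the unit vector along the line of centers at contact,
\[
\vec p_1{}'=\vec p_1-\big[(\vec p_1-\vec p_2)\cdot\hat n\big]\hat n,\qquad
\vec p_2{}'=\vec p_2+\big[(\vec p_1-\vec p_2)\cdot\hat n\big]\hat n,
\]
which one can check conserves both the total momentum and the total kinetic energy.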
AUDIENCE: [INAUDIBLE] PROFESSOR: Yes, you could do that if you have a simple enough potential. Let's show that this equation leads to irreversibility. That is what we are going to do here. This, by the way, is called the Boltzmann equation. There's an associated Boltzmann H-theorem, which states the following: if F1 of P1, Q1, and t satisfies the above Boltzmann equation, then there is a quantity H that always decreases in time, where H is the integral over P and Q of F1 log of F1. The manifestation of irreversibility, as we saw in thermodynamics, was that there was a quantity, entropy, that was always increasing. If you have calculated for this system the entropy before, for the half box, and the entropy afterwards, for the space both boxes occupy, the second one would certainly be larger. This H is a quantity like that, except that when it is defined this way, it always decreases as a function of time. But it certainly is very much related to entropy. You may ask why Boltzmann came across such a function, F log F, except that actually, by now, you should know why you would write this. When we were dealing with probabilities, we introduced the entropy of the probability distribution, which was related to something like the sum over i of Pi log of Pi, with a minus sign. Up to a factor of normalization N, this F1 really is a one-particle probability. After dividing by this normalization N, you have a one-particle probability, the probability of occupation of one-particle phase space. This occupation of one-particle phase space is changing as a function of time. What this statement says is that if the one-particle density evolves in time according to this equation, the corresponding minus entropy decreases as a function of time. Let's see if that's the case. To prove that, let's do this. We have the formula for H, so let's calculate dH by dt. I have an integral over the phase space of particle one, the particle that I just called one. I could have labeled it anything. After integration, H is only a function of time. I have to take the time derivative. The time derivative can act on F1. Then I will get dF1 by dt times log F1. Or I will have F1 times the derivative of log F1. The derivative of log F1 would be dF1 by dt, and then 1 over F1. Then I multiply by F1, and that factor is simply 1. AUDIENCE: Don't you want to write the full derivative, F1 with respect [INAUDIBLE]? PROFESSOR: I thought we did that with this before. If you have something that I am summing over lots of [? points, ?] and these [? points ?] can be positions, then I have S at location one, S at location two, S at location three, discretized versions of x. If I take the time derivative, I take the time derivative of this, plus this, plus this, which are partial derivatives. If I actually take the time derivative here, I get the integral d cubed P1, d cubed Q1, of the partial derivative dF1 by dt, and that is the time derivative of N, which is 0. The number of particles does not change. Indeed, I realize that 1 integrated against dF1 by dt is the same thing that's here. This term gives you 0. All I need to worry about is integrating log F1 against dF1 by dt. I have an integral over P1 and Q1 of log F1 against dF1 by dt. We have said that F1 satisfies the Boltzmann equation. So for dF1 by dt, if I rearrange that equation, I take this part to the other side. This part is also the Poisson bracket of a one-particle H with F1.
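In symbols, the statement being proved is
\[
H(t)=\int d^3p_1\,d^3q_1\;f_1(\vec p_1,\vec q_1,t)\,\ln f_1(\vec p_1,\vec q_1,t),
\qquad \frac{dH}{dt}\le 0,
\]
and since \(f_1\) is \(N\) times a one-particle probability density, \(H\) is, up to additive and multiplicative constants, minus the one-particle entropy.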
If I take it to the other side, it will be the Poisson bracket of H with F1. Then there is this whole thing that involves the collision of two particles. So I define whatever is on the right hand side to be some collision operator that acts on two [? powers ?] of F1. This is plus a collision operator, F1, F1. What I do is I replace this dF1 by dt with the Poisson bracket of H, or H1, if you like, with F1. The collision operator I will shortly write explicitly. But for the time being, let me just write it as C of F1. There is a first term in this sum-- let's call it number one-- which I claim to be 0. Typically, when you get these integrations with Poisson brackets, you would get 0. Let's explicitly show that. I have an integral over P1 and Q1 of log of F1 times this Poisson bracket of H1 and F1, which is essentially these terms. Alternatively, I could write it as dH1 by dQ1 dotted with dF1 by dP1, minus dH1 by dP1 dotted with dF1 by dQ1. I've explicitly written this one-particle form in terms of the Hamiltonian. The advantage of that is that now I can start doing integrations by parts. I'm taking derivatives with respect to P, and I have integrations with respect to P here. I could take the F1 out. I will have a minus. I have an integral over P1, Q1. I took F1 out. Then this d by dP1 acts on everything that came before it. It can act on the H1, in which case I would get d2 H1 with respect to dP1 dQ1. Or it could act on the log of F1, in which case I keep dH1 by dQ1, and then I have d by dP1 acting on log of F1, which gives me dF1 by dP1 times the derivative of the log, which is 1 over F1. This is only the first term. I also have this term, with which I will do the same thing. AUDIENCE: [INAUDIBLE] The second derivative [INAUDIBLE] should be multiplied by log of F. PROFESSOR: Yes, it should be. It is log F1. Thank you. For the next term, I have F1. I have d2 H1, with the other order of derivatives, dQ1 dP1. Now I'll make sure I write down the log of F1. Then I have dH1 with respect to dP1, dotted with the derivative of log F1, which is the derivative of F1 with respect to Q1 times 1 over F1. Here are the terms that are proportional to the second derivative. The order of the derivatives does not matter. One of them is positive, one of them is negative, so they cancel out. Then I have these additional terms. For the additional terms, you'll note that the F1 and the 1 over F1 cancel. These are just products of two first derivatives. I will apply the integration by parts one more time to get rid of the derivative that is acting on F1. The answer becomes plus d cubed P1, d cubed Q1, and then I have F1 times d2 H1 dP1 dQ1, minus d2 H1 dQ1 dP1. These two cancel each other out, and the answer is 0. So that first term vanishes. Now for the second term, number two. The first term vanished, so for dH by dt I have the integral over P1 and Q1 of log of F1, where F1 is a function of P1, Q1, and t. I will focus on, and make sure I write, the momentum argument, for reasons that will shortly become apparent. I have to multiply with the collision term. The collision term involves integrations over a second particle, over an impact parameter, and a relative velocity, once I have defined what P2 and P1 are. I have a subtraction of F evaluated at P1 times F evaluated at P2, plus an addition of F evaluated at P1 prime times F evaluated at P2 prime. Eventually, this whole thing is only a function of time. There are a whole bunch of arguments appearing here, but all of those arguments are being integrated over.
In particular, I have arguments that are indexed by P1 and P2. These are dummy variables of integration. If I have a function of x and y that I'm integrating over x and y, I can call x "z." I can call y "t." I would integrate over z and t, and I would have the same answer. I would have exactly the same answer if I were to call all of the dummy integration variable that is indexed 1, "2." Any dummy variable that is indexed 2, if I rename it and call it 1, the integral would not change. If I do that, what do I have? I have integral over Q-- actually, let's get of the integration number on Q. It really doesn't matter. I have the integrals over P1 and P1. I have to integrate over both sets of momenta. I have to integrate over the cross section, which is relative between 1 and 2. I have V2 minus V1, rather than V1 minus V2, rather than V2 minus V1. The absolute value doesn't matter. If I were to replace these indices with an absolute value, [? or do a ?] V2 minus V1 goes to minus V1 minus V2. The absolute value does not change. Here, what do I have? I have minus F of P1. It becomes F of P2, F of P1, plus F of P2 prime, f of P1 prime. They are a product. It doesn't really matter in which order I write them. The only thing that really matters is that the argument was previously called F1 of P1 for the log, and now it will be called F1 of P2. Just its name changed. If I take this, and the first way of writing things, which are really two ways of writing the same integral, and just average them, I will get 1/2 an integral d cubed Q, d cubed P1, d cubed P2, d2 b, and V2 minus V1. I will have F1 of P1, F1 of P2, plus F1 of P1 prime, F1 of P2 prime. Then in one term, I had log of F1 of P1, and I averaged it with the other way of writing things, which was log of F-- let's put the two logs together, multiplied by F1. So the sum of the two logs I wrote, that's a log of the product. I just rewrote that equation. If you like, I symmetrized It with respect to index 1 and 2. So the log of 1, that previously had one argument through this symmetrization, became one half of the sum of it. The next thing one has to think about, what I want to do, is to replace primed and unprimed coordinates. What I would eventually write down is d cubed P1 prime, d cubed P2 prime, d2 b, V2 prime minus V1 prime, minus F1 of P1 prime, F1 of P2 prime, plus F1 of P1, F1 of P2. Then log of F1 of P1 prime, F1 of P2 prime. I've symmetrized originally the indices 1 and 2 that were not quite symmetric, and I end up with an expression that has variables P1, P2, and functions P1 prime and P2 prime, which are not quite symmetric again, because I have F's evaluated for P's, but not for P primes. What does this mean? This mathematical expression that I have written down here actually is not correct, because what this amounts to, is to change variables of integration. In the expression that I have up here, P1 and P2 are variables of integration. P1 prime and P2 prime are some complicated functions of P1 and P2. P1 prime is some complicated function that I don't know. P1, P2, and V, for which I need to solve in principle, is Newton's equation. This is similarly for P2 prime. What I have done is I have changed from my original variables to these functions. When I write things over here, now P1 prime and P2 prime are the integration variables. P1 and P2 are supposed to be regarded as functions of P1 prime and P2 prime. You say, well, what does that mean? 
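For reference, once the exchange of primed and unprimed variables is also carried out and the equivalent ways of writing the integral are averaged (the step the lecture is completing here), the result takes the form
\[
\frac{dH}{dt}=-\frac{1}{4}\int d^3q\,d^3p_1\,d^3p_2\,d^2\vec b\;|\vec v_1-\vec v_2|\,
\Big[f_1(\vec p_1{}')f_1(\vec p_2{}')-f_1(\vec p_1)f_1(\vec p_2)\Big]
\ln\frac{f_1(\vec p_1{}')f_1(\vec p_2{}')}{f_1(\vec p_1)f_1(\vec p_2)}\;\le\;0,
\]
since \((x-y)\ln(x/y)\ge 0\) for any positive \(x\) and \(y\).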
You can't simply take an integral dx, let's say F of some function of x, and replace this function. You can't call it a new variable, and do integral dx prime. You have to multiply with the Jacobian of the transformation that takes you from the P variables to the new variables. My claim is that this Jacobian of the integration is, in fact, the unit. The reason is as follows. These equations that have to be integrated to give me the correlation are time reversible. If I give you two momenta, and I know what the outcomes are, I can write the equations backward, and I will have the opposite momenta go back to minus the original momenta. Up to a factor of minus, you can see that this equation has this character, that P1, P2 go to P1 prime, P2 prime, then minus P1 prime, minus P2 prime, go to P1, and P2. If you sort of follow that, and say that you do the transformation twice, you have to get back up to where a sign actually disappears to where you want. You have to multiply by two Jacobians, and you get the same unit. You can convince yourself that this Jacobian has to be unit. Next time, I guess we'll take it from there. I will explain this stuff a little bit more, and show that this implies what we had said about the Boltzmann equation. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 24_Ideal_Quantum_Gases_Part_3.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: We looked at the macro state, where the volume of the box, the temperature were given, as well as the chemical potential. So rather than looking at a fixed number of particles, we were looking at the grand canonical prescription where the chemical potential was specified. We said that when we have a system of non-interacting particles, we can construct a many bodied state based on occupation of single particle states such that the action of the Hamiltonian on this will give me a sum over all single particle states, the occupation number of that single particle state times the energy of that single particle state, and then the state back. We said that in quantum mechanics, we have to distinguish between bosons and fermions, and in the occupation number prescription. For the case of fermions, corresponding theta v minus 1 and k was 01. For the case of bosons, corresponding theta plus 1 and k equals [INAUDIBLE]. We said that within this planned canonical prescription, these occupation numbers were independently given. The probabilities were independently distributed according to exponential rules such that when we calculated the average, nk. We commented on the statistics what are the very simple forms, which was 1 over [INAUDIBLE] into the beta epsilon k minus eta, where z was constructed from the temperature and the temporal potential as [INAUDIBLE] to the beta mu. Now, for the case that we are interested, for this gas that is in a box of volume v, the one particle states were characterized by [INAUDIBLE] waves and their energies were f h bar squared k squared over 2m, basically just the kinetic energy. And then at various stages, we have to perform sums over k's, and for a box of volume v, the sum over k was replaced in the limit of a large box with an integral over k times the density of states, which was v divided by 2 pi cubed. And we said that again, we recognized that quantum particles have another characteristic, which is their spin, and there is a degeneracy factor associated with that that we can just stick over here as if we had [INAUDIBLE] copies corresponding to the different components of the spin. Now, once we put this form of epsilon k over here, then we can calculate what the mean number of particles is in the grand canonical ensemble as sum over k, the average nk, given the appropriate statistics. Of course, this is a quantity that is extensive. We then construct a density, which is intensive, simply by dividing by v, which gets rid of that factor of v over here, and we found that the answer could be written as g over lambda cubed, again, coming from this g, lambda cubed from the change of variables, where we defined lambda and h over root 2 pi mkp. And then a function that we indicated by s 3/2, depending on the statistics of z. And this was a member of our class of functions we defined in general, f m h of z, to be 1 over m minus 1 factorial, integral 0 to infinity, ex, x to the m minus 1, z inverse, e to the x minus eta. So for each m, we have a function of z. 
In fact, we have two functions, depending on whether we are dealing with the fermionic or the bosonic variety. We saw that these functions we could expand for small z as z, plus eta z squared over 2 to the m, plus z cubed over 3 to the m, plus eta z to the fourth over 4 to the m, and so forth. It's an alternating series for fermions; all the terms are positive for bosons. And we also noted that these functions have a nice property, in that if you take a derivative with respect to z and multiply by z, you get the function with one lower index. So f m eta is, in fact, the same thing as z d by dz of f m plus 1 eta. This just made the connection between the number of particles and the chemical potential that we need to use when we go to the grand canonical prescription, but we are interested in calculating various properties of this gas, such as the pressure. We found that the formula for beta times the pressure was simply g over lambda cubed, very similar to what we had over here, except that, rather than having f 3/2, we had f 5/2 of z to deal with, and if we were interested in the energy, we could simply use E equals 3/2 PV. Indeed, this was correct for both fermions and bosons, irrespective of the differences or variations in pressure that you would have, depending on whether you have bosons or fermions, due to [INAUDIBLE] gases. So what is the task here, if you are interested in getting a formula for the pressure or the energy or the heat capacity? What should we do if we are interested in a gas that has a fixed number of particles or a fixed density? Clearly, the first stage is to calculate z as a function of density. What we can do is to solve the first equation graphically. We have to solve the equation f 3/2 eta of z equals the combination n lambda cubed over g, which we call the degeneracy factor. So if I tell you what the temperature is, you know lambda. If I tell you what the density is, you know the combination on the right hand side. There is this function. We have to plot this function and find the value of the argument at which the function equals n lambda cubed over g. So graphically, what we have to do is to plot these functions as a function of z. I will generically plot f m eta of z, but for the case that we are interested in, we really need to look at m equals 3/2. As we will see in the problem set, if you were to solve the gas in d dimensions, this m, rather than 3/2, would be d over 2. So pictorially, the same thing would work with slightly different forms of these functions in different dimensions. What do these functions look like? Well, we see that initially they start out linear, but then, for the cases of bosons and fermions, the next order term goes in different directions. In particular, for the case of fermions, the quadratic term starts to bring you down, whereas for the case of bosons, the quadratic term goes in the opposite direction and will tend to make the function larger. This is what these functions look like. For the case of fermions, I have additional terms in the series. It's an alternating series. And then the question is, what happens to this at large values of z? And what we found was that at large values, this function satisfies the asymptotic form that is provided by the Sommerfeld formula, which states that f m of z, for the case of fermions, so this is eta equals minus 1, is log z to the power of m divided by m factorial. Of course, we are interested in m equal to 3/2.
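As a small numerical illustration of this graphical inversion (a minimal sketch of my own, assuming NumPy and SciPy are available; the function names are not from the lecture), one can compute f 3/2 for fermions from its integral definition and solve f 3/2 of z equals n lambda cubed over g by bracketing:

```python
# Minimal sketch: invert f_{3/2}^{-}(z) = n*lambda^3/g for the fermion fugacity z,
# using f_m^eta(z) = (1/Gamma(m)) * Int_0^inf dx x^(m-1) / (z^{-1} e^x - eta).
from math import gamma
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def f_m(z, m=1.5, eta=-1.0):
    # Rewrite the integrand as x^(m-1) * z*e^{-x} / (1 - eta*z*e^{-x}) to avoid overflow.
    integrand = lambda x: x**(m - 1) * z * np.exp(-x) / (1.0 - eta * z * np.exp(-x))
    value, _ = quad(integrand, 0.0, np.inf, limit=200)
    return value / gamma(m)

def fugacity(degeneracy):
    # Solve f_{3/2}^{-}(z) = degeneracy = n*lambda^3/g; for fermions z may greatly exceed 1.
    return brentq(lambda z: f_m(z) - degeneracy, 1e-12, 1e12)

for deg in (0.01, 1.0, 10.0):
    z = fugacity(deg)
    print(f"n*lambda^3/g = {deg:5.2f}  ->  z = {z:10.4g},  ln z = {np.log(z):6.3f}")
```

In the dilute limit the output reproduces z of order n lambda cubed over g, while in the degenerate limit log z grows as the 2/3 power of n lambda cubed over g, consistent with the Sommerfeld form used below.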
1 plus pi squared over 6 m, m minus 1, log z squared, and higher order inverse powers of log z, which one can compute, but for our purposes, these two terms are sufficient. And if I'm interested in doing the inversion for fermions, what I need to do is to plot a line that corresponds to n lambda cubed over g and find the intersection of that line, and that would give me the value of z. When I'm very close to the origin, I start with the linear behavior, and then I can systematically calculate the corrections to the linear behavior, which would have been z equals n lambda cubed over g in higher powers of n lambda cubed over g. And we saw that by doing that, I can gradually construct a visual expansion that is appropriate for these gases. I can do the same thing, of course, for the case of bosons also, but now I'm interested in the limiting form where the density goes higher and higher or the temperature goes lower and lower so that this horizontal line that I have drawn will go to larger and larger values, and I'm interested in the intersections that I have when z is large and this asymptotic formula is being satisfied. So what do I need to do for the case of degenerate fermions? I have this quantity, n lambda cubed over g, which I claim is much larger than 1, so I have to look for values of the function, f 3/2 minus of z that are large, and those values are achieved when the argument is large, and then it takes the form log z to the 3/2 divided by 3/2 factorial, 1 plus pi squared over 6. m in this case is 3/2, so I have 3/2 times m minus 1, which is 1/2, divided by log z over 2. And then I will have potentially higher order terms. So I can invert this formula to find log z as a function of density. And to lowest order, what I have is 3/2 factorial n lambda cubed over g raised to the 2/3 power. If I were to ignore everything here, just arrange things this way, that's the formula that I would get. But then I have this correction, so I divide by this. I have 1 plus-- this combination is equivalent to pi squared over 8, 1 over log z squared. I have taken it to the other side, so I put the minus sign and raise again to the 2/3 power. Now, the way that I have defined z up there, clearly log z is proportional to beta nu. We can see that lambda is proportional to beta to the 1/2, so lambda cubed is beta to the 3/2, which, when raised to the 2/3, gives me a factor of beta. So basically, the quantity that we have over here is of the order of beta, and then we saw that the remainder, which depends on properties such as the mass of the gas, gh, et cetera, which is independent of temperature, is a constant. That is the Fermi energy that we can compute by usual route of filling up a Fermi c. And the value of epsilon f is h bar squared over 2m times kf squared, and the kf was 6 pi squared n over g to the 1/3 power, so kf squared would be something like this. So once we know the density of the fermion, its spin and its mass, we can figure out what this quantity is. The zeroth order solution for the chemical potential is clearly this epsilon f. What do I have to do if I want to calculate the next correction? Well, what I can do is I can put the zeroth order solution for log z in this expression, and then that will give me the first order correction. I have to raise this whole thing to the minus 2/3 power, so pi squared over 8 becomes 1 minus 2/3 of pi squared over h, which is minus pi squared over 12. 
1 over log of z squared-- log z is again mu over kt, so its inverse is kt over the zeroth order value of mu, which is epsilon f raised to second power, and we expect that there will be higher order terms. So what have I stated? If I really take this curve and try to solve for mu as a function of temperature for a given density, I find that at very low temperatures, the value that I get is epsilon f, and then the value starts to get reduced as I go to higher temperatures and the initial fall is quadratic. Of course, the function will have higher and higher order corrections. It will not stay quadratic. And I also know that at very large values, it essentially converges to a form that is minus kt log of n lambda cubed, because down here, I know that this solution is z is simply n lambda cubed over g. So at low densities and high temperatures, basically, I will have something that is almost linear with logarithmic corrections and negative, and so presumably, the function looks something like this. Of course, I can always convert. We can say that what I see here is the combination of kt over epsilon f. I can define epsilon f over kv to be something that I'll call tf, and this correction I can write as t over tf squared. So basically, I have defined epsilon f to be some kv times a Fermi temperature. Just on the basis of dimensional argument, you would expect that the place where this zero is occurring is of the order of this tf. I don't claim that it is exactly tf. It will have some multiplicative factor, but it will be of the order of tf, which can be related to the density, mass, and other properties of the gas by this formula. And if you look at a metal, such as copper, where the electrons are approximately described by this Fermi gas, this tf is of the order of, say, 10 to the 4 degrees Kelvin. Yes? AUDIENCE: So you were saying that at a high temperature, the chemical potential should go as minus kvt times ln of n lambda cubed, yeah? PROFESSOR: Yes. AUDIENCE: Now, the lambda itself has temperature dependence, so how can you claim that it is behaving linearly? PROFESSOR: I said almost linearly. As you say, the correct behavior is something like minus 3/2 t log t, so log t changes the exponent to be slightly different from 1. This is not entirely linear. It has some curvature due to this logarithm. In fact, something that people sometimes get confused at is when you go to high temperatures, beta goes to 0. So why doesn't z go to 1? Of course, beta goes to 0 but mu goes to infinity such that the product of 0 and infinity is actually still pretty large, partly because of this logarithm. AUDIENCE: Could you just explain again why it's linear? PROFESSOR: Down here it is linear, so I know that z is n lambda cubed over g. z is e to the mu over kt. So mu is kt log of n lambda cubed over g. Remember lambda depends on temperature, so it is almost linear, except that really what is inside here has a temperature dependence. So it's really more like t log t. AUDIENCE: When z is small? PROFESSOR: When z is small and only when z is small, and z small corresponds to being down here. Again, that's what I was saying. Down here, beta goes to zero but mu goes to minus infinity such that the product of 0 and minus infinity is still something that is large and negative because of this. AUDIENCE: If the product is large, then isn't z large? PROFESSOR: If the product is large and negative, then z is exponentially small. 
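Collecting the two steps just described, the Sommerfeld expansion and the resulting low-temperature chemical potential are
\[
f_m^{-}(z)\simeq\frac{(\ln z)^m}{m!}\left[1+\frac{\pi^2}{6}\,\frac{m(m-1)}{(\ln z)^2}+\cdots\right],
\qquad
\mu(T)\simeq\varepsilon_F\left[1-\frac{\pi^2}{12}\left(\frac{k_BT}{\varepsilon_F}\right)^{2}+\cdots\right],
\qquad
\varepsilon_F=\frac{\hbar^2}{2m}\left(\frac{6\pi^2 n}{g}\right)^{2/3}.
\]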
In fact, if I were to plot this function just without the higher order corrections, if I plot the function, kt log of n lambda cubed over g, you can see that it goes to 0 when the combination n lambda cubed over g is 1 because log of 1 is 0. So that function by itself, asymptotically this is what I have, it will come and do something like this, but it really is not something that we are dealing with. It's in some sense the classical version. So classically, your mu is given by this formula, except that classically, you don't know what the volume factor is that you have to put there because you don't know what h is. So that's the chemical potential, but the chemical potential is really, for our intents and purposes, a device that I need to put here in order to calculate the pressure, to calculate the energy, et cetera. So let's go to the next step. We are again in this limit. Beta p is g over lambda cubed f 5/2 plus z. That's going to give me g over lambda cubed in the limit where z is large-- we established that z is large-- log z to the 5/2 divided by 5/2 factorial using the Sommerfeld expansion, 1 plus pi over 6. Now my m is 5/2. m minus 1 would be 3/2 divided by log z squared in higher order terms. AUDIENCE: Question. PROFESSOR: Yes? AUDIENCE: Why is that plus? PROFESSOR: Because I made a mistake. Just to make my life and manipulations easier, I'll do the following. I write under this the formula for n being g over lambda cubed, f 3/2 minus z, which is g over lambda cubed log z to the 3/2 divided by 3/2 factorial, 1 plus-- I calculated the correction-- pi squared over 6 divided by various combination, which is pi squared over 8, 1 over log z squared. Now what I will do is to divide these two equations. If I were to divide the numerator by the denominator, what do I get? The left hand side becomes beta p over n. The right hand side, I can get rid of the ratio of log z 5/2 to log z. 3/2 is just one factor of log z. What do I have when I divide 5/2 factorial by 3/2 factorial? I have 5/2. This division allows me to get rid of some unwanted terms. And then what do I have here? This combination here is 5 pi squared over 8 minus 1 pi squared over 8. Because of the division, I will put a minus sign here, so that becomes 4 pi squared over 8, which is pi squared over 2. So I will get 1 plus pi squared over 2, and at the order that we are dealing, we can replace 1 over log z squared with kt over epsilon f squared. Now, for log z over here, I can write beta mu, and for mu, I can write the formula that I have up here. So you can see that once I do that, the betas cancel from the two sides of the equation, and what I get is that p is 2/5, the inverse of 5/2. The n I will take to the other side of the equation. To the lowest order mu is epsilon f, so I will put epsilon f, but mu is not exactly epsilon f. It is 1 minus pi squared over 12, kt over epsilon f squared. And actually, there was this additional factor of 1 plus pi squared over 2, kt over epsilon f squared. So what do we find? We find that to the zeroth order, pressure is related to the density but multiplied by epsilon f, so that even at zero temperature, there is a finite value of pressure left. So we are used to ideal gases that are classical, and when we go to zero temperature, they start to stop moving around. Since they are not moving around, there is no pressure that is exerted, but for the Fermi gas, you cannot say that all of the particles are not moving around because that violates this exclusion of n being zero or one. 
You have to give particles more and more momentum so that they don't violate the condition that they should have different values of the occupation numbers, the Pauli exclusion. And therefore, even at zero temperature, you will have particles that in the ground state are zipping around, and because they're moving around, they can hit the wall and exert pressure on it, and this is the value of the pressure. As you go to higher temperature, that pressure gets modified. We can see that balancing these two terms, there is an increase in pressure, which is 1 plus 5 pi squared over 6, kt over epsilon f squared. So you expect the pressure, if I plot it as a function of temperature, at zero temperature, it will have this constant value, pf. As I put a higher temperature, there will be even more energy and more kinetic energy, and the pressure will rise. Eventually, at very high temperatures, I will regain the classical, rather pressure is proportional to temperature. That's the pressure of this ideal Fermi gas. The energy. Well, the energy is simply what we said over here, always 3/2 pv. So it is 3/2, the pressure over there multiplied by v. When I multiply the density by v, I will get the number of particles. 3/2 times 5/2 will give me 3/5, so I will 3/5 and epsilon f at the lowest order. Again, you've probably seen this already where you draw diagrams for filling up a Fermi c. Up to some particular kf, you would say that all states are occupied. When epsilon f is the energy of this state that is sitting right at the edge of the Fermi c, and clearly, the energies of the particles vary all the way from zero up to epsilon f, so the average energy is going to be less than epsilon f. Dimensionally, it works out to be 3/5 of that. That's what that says. But the more important part is how it changes as a function of temperature. What you find is that it goes to 1 plus 5 pi squared over 6, kt over epsilon f squared in high order terms. AUDIENCE: Question. Should the 6 be 12? PROFESSOR: 6 should be 12? Let's see. This is 6/12 minus 1/12 is indeed 5/12. Thank you. So if I ask what's the heat capacity at constant volume, this is dE by dT, so just taking the derivative. The zeroth order term, of course, is irrelevant. It doesn't vary the temperature. It's this t squared that will give you the variations. So what does it give me? I will have n epsilon f. I have 3/5 times 5 pi squared over 12. The derivative of this combination will give me 2kb squared t divided by epsilon f squared. There will be, of course, higher order, but this combination you can see I can write as follows. It is extensive. It is proportional to n. There is one kb that I can take out, which is nice, because natural units of heat capacity, as we have emphasized, are kb. This combination of numbers, I believe the 5 cancels. 3 times 2 divided by 12 gives me 1/2, so I have pi squared over 2. And then I have the combination, kt over epsilon f, which I can also write as t over tf. So the heat capacity of the Fermi gas goes to 0 as I go to zero temperature, in accord with the third law of thermodynamics, which we said is a consequence of quantum mechanics. So we have this result, that the proportionality is given by the inverse of tf. If I were to plot the heat capacity in units of nkb as a function of t, at very high temperatures, I will get the classical result, which is 3/2, and then I will start to get corrections to that. 
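Summarizing the degenerate-limit results obtained here, with the 5 pi squared over 12 coefficient as corrected in the exchange above,
\[
P\simeq\frac{2}{5}\,n\,\varepsilon_F\left[1+\frac{5\pi^2}{12}\left(\frac{k_BT}{\varepsilon_F}\right)^2+\cdots\right],\qquad
E=\frac{3}{2}PV\simeq\frac{3}{5}\,N\,\varepsilon_F\left[1+\frac{5\pi^2}{12}\left(\frac{k_BT}{\varepsilon_F}\right)^2+\cdots\right],
\]
\[
C_V=\frac{\partial E}{\partial T}\bigg|_{V}\simeq\frac{\pi^2}{2}\,N k_B\,\frac{T}{T_F},\qquad T_F\equiv\frac{\varepsilon_F}{k_B}.
\]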
Those corrections will gradually reduce this, so eventually, the function that I will get starts linearly and then gets matched to that. So the linear behavior is of the order of t over tf, and presumably at some temperature that is of the order of tf, you will switch to some classical behavior. And as I said, if you look at metals, this tf is very large. So when you look at the heat capacity of something like copper or some other metal, you find that there is a contribution from the electrons to the heat capacity that is linear. Now, the reason for this linear behavior is also good to understand. This calculation that I did is necessary in order to establish what this precise factor of pi squared over 2 or appropriate coefficient is, but the physical reason for the linearity you should be able to know. Basically, we saw that the occupation numbers for the case of fermions have this form that is 1 over z inverse, e to the epsilon plus 1. If I plot the occupation numbers as a function of energy, for a case that is at zero temperature, you can see that the occupation number is 1 or 0, so you have a picture that is like this. You switch from 1 to 0 at epsilon f. Oops, there was a beta here that I forgot, beta epsilon. When I go to finite temperature, this becomes fuzzy because this expectation value for the number, rather than being a one zero step function, becomes smoothed out, becomes a function such as this, and the width over which this smoothing takes place is of the order of kt. So what happens is that rather than having this short Fermi c, you will start having less particles here and more occupied particles over here. To do that, you have created energies going from here to here that you have stored that is of the order of this kt. What fraction of the total number have you given this energy? Well, the fraction is given from here. It is the ratio of this to the entirety, and that ratio is t, or kt, divided by epsilon f is the fraction that has been given energy this. So the excitation energy that you have is this number, the total times this fraction times the energy that you have, which is kt. And if I take the derivative of this, I will get precisely this formula over here up to this factor of pi squared over 2, which I was very cavalier, but the scaling everything comes from this. And to all intents and purposes, this is the characteristic of all Fermi systems, that as you go to low temperatures, there is a small fraction of electrons, if you like, that can be excited. That fraction goes to 0 at low temperatures as kt over epsilon f, and the typical energies that they have is of the order of kt. So essentially, many of the results for the degenerate Fermi gas you can obtain by taking classical results and substituting for the number of particles nkt over epsilon f. You will see in the problem set that this kind of argument tells you something also about the magnetic response of a system of electrons, what is the susceptibility and why does the susceptibility saturate at low temperatures. It's just substituting for the number of particles this fraction, nkt over epsilon f, will give you lots of things. This picture is also valid in all dimensions, whereas next, we'll be discussing the case of bosons, where the dependence is on temperature. There is an exponent here that determines which dimension you are or depends on dimensions. For the case of fermions, this linearity exists independently. So now let's think about bosons and what's going to happen for bosons. 
For bosons, we saw that the series, rather than being alternating, all of the terms are adding up together so that the parabolic correction, rather than reducing the function, will for the case of bosons increase it. So for a particular value of density, we find that the z of fermions is larger than the z of bosons. So for a given density, what we will find is that the chemical potential, whereas for the case of fermions was deviating from the classical form by going up, for the case of bosons, we'll start to deviate from the classical form and go down. Now, there is something, however, that the chemical potential cannot do for the case of bosons which it did for the case of fermions, and that was to change sign. Why is that? Well, we have that over there, the occupation number for a boson is 1 over z inverse e to the beta epsilon k, minus 1, where z inverse is e to the minus beta mu. Now, this result was obtained from summing a geometric series, and the condition certainly is that this object, that is, the inverse of what was multiplying different terms in the series, has to be larger than 1. Of course, it has to be larger than 1 so that I get a positive occupation number. If it was less than 1, the geometric series was never convergent. Clearly, the negativity of the occupation number has no meaning. So this being positive strictly immediately implies that mu has to be less than epsilon k for all k. So it has to be certainly less than the minimum of epsilon k with respect to all k. And for the case that I'm looking at where my epsilon k's are h bar squared k squared over 2m, the lowest one corresponds to k equals 0, so mu has to be less than 0. It can never go to the other side of this. Or alternatively, z has to be less than, or at most approach, 1. So there is certainly a barrier here that we are going to encounter for the case of bosons at z equals 1. We should not go beyond that point. So let's see what's happening for z equals 1 to this function. You see, my task remains the same. In order to find z, I have to solve graphically for the intersection of the curve that corresponds to f 3/2 plus and the curve that corresponds to the density n lambda cubed over g. So what happens as I go to higher and higher values of n lambda cubed over g? You can see one possibility is that this curve just diverges as z approaches 1. Then for every value of n lambda cubed over g, you will find some value of z that will gradually become closer and closer to 1. But is that the scenario? For that, we need to know what the value of this f function is at z equals 1, so the limit of f m plus of z as z goes to 1. How do you obtain that? Well, that is 1 over m minus 1 factorial, integral 0 to infinity, dx, x to the m minus 1, divided by z inverse e to the x minus 1, with z inverse equal to 1. I have to integrate this function. The integrand, what does it look like? x to the m minus 1 over e to the x minus 1. Well, at large x, there is no problem. It goes to 0 exponentially. At small x, it goes as x to the m minus 2. So basically, it's a curve such as this that I have to integrate. And if the curve is like I have drawn it, there is no problem. I can find the integral underneath it and there actually is a finite value that has a name. It's called a zeta function, so this is some tabulated function that you can look at. But you can see that if m, let's say, is 0, then it is dx over x squared. So the other possibility is that this is a function that diverges at the origin, which then may or may not be integrable.
And we can see that this is finite and exists only for m that is larger than 1. So in particular, we are interested in the case of m equals 3/2. So then at z equals 1, I have a finite value. It is zeta of 3/2. So basically, the function will come up to a finite value at z equals 1, which is this zeta of 3/2, which you can look up in tables. It's 2.612. Now, I tried to draw this curve as if it comes and hugs the vertical line tangentially, that is, with infinite slope, and that is the case. Why do I know that? Because the derivative of the function will be related to the function at one lower index. So the derivative of f 3/2 is really an f 1/2. f 1/2 does not exist. It's a function that goes to infinity. So essentially, this curve comes with an infinite slope. Now, it will turn out that this is the scenario that we have in three dimensions. If you are in two dimensions, then what you need to do is look at f that corresponds to m equals 2 over 2 or 1, and in that case, the function diverges. So then, you have no problem in finding some intersection point for any combination of n lambda cubed over g. But currently in three dimensions, we have a problem because for n lambda cubed over g that falls higher than zeta of 3/2, it doesn't hit the curve at any point, and we have to interpret what that means. So for d equals 3, we encounter a singularity when n lambda cubed over g is greater than or equal to zeta of 3/2. This corresponds at a fixed density to temperatures that are less than some critical temperature that depends on n, and that I can read off as being 1 over kb. This is going to give me a combination. Lambda cubed is inversely proportional to temperature to the 3/2, so this will give me n over zeta of 3/2 g, to the 2/3, and then I have h squared over 2 pi m. I put the kb over here. So the more dense your system is, the lower-- I guess that's why I have it the opposite way. It should be the more dense it is, the higher the temperature. That's fine. But what does that mean? The point is that if we go and look at the structure that we have developed, for any temperature that is high enough, or any density that is low enough, so that I hit the curve on its continuous part, it means that I can find the value of z that is strictly less than 1, and for that value of z, I can occupy states according to that probability, and when I calculate the net mean occupation, I will get the actual density that I'm interested in. As I go to temperatures that are lower, this combination goes up and up. The value of z that I have to get gets pushed more towards 1. As it gets pushed more towards 1, I see that mu going to 0, the state that corresponds to the lowest energy, k equals 0, gets more and more occupied. But still, nothing special about that occupation except what happens when I am at higher values. The most natural thing is that when I am at higher values, I should pick z equals 1 minus a little bit. And if I choose z to be 1 minus a very small quantity, then when I do the integration over here, I do get a value for the density, which is g over lambda cubed times the limiting value of this f function, which is zeta of 3/2. I will call this n star because this is strictly less than the total density that I have. That was the problem. If I could make up the total density with z equals 1, I would be satisfied. I would be making it here, but I'm not making it up with the spectrum that I have written over here.
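The finite limit quoted above is easy to verify numerically. A small sketch, assuming the Gamma-function-normalized integral form of f_m used in lecture, evaluates the z = 1 limit for m = 3/2 by quadrature and compares with zeta(3/2), about 2.612; for m less than or equal to 1 the same integral blows up at the origin, as stated.

```python
# f_m(z) = (1/Gamma(m)) * Int_0^inf dx x^(m-1) / (z^-1 e^x - 1), evaluated
# here at z = 1 for m = 3/2; the result should match zeta(3/2) ~ 2.612.
import numpy as np
from scipy import integrate, special

def f_m(m, z):
    # substitute x = u^2 so the integrable singularity at the origin is tamed
    integrand = lambda u: 2.0 * u**(2 * m - 1.0) / (np.exp(u**2) / z - 1.0)
    value, _ = integrate.quad(integrand, 0.0, 26.0)   # integrand ~ 0 beyond u = 26
    return value / special.gamma(m)

print(f_m(1.5, 1.0))            # ~ 2.612
print(special.zeta(1.5, 1))     # Riemann zeta(3/2), for comparison
```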
But on the other hand, if epsilon is incredibly small, what I find is that the occupation number of the k equals 0 state-- let's write it n of k equals 0. What is that? It is 1 over z inverse minus 1, where z inverse, the inverse of this quantity, is approximately 1 plus epsilon, so this is roughly 1 over epsilon. Actually, maybe what I should do is to write it in the form 1 over e to the minus beta mu minus 1 and realize that this beta mu is a quantity that is very small. It is the same thing that I was calling epsilon before. And if it is very small, I can make it to be 1 over beta mu. So by making mu arbitrarily close to the origin, or e to the beta mu arbitrarily close to 1, I could in principle pump a lot of particles in the k equals 0 state, and that does not violate anything. Bosons, you can put as many particles as you like in the k equals 0 state. So essentially, what I can do is I can make this, but you say, well, isn't this already covered by this curve that you have over here? Are you doing something different? Well, let's follow this line of thought a little bit more. How much do I have to put here? This n is the total number of particles divided by volume, and so this is going to be this quantity, n star, which is gv over lambda cubed divided by volume. Let's write it in this fashion, g over lambda cubed zeta of 3/2 and then whatever is left over, and the leftover I can write as n0. So what we know is if we take z equals 1, this function will tell me how many things I have put in everything except k equals 0 that I will calculate separately. That amount is here. And I will put some more in k equals 0, and you can see that the amount I have to put there is going to be n minus g over lambda cubed zeta of 3/2. Actually, I will have to put a volume here. Why is that? The reason I have to put the volume is because the volume that I had here, that ultimately I divided the number of particles by volume to get the density, came from replacing the sum with an integration. So what I have envisioned now is that there are all of these points that correspond to different k's. I'll replace the sum with an integration, but then there was one point, k equals 0, that I am treating separately. When I'm treating that separately, it means that it really is the same as the total number expectation value without having to divide by the volume, which comes from this density of states, and I have something like this. Now, the problem is that you can see that suddenly, you have to pick a value of mu that is inversely proportional to the volume of the system. This is a problem in which the thermodynamic limit is taken in this strange sense. You have all of these potential values of the energy, epsilon of k, that correspond to h bar squared, k squared over 2m. There is one that is at 0, and then choosing k equal to 2 pi over l, you will have one other state, another state, all of these states. All of these states are actually very finely spaced. The difference between the ground state and the first excited state is h bar squared over 2m, 2 pi over l squared. So this distance over here is of the order of 1 over l squared. But I see that in order to occupy this with the appropriate number, I have to choose my chemical potential to be as close to zero as 1 over the volume. So in the limit where I take the size of the system to go to infinity, this approaches this. It never touches it, but the distance that I have here is much, much less than the spacings for which I made this replacement.
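A one-line numerical illustration of the scaling just described: to hold N0 particles in the k = 0 state, beta mu has to sit at roughly minus 1 over N0, a distance from zero that shrinks with the size of the system. The value of N0 below is an arbitrary assumed number, used only to show the arithmetic.

```python
# For bosons, n(k=0) = 1/(exp(-beta*mu) - 1).  With beta*mu = -1/N0 this
# occupation comes out as ~ N0, so a macroscopic condensate only requires
# mu to approach zero as 1/N0 (equivalently, as 1/V at fixed density).
import numpy as np

N0 = 1.0e20                              # assumed macroscopic occupation
beta_mu = -1.0 / N0                      # the required chemical potential scale

occupation = 1.0 / np.expm1(-beta_mu)    # expm1 keeps precision for tiny arguments
print(occupation)                        # ~ 1e20, i.e. ~ N0
```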
So I can indeed treat these separately. I can give a particular weight to this state where the mu has come this close to it as 1 over v, treat all of the other ones as part of this replacement of the summations with the integral. You can also see that this trick is going to be problematic if I were to perform it in two dimensions, because rather than 1 over l to the third power, that is, the volume in three dimensions, I would have had 1 over l squared. And indeed, that's another reason why two dimensions is special. And as I said, in two dimensions, the curve actually does go all the way to infinity. You won't have this problem. So this is essentially what happens with this Bose-Einstein condensation: the occupation number of the excited states for bosons is such that you encounter a singularity in three dimensions beyond a particular density, or at temperatures lower than a certain amount, in the highly degenerate case. You have to separately treat the huge number of particles, proportional to the volume, that have now piled up in the single state, k equals 0, the ground state in this case, but whatever the ground state may be for your appropriate system, and all the other particles go in the corresponding excited states. Now, as you go towards zero temperature, you can see that this combination, which is proportional to temperature to the 3/2, goes to zero. So at 0 temperature, essentially all of the particles will need to be placed in this k equals 0 state. So if you like, a rough picture is that of the net density. This is the density, n, which at high temperature is entirely made up of particles in the excited states. Let's draw it this way. When you hit Tc, you find that the excited states can no longer accommodate the entire density. This is a contribution that, as we said, goes to zero at zero temperature as T to the 3/2. So there's a curve that goes like T to the 3/2, hits 1 at exactly Tc, which is a function of the density that you choose. This is the fraction that corresponds to the excited states. And on top of that, there is a macroscopic fraction that is occupying the ground state, which is the complement to this curve. Basically, it behaves something like this. One thing that I should emphasize is that what I see here makes it look like at high temperatures, there is no occupation of the k equals 0 state. That is certainly not correct because if you look at this function at any finite z as a function of epsilon, you can see that the largest value of this function is still at epsilon equals 0. It is much more likely that there is occupation of the ground state than any other state, except that the fraction that is occupying here, when I divide by the total number, goes to zero. It becomes macroscopic when I'm below the BEC transition. Now, the properties of what is happening in this system below Tc are actually very simple because below Tc, the chemical potential in the macroscopic sense is stuck at 0. Essentially, what I'm saying is that in this system, the chemical potential comes down. It hits zero at Tc of n, and then it is 0. Now of course, 0, as we have discussed here, if I put a magnifier here and multiply by a factor of n or a factor of volume, then I see that there is a distinction between that and 0. So it doesn't quite go to 0, but the distance is of the order of 1 over volume or 1 over the number of particles. Effectively, from the thermodynamic perspective, it is 0. But again, for the purposes of thermodynamics, pressure, everything comes from the particles that are moving around.
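The pair of curves being described (excited-state fraction rising as (T/Tc)^(3/2) and the condensate fraction making up the rest) amounts to one line of bookkeeping; here is a sketch with Tc treated simply as a given input.

```python
# Condensate fraction of the ideal Bose gas below Tc: the excited states hold
# N*(T/Tc)^(3/2) particles, and the ground state holds the complement.
def condensate_fraction(T, Tc):
    """N0/N for the ideal Bose gas (zero above Tc)."""
    return max(0.0, 1.0 - (T / Tc) ** 1.5)

for t in (0.0, 0.5, 0.9, 1.0, 1.5):
    print(t, condensate_fraction(t, Tc=1.0))
```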
The particles in the ground state are frozen out. They don't do anything. So the pressure is very simple. Beta p is g over lambda cubed, f 5/2 at z equals 1, which is some number. It is g over lambda cubed, the zeta function at 5/2, which again has some value that you can read off in tables, 1.34, something like that. So you can see that this lambda cubed is inversely proportional to t to the 3/2, so pressure is proportional to t to the 5/2. If I were to plot the pressure as a function of temperature, what I find is that at low temperatures, the pressure is simply given as t to the 5/2. And you can see that this pressure knows nothing about the overall density of the system because this is a pure number. It is only a function of temperature, mass, et cetera. There is no factor of density here. So what is happening? Because we certainly know that when I am at very high temperatures, ideal gas behavior dictates that the pressure should be proportional to temperature times density. So if I have two different gases where the density here is greater than the density here, these will be the form of the curves that I have at high temperatures. So what happens is that you will start with the ideal gas behavior. We saw that for the case of bosons, there is some kind of an effective attraction that reduces the pressure, so the pressure starts to go down. Eventually, it will join this universal curve at the value that corresponds to Tc of n2. Whereas if I see what's happening with the lower density, at high temperatures, again, I will have the linear behavior of the ideal gas with a lower slope because I have a lower density. As I go to lower temperatures, quantum corrections will reduce the pressure. Eventually, I find that this curve will join my universal curve at the point that would correspond to Tc of n1, which in this case would be lower. But beyond that, it will forget what the density was, the pressure would be the same for all of these curves. And I have kind of indicated, or tried to indicate, that the curves join here in a manner that there is no discontinuity in slope. So an interesting exercise to do is to calculate derivatives of this pressure as a function of temperature and see whether they match coming from the two sides. And you will find that if you do the algebra correctly, there is a matching of the two. Now, if I have the pressure as a function of temperature, then I also have the energy as a function of temperature, right? Because we know that the energy is 3/2 PV. So if I know P, I can immediately know what the energy is. I know, therefore, that in the condensate, the energy is proportional to T to the 5/2. Take a derivative, I know that the heat capacity will be proportional to T to the 3/2. That T to the 3/2 is a signature of this condensate that we have. Again, heat capacity will go to 0 as T goes to 0, as we expect. And as opposed to the case of the fermions, where the vanishing of the heat capacity was always linear, the vanishing of the heat capacity for bosons will depend on dimensions, and this vanishing as T to the 3/2 is in general T to the d over 2, as can be shown very easily by following this algebra. I wanted to do a little bit of work on calculating the heat capacity because its shape is interesting. Let's write the formula a little bit more accurately. I have 3/2 v. Pressure is kTg over lambda cubed, f 5/2 plus of z. This formula is valid, both at high temperatures and at low temperatures.
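Since the pressure below Tc involves only the temperature, a short sketch suffices to exhibit the scaling. The reduced units below (h = 2 pi m = k_B = 1, so lambda = 1/sqrt(T)) and g = 1 are assumptions made purely to keep the arithmetic clean.

```python
# Below Tc:  beta*P = g*zeta(5/2)/lambda^3,  i.e.  P = g*zeta(5/2)*T^(5/2)
# in units where lambda = T^(-1/2).  Note there is no density anywhere.
from scipy.special import zeta

def pressure_below_Tc(T, g=1.0):
    lam3 = T ** (-1.5)                    # lambda^3 in these reduced units
    return T * g * zeta(2.5, 1) / lam3    # = g * zeta(5/2) * T^(5/2)

print(pressure_below_Tc(2.0) / pressure_below_Tc(1.0))   # ~ 2**2.5 ~ 5.66
```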
At high temperatures, z will be varying as a function of temperature. Let me also write the formula for the number of particles. The number of particles is V times g over lambda cubed f 3/2 plus of z. Once more, I can get rid of a number of things by dividing these two. So the energy per particle is 3/2 kT, f 5/2 of z divided by f 3/2 of z. Let's make sure I didn't make any mistake. Now, I want to calculate the heat capacity, let's say, per particle, which is d by dT of the energy per particle. Now, I have one factor out here, which is easy to evaluate. There's an explicit temperature dependence here, so I will get 3/2 kb. I will have f 5/2 plus of z divided by f 3/2 plus of z. Something here that I don't like. Everything that I have written here is clearly valid only for T greater than Tc because for T less than Tc, I cannot write n in this fashion. So what I'm writing for you is correct for T greater than Tc. The point is that there is also an implicit dependence on temperature because z is a function of temperature. So what I can do is I can do 3/2 kbT, and this is a function of z, and z is a function of temperature. So what I have is dz by dT done at constant volume or number of particles, times the derivative of this function with respect to z. Now, when I take the derivative with respect to z, I know that I introduce a function with one lower index up to a factor of z. So I have a factor of 1 over z, and then the derivative of the numerator is f 3/2 plus divided by f 3/2 plus in the denominator minus the derivative of the denominator, which is f 1/2 plus of z times the numerator divided by f 3/2 plus of z squared. So what you would need to evaluate in order to get an expression that is meaningful and we can eliminate z's as much as possible is what dz by dT is. Well, if this is our formula for the number of particles and the number of particles or density is fixed, then we take a derivative with respect to temperature, and I will get that dN by dT, which is 0, is v. And then we have to take the temperature derivative of this combination, g over lambda cubed, f 3/2 plus of z. Now, lambda cubed scales like 1 over t to the 3/2, so when I take a derivative of this, I will get 3/2 T to the 1/2, which I can again combine and write in this fashion, 1 over T, f 3/2 plus of z. Or then I take the implicit derivative that I have here, so just like there, I will get dz by dT at constant density times 1 over z. The derivative of f 3/2 will give me f 1/2 plus of z. Setting it to zero, we can immediately see that this combination, T over z dz by dT at constant density or number of particles, is minus 3/2, f 3/2 plus divided by f 1/2 plus of z. So what I need to do is to substitute this over here. And then when I want to calculate the various limiting behaviors as I start with the high temperature and approach Tc, all I need to know is that z will go to 1. f 1/2 of 1 is divergent, so this factor will go to 0. This factor will disappear, and so then these things will take nice, simple forms. Once you do that-- we will do this more correctly next time around-- we'll find that the heat capacity of the bosons in units of kb as a function of temperature has a discontinuity in slope at Tc of n. It approaches the classical result, which is 3/2, at high temperatures. At low temperatures, we said it is simply proportional to T to the 3/2, so I have this kind of behavior, just a simple T to the 3/2 curve.
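For T above Tc, the procedure just outlined (fix n lambda cubed over g, solve f 3/2 of z for z, then assemble the temperature derivatives) can be carried out numerically. The closed-form combination used below, C/(N k_B) = (15/4) f_{5/2}/f_{3/2} minus (9/4) f_{3/2}/f_{1/2}, is the standard result that these manipulations assemble into; it is quoted here rather than re-derived, and the chosen value of n lambda cubed over g is an arbitrary illustration. It reduces to 3/2 as z goes to 0 and to (15/4) zeta(5/2)/zeta(3/2), about 1.93, as z goes to 1, which is the cusp sketched at the end.

```python
# Heat capacity per particle of the ideal Bose gas for T > Tc, obtained by
# solving the degeneracy condition for z and using the standard combination
#   C/(N k_B) = (15/4) f_{5/2}(z)/f_{3/2}(z) - (9/4) f_{3/2}(z)/f_{1/2}(z).
import numpy as np
from scipy import integrate, special, optimize

def f_m(m, z):
    """Bosonic f_m(z), via the substitution x = u^2 to tame the origin."""
    integrand = lambda u: 2.0 * u**(2 * m - 1.0) / (np.exp(u**2) / z - 1.0)
    value, _ = integrate.quad(integrand, 0.0, 26.0)   # integrand ~ 0 beyond
    return value / special.gamma(m)

n_lambda3_over_g = 1.5       # assumed degeneracy, below zeta(3/2) ~ 2.612
z = optimize.brentq(lambda s: f_m(1.5, s) - n_lambda3_over_g, 1e-8, 1 - 1e-9)

c = 3.75 * f_m(2.5, z) / f_m(1.5, z) - 2.25 * f_m(1.5, z) / f_m(0.5, z)
print(z, c)                  # c sits between the classical 3/2 and ~1.93
```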
And one can show through arguments, such as the one that I showed you above, that the heat capacity is continuous but its derivative is discontinuous at Tc. So the behavior overall of the heat capacity of this Bose gas is something like this. So next time, we'll elucidate this a little bit better and go on and talk about experimental realizations of Bose-Einstein condensation. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 16_Interacting_Particles_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let's start. So last time, having dealt with the ideal gas as much as possible, we started with interacting systems. Let's say interacting gas. And the first topic that we did was what I called an approach towards interaction through the Cumulant expansion. The idea is that we certainly solved the problem where we had particles in a box and it was just a trivial system. Basically, the particles were independent of each other in the canonical ensemble. And things become interesting if you put interactions among all of the particles. Very soon, we will have a specific form of this interaction. But for the beginning, let's maintain it as general as possible. And the idea was that we are trying to calculate the partition function for a system that has a given temperature. The box has some volume V and there are N particles in it. And to do so, we have to integrate over the entirety of the phase space of N particles. So we have to integrate over all of the momenta and coordinates of e to the minus beta h. But we said that we are going to first of all, note that for identical particles, we can't tell apart the phase space if you were to make these N factorial permutations. And we also made it dimensionless by dividing by h to the 3n. And then we have e to the minus beta h, which has a part that is e to the minus beta sum over i pi squared over 2m, that depends on the momenta. And a part that depends on the interaction U. The integrals over the momenta, we can easily perform. They give us the result of Gaussians that we can express as 1 over lambda to the power of 3n, where we introduced this lambda to be h over root 2 pi m k t. And actually, sort of keeping in mind the result that we had for the case of the ideal gas, let's rewrite this as V over lambda cubed to the power of N. So I essentially divided and multiplied by V to the N so that this part becomes the partition function that I would have had if U was absent. And so then the remainder is the integral over all of the q's, but I divided by V. So basically, with each integration, I'm uniformly sampling all of the points inside the box with probability density 1 over V, of e to the minus beta U. And we said that this quantity we can regard as averaging e to the minus beta U with the zeroth-order probability distribution in which the particles, ideal-gas-like, are uniformly distributed. And we call this the average of e to the minus beta U, with the index 0 again to indicate that we are dealing with a uniform zeroth-order distribution with respect to which this average is taken. And then, this quantity we can write as Z0. And this exponential we can formally express as a sum over, let's say, l running from 0 to infinity of minus beta to the l divided by l factorial. Then, expanding this, the quantity that depends on the coordinates is U. So I could have U raised to the l power, this kind of average taken with respect to that. And immediately, my writing something like this suggests that what I'm after eventually is some kind of a perturbation theory. Because I know how to solve the problem where U is absent. That's the ideal gas problem.
Maybe we can start to move away from the ideal gas and calculate things perturbatively in some quantity. And exactly what that quantity is will become apparent shortly. At this time, it looks like it's an expansion in beta U having to be a small quantity. Now, of course, for calculating thermodynamic functions and behaviors, we don't really rely usually on Z, but log Z, which can be related to the free energy, for example. So taking the log of that expression, I have a term that is log of Z0. And a term that is the log of that sum. But we recognize that that sum, in some sense, is a generator of moments of this quantity U. And we have experienced that when we take the logarithm of a generator of moments, what we will get is a generator of Cumulants, now starting with l equal to 1, of minus beta to the power of l divided by l factorial, times U to the l, zeroth-order average, with a subscript c. Of course, this subscript c sort of captures all of the various subtractions that you have to make in order to go from a moment to a Cumulant. But that's quite general and something that we know essentially is relating log of the expansion to the expansion itself. But that's well-known. So presumably, I can perturbatively calculate these. And I will have, therefore, progressively better approximations to log Z in the presence of this interaction. Now at this point, we have to start thinking about a particular expression for U. So let's imagine that our U is a sum over pairwise interactions. So if I think about the particles, there are molecules that are in the gas in this room. Basically, the most important thing is when the two of them approach each other and I have to put a pairwise interaction between them of this form. In principle, I could add three-point and higher-order interactions. But for all intents and purposes, this should be enough. And then again, what I have here to evaluate, the first term would be minus beta, the average of the first power, and the next one will be beta squared over 2. For the first Cumulant, it is the same thing as the first moment. The second Cumulant is the thing that is the variance. So this would have been the average of U squared minus the square of the average of U, and then there will be higher orders. So let's calculate these first two moments explicitly for this potential. So the first term, U in this zeroth-order average, what is it? Well, that's my U. So I have to do a sum over i and j of V of qi minus qj. What does the averaging mean? It means I have to integrate this over all qi divided by V. Let's call this qk divided by V, and this is [INAUDIBLE]. OK, so let's say we look at the first term in this series which involves V of q1 minus q2. Let's say we pick that pair, q1 and q2. And explicitly, what I have here is integrals over q1, q2, qn, each one of them coming with a factor of V. Now, of course, it doesn't matter which pair I pick. Everything is symmetric. The weight that I start with uniformly is symmetric. So clearly, my choice of this pair was arbitrary and the answer when I do the sum is the answer for one pair times the number of pairs, which is N N minus 1 over 2. Now, over here I have integrals over q3 to qn, et cetera, that don't appear in the function that I'm integrating. So all of those integrals will give me factors of 1. And really, the only thing that I am going to be left with is these two integrals over q1 and q2. But even then, the function that I'm integrating is only a function of the relative coordinate that I can call q. And then I can, if I want, integrate over q1 and q.
The integral over q1 would also give me 1 when I divide by the volume. So the answer is going to be N N minus 1 over 2, which is the number of pairs, times the integral over the relative coordinate of the potential-- there is a factor of 1 over V that I can put out here. So you tell me what the pair potential is and I tell you that this is the first correction that you will get, once you multiply by minus beta, to log Z, from which you can calculate the partition functions. Yes. AUDIENCE: How did you turn the product of two differentials d q1 times d q2 into just one differential? PROFESSOR: OK, I have the integral d q1 d q2, some function of q1 minus q2. I change variables, keeping q1 as one of my variables and q [INAUDIBLE], replacing q2 with the relative coordinate q. So then I have the integration over q1. I have the integration over q2. Not q2, but q, f of q. And then everything's fine. All right, so that's the first term. Let's calculate the next term. The next term becomes somewhat interesting. I have to calculate this quantity as a Cumulant. It's the variance. U itself is a sum over pairs. So to get U squared, I have to essentially square this expression. So rather than having sum over one pair ij, I have the sum over two pairs ij and kl. And then what I need to calculate is an average that if I were to square this, I will get v of qi minus qj, v of qk minus ql, 0. This will be the average of U squared. But when I calculate the variance, I have to subtract from it the square of the average of U. So I have to subtract the average of V of qi minus qj times the average of V of qk minus ql. Fine. So this is just a rewriting of that expression in terms of the sum. Now, each one of these is N N minus 1 over 2 terms. And so this, when I sum over all possibilities, is the square of N N minus 1 over 2 possible terms that will occur here in the series that I've written. Now, let's group those terms that can occur in this as follows. One class of terms will be when the pair ij that I pick for the first sum is distinct from the pair kl. So a huge number of terms as I go through the pairs will have this possibility. OK, then what happens? Then when I calculate the average of qi minus qj, qk minus ql, which is this part, what I need to do is to do all of these integrals of this form over all q's. And then I have this V of qi minus qj V of qk minus ql. Now, my statement is that I have, let's say here, q1 and q2 and somewhere else q7 and q8. Or, q7 and q974. It doesn't matter. The point is that this integral that involves this pair of variables has nothing to do with the integral that involves the other pair of variables. So this answer is the same thing as the average of V of qi minus qj times the average of V of qk minus ql. Essentially, in the zeroth-order probability that we are using, the particles are completely independently exploring this space and these averages will independently rely on one pair and independently rely on another pair. And once this kind of factorization occurs, it's clear that this subtraction that I need to do for the variance will get rid of this term. So these pairs do not contribute. No contribution. If you like, this is this thing. All right, so let's do the next possible term, ij and il, with l not equal to j. So basically, I make the pairs have one point in common. So previously, I had a pair here, a pair here. Now I joined one of their points. So what do I get here? I will get for this average qi minus qj, qi minus ql.
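The result just obtained, that the first cumulant is the number of pairs times 1 over V times the integral of the pair potential, can be checked by brute force. The sketch below uses an assumed toy Gaussian attraction (not the potential discussed in lecture) and compares a Monte Carlo average over uniformly placed particles with the analytic expression; agreement is up to Monte Carlo noise and small boundary corrections, since the box is much larger than the range of the potential.

```python
# Check <U>_0 = [N(N-1)/2] * (1/V) * Int d^3q v(q) for uniformly distributed
# particles, with an assumed toy pair potential v(r) = -exp(-r^2).
import numpy as np

rng = np.random.default_rng(0)
N, L = 5, 20.0                          # particles and box size; range of v ~ 1
V = L**3

def v(r):
    return -np.exp(-r**2)               # toy attractive pair potential

# analytic:  Int d^3q (-exp(-q^2)) = -pi^(3/2)
analytic = N * (N - 1) / 2 * (-np.pi**1.5) / V

# Monte Carlo: average the total pair energy over uniform configurations
samples = 100000
q = rng.uniform(0.0, L, size=(samples, N, 3))
d = np.linalg.norm(q[:, :, None, :] - q[:, None, :, :], axis=-1)
i, j = np.triu_indices(N, k=1)
u_per_config = v(d[:, i, j]).sum(axis=1)

print(u_per_config.mean(), analytic)    # agree within a couple of percent
```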
You can see that essentially, the three integrals that I need to worry about are the integrals that involve these three indices. All the others will give me factors of 1. So this is going to be the integral over qi, qj, ql, divided by V. I have V of qi minus qj, V of qi minus ql. AUDIENCE: You switched [INAUDIBLE]. PROFESSOR: OK. I think that's fine. All right. Now, I can do the same thing that I did over here. I have three variables. I will pick qi to stay as one of my variables. I replace qj with this difference, q j i. I replace ql with its distance to i. And again, I can independently integrate over these three variables. So this, again, becomes the same thing as the average of V of qi minus qj times the average of V of qi minus ql. And again, there is no contribution. Now, for the first class it was obvious because this pair and this pair were completely distinct. This class is a little bit more subtle because I joined one of the points together, and then I used this and this as well as this point as independent variables. And I saw that the sum breaks into pieces. And you won't be surprised if no matter how complicated I make various more interactions over here, I can measure all of the coordinates with respect to the single point and the same thing would happen. This class of diagrams is called 1-particle reducible. And the first part had to do with distinct graphs. You didn't have to worry about them. But if I were to sort of convert this expression-- and we will do so shortly-- into graphs, it corresponds to graphs where there is a point from which everything else is hanging. And measuring coordinates with respect to that point will allow the average to break into pieces, and then to be removed through this subtraction. So this is a simple example of something that is more general. So at the level of this second-order term, the only type of thing that will survive is if the pairs are identical, ij and the other term is also the same pair ij. What do I get then? I get the average of V of qi minus qj squared minus the zeroth-order average of V of qi minus qj, squared. So it's the variance of a single one of these bond contributions. If I write that in terms of integrals, this becomes the integral d cubed q1, d cubed q2-- well, qi qj. I will have factors of V. Then, I would have V squared of qi minus qj for the first term and then the square of something like this for the second term. So putting everything together up to this order, what do I get? I will get log Z, which is log of Z0. Let me remind you, log of Z0 was N log of V over lambda cubed N. That's of a form such as this. And then I have these corrections. Note that both of the terms that have survived correspond to looking at one pair. So the corrections will be of the order of N N minus 1 over 2. Because in both cases, I really look at one pair. The first contribution was minus beta integral d cubed q v of q. There was a factor of 1 over V that I can take-- well, let's put the factor of 1 over V here for the time being. The next term will be beta squared over 2 because I am looking at the second-order term. And what do I have? I have the difference of the integral d cubed q over V of V squared, and the square of the integral of d cubed q over V of V. And presumably, there will be higher-order terms as we will discuss. Now, I'm interested in the limit of thermodynamics where N and V are large. And in that limit, what do I get? I will get log Z is log of Z0. And then I can see that these terms I can write as follows. N N minus 1 I will replace with N squared. The factors-- here I have a 1 over V.
Here, I have 1 over V squared. So this factor is smaller by an amount that is of order 1 over V in the large V limit. I will bring the factor of V outside, so I have something like this. And then, what do I have here? Actually, let's keep this in this form, minus beta integral d cubed q V plus beta squared over 2 integral d cubed q V squared [INAUDIBLE]. So the kinds of things that we are interested in and can measure are the energy of the system, but let's say we focus on pressure. And if you look through various things that we derived before, beta times the pressure you can get from taking a derivative of log Z with respect to V. We can express the log Z in terms of the free energy, and then the derivative of free energy with respect to volume will give you the pressure; with the various factors of beta [? and signs, ?] we will come up with this. So from the first term, N log V, we get our ideal gas result. Beta p is the same thing as density. And we see that we get a correction from here when I divide by-- when I take a derivative with respect to V of 1 over V, I will get minus 1 over V squared. So the next term would be the square of the density. And then, there is a series that we will encounter which depends on the interaction potential. Now, it turns out that so far I have calculated terms that relied on only two points, ij. And hence, they become at the end proportional to density squared. If I go further in my expansion, I will encounter things for which I will need three points. For example, i j k forming a triangle. And then, that would give me something that would be of the order of the density cubed. So somehow, I can also see that ultimately this series in perturbation theory, as we shall see, also more precisely can be organized in powers of density. Why is that important? Because typically, as we discussed right at the beginning, when you look at the pressure of a gas when it is dilute and N over V is very small, it is always ideal gas-like. As you make it more dense, you start to get corrections that you can express in powers of density. And the coefficients of that are called Virial coefficients. So in some sense, we have already started in calculating the second Virial coefficient, and there are higher-order Virial coefficients, which together give you the equation of state relating pressure to density by a power series. AUDIENCE: Question. PROFESSOR: Yes. AUDIENCE: The Virial coefficients, if they have integral over the whole volume, then they will be functions of volume, right? PROFESSOR: Yes. But imagine that I'm thinking about the gas in this room and the interaction of two oxygen molecules. So what I am saying is you pick an oxygen molecule. There is interaction and I have to integrate over where the other oxygen molecule is. And by the time it is a tiny, tiny bit [? away, ?] the interaction is 0. It doesn't really matter. AUDIENCE: If the characteristic volume of interaction is much smaller than volume-- PROFESSOR: Of the space. AUDIENCE: [INAUDIBLE], then the integrals just converge to [? constant. ?] PROFESSOR: That's right. So if you want to be even more precise, you will get corrections when your particles are close to the wall and uniformity, et cetera, is violated. So there will, actually, be corrections to, say, log Z that are not only proportional ultimately to number and volume, but also to the area of the enclosure and other [? subleading ?] factors. But in the thermodynamic limit, we ignore all of that. So a lot of that is resolved by this statement here.
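Collecting the pieces described in words above, the second-order result and the corresponding pressure correction can be written compactly as follows; this is just a transcription of the spoken formulas into symbols (with v the pair potential and n = N/V), keeping only the terms discussed so far.

```latex
\ln Z \;\simeq\; \ln Z_0 \;+\; \frac{N^2}{2V}\left[-\beta\!\int\! d^3q\; v(q)
\;+\;\frac{\beta^2}{2}\!\int\! d^3q\; v(q)^2 \;+\;\cdots\right],
\qquad
\beta P \;=\; n \;+\; \frac{n^2}{2}\!\int\! d^3q\left[\beta v(q)-\frac{\beta^2}{2}v(q)^2+\cdots\right]
\;+\;\mathcal{O}(n^3).
```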
Again, you won't be surprised that if I were to go ahead, then there will be a term in this series that would be beta cubed over 3 factorial integral over q of v cubed of q, et cetera. And that's actually very good because as I have written for you currently, this expression is totally useless. Why is it useless? Because let's think back about these two oxygen molecules in the gas in this room and what the potential of interaction between them would look like. Presumably, it is a function of the relative separation. We can call it r or q, it doesn't matter. And if you bring them closer than the typical size of these molecules, their potential will go to infinity. So basically, they don't want to come close to each other. If you go very far away, typically you have the van der Waals attraction between the particles. So out here, it is attractive. The potential is negative and falls off, typically, as 1 over r to the 6, related to the polarizabilities of the particles. And then, when you come to very short distances, the electronic clouds will overlap and the potential goes to infinity. And if you want to sort of think about numbers, typical scales that we have here-- let's say here, are presumably of the order of angstroms. And the typical depths of these potentials in units that make sense to statistical physics are of the order of 100 degrees Kelvin. That's why typical gases liquefy at a range of temperatures that is of the order of 100 degrees Kelvin. Now, if I want to take this potential and calculate this, I have trouble. Because the integral over here will give me infinity. So this expression is, in general, fine. But whenever you have perturbation theory, you start to evaluate things and you have to see whether the correction indeed is sufficiently small. And it seems we can't make it sufficiently small. Yes. AUDIENCE: If you're modeling interatomic or intermolecular potential as a Lennard-Jones potential, isn't the [INAUDIBLE] term at the-- or near the origin, a phenomenological choice? PROFESSOR: If you are using as a formula for this the Lennard-Jones potential, people phenomenologically write something like 1 over r to the 12th power, et cetera. But I don't have to assume that. AUDIENCE: Right. Because I've seen forms of the Lennard-Jones potential which are exponentially decaying at the origin. So it reaches a finite, albeit probably high value at the origin. Wouldn't that take care of the issue? PROFESSOR: If you believe in that kind of potential, yes. So what you are saying is that if I make this come to a finite value, I will be able to find the temperature that is sufficiently high compared to this, and then things would be fine. If we had that, then we would have fusion right here going on, right? So clearly, there is some better truth to a potential that is really very, very high compared to temperatures that we are looking at. So we don't have to worry about that. Well, we have to worry about this issue. OK? All right. So you say, well, how about the following-- let's say that I am something like the gas in this room and I am sufficiently dilute. I'm going to forget about these terms. But since this potential becomes essentially large, I can never ignore a particular term in this series. And I keep going, adding more and more terms in the series. If I sort of think back about the picture that I was generating before diagrammatically-- which again, we will clarify later on-- this term corresponds to taking a pair of points and a V that connects them.
And this term corresponds to two V's going between the same pair of points. And you say, well, there will be terms that will involve three V's, four V's, and they will contribute in the series in a manner that I can recognize. So why don't I add all of those terms together? And if you like, we can call the resulting object something else. What is that something else? Well, what I have done is I have minus beta V. Well, all of them-- so what is this? It is the integral d cubed q. I have minus beta V plus 1/2 beta squared V squared. The next term is going to be minus 1 over 3 factorial beta cubed V cubed because it will come from the third-order term in the expansion. And I can go on. And you say, well, obviously this whole thing came from the expansion of e to the minus beta V except that I don't have the 0 order term, 1. So this quantity that I will call f of q rather than V of q is obtained as e to the minus beta V of q minus 1. So if I were to, in fact, add up an infinite number of terms in the series, rather than having to integrate V, V squared, V cubed, each one of them divergent, I have to integrate this f of q. So let's see what this f of q-- or correspondingly, f of r-- looks like as a function of r. So in the range that I have my hard core and V is large and positive, this is going to give me 0. So the function is minus 1 down here. At exactly whatever point it is that the potential is 0, then f is also 0. So basically, I will come to the same 0. When the potential is negative, this will be larger than 1. My f will be positive. So presumably, it will have some kind of a peak around where this minimum is. And then at large distances, it goes back towards 0. So this is going to go back towards 1 minus 1, which is 0. So actually, this tail of the function is the same thing as minus beta V. And whereas I couldn't integrate any one term in this series, I can certainly integrate the sum over all positions and it will give me a number, which will depend on temperature, the properties of the potential, et cetera. And it is that number that will tell me what the density squared term in the expression for the pressure is and what the second Virial coefficient is. Yes. AUDIENCE: Is this basically an excluded volume of sorts? PROFESSOR: Yeah. So this part, if I sent it to infinity, corresponds to an excluded volume. So if you want a really excluded volume, this would be minus 1 up to some point. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah. But I'm talking about the very general case, so I would say that excluded volume can be very easily captured within this [INAUDIBLE]. All right. So we want to sort of follow up from here. But since you will have a problem set, I will mention one other thing. That this reorganization of the series that I made here is appropriate to the limit of low densities, where I would have a nice expansion in powers of density. The problem that you will deal with has to do with plasmas, where the interaction range is very large. And you already saw something along those lines when we had the Vlasov equation as opposed to the Boltzmann equation. There was a regime where you had to reorganize the series in different ways. In that case, it was the BBGKY hierarchy, depending on whether you were looking at the dense limit or a dilute limit. So this is the analog of where the Boltzmann equation would have been inappropriate. The analog of the regime where you are dense and something like the Vlasov equation would be appropriate. So there is some kind of interaction range d, and the relevant parameter is n d cubed being much larger than 1.
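The resummation just described is easy to see numerically. A minimal sketch, assuming a concrete Lennard-Jones form v(r) = 4[(1/r)^12 - (1/r)^6] in reduced units (sigma = epsilon = k_B T = 1), shows that the integral of the Mayer function f = exp(-beta v) - 1 is finite, while the repulsive core alone makes the integral of v itself blow up as the short-distance cutoff is removed.

```python
# Integrating f(r) = exp(-beta*v(r)) - 1 versus integrating v(r) directly,
# with an assumed Lennard-Jones potential in reduced units.
import numpy as np
from scipy import integrate

beta = 1.0

def v(r):
    return 4.0 * (r**-12 - r**-6)

def f(r):
    return np.exp(-beta * v(r)) - 1.0

# Int d^3r f(r) = 4*pi Int r^2 f(r) dr : finite, because f -> -1 inside the
# hard core and f -> -beta*v ~ r^-6 at large r.
value, _ = integrate.quad(lambda r: 4.0 * np.pi * r**2 * f(r), 1e-3, 50.0)
print(value)

# The repulsive core alone gives Int_cutoff 4*pi*r^2 * 4*r^-12 dr
#   = 16*pi/(9*cutoff^9), which diverges as the cutoff shrinks.
for cutoff in (1e-1, 1e-2, 1e-3):
    print(cutoff, 16.0 * np.pi / (9.0 * cutoff**9))
```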
Then it turns out that rather than looking at diagrams that have the fewest number of points-- in this case, 2-- you have to look at diagrams that have the largest number of points. Because each additional point will give you an additional factor of N over V. You can see that these factors of N N minus 1 over 2 came from the number of points that I had selected. So if I had to select three points, I would have N cubed and the corresponding factors of V. So the more points that I have in my series, the more powers of the density I will have. So in that case, it turns out that rather than looking at two points and all lines between them, you organize things in terms of what are called ring diagrams, which are things such as this, this, this. So basically, for a given number of lines, the most number of points is obtained by creating a ring. And so one of the problems that you have is to sort of sum these ring diagrams and see what happens. But it seems like what I'm telling you here is that while we calculated order of density squared, maybe I want to calculate order of density cubed. And, when faced with potentials of this form, it makes much more sense, rather than arranging the series in powers of the potential, to arrange it in powers of this quantity e to the minus beta V minus 1. So let's go through a route in which I directly expand everything in terms of this quantity. And that's the second type of expansion that I will call the cluster expansion. So once more, what we want to calculate is the partition function. It depends on temperature, volume, number of particles, obtained by integrating over all degrees of freedom. The integration over momenta we saw is very simple. Ultimately, it will give us this 1 over N factorial lambda to the 3N. This time, I won't pull out the factor of V to the N at this point. And then I have the integration over all of these q's. I did not divide by V because I did not multiply by V to the N. OK. And then I have e to the minus beta U, but this is my U. U is the sum of various terms. So e to the minus beta U will be a product over all, let's say-- let's call this k pairs ij. e to the minus beta V of qi minus qj. So basically, the only thing that I did was e to the minus beta-- this quantity I wrote as a product of contributions from the different pairs. Now, this quantity I have in the line above. This is also a relative position. Clearly, I can think of this as 1 plus the f that would correspond to the distance between qi and qj. And again, I can either write it as f of qi minus qj, or, to simplify my notation, write it as f i j. So f i j means-- so let me maybe write it here. f i j I have defined to be e to the minus beta V of qi minus qj minus 1. So what do I expect? I saw that the first correction to something that was interesting to me had one power of f in it that I had to ultimately integrate. So maybe what I should do is I should start organizing things in terms of how many f's I have. So an expansion in powers of f. So what's going to happen here? I would have 1 over N factorial lambda to the 3N. I have a product of all of the integrations. And I have all of these factors of 1 plus f 1 2, and so on. So basically, it's really a product of 1 plus f 1 2, 1 plus f 1 3, 1 plus each one of these things. So the thing that has the least number of f's is when I pick the 1 from all of these brackets that are multiplying each other. The next term is to pick one f from one pair and all the others would be 1.
The next term would be sum over ij kl f i j f k l, and then there will be diagrams that would progressively have more and more factors of f. Now, what I will do is to represent the various terms that I generate in this series diagrammatically. So first of all, I have to integrate over N points. So I put points 1 through N. So I have 1, 2, 3, 4. It doesn't matter how I put them-- N. And then, this 1 would correspond to just this diagram. The next thing is I put a line for f i j. So let's say that I picked in this series the term that was f 2 3. I will represent that by a line that goes between 2 and 3. Later on in the picture, maybe I will pick a term that is f 2 3 f 4 5. So some second-order term could be something like this. Maybe later on in the series, I will pick a term that is connecting 3 and 4. So this would be a third-order term in the series. Maybe later on in the series, I have some other pair over here. So any one of the huge number of terms that I have generated here has one diagram that is appearing over here. Now, the next step is, of course, I have to integrate over all of the q's. I mean, these diagrams, what they represent is essentially some f of q1 minus q3, some f of whatever. And I have to integrate over all of these q's. So the contribution to the partition function that I will get would be some value associated with one of these graphs obtained by doing the integrations over q. Now, the thing that I want you to notice is that the contribution of one of these graphs or diagrams is the product of contributions of its linked clusters. So here I have one particular diagram that is represented here. Well, let's say 2, 3, 4, 5 are linked together and separate from whatever this is. Let's say this is 7, 9. So when I do the integrations over q2, q3, q4, q5, I don't really rely on any of the other things. So my integration here will break off into the integration that involves this, the integration that involves this, as well as integrations over all of these points that are not connected to anybody. All of those will simply give me a factor of v. So is everybody happy with this simple statement that the contribution-- if I think of these linked clusters as some collection of points that are linked together, the value that I would get for this diagram would be the contribution-- product of the contributions that I would have for this. So yes? AUDIENCE: Can you explain [INAUDIBLE]? PROFESSOR: OK. So let's pick a particular thing. Let's say we do something like this, where this is number 7, and this is 4, 5, 6. This is a particular thing. So what do I have to do? There's also a whole bunch of points-- 1, 2, 3. What I am instructed to do is to integrate over q1, q2, q3, et cetera. Now, the integral over q1 is just the integral of q1. So this integral by itself will give me V. Any number of points that are by themselves will also give me V. So the first term that becomes nontrivial is when I have to do the integration over q4, q5, q6 of f 4 5, f 5 6, f 6 4. I don't know what this is, but there is something that will come from here. Then, later on I have to integrate over 7 and something else. So then I have the integral over q7, q-- let's call this 8. f of 7 8. So this is something else. So the overall value of this term in the perturbation theory is the product of contributions from a huge number of one clusters. Here, I have a two cluster. Here, I have a three cluster. Maybe I will have more things.
Also, notice that if I have more of these two clusters, the result for them would be exactly the same. So if I have lots of these pairs, the same way that I had lots of single points-- and it became V to the number of single points-- it will become whatever that y is to the number of pair points, x, whatever that is, to the number of triplets that I have in triangles. So this is how this object is built up. Yes? AUDIENCE: How are we deciding how many triplets and how many pairs we actually have? PROFESSOR: At this point, I haven't told you. So there is a multiplicity that we have to calculate. Yes. So at this point, all I'm saying is given this, the answer is the product of contribution. There is a multiplicity factor-- you're right. We have to calculate that. OK? Anything else? All right. So I mean, think of this as taking a number and writing it as the product of its prime factors, factorizing into primes. So you sort of immediately know that somehow prime factors, the prime numbers, are more important. Because everything else you can write as prime numbers. So clearly, what is also buried in heart of calculating this partition function is these clusters. So let me define the analog of prime numbers as follows. I define bl to be the sum over contributions of all linked clusters of l-points. So let's go term by term. b1 corresponds essentially to the one point by itself in the diagram that I was drawing down here. And corresponds to integrating over the coordinate that goes all over the space. And hence, the volume is the same thing as the volume V. b2 is a cluster of two points. So it is this. And so it is the integral over q1 and q2, which are two endpoints of this, of f of q1 minus q2. And this we have seen already many times. It's the same thing as volume, the integral over q f of q. It is the thing that was related to our second Virial coefficient. Now, this is very important for b3. I note that I underlined here "all." And this is for later convenience. It seems that I'm essentially pushing complexity from one place to another. So there are a number of diagrams that have three things linked together. This is one of them. Let's say, think of 1, 2, 3 connected to each other. But then, I have diagrams. At this stage, I don't really care that this is one particle irreducible. Here, I have no constraints on one particle irreducibility. And in fact, this comes in three varieties because the bond that is missing can be one of three. So once I pick the triplets of three points, sum over all clusters that involve these three points is this, in particular with this one, appearing three times. Yes. AUDIENCE: Do we still count all the diagrams when particles are identical? PROFESSOR: Yes. Because here, at this stage, there is an N factorial out here. All I'm really doing is transcribing a mathematical formula. The mathematical formula says you have to do a bunch of integrations. And I am just following translating that mathematical formula to diagrams. At this stage, we forget. This has nothing to do with this N factorial or identity. It's just a transcription of the mathematics So at this stage, don't try to correct my conceptual parts but try to make sure that my mathematical steps are correct. OK? Yes. AUDIENCE: I didn't understand what that missing links represents. PROFESSOR: OK. So at the end of this story, what I want to write down is that the value of some particular term in this series is related to product of contributions. I described to you how productive contributions comes along. 
And each contribution is a bunch of points that are connected together. So what I want to do, in order to make my algebra easier later on, is to say that-- let's say these three points are part of a cluster. They are connected together somehow. So they will be giving me some factor. Now, they are connected if they are connected like this, like this, like this, or like that. So basically, any one of these is a way of connecting these three points so that they will make a contribution together. Yes. AUDIENCE: So the one with all three bonds comes with a different [INAUDIBLE] in the series than the ones with only two bonds, right? PROFESSOR: Yes. Now, there is a reason, ultimately, why I want to group all of them together. Because there was a question before about multiplicity factors. I can write down a closed form, nice expression for the multiplicity factor if I group them together. Otherwise, I have to also worry that there are three of these and there is one of them, et cetera. There is an additional layer of multiplicity. And I want to separate those two layers of multiplicity. So here, the thing that I have is I have these N points. I want to say that the eventual result for N points comes from clusters that involve 1's, clusters that involve pairs, and clusters that involve triplets. Now, I realize suddenly that if I made the multiplicity count here, then for this triplet, I could have put all of them together like this or like that. And the next order of business, when I am thinking about four objects, I can connect them together into a cluster in multiple ways. And there is a separate way of calculating the relative multiplicity of what comes outside and the multiplicity of this quartet with respect to the entirety. So you have to think about this a little bit. So I'll proceed with this. So this mathematically would be the integral over q1, q2, q3 of f 1 2, f 2 3, f 3 1, plus, say, f 1 2, f 2 3, plus f 1 3, f 3 2, plus f 2 3, f 3 1, something like this. Well, 1, 2. 2 is repeated. 3 is repeated. I should repeat 1. OK. Of course, the last three are the same thing, but this is the expression for this. Now, b4 would be a huge complex of things. It would have a diagram that is of this form. It would have a number of diagrams that are of this form. It would have diagrams that are of this form, diagrams that are like this, and so forth. Basically, even within the choice of things that have four points in a cluster, there is a huge number when you go to further and further [? down. ?] OK, so maybe this statement will now clarify things. So what I have is that the partition function is 1 over N factorial lambda to the 3N. And then it is all of the terms that are obtained by summing this series. And so I have to look at all possible ways that I can break N points into these clusters. Suppose I created a situation where I have clusters of size l and then I have nl of them. So I saw that the contribution that I would get from the 1 clusters was essentially the product of their individual contributions. It was V to the number of points that are not connected together. So I have to take b1, raise it to the power of n1, which is the number of 1 clusters. Then, I have to do the same thing for b2, b3, et cetera. And then I have to make sure that I have looked at all ways of partitioning the N points into these clusters. So the sum over l of l times nl has to add up to N. And I have to sum over all nl's that are consistent with this constraint. But then there is this issue of the multiplicity.
Given that I chose some particular set of these, if I were to reorganize the numbers, I would get the same contribution. So let's say we pick this diagram that has precisely a contribution that is V, which is b1, to the number of these one points, then b2 to the power of the number of these pairs, et cetera. But then I can take these labels-- 1, 2, 3, 4, 5, 6-- that I assigned to these things and permute them. If I permute them, I will get exactly the same contribution. So there is a large number of diagrams that have exactly this same contribution because of this permutation. There was a question back there. AUDIENCE: You've already answered it. PROFESSOR: All right. Actually, this is an interesting thing-- I don't know how many of you recognize this. So essentially, I have to take a huge number, N, and break it into a number of 1 clusters, 2 clusters, et cetera. The number of ways of doing that is called a partition of an integer. And last century, the [INAUDIBLE] calculated what that number is. So there is a [INAUDIBLE] theorem associated with that. And actually, later in the course I will give you a problem to calculate the asymptotic version of the [INAUDIBLE]. But this is a different story. So given that you had made one of these partitions of this integer, what is this degeneracy factor? So let me tell you what this degeneracy factor is. So given this choice of nl's, you say-- well, the first thing is what I told you. For a particular graph, the value is independent of how these numbers were assigned. So I will permute those numbers in all possible ways and I will get the same thing. So that will give me N factorial. It's the number of permutations. But then, I have over-counted things because within, let's say, a 2 cluster-- if I count 7, 8 or 8, 7 they are the same thing. And by multiplying by this N factorial, I have over-counted that. So I have to divide by 2 for every one of my 2 clusters. So I have to divide by 2 to the power of n2. For the 3 clusters, I have the permutation of everything that is inside. So it is 3 factorial here. So what I have here is I have to divide by the over-counting, which is l factorial-- labels within a cluster-- raised to the power of the number of clusters that I have that are subject to this. There is another thing, that this pair is 100, 101 and this pair is 7, 8. Exchanging that pair of numbers with that pair of numbers is also part of the symmetries that I have now over-counted. So that has to be taken into account because the number of these 2 clusters here is 2. Here it is actually 3. I have to divide by 3 factorial. In general, I have to divide by nl factorial. OK. Again, this is one of those things where the best advice that I can give you is to draw five or six points so your N is small. And draw some diagrams. And convince yourself that what I told you here rapidly is correct. And it is correct. Yes. AUDIENCE: Can you just clarify which is in the [INAUDIBLE]? PROFESSOR: OK. So the N factorial is outside. And then-- so maybe write it better. For each l, I have to multiply nl factorial and l factorial to the power of nl. So this is my partition function in terms of these clusters. Even with all of these definitions that I have up there, it is maybe not so obvious an answer. And one of the reasons it is kind of an obscure answer is because here I have to do a constrained sum. That is, I have all of these variables. Let's call them n1, n2, n3, et cetera. Each one of them can go from 0, 1, 2, 3.
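In symbols, the constrained sum just described reads (a sketch; the prime on the sum denotes the constraint):
\[
Z_N = \frac{1}{N!\,\lambda^{3N}} \sum_{\{n_l\}}^{\prime} N! \prod_{l} \frac{b_l^{\,n_l}}{n_l!\,\left(l!\right)^{n_l}},
\qquad \text{subject to } \sum_l l\, n_l = N.
\]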
But they are all linked to each other because their sum is kind of restricted by some total value l, by this constraint. And constrained sums are hard to do. But in statistical physics, we know how to gets rid of constrained sums. The way that we do that is we essentially allow this N to go all the way. So if I say, let's make-- it's very hard for me to do this n1 from 0 to something, n2 from 0 to something with all of this constrained. My life would be very easier if I could independently have n1 go take any value, n2 take any value, n3 take any value. But if I do that essentially for each choice of n1, n2, n3, I have shifted the value of big N. But there is an ensemble that I know which has possibly any value of N, and that's the grand canonical. So rather than looking at the partition function, I say I will look at the grand partition function Q that is obtained by summing over all N. Can take any value from 0 to infinity. I have e to the beta mu N times the partition function that is for N particles. So that's the definition of how you would go from the canonical where you have fixed N to grand canonical where you have fixed the chemical potential mu. So let's apply this sum over there. I have a sum over N, but I said that if I allowed the nl's to vary independently, it is equivalent to varying that N, recognizing that this n is sum over l l nl. That's the constraint. So that's just the first term. I have rewritten this sum that was constrained and this sum over the total number as independent sums over the nl's. Got rid of the constraint. Now, I write W. Oh, OK. The partition function. Now, write the partition function. I have 1 over N factorial. I have lambda cubed raised to this power. So actually, let me put this in this fashion. So both e to the beta mu over lambda cubed that is raised to that power. OK, so that's that part. The sum I have gotten rid of. I notice that my W has an N factorial. So this is the N factorial that came from here. But then there is an N factorial that is up here from the W. So the two of them will cancel each other, N factorial. And then I have a product over all clusters. Part of it is this bl to the power of nl, and then I have the contribution that is nl factorial l factorial to the power of nl. So this part cancels. For each l, I can now independently sum over the values of nl can be anything, 0 to infinity. And what do I have? I have bl to the power of nl. I have division by nl factorial. I have l factorial to the power of nl. And then I have e to the beta mu over lambda cubed raised to the power of l nl. I recognize that each one of these terms in the sum is 1 over nl factorial something raised to the power of nl, which is the definition of the exponential. So my q is a product over all l's. Once I exponentiate, I have e to the beta mu over lambda cubed raised to the power of l. And then I have bl divided by l factorial. Or, log of Q, which is the quantity that I'm interested, is obtained by summing over all clusters 1 to infinity e to the beta mu over lambda cubed raised to the power of l bl divided by l factorial. So what does this tell us, which is kind of nice and fundamentally important? You see, we started at the beginning over here with a huge number of graphs. These graphs could be organizing all kinds of clusters. And they would give us either the partition function or summing over N, the grand partition function. But when we take the log, I get only single connect objects. 
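Carried out in symbols, the steps just described are (a sketch, with λ the thermal wavelength):
\[
\mathcal{Q} = \sum_{N=0}^{\infty} e^{\beta\mu N} Z_N
= \prod_{l} \sum_{n_l=0}^{\infty} \frac{1}{n_l!}
\left[\left(\frac{e^{\beta\mu}}{\lambda^{3}}\right)^{\!l} \frac{b_l}{l!}\right]^{n_l}
\;\;\Longrightarrow\;\;
\ln \mathcal{Q} = \sum_{l=1}^{\infty} \left(\frac{e^{\beta\mu}}{\lambda^{3}}\right)^{\!l} \frac{b_l}{l!},
\]
so that only the single linked clusters survive in the log.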
And this is something that you had already seen as the connection that we have between moments and Cumulants. So the way that we got Cumulants was to look at the expansion of the log. So the function itself was the generator of all moments and we took the log. And graphically, we presented that as getting the moments by putting together all kinds of clusters of points that corresponded to Cumulants. And this is, again, a representation of the same thing. That is, in the log you have essentially individual contributions. Once you exponentiate, you get multiple contributions. Now, the other thing is that if I am thinking about the gas, then log of Q is minus beta G. And G is E minus TS minus mu N. But for a gas that has extensive properties, E is TS plus mu N minus PV. So this is the same thing as beta PV. So this quantity that we have calculated here is, in fact, related to the pressure directly through this formula. And the important part or the important observation is that it says that it should be proportional to volume. You can see that each one of my b's that I calculated here will have one free integral. If you like, its center of mass. You can go over the entire volume. So all of my b's are indeed proportional to volume. And you can see what disaster it would have been if there was a term here that was not just a linked cluster but a product of two linked clusters. Then I would have something that would go like V squared. It's not allowed. So essentially, this linked cluster nature is also related to extensivity in this sense. So in some sense, what I have established here-- again, related to that-- is that clearly all of my bl's are proportional to volume. And I can define something that I will call bl bar, which is to divide the thing by the volume. And then what we have established is that the pressure of this interacting gas has an expansion, beta P, which takes this nice, simple form, sum over l of e to the beta mu divided by lambda cubed, bl bar-- the intensive part of the contribution of these clusters-- divided by l factorial. This, of course, runs from 1 to [INAUDIBLE]. Yes. AUDIENCE: [INAUDIBLE] is raised to the l? PROFESSOR: Who's asking the question? AUDIENCE: The term in the parentheses is raised to the l? PROFESSOR: Exactly. Thank you very much. Yes. AUDIENCE: [INAUDIBLE]. But they don't, actually. Could that be true, for example [INAUDIBLE]? PROFESSOR: OK. So the triangle, I have it here. Integral d q1, d q2, d q3, f 1 2, f 2 3, f 3 1. Let's write it explicitly. It is f of q1 minus q2, f of q2 minus q3, f of q3 minus q1. I call this vector x. I call that vector y. This vector is x plus y. Yes. AUDIENCE: Does the contribution from [INAUDIBLE]? PROFESSOR: OK. So you say that I have here an expansion for the pressure. I had given you previously an expansion for pressure that I said is sensible, which is this Virial expansion, which is powers of density. This is not an expansion in powers of density. So whether or not the terms in this expansion become smaller or larger will depend on density in some indirect way. So my next task, which I will do next lecture, is within this ensemble I have told you what the chemical potential is. Once I know what the chemical potential is, I can calculate the number of particles as d log Q by d beta mu. And hence, I can calculate what the density is. And so the answer for this will also be a series in powers of e to the beta mu.
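With the identification ln Q = βPV and the definition b̄_l = b_l / V just introduced, the two series in question are (a sketch):
\[
\beta P = \sum_{l=1}^{\infty} \left(\frac{e^{\beta\mu}}{\lambda^{3}}\right)^{\!l} \frac{\bar b_l}{l!},
\qquad
n = \frac{\langle N\rangle}{V} = \frac{1}{V}\frac{\partial \ln \mathcal{Q}}{\partial(\beta\mu)}
= \sum_{l=1}^{\infty} l \left(\frac{e^{\beta\mu}}{\lambda^{3}}\right)^{\!l} \frac{\bar b_l}{l!}.
\]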
And what then I will do is I will combine these two series to get an expansion for pressure in powers of density, and then identify the convergence of this series and all of that via that procedure. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 2_Thermodynamics_Part_2.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So I'll briefly review what we were doing last time. We are gradually developing the structure that we need for thermodynamics. And the first thing that we did was to state that it wholly depends on the existence of equilibrim-- is the first and foremost thing that we need to know, is that the objects that we are studying in thermodynamics are in equilibrium. And once they're in equilibrium, we can characterize them by a set of properties. So there's a list of coordinates that I would have the list here. And we saw that since we will also need to think in terms of mechanical work, a good place to start-- and since we are familiar with mechanical coordinates-- is to enumerate here the mechanical coordinates that we would describe the system with. And we said that we could, in principal, characterize or divide those coordinates in the form of some generalized displacements and the corresponding conjugate forces. And typically, the displacements are extensive proportional to the size of your system. The forces are intensive. And the work is related to the product of those two in some fashion. One of the things that I will do throughout is to emphasize that one of the main materials that we are going to look at, and illustrate the concept with, is the gas. And in particular, the version of the gas that is the ideal gas. But forgetting the ideal part for the time being, the coordinates that we use to describe the gas with is V and pressure. And actually, we saw that in order to ultimately be consistent with how we will define work, it is good to think of minus pressure as the appropriate force. And so basically, then if I want to characterize the equilibrium state of a gas, I need, for example, to specify where I lie in this P-V plane. OK so that's the first thing to know. But then we also realize that mechanical coordinates are insufficient to describe the properties of the systems that we encounter. For example, if we are thinking in terms of a spring and we pull on the spring-- so we are looking at the relationship between displacement and the force-- the result that we get how easy it is to pull on this depends on something else-- depends on temperature. So somehow, we need to include these properties that have to do with heat and heat transfer in order to have a complete description of systems that are in equilibrium. And so going along that direction, we started with the Zeroth Law. And the statement of the Zeroth Law was the transitivity of equilibrium-- if two objects are in equilibrium with each other they are-- two objects are in equilibrium with a third object, they're also in equilibrium with each other. And we saw that what that means, is that once I am at some point in this coordinate space, I know that there exists some function. I don't know its functional form. This empirical temperature and being in equilibrium means that there is this important function of the coordinates of the first system equal to some other functional or form potentially, depending on the coordinates of the second system. And this empirical temperature has to be the same. 
We said that this is kind of like being on a scale, and having a balance between different things, and then they are in balance with each other. So we know that there's something like mass. Now, when we describe this in the context of the gas, we noted that for all gases in the limit that they are dilute, the isotherms-- the places that would correspond in this case to always being in contact with an object that is at a fixed temperature-- form these hyperbolas. So the isotherms in the dilute limit are of the form PV is proportional to temperature. We can call that theta. We can use that, even, to define the ideal gas temperature scale, provided that we choose some proportionality coefficient, which was selected so that the temperature at which water, ice, and steam coexist has a particular value. OK, so that's the specific property of the ideal gas as far as its temperature is concerned. The next thing that we did was to look at how changes are carried out in a system, a particular system. And we found that if you take the system from one location in its coordinate space to another location, and do it in a manner that does not involve the exchange of heat-- so basically, you idealize this thing and isolate it from any other source-- then the amount of work-- and we can then calculate mechanical work-- the amount of mechanical work that you do is only a function of the initial and final states, and does not depend on how you supply the work that produces this, et cetera. And so that immediately reminded us of conservation of energy, and a suggestion that there exists this function that depends on where you are in this coordinate representation that gives the total energy content of the system. What was important was that, of course, we want to relax the condition of making changes over which there is no heat exchange to the system. And more generally then, we said that the changes that we obtain in energy are the sum of two components-- one is the amount of work that you do on it, which we know how to measure from our various mechanical prescriptions. But if there is a shortfall between the change in the internal energy function that we had constructed previously and the work that we compute for a particular process, we say that in that process, there was some heat that went into the system. And it is, again, important to state that this dE really only depends on the initial and final state. So this depends on state. While these things-- that's why we put a bar on them-- depend on the particular path that we take. Now again, if you ask, well, what can we say in this context for the ideal gas? And there is the very nice experiment by Joule where he said, let's isolate our gas, let's say, into two chambers. These are kind of rigid balls, very nicely isolated from the environment. Initially, the gas is confined completely onto one side. So let's say that the initial state is over here, with this volume. And then we release this. And so the gas goes and expands. And we wait sufficiently. Ultimately, we settle to a place, let's say over here-- the final state. Now, it's important to say that I don't know the intermediate state of this transformation. So I can't say where the system is somewhere between t equal to zero, when I open the valve, and t going to infinity, when the whole thing has settled. The zero and infinity points I know. In between, I can't really have an idea of where to put my system. It is out of equilibrium by construction.
But this is a process that by definition I follow the path along which there was no input either in the form of work or heat into the system. It was isolated. So all I know for sure is that however amount of energy I had initially is what I have finally. The observation of Joule was that if we do this for the case of a dilute gas, I start with some initial temperature Ti. I go to some final temperature. But that final temperature is the same as the initial temperature. And that is actually why I put both of these points on the same isotherm that I had drawn before. So although I don't know anything about the in between points, I know that this is the property of the system. And essentially, we know, therefore, that although in principle I can write E as a function of P and V, it must be only a function of the product PV, because it only-- P changes in the process. V changes in the process. But PV remains constant. Or if you like, E is really a function of T, which is related to the product. And sorry about being kind of inconsistent with using T and theta for temperature. OK, any questions? So this is kind of recap of what we were doing. All right, now, it would be good if we can construct this function somehow-- at least theoretically, even. And you would say, well, if I had this spring, and I didn't know whether it was a Hookean or a nonlinear spring, what I could do is I could pull on it, calculate what the force is, and the displacement, and integrate f dx and use the formula that quite generally, the mechanical work is sum over i, let's say Ji dxi. Now, of course you know that I can only do this if I pull it on this spring sufficiently slowly, so that the work that I do goes into changing the internal energy of the spring. And then that would be a contribution to dE. If I do that rapidly, then it will send this spring into oscillation. I have no idea, again, where the system is in the intermediate stages. So I can use this kind of formula if, in this type of diagram that indicates the state of the system, I proceed sufficiently slowly so that I can put points that corresponding to all intermediate states. And so that's for things that are sufficiently slow and close to equilibrium that we call quasistatic. And so in principal, I guess rather than opening this immediately, I could have put a slow piston here, change its position slowly, so that at each stage I can calculate where I am on the PV diagram. And I could have calculated what the work is, and used this formula to calculate the contribution of the work to the change in internal energy. OK, now I said that it would be ideal-- and what we would like to do, ultimately-- is to have a similar formula for dQ. And if we look by analogy, we see that for W, you have J's times the X's. The forces are the things that tell us whether systems are in equilibrium with each other. So two-- if I have, let's say, a piston separating two parts of the gas, the piston will not move one direction or the other if the pressure from one direction is the same as the pressure from the other direction. So mechanical equilibrium tells us that J's are the same. And it suggests that if I want to write a similar type of formula for heat, that the thing that I should put here is temperature or empirical temperature, which is, again, a measure of thermal equilibrium. And then the question that would immediately jump onto you is, what is the conjugate that I have put for the displacement? And you know ultimately that it is going to end up to be entropy. 
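The bookkeeping described above can be summarized as follows (a sketch, with δ marking the path-dependent quantities):
\[
dE = \delta Q + \delta W, \qquad
\delta W_{\text{quasistatic}} = \sum_i J_i\, dx_i
\quad (\text{for the gas, } \delta W = -P\,dV),
\]
with dE an exact differential of the state, while δQ and δW separately depend on the path; the form the construction is heading toward is δQ = T dS, with the conjugate displacement S still to be built.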
And so the next part of the story is to build that. And we're going to do that through the Second Law of Thermodynamics. Any questions? Yes. AUDIENCE: I mean, this is kind of a semantics question, but what always confuses me in thermodynamics when we talk about quasistatic is, I mean, if it isn't quasistatic, introductory formation of [INAUDIBLE] doesn't really give a means to address it. So we have to rely on the fact that it's quasistatic. And the only way we could check it is to assume that what we're already saying is true will be true. So I guess my question is, in the context of a spring, like, there's obviously-- there exists a physical threshold at which-- that's not infinitely slow, and which you can get a reasonable measurement if you're doing an experiment. PROFESSOR: So in all of these things, I would sort of resort to some kind of a limiting procedure. So you could, for example, pull the spring at some velocity, and figure out what the amount of work is, and gradually reduce that velocity, and hope that the formula-- the values-- that you get are converging to something as you go to low velocity. I don't think there's actually a threshold velocity, kind of implied that if you are slower than something, then it would work. I think it really only works in the limit of zero velocity. And so if you like, that's kind of procedure you would use to define derivatives, right? So you can't say what's the meaning of velocity in physics. So, but it is an idealization. I said at the beginning that the very-- actually, I would say that more fundamentally, the thing that is really an idealization is this adiabatic walls. Even the concept of equilibrium-- what do we see around us that we are absolutely 100% sure in equilibrium? Nothing is. The fate of the universe is that all of our atoms and molecules are going to separate out and go to infinity, if it expands forever. Right? OK, Second Law-- all right, so again, we want to sort of set our minds back to where these laws were developed, which is in the 18th century. And the thing that was of importance at that time was sort of Industrial Revolution-- get things moving. You need energy, and you would get energy from coal, and so you need to convert heat to work. And this conversion of heat to work is important to you. So typical thing that you have for your coal engine is you have something that is a source of heat. So there's, for example, a fire that you're burning. You have some kind of a machine that has components, such as a piston, that is being heated. And during this process, you extract a certain amount of heat from your fire. And then typically, you realize that you will be releasing a certain amount of energy back to the atmosphere, causing pollution. So there is a coal sink. But in the process, you will be extracting a certain amount of work. And what you want to do is to make the best use of this set up. So you're concerned with efficiency, which is defined to be the amount of work that you're going to extract out of a certain amount of heat that you expend in your coal, or whatever. And again, based on conservation of energy, we expect that this W to be less than QH by an amount that you are setting to the exhaust. And because of that, this eta has to be less than or equal to 1, in principal. Now, another device that is using the same rough principles is the same thing going in reverse, which is a refrigerator. 
So in order to cool this room, you use electricity or some other means of doing work on a machine whose job it is to extract heat out of the room. So let's say this is our room. We want to extract the heat. But of course, that heat has to go somewhere. So this has to have some kind of exhaust putting this again to the atmosphere or somewhere. So how good is this thing performing? So there is a measure of the performance-- figure of merit, if you like, for this-- which we will label by omega, which is how much heat you were able to remove from the room given some amount of work or energy that was put in. And again, because of conservation, this W is going to be QH minus QC. Now, this particular number, which is a useful measure of how well your refrigerator works, has no particular constraint. It can be less than 1. It can be larger than 1. So again, you want it to be as large as possible, and in principle, it can be 200-- whatever. So clearly, already these are very much idealizations of some complicated process that is going on. And the thing that is interesting is that you can take a look at how well you are able to do these processes, and then make this equation that we wrote up here to be an exact differential form. So how you do that is an interesting thing that I think is worth repeating and exploring. And essentially, you do that by formulating things that are possible or not possible through the Second Law, and then doing some mathematical manipulations. And we will use two formulations of the Second Law due to Kevin and Clausius. So I will indicate them either by K for Kelvin-- so Kelvin's statement of the Second Law is the following-- no process is possible whose sole result is complete conversion of heat to work. So what that is, is a statement about how good an engine you can make. He says that you cannot make an engine that takes a certain amount of heat and completely makes it to work without having any waste QC. So basically, it says that there is no ideal engine, and that your eta has to be less than 1. There is a second variant of this, which is due to Clausius-- says that no process is possible whose sole result is transfer of heat from cold to hot. So if you want to make an ideal refrigerator, what you want to do is to extract more and more QC for less and less W. So the ideal version would be where essentially, the limit of zero work, you would still be able to transfer heat from the room to the outside and get an ideal refrigerator. And Clausius says that it's not possible. So these are two statements of the Second Law. There are other versions and statements of the Second Law, but these are kind of most close to this historical perspective. And then the question is, well, which one of them do you want to use? And since I will be kind of switching between using one or the other, it would be good if I can show that they really are the same statement, and I'm really not making use of two different statements when I use one or the other alternative. So what I really want to do is to show that these are really, logically, the same statement. And essentially, what that means is that the two statements are equivalent. And two statements are equivalent to each other if one of them being incorrect-- which I've indicated by K with a bar on it-- implies that the other one is correct, and simultaneously, the other way around. And since this does not take more than a few minutes, and is a nice exercise, I think it's worth showing this. 
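Before the equivalence argument, it may help to record the two figures of merit and what each statement forbids (a sketch; all heats are taken as magnitudes):
\[
\eta = \frac{W}{Q_H} = 1 - \frac{Q_C}{Q_H} \le 1,
\qquad
\omega = \frac{Q_C}{W} = \frac{Q_C}{Q_H - Q_C},
\]
so Kelvin's statement forbids \(\eta = 1\) (an engine with \(Q_C = 0\)), and Clausius' statement forbids \(\omega \to \infty\) (a refrigerator with \(W = 0\)).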
So let's say that somebody came to you and said that I have constructed a machine that violates Kelvin. We'll call that K bar. So what does that machine do? What that machine does is it converts heat, Q, completely to work, without needing to exhaust anything. So this W is this Q, and there's no exhaust. He says, OK, I have this machine. I say, OK, what I will do is I will use that work to run a refrigerator. So I connect that to a refrigerator. And that work will be used to take heat, QC, out of some room and exhaust it to the atmosphere. And I will choose the exhaust location to be the same as the source of heat that I had for my anti-Kelvin machine. OK? So then if you look at how the sum of these two machines is operating together, from the perspective of looking at the sum of the two machines, the work is just an internal operation. You don't really care. So as far as the net thing is concerned, what you're seeing is that there is no work involved externally. It's all internal. But what is happening is that you're pulling QC out of the room, and exhausting, presumably, what you have here-- QH minus Q-- to the other side. This, again, by conservation of energy must be the same as QC, right? So if you have a machine that violates Kelvin, you connect it to a refrigerator, and what you have is a machine that transfers heat from a colder to a hotter body, and violates Clausius. So that's the first part we have established. And to establish the second part, let's say that we have a machine that violates Clausius. And this machine takes a certain amount of heat, Q, from the room, and deposits it to the hotter outside. What we will do immediately is to take some of this heat that has been generated by this machine and use it in a regular engine to create some amount of work. And then this has to take an amount of heat QH, and deposit an amount of heat, QC, at essentially the temperature of the room from which the anti-Clausius machine is operating. And then we will run this engine several times, or fractions of times, et cetera, and the other machine several times, ensuring that QC is the same as Q. That is, the engine puts out as much dumped heat as this anti-Clausius machine is extracting from the room. Then if you look at the combined system, what we see is that the combined system is doing the following-- this was equivalent to this. This is equivalent to a combined system that does a certain amount of work. So there is a W that is coming out. There is no heat exchange with the room, because we ensured that these two heats are completely balanced. So you can sort of think of them as something that is internal to this bigger machine. And so what is happening is that there's a certain amount of heat, QH minus Q, that is taken here, that has to be equal to W, by conservation of energy. And we have converted heat entirely to work, violating Kelvin's statement. So basically, we have proved this. Any questions? OK, one more step-- there are actually a couple more steps left before we get to construct our entropy function. And the next step is the Carnot Engine. So this is yet another one of these idealizations-- again, a theoretical construct, which you can try to approach in some limiting procedure. A Carnot engine is any engine with the following properties-- that is, one, reversible, two, operates in a cycle, and three, all inputs-- all heat inputs and outputs-- are at two temperatures. OK, so let's go through the conditions.
Reversible implies that at any stage, if I change the directions of inputs and outputs, it will just go backward. So can go forward, backward, by reversing inputs, outputs. And so again, think of the case of our spring that we were pushing back and forth. In order for it to have this property of reversibility, I should not only do it pulling down and up sufficiently slowly so that at each stage, I can define its tension, but also as far as its connection to the outside world, I have to do it frictionlessly, so that if I were to reverse the direction of the force that I have, it will either go forward or backward, reversing the path that it has. If there is any friction that is involved, then I cannot do that. So essentially, this reversibility is some kind of a generalization of the frictionless condition that we would use in mechanics. The cycle part is easy. Essentially, it says that the start and end points are the same. And it kind of harks back to this statement that is part of either one of these formulations of the Second Law-- that is the net result or sole result is something. So we want the engine to essentially be back to where it was, so that when we are looking at changes that are involved, we say, OK, the engine is not part of the equation. It is where it was at the beginning and at the end of the cycle. Now there's something that I didn't emphasize, but I was-- and actually, I was not really using, but you may have thought I was using-- is that in all of these pictures that I drew, I said there is a hot place and there's a cold place. Now, I didn't specify exactly what these are, and whether this corresponds to a particular temperature. Now, for the case of the Carnot engine, I have to be precise. I have to say that this is always at one temperature. This is always at the other temperature. So these Carnot engines are defined implicitly with two temperature labels corresponding to the hot part and the cold part. In principle, everything else that I talked about could have had a range of temperatures. But the Carnot engine says, OK, just two temperatures. OK, so for the Carnot engine, if I can now draw a diagram that would be kind of straight line version of what I had before. And the Carnot engine will be extracting a certain amount of heat, QH, from here, QC from here, doing a certain amount of work. And since it is reversible, I could actually run it backward. I could have W. I could have QH, QC, and it would work as a refrigerator. Now, it is good to have at least one example-- that such a theoretical construct as a Carnot engine can be made based on things that we have so far. And we can be explicit, and show that for the case of the ideal gas. So we said that a gas I can represent in equivalent by a point in the pressure-volume diagram. So let's say that what is the working substance of this Carnot engine is a gas. And part of this whole story is that it should be working between two isotherms-- the TH and TC. But we've already established that we can have isotherms for the ideal gas that correspond to these curves that correspond to PV as constant. So I could really choose one of the isotherms that we discussed before corresponds to TH. One corresponds to TC. And I can imagine taking a gas whose initial state is point A, and expanding it while maintaining it at this isotherm TH, and ending up at some point B. So as I'm expanding this, let's say we have put this gas. I have this piston that contains this gas. 
My TH and TC correspond to two baths-- if you like, lakes at temperature TH, lakes at temperature TC. I took my piston, put it into the hot bath, make it expand up to some point. And in the process, there is certainly a certain amount of heat that will go into the system, QH. So I have taken QH out of the hot source. Now, the next thing that I want to do is to take this and put it into the colder lake. But the process is that I have to do it in a way that there is no heat exchange. So I have to find the path that goes from the hot bath to the cold bath-- to point C, some point C-- without any heat involved. We'll have to ask how that's possible, and what's the structure of that path. Once I'm in the colder bath, then I can expand my gas up to some point D. And clearly, I have to choose this point D sufficiently precisely so that then, when I take my gas from the cool to the original hot position, I end up at precisely the location that I started, so that I have completed a cycle. So it's necessary that I should be able to construct these two paths, BC and DA, that correspond to no heat exchange. And somehow I'm sure that I can complete this cycle. So the paths BC and DA are called adiabats, or adiabatic paths, in the sense that this is what you would get if you put your system in a container with these adiabatic walls that allow no exchange of heat. Now clearly, I also want to do this sufficiently slowly so that, infinitesimally, I can draw this path as a series of points. And so I don't want to do what was done in the Joule expansion experiment, and suddenly expand the whole thing. I have to do it sufficiently slowly so that the conditions of this Carnot engine are satisfied. So along these paths, what do I know? By definition dQ is 0. And dQ is dE minus dW along those paths. And since I'm performing these paths sufficiently slowly, I can write the dW as minus PdV. Now, what you will show in the problem set is that I don't need to make any assumption about the functional form of the dependence of energy on P and V. Just knowing that I'm describing a system that is characterized by two coordinates, P and V, is sufficient for you to construct these curves. It turns out that if you have more than two coordinates-- three or four coordinates-- at this time you don't know that you will be able to do so. But we're sticking with this. For simplicity, I'm going to choose this-- the form that I have for the ideal gas, which I know is really a function only of the product PV. This is what we had used before. And again, so as to simplify the algebra, I will use the form that is applicable to a monoatomic gas. And this-- I really have no reason to do this, except that I want to be able to get through this in two minutes with the algebra. So I will write the energy to be 3/2 PV. So then for that particular choice, what I have here is d of 3/2 PV plus PdV. And that's 3/2 PdV plus 3/2 VdP plus PdV, which is 5/2 PdV plus 3/2 VdP is 0. And you can rearrange that easily to dP over P plus 5/3 dV over V is 0. And this is simply the derivative of log of PV to the power of 5/3. And since this derivative is 0, you know that along the curves that correspond to no heat exchange, you have something like PV to the power of gamma-- in this case, gamma being 5/3-- that is some constant. So after you have done your expansion, all you need is to, in this diagram, go continuously along the path that corresponds to this formula that we have indicated.
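Written out, the adiabat condition for this choice (E = 3/2 PV, taken only to simplify the algebra) is:
\[
\delta Q = dE + P\,dV = d\!\left(\tfrac{3}{2}PV\right) + P\,dV
= \tfrac{5}{2}\,P\,dV + \tfrac{3}{2}\,V\,dP = 0
\;\;\Longrightarrow\;\;
\frac{dP}{P} + \frac{5}{3}\frac{dV}{V} = 0
\;\;\Longrightarrow\;\;
PV^{5/3} = \text{constant},
\]
i.e. PV^γ = constant with γ = 5/3 in this monoatomic case.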
And this formula describes a path that is distinct from the isotherm. So you start from somewhere. You start along some path. You are guaranteed to hit the other isotherm at some point. You start from the starting point. You go backward. You're guaranteed to hit the isotherm at some point. And then you join those two points, and you've completed your cycle. And so you know that, at least for the case of the ideal gas, or actually for any two-coordinate system, you can construct a Carnot cycle. That is, you put in your material and go through a cycle that is reversible, with all heat exchanges at two temperatures only. OK, now why is this idealization even useful? Well, we said that this Carnot engine is stamped by really two temperatures-- TH and TC. And the following theorem is important, which says that of all engines operating only between TH and TC, the Carnot engine is most efficient. So we are going to move away from this idealization that we've made, but so far, only in one small step. That is, we are still going to assume that all heat exchanges are done between TH and TC. OK, and so let's say that we have some kind of an engine which does not have to be reversible. So because it does not have to be reversible, it's not a Carnot engine, but otherwise, it operates completely between these two points. So it takes, let's say, heat QH prime, QC prime, and does a certain amount of work W. And so somebody comes and says, I've constructed this engine that is actually very good. It's better than your Carnot engine. I say, no, that's not possible. And the way that I will prove it to you is as follows-- I will take the output of your engine, connect it to a Carnot engine, and since the Carnot engine can be run backward, it will act as a refrigerator and extract heat and deposit heat precisely at the two temperatures at which your engine is operating. So if somebody looks at the combination of these two, what do they see? The thing that they see for the combination is that there is some entity that is operating between TH and TC. And there is some internal amount of work that is going on, and you don't care about that. But there is a certain amount of heat, QH prime minus QH, that becomes QC prime minus QC. Now here we invoke Clausius. Clausius says that if you see heat going between two temperatures, it could have only gone in the direction hotter to colder, which means that this amount of heat has to be positive. QH prime should be greater than QH. So this is why, by Clausius, we know that this has to be the case. So then what I can do is I can divide both expressions by W. And then invert this thing. And if I invert it, the inequality gets inverted. So I get that W over QH prime is less than W over QH. And the left-hand side is the efficiency of my non-Carnot engine. And the right-hand side is the efficiency of my Carnot engine. And what I've shown is that the efficiency of the non-Carnot engine has to be less than or equal to the efficiency of the Carnot engine. OK, now, the next step is the following-- that all Carnot engines operating between TH and TC have the same efficiency. And the statement is, suppose I have two different Carnot engines-- Carnot engine one and Carnot engine two. I can use one to run the other one backward, and I would get that the efficiency of Carnot engine one is less than or equal to Carnot engine two, or the other way around-- that two is less than or equal to one.
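The bookkeeping of this argument can be summarized as follows (a sketch; primes denote the non-Carnot engine):
\[
Q_H' - Q_H = Q_C' - Q_C \ge 0 \;\;(\text{Clausius})
\quad\Longrightarrow\quad
\eta' = \frac{W}{Q_H'} \le \frac{W}{Q_H} = \eta_{\text{Carnot}},
\]
and applying the same construction to two Carnot engines, each run backward against the other, gives both \(\eta_1 \le \eta_2\) and \(\eta_2 \le \eta_1\).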
And hence, the only possibility is that they are the same, which is actually interesting, because I told you that Carnot engines are stamped by two temperatures, TH and TC. And what it says is that they all have an efficiency that is a function, therefore, of these two temperatures only. It cannot depend on whether I constructed my Carnot engine out of an ideal gas. I constructed it out of a rubber band that I was pulling or pushing, or any other substance in the world that you can think about, as long as that substance I can construct some kind of a Carnot cycle, irrespective of what the coordinate systems are, what the isotherms look like, et cetera. Just working between two temperatures and two isotherms tells me that this reversible cycle will have this efficiency relating the amount of work and the amount of heat that these exchange as you go around. OK? Now, that's important because previously we wanted to define our temperature. And we had to rely on some property of the gas. We said OK, empirically we observe that for the gas, you have this relationship that when it is dilute, PV is proportional to T and we use that property of the gas to construct our ideal gas temperature scale. Well, that's OK, but it relies on some observations additionally to everything else that we had. Whereas now we have the means to define temperatures irrespective of any material. This is much more of a universal quantity. It' doesn't depend on any substance, any particular structure. So it's a much better way to rely for constructing a temperature scale. So in some sense, at least theoretically, we are going to abandon our previous temperature scale based on properties of the ideal gas, and reconstruct the temperature scale based on this. Any questions? OK. So this is what will be called the thermodynamic temperature scale. And what it amounts to is that somehow I use the efficiency between two temperatures or vice versa-- use that there are two temperatures. Let's say one of them is a reference temperature. I derive the temperature of the other one by the maximum efficiency of all engines that can operate between the reference temperature and the temperature that I wish to measure. It's T. But that means I-- before doing that, I need to know at least something, or some kind of a postulate about the structure of this function of two variables. And I cannot write any arbitrary function. It has to satisfy certain properties and symmetries. And I can see what those properties are by putting a couple of these engines in series. So let's imagine that we have a structure where I have the highest temperature T1, and intermediate temperature T2, and the lowest temperature T3. And I put one Carnot engine to operate between these two temperatures, and one to operate between these temperatures. And the first one is going to take heat that I will called Q1. And release a heat that I will call Q2, in the process doing a certain amount of work that I will call W between 1 and 2. Now what I will do is I will use the entirety of that heat, Q2, to run the second Carnot engine to temperature T3, reducing here the amount of heat, Q3, and doing a certain amount of work-- W23. Clearly what I want to do is to say that when I look at the combination of these things, what happens at temperature 2 is an intermediate. And the whole thing is equivalent to a Carnot engine operating between T1 and T3. 
Again, the whole thing is reversible, operates between two temperatures, and since each one of them can be made to go back to where they started, is a cycle. So the whole thing is really equivalent to another Carnot engine that takes Q1, deposits Q3, does a certain amount of work, W13, which is the sum of W12 plus W23. So clearly, these two different perspectives on the same operation will constrain some-- give some constraint between the forms of the functions that describe the efficiencies of Carnot engines. So let's follow that mathematically. OK, so the first one-- what do we have? By conservation of energy, I have that Q2 is-- so this is-- let me be precise. This is Carnot engine 1-- tells me that Q2 is Q1 minus W12. All right, so how much heat I have here is the difference between this and the amount of work that I did-- conservation of energy. W12 is the efficiency times Q1. That was how it was defined. So it is 1 minus the efficiency operating between T1 and T2, times Q1. What do we know about Carnot engine number 2? Carnot engine number 2 says that I can look at Q3 as being the difference between Q2 minus the work that is done over here. The work is related to the heat through multiplying by the efficiency that is operating between T2 and T3. And if I substitute for Q2 from the first equation, what do I get? I will get Q3 being Q1, 1 minus efficiency T1, T2, multiplying by the second bracket, that is 1 minus efficiency T2, T3. But I can also look at the composite Carnot engine-- that is, the sum total of them represented by the diagram on the right. And what that states-- just writing the whole thing the same way-- is that this Q3 is the same as Q1 minus this amount of W13, which is the same thing as Q1, 1 minus the efficiency between T1 and T3. So suddenly you see that I have two equations that relate Q3 and Q1-- essentially two formulations of this ratio. And what that says is that the efficiency functions calculated between pairs of temperatures, when we look at a triplet of them, have to be related by 1 minus efficiency between T1 and T3 is 1 minus efficiency between T1 and T2, times 1 minus efficiency between T2 and T3. So when we are constructing our temperature scale based on properties of this efficiency function, we had better choose an efficiency function that satisfies this equality. And very roughly again, where did this type of equality come from? It originated in the fact that this 1 minus eta was the ratio of two Q's. This was the ratio of Q2 over Q1, and then the other one was Q3 over Q2, and you would multiply them, and you would get Q3 over Q1, et cetera. And that gives you a hint as to how you should construct this function to satisfy the property that you want. If I write the function as the ratio of some function of T2 divided by some function of T1, it would cancel in precisely the same way that, when you were multiplying Q2's and Q1's, the cancellations occur. OK? So what I need to do is to postulate that this is of this form. And then actually, I'm free to choose any functional form for F that I want. So here we need to make a convention. And the convention that we make-- so this is-- this was not required-- is that this functional form is really linear. So this defines for you the thermodynamic temperature scale. Yes. AUDIENCE: Yes, the part, the step right before the convention, where you wrote F of T2 over F of T1, is this also an assumption, or is it a necessity? PROFESSOR: I'm, let's say, 99.9% sure that it's a mathematical necessity.
I can't-- at this stage, think of precisely how to make that-- actually, yes, I can make that rigor. So essentially think of writing this in the following way-- 1 minus eta of T1 and T2 equals to 1 minus eta of T1 and T3, 1 minus eta of T2 and T3. OK? The left-hand side depends on T1 and T2. The right-hand side is the ratio of two functional forms involving T1 and T2 precisely as you have over here, as long as you regard T3 to be some arbitrary parameter or constant. OK? So that's the proof. All right, unfortunately, it's like doing magic. Once you reveal the trick, it becomes not so interesting. Yes. AUDIENCE: Does that mean more precisely if we assumed that T2 and T1 using F are empirical temperatures, and we define the different temperature to be F [INAUDIBLE] 2, something like that? PROFESSOR: OK, what I wanted to say next maybe answers to that. So why don't I give that answer first, and then come back to you? Is that so far, we have defined two temperatures. There is the thermodynamic temperature scale based on these efficiencies and there is one other statement, which is that this only defines the ratios. So I need to also pick a reference temperature. And for reference temperature, I pick the temperature of the coexistence of ice, steam, water, to be 273.16 degrees K, which is what I have for the ideal gas scale. Because if I don't do that, I only know things up to some proportionality constant. If I say that I pick that reference temperature, then in principle what I could do is, if somebody gives me a bath, I can run a Carnot engine between that bath of unknown temperature and this reference point, calculate the efficiency, 1 minus the efficiency, use this ratio-- 273.16-- and I have the temperature of my new object. So we've completed that thermodynamic temperature scale. If I understood correctly the question that was posed, is, well, what about the empirical temperature? And I was going to answer something different, which is not really what you were asking. So maybe you can repeat your question one more time. AUDIENCE: If the, all of our computations where we assume that instead of T's, there are some thetas. PROFESSOR: Right. AUDIENCE: And which means that we use the [INAUDIBLE] at last we define the thermodynamic temperature, it seems to have more precise. PROFESSOR: That's fine. So you're saying that what I did with the Zeroth Law was to state that for somebody that exists in thermal equilibrium with a bath, I can characterize it with the theta. And I can go through the whole argument that I had over here by constructing, let's say, Carnot engines that operate between empirical temperatures defined as theta H and theta C. And nowhere in the process have I given you a number for what theta H and theta C is up to this point. And at this point, I defined efficiency, which I have established is only a function of the empirical temperature operating between two isotherms, as some way through this process and this definition, giving an actual numerical value. So that's fine. Yes. AUDIENCE: So should you also define temperature [INAUDIBLE] with ideal gas and infinite expansion? PROFESSOR: Yes. AUDIENCE: And doesn't it define what convention for effective we need to pick here? So for all the definitions to be consistent with each other? PROFESSOR: OK, so now that becomes a different question. So I think the first question was, let's not define any temperature scale up to this point. And this is the first time that we define a temperature scale. 
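Collecting the last few steps, the thermodynamic temperature scale amounts to the following (a sketch):
\[
1 - \eta(T_1, T_2) = \frac{Q_2}{Q_1} = \frac{F(T_2)}{F(T_1)},
\qquad \text{convention: } F(T) = T
\;\;\Longrightarrow\;\;
\eta(T_H, T_C) = 1 - \frac{T_C}{T_H},
\]
so that an unknown temperature is obtained from a Carnot engine run against the reference bath via T = 273.16 K times the ratio of the heats exchanged at the two temperatures.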
Now, your question is the one that I wanted to originally answer, so thank you for the question, which is that I actually did define for you a temperature scale through some property of the ideal gas. And indeed, that property of the ideal gas I implicitly used here in the shape of the isotherms that I had and constructing this energy functional in this fashion. Now you say well, OK, you defined temperature two different ways. Are they consistent with each other? And you will have a problem set that will ask the following question-- let's run an ideal gas Carnot cycle between ideal gas temperatures theta H and theta C. OK? Now you can-- and I have drawn for you the form of this Carnot cycle. Using PdV, you can calculate what amount of work goes into this at different stages. And the net work would be the area of this cycle. So W would give you the area of the cycle. You can calculate what QH is. And you can calculate the efficiency of this as a function theta H and theta C. And what you will find, if you do things correctly, is that using all of these definitions for the shapes of these isotherms, you can calculate what the efficiency is. And the answer for your efficiency will come out to be theta H minus theta C divided by theta H, which is precisely what you would have expected based on the Carnot cycle. So then you can establish that the ratio of theta H to theta C defined through the ideal gas temperature scale is the same as the ratio of the temperatures if you had defined through the thermodynamic. So the two scales, once you set the same point for both of them, become identical. Other questions? OK, so we went through all of this in order to ultimately get back to the story that I was constructing at the beginning of the lecture, which was that I'd like to have the form ultimately for my energy. I want to construct the energy. And if I was looking only at mechanical systems, and there was no temperature in the world, I would construct it as sum over i Ji dxi. But I know that that's not enough, because there are processes by which I can change the state by not doing any mechanical work-- go from one point in the diagram to another point in the diagram. So there must be something else. Now we said that it kind of makes a lot of sense that something else should be something like a temperature. And before that, really, the only thing that we had that was a form of temperature was the ideal gas temperature, or the empirical temperature that we had defined through the Zeroth Law. And neither of them had any nice connection to heat. Through this Carnot engines, et cetera you have established some kind of a connection between heats, temperatures, et cetera. And that's why this thermodynamic temperature scale is useful. It is independent of any material. And it's really the T that I should put here. And so the next question is, what should I put here? And that will also come through the Second Law. I will develop that mostly the next time around, but I'll give you in the next two or three minutes a preview. So the statement that we had was that the efficiency of any engine is less than the Carnot engine, right? So let's imagine that we have some kind of a exchange process going between TH and TC. And there is some engine-- I don't know what that engine is-- that takes heat QH here, QC here, and does a certain amount of work. Now I know that W over QH is less than or equal to 1 minus TH over TC. And W is really the same thing as QH minus QC. 
So actually, I could also have written this as 1 minus QC over QH is less than 1 minus TC over TC. I can eliminate the 1, and rearrange things, yes? AUDIENCE: It's TC over TH. PROFESSOR: 1 minus TC over TH, thank you very much. Yes, and if you ever forget-- not that I forgot-- I miswrote-- but something to remember is that ultimately, I was kind of hinting this-- if you ever forget Q's are proportional to T's. So what I have is that ideally, QH will be proportional to TH. QC will be proportional to TC. So the difference between them would be proportional to the differences. Also, that means that if I were to rearrange this slightly, what I would get is that QH over TH plus minus QC over TC has to be negative. So this is just a rearrangement of the whole thing. But this rearrangement-- I wrote this as minus QC so as to look at things from the perspective of the engine. So the engine is something that goes through a cycle. As part of that cycle, it does some work. But what this expression has to do is the heat that goes into the system. So I can regard minus QC as the heat that goes into the engine coming from a reservoir of temperature TC, QH going through the engine, coming from a reservoir of temperature TH. And you have a relation such as this because the efficiency of any type of engine has to be less than this efficiency of the Carnot engine that is stamped by the corresponding temperatures. OK, now this is kind of limited. And in order to be able to construct a general formula, we need to be able to make statements that's are relevant to arbitrary complex cycles and complex behaviors. So what we will start next time is to prove something that ultimately will allow us to quantify entropy. It's called Clausius' Theorem, which states the following-- imagine some kind of a generalization of your engine that takes place in some multidimensional space. So rather than really thinking about an engine, I want to think about some substance. This was an ideal gas run through some nice cycle. But I want to think about some substance that is described by multiple coordinate system. I take the system from some point A, and the only requirement that I place on it is that ultimately, I do something, and I come back to A. So it is a cycle. What I don't even require is that this is a quasistatic process, so that the intermediate stages are well-defined in this coordinate space-- OK, so for any arbitrary cyclic transformation. OK, now this transformation-- we'll do is that at various stages-- just like that simple example-- we'll do work on the environment, take work from the environment. But most importantly from our perspective, there will be heat input into the system at various stages of the cycle. And I'm going to look at it from the perspective of what goes into the system, just like in this engine. So sometimes maybe this dQ is negative-- means that the system is really releasing heat, just like this engine was releasing the heat. Now, Clausius' Theorem says that if you integrate all around the cycle, these elements of heat that you input to the system throughout various stages of the cyclic transformation, and divide them by some T. And this is kind of something that needs definition. And S is supposed to parametrize the cycle. Let's say S goes from 0 to 1 as you go across the cycle. Generalizing that is negative. Next time, we will see if this system is non-equilibrium, what exactly I mean by this T of S. 
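In symbols, the statement previewed here, and the entropy it will lead to, are (a sketch; the reversible qualifier on the second integral is part of what will be spelled out next time):
\[
\oint \frac{\delta Q}{T} \le 0
\quad\text{(Clausius' theorem, any cyclic process)},
\qquad
S(B) - S(A) = \int_A^B \frac{\delta Q_{\text{rev}}}{T},
\quad\text{so that}\quad \delta Q_{\text{rev}} = T\,dS.
\]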
And we will see, somehow, that once I define this, by doing part of this, I can actually define an entropy function, the integral from A to B of dQ over T, and complete the definition of what needs to be put next to the T here in order to give you an expression for the energy. OK. |
MIT_8333_Statistical_Mechanics_I_Statistical_Mechanics_of_Particles_Fall_2013 | 18_Interacting_Particles_Part_4.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So today's topic is liquid gas condensation and throughout this course I have already mentioned two different perspectives on the same phenomenon, transition between liquid and gas. One perspective was when we looked at the phase diagram even in the first lecture I mentioned that if you look at something like water as a function of pressure and temperature it can exist in three different phases. There is the gas phase, which you have at high temperatures and low pressures. At low temperatures and high pressures you have the liquid phase. And of course at very low temperatures you also have the possibility of the solid phase. And our concern at some point was to think about the location of the coexistence of the three phases of that was set as the basis of the temperature, 273.16. Actually what you're going to focus now is not this part of the phase diagram but the portion that consists of the transition between liquid and gas phases. And this coexistence line actually terminates at what is called the critical point that we will talk about more today at the particular value of Tc and Pc. An equivalent perspective that we have looked at that and I just want to make sure that you have both of these in mind and know the relationship between them is to look at isotherms. Basically we also looked at cases where we looked at isotherms of pressure versus volume. And this statement was that if we look at the system at high enough temperatures and low enough pressures so let's say pick this temperature and scan along a line such as this, it would be equivalent over here to have some kind of an isotherm that is kind of potentially a distorted version of an ideal gas hyperbola. Now if we were to look at another isotherm that corresponds to this scan over here then what would happen is that it will cross this line of the coexistence, part of it would fall in the liquid phase, part of it would fall in the gas phase. And we draw the corresponding isotherm, it would look something like this. There would be a portion that would correspond to being in the liquid. There would be a portion that would correspond to being in the gas. And this line, this point where you hit this line would correspond to the coexistence. Essentially you could change the volume of a container and at high volumes, you would start with entirety gas. At the same pressure and temperature you could squeeze it and some of the gas would get converted to liquid and you would have a coexistence of gas and liquid until you squeezed it sufficiently, still maintaining the same pressure and temperature until your container was fully liquid. So that's a different type of isotherm. And what we discussed was that basically there is a coexistence boundary that separates the first and the second types of isotherms. And presumably in between there is some trajectory that would correspond to basically being exactly at the boundary and it would look something like this. This would be the trajectory that you would have at Tc. 
The location Pc is the same, of course, so basically I can carry this out so there is the same Pc that would occur here. And there would be some particular Vc but that depends on the amount of material that I have. OK? So we are going to try to understand what is happening here and in particular we note that suddenly we have to deal with cases where there are singularities in our thermodynamic parameters. There's some thermodynamic parameter that I'm scanning as I go across the system. It will not be varying continuously. It has these kinds of discontinuities in it. And how did discontinuities appear and how can we account for these phase transitions given the formalisms that we have developed for studying thermodynamic functions. And in particular let's, for example, start with the canonical prescription where I state, let's say that I have volume and temperature of a fixed number of particles and I want to figure out thermodynamic properties in this perspective. What I need to calculate is a partition function and we've seen that the partition function is obtained by integrating over all degrees of freedom which are basically the coordinates and momentum of particles that make up this gas so I have to do d cubed p i, d cubed q i, divide by h cubed, divide by n factorial because of the way that we've been looking at things, of energy so I have e to the minus beta h that has a part that is from the kinetic energy which I can integrate immediately. And then it has a part that is from the potential energy of the interactions among all of these particles. OK? So somehow if I could do these integrations-- we already did them for an ideal gas and nothing special happened but presumably if I can do that for the case of an interacting gas, buried within it would be the properties of this phase transition and the singularities, et cetera. Question is how does that happen? So last time we tried to do shortcuts. We started with calculating a first derivative calculation in this potential and then try to guess things and maybe that was not so satisfactory. So today we'll take another approach to calculating this partition function where the approximations and assumptions are more clearly stated and we can see what happens. So I want to calculate this partition function. OK? So part of that partition function that depends on the momentum I can very easily take care of. That gives me these factors of 1 over lambda. Again as usual my lambda is h over 2 pi m k T and then I have to do the integrations over all of the coordinates, all of these coordinate integrations and the potential that I'm going to be thinking about, so again in my U will be something like sum over pairs V of q i minus q j, where the typical form of this V as a function of the separation that I'm going to look at has presumably a part that is hard-core and a part that is attract. Something like this. OK? So given that potential what are the kinds of configurations I expect to happen in my system? Well, let's see. Let's try to make a diagram. I have a huge number of particles that don't come very close to each other, there's a hard-core repulsion, so maybe I can think of them as having some kind of a size-- marbles, et cetera, and they are distributed so that they cannot come close to each other. More than that are zero and then I have to sum over all configurations that are compatible where they are not crossing each other and calculate e to the minus beta u. So let's try to do some kind of an approximation to this u. 
I claim that I can write this U as follows-- I can write it as one-half, basically this i less than j, I can write as one-half all i not equal to j, so that's where the one-half comes from. I will write it as one-half but not in this form, as an integral d cubed r d cubed r prime n of r n of r prime v of r minus r prime, where n of r is a sum over all particles asking whether they are at location r. So I can pick some particular position here-- let's call it r-- ask whether or not there is a particle there, and construct the density by summing over all the particles. You can convince yourself that if I were to substitute this back over here, the integrals over r and r prime can be done and they set r and r prime respectively to some q i and q j, and I will get back the sum that I had before. OK? So what is that thing doing? Essentially it says pick some point, r, and then look at some other point, r prime, ask whether there are particles at those points, and then sum over all pairs of points r and r prime. So I changed my perspective. Rather than calculating the energy by looking at particles, I essentially look at parts of space, ask how many particles there are. Well, it kind of makes sense if I coarse-grained this a little bit that the density should be more or less the same in every single box. So I make the assumption of uniform density-- I shouldn't say assumption, let's call it an approximation of uniform density in which I replace this n of r by its average value, which is the number of particles per unit volume, N over V. OK? Then my U-- I will take the n's outside. I will have one-half n squared integral d cubed r d cubed r prime v of r minus r prime. Again it really is a function of the relative distance so I can integrate over the center of mass if you like to get one factor of volume, so I have one-half n squared V and then I have an integral over the relative coordinate d cubed r-- let's write it as 4 pi r squared d r, the potential as a function of separation. Now the only configurations that are possible are ones where the particles don't come closer than the hard-core distance where they would be on top of each other, so really there is some kind of a minimum value for this integral over here. The maximum value you could say is the size of the box, but the typical range of the potentials that we are thinking of is much, much less than the size of the box. So for all intents and purposes I can set the upper limit of the integral to infinity. So what I'm doing is essentially I'm integrating this portion and I can call the result of doing that to be minus u. Why minus? Because it's clearly the attractive portion of the potential in this picture that I'm integrating. And you can convince yourself that if I use the potential that I had before-- yes, the last time around it was minus u 0 r 0 over r to the sixth power-- then this u is actually the omega that I was using times u 0, but I will keep it as u to sort of indicate that it could be a more general potential. But the specific potential that we were working with last time to calculate the second virial coefficient would correspond to that. So essentially I claim that the configurations that I'm interested in will give a contribution to the energy which is minus whatever this u is times N squared over V, divided by 2. Now clearly not every configuration has the same energy. I mean that's the whole thing, I have to really integrate over all configurations. I've sort of looked at an average contribution. There will be configurations where the particles are more bunched up or more separated, differently arranged, et cetera, and the energy would vary.
But I expect that most of the time-- I see something like a gas in this room, there is a uniform density typically. The fluctuations in density I can ignore and I will have a contribution such as this. So this factor I will replace by what I have over here. What I have over here will give me e to the minus beta u density squared actually I can write as N squared over V squared, one of V's cancel here, I have 2 V. OK? So that's the typical value of this quantity but now I have to do the integrations over all of the q's. Yes? AUDIENCE: In terms of the approximation you made-- PROFESSOR: Yes. AUDIENCE: --the implications are that if you have a more complex potential-- PROFESSOR: Yes. AUDIENCE: --you'll get into trouble with for a given sample size, it will be less accurate. would you agree with that? PROFESSOR: What do you mean by a more complex potential? Until you tell me, I can't-- AUDIENCE: I mean I, guess, and usually-- I mean, I guess if you had like multi-body or maybe the cell group. PROFESSOR: OK. So certainly what I have assumed here is a two-body potential. If I had a three-body potential then I would have a term that would be density cubed, yes. AUDIENCE: But then you would-- my point is that-- and maybe it's not a place to question-- for a given sample size, I would imagine there's some relation between how small it can be and the complexity of the potential when you make this uniform density assumption. PROFESSOR: OK. We are always evaluating things in the thermodynamic limit, so ultimately I'm always interested in m and v going to infinity, while the ratio of n over v is fixed. OK? So things-- there are certainly problems associated with let's say extending the range of this integration to infinity. We essentially are not worrying about the walls of the container, et cetera. All of those effects are proportional to area and in the thermodynamic limit, the ratio of area to volume goes to zero. I can ignore that. If I bring things to become smaller and smaller, then I need to worry about a lot of things, like do particles absorb on surfaces, et cetera. I don't want to do that. But if you are worrying about complexity of the potential such as I assume things to be radially symmetric, you say, well actually if I think about oxygen molecules, they are not vertical. They have a dipole-dipole interactions potentially, et cetera. All of those you can take care of by doing this integral more carefully. Ultimately the value of your parameter u will be different. AUDIENCE: But I guess what I'm getting at is you deviate from the-- if you're doing it-- if we're back when computers are not as fast and you can essentially approach the thermodynamic limit and you're doing a simulation and you want to use these as a tool-- PROFESSOR: Mm-hmm? AUDIENCE: --then you get into those questions, right? PROFESSOR: Yes. AUDIENCE: But this framework should still work? PROFESSOR: No. This framework is an approximation intended to answer the following question-- how is it possible that singularities can emerge? And we will see shortly that the origin of the emergence of singularities is precisely this. And if I don't have that, then I don't have singularities. So truly when you do a computer simulation with 10 million particles, you will not see the singularity. It emerges only in the thermodynamic limit and my point here is to sort of start from that limit. Once we understand that limit, then maybe we can better answer your question about limitations of computer simulations. OK? 
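As an aside, the integral of the attractive tail that defines u above can be checked numerically. This is only a sketch: u0 = 1 and r0 = 1 are arbitrary units, and the form -u0 (r0/r)^6 beyond the hard core is the model potential quoted from the previous lecture. The closed form of the integral is 4 pi u0 r0^3 / 3, which matches the remark that u is omega times u0 if omega is taken to be the excluded volume 4 pi r0^3 / 3 from last time (an identification assumed here, not restated in this lecture).

```python
import numpy as np
from scipy.integrate import quad

u0, r0 = 1.0, 1.0                       # depth and range of the attractive tail (arbitrary units)

def minus_V(r):
    """Magnitude of the attraction, -V(r) = u0 (r0/r)^6, for r beyond the hard core."""
    return u0 * (r0 / r) ** 6

# u = integral from r0 to infinity of 4 pi r^2 [-V(r)] dr
u_numeric, _ = quad(lambda r: 4.0 * np.pi * r**2 * minus_V(r), r0, np.inf)
u_exact = 4.0 * np.pi * u0 * r0**3 / 3.0    # closed form of the same integral

print(u_numeric, u_exact)                   # both ~4.18879
```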
So I've said that there's basically configurations that typically give the [INAUDIBLE]. But how many configurations? I can certainly move these particles around. And therefore I have to see what the value of these integrations are. OK? So I will do so, as follows-- I will say that the first particle, if there was nobody else-- so I have to do this integration over q 1, q 2, q 3, and I have some constraints in the space that the q's cannot come closer to each other than this distance. So what I'm going to try to calculate now is how this factor of v to the n that you would have normally put here for ideal gases gets modified. Well I say that the first particle that I put in the box can explore the entire space. If I had two particles, the second particle could explore everything except the region that is excluded by the first particle. The second particle that I put in can explore the space minus the region that is excluded by the first two particles. And the last, the end particle, the space except the region that is excluded by the first n minus 1. This is an approximation, right? So if I have three particles, the space that is excluded for the third particle can be more than this if the two particles are kind of close to each other. If you have two billiard balls that are kind of close to each other the region between them is also excluded for the first particle. We are throwing that out and one can show that that is really throwing out things that are higher order in the ratio of omega over V So if you're right, this is really an expansion in omega over V and I have calculated things correctly to order of omega over V. And if I am consistent with that and I multiply all of these things together, the first term is V to the N and then I have 1 minus omega plus 2 omega plus all the way to N minus 1 omega which then basically sums out to 1 plus 2 plus 3 all the way to N minus 1, which is N minus 1 over 2 in the large N limit. It's the same thing as N squared over 2 and this whole thing is raised to the power-- OK this whole thing I can approximate by V minus N omega over 2 squared. There are various ways of seeing this. I mean one way of seeing this is to pair things one from one end and one from the other end and then multiply them together. When you multiply them you will have a term that would go be V squared, a term that would be proportionate to V omega and would be the sum of the coefficients of omega from the two. And you can see that if I pick, say, alpha term from this side N minus alpha minus one term from the other side, add them up, the alphas cancel out, N and N minus 1 are roughly the same, so basically the square of two of them is the same as the square of V minus N omega over 2. And then I can repeat that for all pairs and I get this. Sorry. So the statement is that the effect of the excluded volumes since it is joint effect of mutually excluding each other is that V to the N that I have for ideal gas, each one of them can go over the entire place, gets replaced by V minus something to the power of N and that something is N omega over 2. Plus higher orders in powers of omega. OK? So I will call this a mean field estimate. Really, it's an average density estimate but this kind of approximation is typically done for magnetic systems and in that context the name of mean field has stuck. 
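Here is a small numerical check of the pairing argument just made: the product V times V minus omega times V minus 2 omega, and so on down to V minus (N minus 1) omega, is compared with the mean-field replacement (V minus N omega over 2) to the power N. The values of N, V and omega below are arbitrary, chosen only so that N omega is much smaller than V; logarithms are used to avoid overflow.

```python
import numpy as np

N = 1000
V = 1.0e6          # "volume" in arbitrary units
w = 1.0            # two-particle excluded volume omega, so N*w/V = 1e-3

log_exact  = np.sum(np.log(V - w * np.arange(N)))      # sum of log(V - k*w) for k = 0 .. N-1
log_meanfd = N * np.log(V - N * w / 2.0)               # mean-field replacement

print(log_exact, log_meanfd)
print("relative error of the approximation:", (log_meanfd - log_exact) / log_exact)
```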
And the ultimate result is that my estimation for the partition function is 1 over N factorial lambda to the power of 3 N V minus N omega over 2 raised to the power of N because of the excluded volume and because of the Boltzmann weight of the attraction between particles I have a term which is e to the minus beta U N squared over 2V. OK? So the approximations are clear on what set of assumptions I get this estimate for the partition function. Once I have the partition function, I can calculate the pressure because log z is going to be related to free energy, the derivative of the free energy respect to volume will give me the pressure, and you can ultimately check that the formula is that beta P is the log Z by d V. That's the correct formula. And because we are doing this in the canonical formalism that I emphasized over here, let's call this P that we get through this process beta P canonical. OK? So when I take log z, there's a bunch of terms and I don't really care about because they don't have V dependence from N factorial lambda to the power of 3N, but I have N times log of V minus N omega over 2. So that has V dependence. Take the derivative of log of V minus N omega over 2, what do I get? I get V minus N omega over 2. The other theorem when I take the log I have a minus beta U N squared over 2 V so then I take a derivative of this 1 over V. I will get minus 1 over V squared so I have a term here which is beta U over 2 N squared over V squared. OK? Let's multiply by k T so that we have the formula for P canonical. P canonical is then N k T V minus N omega over 2 and then I have-- I think I made a sign over here-- yes the sign error that I made is as follows-- note that the potential is attractive so I have a minus U here. So when I exponentiate that, it becomes a plus here and will be a plus here and therefore I will have a minus here and I will have a minus U over 2 times the density squared. OK? So what did we arrive at? We arrived at the van der Waals equation. OK? So we discussed already last time around that if I now look at the isotherms pressure volume at different temperatures, at high temperatures I have no problem. I will get things that look reasonable except if I have to terminate at something that is related to this excluded volume, whereas if I go to low temperatures, what happens is that the kind of isotherms that I get have a structure which is kind of like this and incorporates a region which violates everything that we know about thermodynamics, is not stable, et cetera. OK? So what happened here? I did a calculation. You can see every step of the calculation over there. Why does it give me nonsense? Everything that you need to know is on the board. Suggestions? Yep? AUDIENCE: The assumption of uniform density in a phase transition? PROFESSOR: Right. That's the picture that I put over here. If I draw the box that corresponds to what is happening over there, what's going on in the box? So there is a box. I have particles in it and in the region where this nonsense is happening, what is actually happening in reality? What is actually happening in reality is that I get some of the particles to condense into liquid drops somewhere and then there's basically the rest of them floating around as a gas. Right? So clearly I cannot, for this configuration that I have over here, calculate energy and contribution this way. So coexistence implies non-uniform density. OK? So what do we do? I want to carry out this calculation. 
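To make the unstable branch concrete, the following sketch plots a few isotherms of the equation of state just derived, P = k T over (v minus omega/2) minus (u/2) over v squared, with v = V/N. The units k = u = omega = 1 and the three temperatures are arbitrary choices, picked by hand so that the lowest isotherm shows the region with dP/dv greater than zero that the lecture calls unphysical.

```python
import numpy as np
import matplotlib.pyplot as plt

u, w, k = 1.0, 1.0, 1.0                    # integrated attraction, excluded volume, k_B

def P_vdw(v, T):
    """Mean-field pressure per particle from the derivation above."""
    return k * T / (v - w / 2.0) - (u / 2.0) / v**2

v = np.linspace(0.75, 6.0, 400)            # volume per particle, kept above w/2
for T in (0.25, 0.30, 0.35):               # with these units the lowest T is below critical
    plt.plot(v, P_vdw(v, T), label=f"kT = {T}")

plt.ylim(-0.05, 0.30)                      # zoom in on the region where the wiggle lives
plt.xlabel("v = V/N")
plt.ylabel("P")
plt.legend()
plt.title("Mean-field isotherms: the lowest one has an unstable dP/dv > 0 branch")
plt.show()
```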
I want to stay as much as possible within this framework, and I can do it, but I need to do one other thing. Any suggestions? Yes? AUDIENCE: Try to describe both densities simultaneously? PROFESSOR: Try to describe both densities-- AUDIENCE: You know the amount of liquid and the amount of vapor on average-- PROFESSOR: I guess I could. I have an easier way of doing things. OK. So the easier way that I have of doing things is we have emphasized that in thermodynamics, we can look at different perspectives. The problem with this canonical perspective is that I know that I will encounter this singularity. I know that phase diagrams like this exist. Whereas there is another prescription, framed in terms of pressure and temperature, that is better suited to this. So what I want to do is to replace the canonical perspective with what I will call the isobaric-- it's the Gibbs canonical ensemble. In this context it's called the isobaric ensemble, where rather than describing things at fixed volume, I describe things at fixed pressure. So I tell you what pressure, temperature, and the number of particles are and then I calculate the corresponding Gibbs partition function. Why does that help? Why that helps is because of what you see on the left-side diagram: if I wanted to maintain this system at a fixed pressure rather than a fixed volume, I would replace one of the walls of this container with a piston and then I would apply a particular value of pressure. Then you can see that a situation such as this does not occur. The piston will move so that ultimately I have a uniform density. If I'm choosing my pressure to be up here, then it would be all liquid. If I choose my pressure to be down here, it will be all gas. So I have this piston on top. It does not tolerate coexistence. It will either keep everybody in the gas phase or it will compress, change the volume so that everybody becomes liquid. So what I have done is I have gotten rid of the volume. And I do expect that in this ensemble as I change pressure and go through this line, there is a discontinuity that I should observe in volume. OK? So again just think of the physics. Put the piston, what's going to happen? It's going to be either one or the other. And whether it is one or the other, I'm at uniform density and so I should be able to use that approximation that I have over there. OK? So mathematics, what does that mean? This Gibbs version of the partition function in this isobaric ensemble-- you don't fix the volume, you just integrate over all possible volumes, weighted by this e to the minus beta P V that does the Laplace transform from one ensemble to the other of the partition function, which is at some fixed volume, V T N. OK? And for this we use the uniform density approximation. So my Gibbs partition function, a function of P T and N, is the integral from 0 to infinity of d V, e to the minus beta P V, and then what would go over here in the exponent would be log Z of V T N. OK? Again, this is the answer to the previous question: I am in the thermodynamic limit. I expect that this, which gives essentially the probabilities or the weights of different volumes, is going to be dominated by a single volume that makes the largest contribution to the thermodynamics or to this integration. So this integration should be done by a saddle point, and what I will do is I will call whatever is appearing here in the exponent some function of V, because I'm integrating over V, and I'm going to look for its extremum.
So basically I expect that this, because of this saddle point, will be e to the psi of V that maximizes this weight. OK? So what I have to do is I have to take d psi by d V and set that to 0. Well, part of it is simply minus beta P. The other part of it is d log Z by d V, but d log Z by d V I have up there. It's the same Z that I'm calculating. It is beta P canonical. So this is none other than minus beta, times the P that I have set out in this ensemble minus this P canonical that I calculated before, which is some function of the volume. Again, generally, it's fine. We expect the different ensembles to correspond to each other and essentially that says that most of the time you will see that you're going to get the pressure calculated in this fashion and calculated canonically to be the same thing. Except that sometimes we seem to have ambiguity, because what I should do is, given that I have some particular value of pressure that I impose in my isobaric ensemble, I calculate what the corresponding V is by solving this equation. What is this equation graphically? It says pick your P and figure out at what volume it intersects these canonical curves. So the case that I drew has clearly one answer. I can go up here, I have one answer. But what do I do when I have a situation such as this? I have three answers. So in this ensemble I say what the pressure is and I ask, well, what's the volume? It says you should solve this equation to find the volume. I solve the equation graphically and I have now three possible solutions. What does that mean? All this equation says is that these solutions are extrema. I set the derivative to 0. The task here is to find, if I have multiple solutions, the one that gives me the largest value over here. So what could be happening? What is happening is that when I'm integrating as a function of V, this integrand is e to the psi of V. Right? Let's call these curves over here-- this one is number one. In case number one there's a clear solution: if I'm scanning in volume, there is a particular location that maximizes this psi of V and the corresponding volume is unambiguous. If I go and look at the curve, let's say up here, let's call it case number three. For case number three it also hits the blue curve unambiguously at one point, which is at much lower volume, and so presumably I have a situation such as this for my number two. Now given these two cases it is not surprising that the middle one, let's say number two here, corresponds to a situation where you have two maxima. So generically, presumably, case number two corresponds to something such as this. OK? There are three solutions to setting the derivative to 0. There are three extrema. Two extrema correspond to maxima. One extremum corresponds to the minimum and falls between the two, and clearly that corresponds to this portion that we say is unstable. It's clearly unstable because it's the least likely place that you're going to find something. Right? Not the least likely, but if you go a little bit to either side, the probabilities really increase. Yes? AUDIENCE: I think your indices don't correspond on the plot you've just drawn and on the [INAUDIBLE]. Like 1, 2, 3 in parentheses, in two different cases they don't correspond to it. PROFESSOR: Let's pick number one. AUDIENCE: Number one is when we have a single maximum-- is the biggest. PROFESSOR: With the biggest volume, yes? AUDIENCE: No. Number two should be-- PROFESSOR: Oh, yes. That's right.
So this is number three and this is number two. And for number two I indicated that there are maxima that I labelled one, two, and three without the parenthesis that would presumably correspond to these. OK? So then you see there is now no ambiguity. I have to pick among the three that occur over there, the three solutions that correspond that occur from the van der Waals equation. The one that would give the highest value for this function psi. Actually what is psi if you think about this ensemble? This ensemble is going to be dominated by minus beta P V at the location of the V that is thermodynamic. This Z is e to the minus beta E, so there's a minus beta E and there's omega which gives me a beta T S. So it is this combination of thermodynamic quantities which if you go and look at your extensivity characterization is related to the chemical potential. So the value of the psi at the maximum is directly related to the chemical potential and finding which one of these has the largest value corresponds to which solution has the lowest chemical potential. Remember we were drawing this curve last time around where we integrated van der Waals equation to calculate the chemical potential and then we have multiple solutions for chemical potential. We picked the lowest one. Well, here's the justification. OK? All right. So now what's happening? So the question that I ask is how can doing integrals such as this give you some singularity as you're scanning in say pressure or temperature along the lines such as this? And now we have the mechanism because presumably as I go from here to here the maximum that corresponds to the liquid volume will get replaced with the maximum that corresponds to the gas volume or vice versa. So essentially if you have a curve such as two then the value of your Z which is also e to the minus beta U N as we said is determined by the contribution from here or here. Well, let's write both of them. I have e to the psi corresponding to the V of the gas plus e to the psi corresponding to V the liquid. Now both of these quantities are quantities that in the large N limit will be exponentially large. So one or the other will dominate and the mechanism of the phase transition is that as I change parameters, as I change my P, I go from a situation that is like this to a situation that is like this. The two maxima change heights. OK? It is because you are in the large N limit that this can be expressed either as being this or as being the other and not as a mixture because the mixture you can write it that way but it's a negligible contribution from one or the other. This is the answer I was giving in connection with computer simulations. Computer simulations, let's say you have 1,000 particles. You have e to the 1,000 times something, e to the 1,000 times another thing, and if you, for example, look at the curves for something like the density you will find that you will start with the gas density. You ultimately have to go to the liquid density in the true thermodynamic limit there will be a discontinuity here. If you do a computer simulation with finite end, you will get a continuous curve joining one to the other. As you make your size of your system simulated bigger, this becomes sharper and sharper, but truly the singularity will emerge only in the N goes to infinity limit. So phase transitions, et cetera, mathematically really exist only for infinite number of particles. 
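The selection rule just described can be sketched numerically. The snippet below works per particle, in the same arbitrary units used in the earlier sketches (k = u = omega = 1, so a = u/2 and b = omega/2). It picks one illustrative pair (T, P) below the critical temperature where three extrema exist, finds the volumes where P equals the canonical van der Waals pressure, and evaluates the volume-dependent part of psi at each; the largest value identifies the thermodynamically dominant solution. The specific numbers are assumptions chosen for illustration, not values from the lecture.

```python
import numpy as np

k, u, w = 1.0, 1.0, 1.0
a, b = u / 2.0, w / 2.0

def psi_per_particle(v, T, P):
    """Volume-dependent part of psi/N = [-beta P V + log Z(V)]/N for the mean-field gas."""
    beta = 1.0 / (k * T)
    return -beta * P * v + np.log(v - b) + beta * a / v

T, P = 0.25, 0.03                       # a point where the equation P = P_canonical(v) has three roots

# P = kT/(v-b) - a/v^2 rearranged into the cubic  P v^3 - (P b + kT) v^2 + a v - a b = 0
roots = np.roots([P, -(P * b + k * T), a, -a * b])
vols = sorted(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > b)

for v in vols:
    print(f"v = {v:6.3f}   psi/N = {psi_per_particle(v, T, P):8.4f}")
print("dominant volume:", max(vols, key=lambda v: psi_per_particle(v, T, P)))
```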
Well, of course, for 10 to the 23 particles the resolution you would need over here to see that it's actually going continuously from one to the other is immeasurably small. OK? Any questions about that? I guess there was one other thing that maybe we should note. So if I ask what is the pressure that corresponds to the location of this phase transition, so what is the value of the pressure that separates the liquid and gas, the location of the singularity? I would have to find Pc from the following observation, that that's the pressure at which the two psi's, psi of the gas is the same thing as the psi of the liquid. Right? All right. Or alternatively, the difference between them is equal to 0. OK? And the difference between them-- I can write this kind of trivially as the integral of the derivative of the function. So d V, d psi by d V, integrated between the V of the liquid and then the V of the gas-- that integral should be 0. d psi by d V, we established, is this quantity. So it is beta times the integral from V liquid to V gas of d V, P canonical of V minus this transition pressure that I want, and that has to be zero. OK? So what do I have to do? So we identify the location of your Pc such that if you integrate the canonical pressure minus this Pc all the way from the V of the gas to the V of the liquid, this integral will give you 0. So this is the Maxwell construction that you were using last time. Again, it is equivalent to the fact that we stated that really this object is the same as, or related to, the chemical potential, so I'm requiring that the chemical potential should be the same as I go through the transition. Yes? AUDIENCE: Could you repeat why a finite size does not give a phase transition? PROFESSOR: OK. Yes. So let's imagine that I have two possibilities that appear as exponentials, but the number that is appearing in the exponential, which is the analog of N, makes one of them positive, so I have e to the N u, and the other negative. So my object would be something like this. OK? Then what is this? This is related to a factor of 2 times the hyperbolic cosine of N u. OK? So this is like some kind of a partition function and what we are interested in is something like the log of this quantity. And maybe if I take a derivative with respect to u of this I will get something like the hyperbolic tangent, tanh of N u. OK? So some kind-- something like a tanh, and u would be something like the expectation value of this quantity. It's either plus u or minus u, and so the average of it would be e to the N u minus e to the minus N u, over e to the N u plus e to the minus N u, and so this would be some kind of an expectation value of a quantity such as this. Now what does this function look like, the function tanh? For positive values it goes to plus 1, for negative values it goes to minus 1, and it's a perfectly continuous function. At the origin it goes through 0. All right? That's the hyperbolic tangent function. Now suppose I calculate this for N of 10, this is the curve that I drew. Suppose I draw this same curve for N of 100. For N of 100 I claim that the curve will look something like this because the slope that you have at the origin for the tanh grows like N, so it basically becomes steeper and steeper, but ultimately it has to saturate at one or the other. So for any N-- even for N of 1,000-- it's a finite slope. The slope here is of order 1,000, but it's finite. It is only in the limit where N goes to infinity that you would say that the function is either minus 1 or plus 1 depending on whether u is positive or negative.
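A tiny numerical illustration of the finite-size argument just made: plot tanh(N u) for a few values of N (the values of N and the range of u below are arbitrary). For any finite N the curve is smooth, with slope N at the origin; the step function only emerges as N goes to infinity.

```python
import numpy as np
import matplotlib.pyplot as plt

u = np.linspace(-0.02, 0.02, 401)
for N in (100, 1000, 10000):
    plt.plot(u, np.tanh(N * u), label=f"N = {N}")

plt.xlabel("u")
plt.ylabel("tanh(N u)")
plt.legend()
plt.title("Finite N rounds the discontinuity; the step appears only as N -> infinity")
plt.show()
```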
So the same thing happens as I scan over here and ask what is the density? The density looks precisely like this tanh function. For finite values of the number of particles the density would do something like this. It is only for an infinite number of particles that the density is discontinuous. Yes? AUDIENCE: Sir, on the plot which you drew initially on that board, what are the axes over which this scans? PROFESSOR: Over here? AUDIENCE: Yeah. Just what values are physical values? PROFESSOR: OK. Pressure, right? So what I said was that if I scan across pressure what I will see is a discontinuity, right? So if-- AUDIENCE: You said that this is the result of a numerical experiment with finite number of particles. So what is the parameter of the experiment, and what is the result? PROFESSOR: We have the density so you could have, for example, exactly the situation that I-- AUDIENCE: [INAUDIBLE]? PROFESSOR: Pressure. This box that I'm showing you over here, you simulate on the computer. You have a box of finite volume, you put a piston on top of it, you simulate 1,000 particles inside. AUDIENCE: OK. Thank you. PROFESSOR: Then you scan as a function of pressure, and you plot the density. Now, of course, if you want to simulate this, you have to increase both the number of particles and the volume of the box so that the average density is fixed and then you would see something like this. OK? Any other questions? OK, let's stick with this picture. I kind of focused on the occurrence of the singularity, but now let's see the exact location of the singularity. So I sort of started this lecture by labeling Pc, Tc, Vc. Well, what are their values? OK. So something that you know from stability is that the isotherms are constrained such that delta P delta V has to be negative. OK? Or writing delta P as an expansion in delta V, this is d P by d V along an isotherm times delta V, plus one-half d 2 P by d V squared along the isotherm times delta V squared, plus one-sixth d cubed P by d V cubed along the isotherm times delta V cubed, and so forth. This whole thing, multiplied by delta V, has to be negative. OK? Now generically I pick a point and a small value of delta V and the statement is that along the isotherm this derivative has to be negative. So generically what I have is that d P by d V for a physical isotherm has to be negative, and indeed the reason we don't like that portion is because it violates the stability condition. Well, clearly that will be broken at one point: the isotherm that corresponds to T equal to Tc is not a generic isotherm. That's an isotherm that at some point comes in tangentially. So there is over here a point where d P by d V is 0, and we already discussed this. If d P by d V is 0 then the second derivative in such an expansion must also be 0. Why? Because if it is nonzero, irrespective of whether it is positive or negative, since it multiplies delta V cubed, I will be able to pick a delta V, positive or negative, such that the sign condition will be violated. And clearly what it says is that over here the simplest curve that you can have should not have a second order term, which would be like a parabola, but should be like a cubic. And the sign of the cubic should be appropriately negative, so that when it multiplies delta V to the fourth, which is always positive, you have the right sign. OK?
So given that information I have sufficient conditions to calculate what the locations of Pc, Tc, Vc are for the van der Waals or other approximate equations of state, such as the one that you will encounter in the problems. So let's stick with the van der Waals equation. The van der Waals equation is that my P is-- actually let's divide by N so it would be volume per particle, so I have introduced V to be the volume per particle, which is the inverse of the density, if you like, so that it is intensive. So I have k T over V minus b-- there's some excluded volume, let's call it the volume b-- and then I have minus something that goes like one over this volume squared, with a coefficient that I will write as a. I just write those two parameters as a and b because I will be taking many derivatives and I don't want to write anything more complicated. OK. So that's the equation. We said that the location of this critical point is obtained by the requirement that d P by d V should be 0, so what is that? It is minus k B T divided by V minus b squared, plus 2 a over V cubed, so I took one derivative. And the second derivative is now constrained also, so that's going to give me plus 2 k T over V minus b cubed, minus 6 a over V to the fourth. OK? Now if I'm at a critical point-- so I put c over here, this will be 0, and this will also be 0. So the conditions for the critical point are that first of all k B Tc divided by Vc minus b cubed should be 2 a over Vc cubed, and 2 k B Tc divided by Vc minus b to the fourth power should be 6 a divided by Vc to the fourth power. Hmm? So this is two equations for two unknowns, which are Tc and Vc. We can actually reduce it to one variable by dividing these two equations with each other. The division of the left-hand sides will give me Vc minus b in the numerator divided by 2 in the denominator. The ratio of the right-hand sides is simply Vc over 3. That's a one variable equation. Yes? AUDIENCE: Where you have written the equations in terms of Vc minus b and Vc, it should come in different powers. So our first equation should be Vc minus b squared. PROFESSOR: Yes. AUDIENCE: [INAUDIBLE]. PROFESSOR: OK. Magically the ratio does not get affected and therefore neither does the answer, which is that Vc is 3 b, so that's one parameter. I can substitute that Vc into this equation and get what k B Tc is. And hopefully you can see that Vc minus b will become 2 b, which squared is 4 b squared, so this gives me a factor of 8 a; the ratio of these will give me one factor of b, so that's k B Tc. And then if I substitute those in the first equation, I will get the value of Pc, and I happen to know the answer-- oops, this is going to be 27, I forgot, because 3 cubed will give me 27, so k B Tc is 8 a over 27 b. And Pc here will give me a 27 also-- it is a over 27 b squared, with a coefficient of 1. OK. OK? Really the only thing that I actually wanted to get out of this-- you can do this for any equation of state, et cetera-- the point is that we now have something to compare to experiment. We have a dimensionless ratio, Pc Vc divided by k B Tc. So I multiply these two numbers, I will get one-ninth a over b. And then I divide by k B Tc, which carries a factor of 8, so the whole thing becomes a factor of 3 over 8, which is 0.375. OK? So the point is that we constructed this van der Waals equation through some reasonable description of a gas. You would imagine that practically any type of gas that you have will have some kind of excluded volume, so that was one of the parameters that we used in calculating our estimation for the partition function, ultimately getting this omega.
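As a quick aside before continuing, the critical-point algebra just done can be checked symbolically. This sketch uses the same a, b notation, with k_B absorbed into T for brevity, and recovers Vc = 3b, k Tc = 8a/(27b), Pc = a/(27 b^2), and the dimensionless ratio 3/8.

```python
import sympy as sp

v, T, a, b = sp.symbols("v T a b", positive=True)
P = T / (v - b) - a / v**2                       # van der Waals pressure, k_B absorbed into T

# Critical point: first and second derivatives of P with respect to v both vanish.
sols = sp.solve([sp.diff(P, v), sp.diff(P, v, 2)], [v, T], dict=True)
sol = [s for s in sols if not s[T].is_zero][0]   # keep the non-degenerate branch
v_c, T_c = sol[v], sol[T]
P_c = sp.simplify(P.subs({v: v_c, T: T_c}))

print("v_c =", v_c)                              # 3*b
print("T_c =", sp.simplify(T_c))                 # 8*a/(27*b)
print("P_c =", P_c)                              # a/(27*b**2)
print("P_c v_c / T_c =", sp.simplify(P_c * v_c / T_c))    # 3/8
```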
And there's some kind of attraction at large distance that's been integrated over the entire range, here was this factor of U. You would expect that to be quite generic. And once we sort of put this into this machinery that generic thing makes a particular prediction. It says that pick any gas, find it's critical point-- critical point occurs with for some value of Pc density or inverse density, Vc, and some characteristic [INAUDIBLE] pressure k B Tc. This is a dimensionless ratio you can calculate for any gas and based on this semi-reasonable assumption that we made, no matter what you do you should come up the number that is around this. OK? So what you find in experiment? You find that the values of this combination from 0.28 to 0.33, so that's kind of a range that you find for this. So first of all, you don't find that it is the same for all gases. There is some variation. And that it's not this value of 3 8. So there's certainly now questions to be asked about the approximation, how you could make it better. You could say, OK, actually most of the gases, you don't have spherical potentials. You have things that are diatomic that shape may be important. And so the estimation that we have for the energy for the omega et cetera is too approximate. There is something to that because you find that you get more or less the same value in this range for gases that are similar, like helium, krypton, neon, argon, et cetera, they're kind of like each other. Oxygen, nitrogen, these diatomic molecules are kind of like each other. OK? So there is some hope to maybe get the more universal description. So what is it that we're after? I mean the thing that is so nice about the ideal gas law is that it doesn't matter what material you are looking at. You make it sufficiently dilute, you know exactly what the equation of state is. It will be good if we could extend that. If we could say something about interacting gases that also maybe depends on just a few parameters. So you don't have to go and do a huge calculation for the case of each gas, but you have something that has the sense of universality to it. So clearly the van der Waals equation is a step in that direction, but it is not quite good enough. So people said, well, maybe what we should do is increase the number of parameters, because currently we are constructing everything based on two parameters, the excluded volume and some integrated attraction. So those are the two parameters and with two parameters you really can only fit two things. What we see that the ratio of Pc Vc over k T is not fixed, it's as a range, so maybe what we should do is to go to three parameters. So this whole is captured by the search for law of corresponding states. So the hope is that let's do a three parameter system. Which parameters should I choose? Well, clearly the different systems are characterized by Pc Tc and Nc T at critical point, so maybe what I should do is I should measure all pressures, made them dimensionless by dividing by Pc and hope that there is some universal function that relates that to the ratio of all temperatures divided by Tc, all densities or inverse densities divided by the corresponding Vc. So is there such a curve so that that was a whole and you go and play around with this curve and it is with this suggestion and you can convince yourself very easily that it cannot be the case. 
And one easy way to think about it is that if this was the case, then all of our virial expansions and all of the perturbative expansions that we had should also somehow collapse together with a few parameters. Whereas all of these complicated diagrams that we had with the cycles and shapes and different things of the diagrams are really, you can calculate them. They're completely independent integrals. They will give you results that should not be collapsible into a single form. So this was a nice suggestion what in reality does not work. So why am I telling you this? The reason I'm telling you this is that surprisingly in the vicinity of this point it does work. In the vicinity of this point you can get the huge number of different system gases, krypton, argon, carbon dioxide, mixtures, a whole lot of things. And if you appropriately rescale your pressure, temperature, et cetera, for all of these gases, they come together and fall on exactly the same type of curve of over here. OK? So there is universality, just not over the entire phase diagram but in the vicinity of this critical point. And so what is special about that, and why does that work? OK? So I give you first an argument that says, well, it really should work. It shouldn't be a surprise. And then show that the simple argument is in fact wrong. But let's go through because we already put the elements of it over there. So let me try to figure out what the shape of these pressure versus volume curves are going to be for isotherms that correspond to different temperatures close to Tc. So what I want to do is to calculate P as a function of volume, actually there's reduced volume by the number of particles, and P and I'll do the following. I will write this as an expansion but in the following form-- P of Vc T plus d P by d V at Vc T times V minus, Vc plus one-half V 2 P by d V squared at Vc T minus Vc squared plus one-sixth d cubed P d V cubed at Vc and T V minus Vc cubed and so forth. So it's a function of two arguments. Really it has an expansion in both deviations from this critical point along the V direction so going away from Vc as well as going T direction, going to away from Tc but I organize it in this fashion and realize that actually these derivatives and all of the coefficients will be a function of T minus Tc. So for example P of Vc and T starts with being whatever value I have at that critical point and then if I go to higher temperatures the pressure will go up. So there's some coefficient proportional to T minus Tc plus higher order in T minus Tc and so forth and actually I expect this alpha to be positive. The next coefficient d P by d V Vc T T what do I expect? Well, right at this point d P by d V there's slope. This has to be 0 for the critical isotherm. So it starts with 0 and then if I go and look at a curve that corresponds to high temperatures because of the stability, the slope better be positive. So I have a T minus Tc plus our order that I know that a is positive because of a-- a is negative because of stability. So let's write it in this fashion. OK? What about the second derivative? Well, we said that if I look at the point where the first derivative is 0, the second derivative better be 0, so it starts also with 0. And then there's some coefficient to order of T minus Tc and then higher order and actually I don't know anything about the sign of P. It can be both positive or negative, we don't know what it is. Now the third derivative there is no reason that it should start at 0. 
It will be some negative number again in order to ensure stability and there will be higher order terms in T minus Tc what we have is structure such as this. So putting everything together the statement is that if I look in the vicinity of the critical point, ask what should pressure look like? You say, OK, it has to start with a constant that we called Pc. It has this linear increase that I put over there. The first derivative is some negative number that is proportional to T minus Tc multiplying by V minus Vc. The third coefficient is negative-- oh and there's also a second-- there's the second order coefficient and then there will be high order terms. So there should be generically an expansion such as this. Say, well, OK, fine, what have we learned? I say, well, OK, let me tell you about something that we're going to experimentally measure. Let's look at the compressibility, kappa. Kappa is minus 1 over V d V by d P evaluated at whatever temperature you are looking at. So if I look at this-- actually I have stated what d P by d V is. d V by d P will be the inverse of that, so this is going to give me in the vicinity of this critical point Vc a T minus Tc. So the statement is this is now something that I can go and ask my experimentalist friends to measure, go and calculate the compressibility of this gas, plot it as a function of temperature, and the prediction is that the compressibility will diverge at T goes to Tc. OK? So the prediction is that it is 1 over a T minus Tc. OK. So they go and do the experiment, they do the experiment for a huge number of different systems, not just one system, and they come back and say it is correct. The compressibility does diverge but what we find is that irrespective of what gas we are looking at, and this is a very universal, it goes with an exponent like this. OK? So question is maybe you made a mistake or you did something or whatever, really the big puzzle is why do such different gases all over the spectrum and even some things that are not gases but other types of things all have the same diverges? Where did this number 1.24 come from? OK? AUDIENCE: So there isn't a range like there was 0.375. It all depends on 1.24. PROFESSOR: All hit 1.24. Irrespective of [? order. ?] It's a different quantity, though. I mean, it's not the number, it's an exponent. It's this, a functional form. Why did this functional form come about? Another thing that you predict is the shape of the curve at Tc. So what we have said is that at T close to Tc, the shape of this isotherm is essentially that P minus Pc is cubic, it's proportional to V minus Vc cubed. And the way that we said that is OK at that point there's neither a slope nor a second derivative, so it should be cubic. They say, OK, what do you find in experiment? You go and do the experiment and again for a whole huge range of materials all of the data collapses and you find the curve that is something like this, is an exponent that is very close to 5 but is not exactly 5. And again, why not 3 and why is it always the same number 5? How do you understand that? Well, I'll tell you why not 3. Why not 3? All of the things that I did here was based on the assumption that I could make an analytical expansion. The whole idea of writing a Taylor series is based on making an analytical series. Who told you that you can do an analytic series? Experiment tells you that it is actually something non-analytic. Why this form of non-analyticity? Why it's universality? We won't explain in this course. 
If you want to find out, come to 8.334. |