SLAM_Lectures
SLAM_D_14.txt
Now let's have a look at where we are. So our prediction step was as follows: our mu was computed by g, and our predicted Sigma was computed by G Sigma G transposed plus our system noise R. So we have set up this equation, and we also computed G, which is the derivative of g with respect to the state, so we know how to do this and this, and this is the previous Sigma, so we're done with that. What remains is this. Now this is a 3 x 3 matrix which results from the noise of our control, and so, similar to here, we compute our R_t from the noise in our control, given by the covariance matrix of our control, multiplied from the left and right by a matrix V, where V is the derivative of g with respect to the control. So see the analogy between this and that: this construct transports the variance of the old state to the variance of the new state, and this construct transports the variance of the control to the new state. And in order to write that out, we will have: R_t is V_t times the variance of the left control and the variance of the right control, times the transpose of V. Those variances capture the inexactness of the movement of the left and right track of the robot, assuming that there is no correlation between the two. So if you look at the dimensions: this is 3 x 3, and this, as we see, is 2 x 2, so this must be 3 x 2, and this is the transposed matrix, so it's 2 x 3. And indeed this V matrix looks like this, so it is indeed 3 x 2, the first column being the partial derivatives with respect to l, the second being with respect to r.

Now let's compute those derivatives. So our function g, that's g1, g2, g3, is x, y, theta plus those terms, and now we have to compute the partial derivative of g1 with respect to l. Now there's no l in those equations, because it is hidden in R: R was l divided by alpha, and alpha was (r minus l) divided by w, so we see that R equals l w divided by (r minus l), and so the term R plus w half is l w divided by (r minus l) plus w over 2, which is the same as w over 2 times (r plus l) divided by (r minus l). So let's go on. Here we have to compute the partial derivative with respect to l of the first component, which is x plus this term here, which we just computed, times the sine of theta prime, where we will just set theta prime as theta plus alpha, minus the sine of theta. And so the derivative of x with respect to l is zero, whereas here we have the l in this term, and we also have the l hidden in the alpha here, so we have to apply the product rule. The derivative of the first factor is 1 divided by the denominator squared, times (r minus l) plus (r plus l), times this part unmodified, plus the first part unmodified times the derivative of the second part, which is the cosine times the derivative of theta prime with respect to l, and this is the derivative of alpha with respect to l, which is minus 1 over w. So overall we obtain w r divided by (r minus l) squared, times the sine of theta prime minus the sine of theta, minus, now this comes from here, (r plus l) divided by 2 (r minus l), times the cosine of theta prime. So this is our partial derivative of the first component of g with respect to l, and we will have to do six of those. Let me just write down all of those for you. So this is what we just computed, this is what we have for the second component, and for the third component it is quite simple: minus 1 divided by the width. Now the same for the derivatives with respect to r: we get a minus here and a plus here, and finally the derivative of the third component with respect to r is 1 divided by the width. And remember, this is for r not equal to l, and the theta prime which is used here is theta plus alpha. Now unfortunately we'll have to do all this for the case r equals l as well, and so for every component we will have to find out what happens if alpha goes to zero and so theta prime goes to theta. So let me just give you those equations: in the case r equals l we'll have to use this set of equations, where the partial derivative of the third component of g with respect to l and r is the same as on the previous slide.

So now let's program this. I prepared this slam 7 c control derivative question file for you, and essentially your task is the same as in the previous case, only that the derivative of g you have to take is now with respect to the control, meaning with respect to l and r, and not with respect to the state. So this is the function you'll have to fill out. Again there are two cases, for r not equal to l and for r equal to l, and you'll have to return a matrix which is three rows by two columns. And down here in the main function, everything is set up as in the previous case; this time the numeric differentiation is with respect to l and r, and it will compare this to your analytic solution and print out the difference. So now please program this.
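The analytic derivatives of g with respect to the control, as derived above for the case r not equal to l, can be sketched as follows. This is only an illustrative sketch, not the course's solution file; the function names g and dg_dcontrol and the test values are assumptions.

```python
from math import sin, cos

def g(state, control, w):
    # State transition of the differential-drive robot, case r != l.
    x, y, theta = state
    l, r = control
    alpha = (r - l) / w
    R = l / alpha                      # R = l*w / (r - l)
    theta_p = theta + alpha            # theta prime
    return (x + (R + w / 2.0) * (sin(theta_p) - sin(theta)),
            y + (R + w / 2.0) * (-cos(theta_p) + cos(theta)),
            theta_p)

def dg_dcontrol(state, control, w):
    # Analytic 3x2 Jacobian V = dg / d(l, r), case r != l.
    x, y, theta = state
    l, r = control
    alpha = (r - l) / w
    tp = theta + alpha
    # Partial derivatives with respect to l (first column).
    dg1_dl = w * r / (r - l)**2 * (sin(tp) - sin(theta)) \
             - (r + l) / (2.0 * (r - l)) * cos(tp)
    dg2_dl = w * r / (r - l)**2 * (-cos(tp) + cos(theta)) \
             - (r + l) / (2.0 * (r - l)) * sin(tp)
    dg3_dl = -1.0 / w
    # Partial derivatives with respect to r (second column):
    # "a minus here and a plus here", as in the lecture.
    dg1_dr = -w * l / (r - l)**2 * (sin(tp) - sin(theta)) \
             + (r + l) / (2.0 * (r - l)) * cos(tp)
    dg2_dr = -w * l / (r - l)**2 * (-cos(tp) + cos(theta)) \
             + (r + l) / (2.0 * (r - l)) * sin(tp)
    dg3_dr = 1.0 / w
    return [[dg1_dl, dg1_dr],
            [dg2_dl, dg2_dr],
            [dg3_dl, dg3_dr]]
```

A numeric differentiation over l and r, as the lecture's main function does, should agree with this matrix to within the step size of the finite differences.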
SLAM_Lectures
SLAM_D_06.txt
And so the answer to this is: our covariance matrix is sigma x squared times (1 2; 2 4), and we found out that this is sigma x squared times the matrix made from our eigenvectors, which need to be normalized, times the diagonal matrix with the two eigenvalues, times V transposed. And we found out that lambda 1 was 5 and lambda 2 was 0, so combining those two we get 5 sigma x squared and 0 on the diagonal between V and V transposed. So in our turned coordinate system we have a variance of 5 sigma x squared along one axis and 0 along the other axis, and so indeed this is the square root of 5 sigma x squared, which is the square root of 5 times sigma x.

So now let's have a look at the probability density function again. In the 1D case we had the following: p of x was 1 divided by the square root of 2 pi times sigma, times e raised to the power of minus 1/2 times (x minus mu) squared divided by sigma squared. And I can just rewrite this as (2 pi sigma squared) raised to the power of minus 1/2, so I squared the sigma, but then I take the root again, so it's exactly the same, since sigma is non-negative, times e raised to the power of minus 1/2, where I'll just write (x minus mu) times 1 divided by sigma squared, which is sigma raised to the power of minus 2, times (x minus mu). Now in the 1D case it doesn't actually make sense to separate the (x minus mu) squared into one multiplier before and one multiplier after the sigma raised to the power of minus 2, but now you can see how this generalizes to n dimensions: my Sigma will now be a matrix, so I'll take 2 pi times Sigma, and it contains already the squared elements, so I don't have to square that, and I will have to take the determinant of this, raised to the power of minus 1/2, times e raised to the power of minus 1/2 times (x minus mu), and now the x and mu are vectors, times the covariance matrix, which already contains the squared values, so it's raised to the power of minus 1, times (x minus mu), and since these are now vectors, I need here a transpose to get this quadratic form. So this is the multi-dimensional Gaussian or normal distribution, where x is an n-dimensional vector, mu is also an n-dimensional vector, and the covariance matrix is a matrix of n rows and n columns.

Now, after we understood multi-dimensional distributions, let's have a look at our Kalman filter again. Just so you remember, a dynamic Bayes network looked like that: we were in the state x t minus 1, we made a transition to our new state x t using the control u t, and being there we measured and obtained the measurement z t. And so in the one-dimensional case we had the following prediction: we assumed a linear prediction model where the old state is multiplied by a factor, and then we added some control. Now, conventionally, instead of just taking u t, we will introduce a factor b t which multiplies the control and thereby converts from control space to state space. So this was our functional equation, and we would then also have to add some noise, which is also called the system noise. And using that formula we obtained that our predicted state is just the multiplication of our old state by a, plus our control, the only difference being that we now multiply our control by this additional factor b t. And for the variance we obtained that it is the old variance multiplied by our linear factor, which needs to be squared, plus our system noise. And so this was our way to represent our predicted belief using the first and second moment of a Gaussian distribution. And in the second step we had a correction: we measured a value z t, which should be equal to c t times x t plus some noise, and this is the measurement noise, and we were required to put together our new measurement and our predicted belief to get our new posterior belief. And we did so by defining a Kalman gain, and using that we computed our new mu, which consisted of the predicted mu plus the Kalman gain times the innovation, which is our measurement minus the predicted measurement, and we also computed our new variance, which was (1 minus Kalman gain times c) times the predicted variance, and those two values were our new belief at time t.

Now let us see how this looks in n dimensions. The prediction is exactly the same formula, except that we now use matrices instead of scalars, and for our predicted mu we also get a vector, which is computed in exactly the same manner as in the 1D case, with a and b replaced by matrices and u replaced by a vector. And for the new covariance matrix we obtain this formula, where the multiplication with a squared is replaced by a left multiplication with A and a right multiplication with A transposed, and the system noise sigma squared is replaced by the covariance matrix of the state transition. And for the correction we have again exactly the same formulas, with the scalar c replaced by a matrix and the scalar x t replaced by a vector x t, and we obtain the following Kalman gain, which seems to be more complicated than in the 1D case. But if you have a closer look you see the following: in the numerator we have our predicted Sigma times C transposed, and this is exactly the same as here in the numerator; and in the denominator we have this term, the C times Sigma times C transposed, which is the c squared sigma squared here, plus the measurement noise covariance matrix, which is the same as here, plus the measurement variance. So it's the same formula, adapted for the n-dimensional case. And then we get our new mu; surprisingly this is exactly the same formula, only that mu is now a vector, K is now a matrix, and C is a matrix as well. And for our new covariance we get the identity matrix minus K times C, times the predicted covariance. So as you can see, the formulas for the n-dimensional case are generalizations of the one-dimensional case, and although the derivation of those formulas is a bit more complicated, you can clearly see the symmetries between the left and right side. And so again, if you're interested in the derivation of those formulas, this is explained in detail in the book by Thrun, Burgard and Fox, Probabilistic Robotics.

Now assume that our system state, which is a vector, consists of our x position, our y position and our heading theta, so this is a 3 x 1 matrix, or a column vector. Then assume our control consists of our left and right motor readings, so this will be a 2 x 1 matrix. Now let's just do the following quiz: I want to know the dimension of all the matrices in the formulas. So what is the dimension of A? Is it 2 times 2, 2 times 3, 3 times 2, or 3 times 3?
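The dimension bookkeeping asked for in the quiz can be checked mechanically. This is a minimal sketch of one n-dimensional prediction and correction step, assuming numpy is available; all the concrete matrix values are placeholders chosen for illustration, not values from the lecture.

```python
import numpy as np

# State (x, y, theta) is 3x1; control (l, r) is 2x1.
A = np.eye(3)                         # state transition factor: 3x3
B = np.zeros((3, 2))                  # converts control space to state space: 3x2
R = 0.1 * np.eye(3)                   # system noise covariance: 3x3
x = np.zeros((3, 1))
u = np.zeros((2, 1))
Sigma = np.eye(3)

# Prediction: same formulas as in 1D, with matrices.
x_pred = A @ x + B @ u                          # 3x1
Sigma_pred = A @ Sigma @ A.T + R                # 3x3

# Correction with a 2-dimensional measurement z = C x + noise.
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                 # 2x3
Q = 0.5 * np.eye(2)                             # measurement noise covariance: 2x2
z = np.zeros((2, 1))

K = Sigma_pred @ C.T @ np.linalg.inv(C @ Sigma_pred @ C.T + Q)   # Kalman gain: 3x2
x_new = x_pred + K @ (z - C @ x_pred)           # 3x1
Sigma_new = (np.eye(3) - K @ C) @ Sigma_pred    # 3x3
```

Tracing the shapes answers the quiz: A multiplies a 3x1 state and yields a 3x1 state, so A must be 3 times 3 (and B, mapping the 2x1 control into the state, is 3x2).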
SLAM_Lectures
SLAM_C_10.txt
Now, congratulations if you managed to program this correctly; if you did so, you should see the following, and this is a really amazing result. We started with our discrete distribution, having to compute all those convolutions and multiplications value by value, with all those steps, and now we also implemented this using our Kalman filter, and this gives us the solid lines. And we see the Kalman filter gives us exactly the same solution, only it is even better, because we don't have to compute very much: we only have to track the mu and sigma values of the distributions, and in the end we get a closed-form description of our distributions, so we get a mathematically exact result. And so you see this is clearly a huge advantage of the Kalman filter: there is not very much to compute, and still we get a mathematically exact solution in terms of a closed-form description of our density.

And congratulations if you made it that far; this time we had a really, really amazing journey. When we started, we had no clue how to represent the uncertainty in the position or the measurement of our robot, and so we started to assign probabilities, and we found out that the best way to do so would be distributions. So we said our robot is now not here anymore, but rather it is here, here or here, with different probabilities. And then we found out that, using those distributions, we not only want to model the position, but we also want to know: if we are somewhere and we move, what happens to the distribution? And we found out the distribution gets more complicated, and what we saw was that for the movement we need to convolve our distribution, whereas for the measurement we need to multiply. Now, putting together those two steps was called a Bayes filter, and actually, since we implemented all this using discrete distributions, we already learned what a histogram filter is, namely the discrete version of this Bayes filter. And then finally we went from distributions, which are discrete, to densities, which are continuous, here for the special case of a normal distribution or Gaussian distribution, and implementing the Bayes filter using those densities, especially those normal distributions, has led us to the Kalman filter. So again congratulations, you really learned a lot, and I hope to see you in the next class.
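The two steps recapped above, convolution for the movement and multiplication for the measurement, collapse in the Gaussian case to a few arithmetic operations on mu and sigma. A minimal 1D sketch, assuming the notation of the lecture (prediction x' = a x + u with system noise variance r2, measurement z = c x with measurement variance q2); the function name is an illustration, not the course's code:

```python
def kalman_1d(mu, sigma2, a, u, r2, z, c, q2):
    """One predict + correct step of a 1D Kalman filter.

    Prediction corresponds to convolving two Gaussians: means and
    variances add (after scaling by a). Correction corresponds to
    multiplying two Gaussians, written in Kalman-gain form.
    """
    # Prediction (movement): closed form of the convolution.
    mu_pred = a * mu + u
    sigma2_pred = a * a * sigma2 + r2
    # Correction (measurement): closed form of the multiplication.
    K = sigma2_pred * c / (c * c * sigma2_pred + q2)   # Kalman gain
    mu_new = mu_pred + K * (z - c * mu_pred)           # innovation update
    sigma2_new = (1.0 - K * c) * sigma2_pred
    return mu_new, sigma2_new
```

For example, starting at mu = 0 with variance 1, moving by u = 1 with noise variance 1, and then measuring z = 2 with variance 2, gives a posterior mean halfway between prediction and measurement and a variance smaller than the predicted one.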
SLAM_Lectures
SLAM_G_02.txt
Now there's something interesting about data association, or landmark correspondence. The basic situation is: a robot moves along, it takes some measurements, and based on these it initializes some landmark positions with some uncertainty; then later on it measures the same landmarks and associates the new measurements with the previously measured landmarks. Now this association is a discrete decision. For example, if there were a second landmark here, it would not be so clear whether this measurement belongs to this landmark or to that landmark. And all those discrete decisions are actually also part of our posterior, so we'll have to write: the probability for our states and the map, given all the measurements, all the controls and all the correspondences, is equal to the probability of the state given all that, times the product of the probabilities for all the map features. And so you see, here is the correspondence variable, and here and here it is as well. Now the interesting thing is that this path of robot states is represented by one particle, and so the decisions on the data association are also made on a per-particle basis, so each particle maintains its own data associations. This is very different from the previous case, where our extended Kalman filter SLAM represented the posterior of the online SLAM by a multivariate Gaussian distribution, however using only a single sequence of data associations. So the difference between the extended Kalman filter SLAM and our particle filter SLAM with respect to the data associations is: the extended Kalman filter SLAM represents only one particular sequence of data associations, whereas our particle filter SLAM, or FastSLAM, maintains the posterior over multiple data associations, so each particle has its own sequence of data associations, which makes this type of filtering much more robust.

Now let me finally give you a remark before we start implementing all this. Sometimes I say that FastSLAM solves the full SLAM problem, and this is because I have a number of particles representing one part of the distribution, and they contain the full path as well as all the landmarks, so this here is the full path. On the other hand, I sometimes say that we use this as a filter, which means I talk about the online SLAM, and in fact this also solves the online SLAM problem, because although this contains the current pose of the robot as well as all previous poses, I don't have to store them: if I'm not interested in all previous poses, I may just keep the last pose of the robot in my particle, while keeping all the rest exactly the same. So again, FastSLAM solves the full SLAM problem as well as the online SLAM problem, and we will use it as a filter.

Now let's program all this. I prepared the slam 10 a prediction file, and this will provide us with an overview of the program that we shall develop. There are two classes here. The first is the class Particle, and this class contains a method, namely g, which was previously located in our classes but which is exactly the same: the method g computes the state transition given the old state and the control input. And this method is wrapped in this move function, which now is a member function of the particle, and which modifies the pose of the particle given the left/right control input and the width of the robot. So this is our particle; so far there are no routines for measurement and correction, it is all just the movement, or prediction, step. Now the second class is our FastSLAM class, which is also pretty short up to now. It consists of a constructor, which assigns all the particles and copies some constants to the class variables, and the second function is the prediction function, and we also used that function earlier when we did the particle filter: it takes left and right from the control, it computes a standard deviation for left and right, and then for every particle of the filter it sets l and r to random values based on a Gaussian distribution which is centered at the control and has the appropriate standard deviation, and then it just calls the movement function of the particle. Finally, in the main function, we set all those constants, we generate an initial set of 25 particles, which are given an arbitrary start state and which are duplicated here using the copy.copy function; then we set up our filter using the particles we just generated and those constants, we read the control data from our motor file, and then for every control data we do a prediction step, so that is the interesting part of the loop, and then we output the particles, a mean state computed from all particles, and the error ellipse. Now this function get_mean and also this function are imported from the slam g library, where I moved some of the helper functions for better readability of the main code. So that is all there is to do; now let's run this.

Now after you run this, it will produce the fast slam prediction text file, so load this and you will see the following: in the beginning all particles are the same, then they start to diverge, and so we don't get a reasonable result here. So this result is not very impressive, but it was to be expected, because we don't have a correction step yet, and in fact this result is the same as what we obtained earlier in our unit about the particle filter.

So now let's have a look at the correction step. Our correction step will be a member function of the FastSLAM class, so in the class FastSLAM we will have the function correct, and this will take our measured cylinders. Now, exactly as in our previous particle filter, the correction step will have two substeps: the first is computing all weights, and we'll call this function update_and_compute_weights, and it will take the measurements; the second substep will do the resampling. So this first function will have a loop over all particles and return one weight for each particle in a list of weights, so the list of weights will have exactly the same number of entries as there are particles in our particle filter. And then in the second step there will be a resampling, and we won't worry about this, because it will be exactly the same as the resampling step which we programmed earlier in our particle filter.

Now let's have a look at this function in more detail. This function computes all the weights, but it also updates all the particles, and this is why it's called update_and_compute_weights and not just compute_weights. It does the following: it has a loop over all particles, say for particle p in particles, and in this loop it does another loop over all measurements, say for measurement m in cylinders, and here we just call the update function of the particle, so p, the particle of the outer loop, dot update_particle, using the measurement m. So this is essentially just a loop over all particles which presents every measurement of the current step to every particle. Now we'll also have to compute the weights, and so this update function will return a weight, and since we have to compute the weight of the particle, and not only of the single measurement, we will have one overall weight, which we'll initialize with 1.0 and multiply by the weight for each single measurement. Then, after the loop over all measurements, we'll append the result to a list, which we'll initialize with the empty list before we start, and we will just return this list of weights.

So we started with the FastSLAM correct function, and we have seen this means we'll have to implement the update_and_compute_weights function, which is a member of the FastSLAM class. This function just does a loop over all particles and all measurements and calls this update_particle function, which now is a member of the Particle class, and this will be the function we will develop in the next steps. So let's have a look at it. In the class Particle we'll have the update_particle member function, which takes a measurement, which is one range and one bearing value corresponding to the measurement of a single cylinder. Now remember we are now in the class Particle, so each particle has the robot state, that is x, y and theta, and it also has a list of estimated landmark positions, with one entry for every landmark which has been observed by the robot so far, and for which the robot has decided that it is indeed a new landmark and not a landmark that has been observed earlier already. And we do not only have the estimated position for every landmark, but also the covariances, because, as you remember, we run one extended Kalman filter for each landmark in the particle's list of landmarks. And remember, this list is not global: this list of landmark positions and covariances is individual for each particle, so our error ellipses may be like that. So now we get this measurement, that is, the robot tells us that it has detected a landmark at a certain range and bearing angle, and the first thing we will do is compute the likelihood of correspondence for every existing landmark. This means we will compute the likelihood that this measurement is due to observing landmark m1, and the likelihood that this measurement is due to observing landmark m2, and as you see here, obviously it is not very likely that the measurement belongs to either of those two landmarks. So depending on this result, we either, b, initialize a new landmark, which means we take this position, set up a new Kalman filter, initialize it with that position, compute an appropriate covariance matrix and initialize that too; or, if the likelihood that the measurement belongs to a certain landmark is above a threshold, we update the landmark. So say if the measurement is like that, we will decide that it belongs to landmark one, and so we'll update this, meaning we'll update the extended Kalman filter of the corresponding landmark: we will update the position, which will move a little bit in that direction, and the covariance matrix, which will get smaller. So these are the three important steps: first
of all, compute the likelihood that the given measurement is due to the observation of any of the existing landmarks; second, if the maximum of those likelihoods is below a threshold, initialize a new landmark in the current particle, which means setting up a new Kalman filter; and third, if the maximum likelihood is above a threshold, pick the landmark which belongs to the maximum value and update its Kalman filter.

Now let's first have a look at this step, so we'll talk about step a, the computation of the likelihood. Our robot is somewhere, and it determines, using its scanner, that there seems to be a landmark at a certain range and bearing angle. As we know, the robot itself is represented by a particle with a certain position and heading theta, and also a list of landmarks that the robot has encountered so far, with positions and covariance matrices. So if this is a landmark which the robot has observed earlier, then the robot will have this landmark position x k, y k somewhere in its list of landmarks, and it will also have the covariance matrix corresponding to that landmark. And now we want to compute the likelihood that this measurement of range and bearing angle actually belongs to this landmark. So we'll first compute the expected measurement, or predicted measurement, which, as we see here, would be this: if the robot is here, given by the particle, and our landmark is here, then we would expect a bearing angle like this and a range like that. So we will say our expected measurement c hat is a function h of our current state and our landmark, and fortunately we programmed all of that earlier, so this is our measurement function. Now we will also need the covariance matrix for this measurement, and for that we'll need the Jacobian of h, so we'll need capital H, which is the derivative of h with respect to the landmark, and we need to take this at the current pose of the robot and the landmark position, so this will be a 2 x 2 matrix, and we've computed that earlier as part of the H matrix which we used in our extended Kalman filter SLAM, so we will just use that.

Now here comes the interesting part: the covariance of the landmark measurement is this Jacobian, times the covariance of the landmark here, times H transposed, plus the covariance of a measurement, which we will denote as Q t. Now the t stands for the time dependence of this covariance, but actually we will use a constant covariance matrix, independent of time. We encountered that matrix already: it contains our variance in range and our variance in the bearing angle. And so what is this? It's easy to see that this here is a variance propagation: this is the variance of the landmark k, which is in the plane, in x, y, whereas this is the measurement variance due to the landmark variance, and this is the variance due to the actual measurement. So what are we doing here? We are interested in obtaining the measurement c hat and its covariance, that is, in the uncertainty, which in our case is expressed as a covariance matrix. Now the uncertainty of this measurement is due to the uncertainty of the landmark, which is this, which then translates into uncertainty in a distance and bearing measurement, plus the uncertainty of the actual measurement of the sensor that we use, so we add those two up and obtain the desired covariance matrix.

Now we'll compute our delta c, which is our measured c minus our expected c hat, which we computed here, and finally we'll compute the likelihood, which is obtained from the probability density function of the Gaussian distribution: it's 1 divided by 2 pi times the square root of the determinant of the matrix Q L, times e raised to the power of minus 1/2 times delta c transposed, times the inverse of Q L, times delta c, and this gives us the final likelihood. So if this is our range and our bearing angle, then c hat is our expected measurement, and using c hat as the center and Q L as the covariance, we define this Gaussian distribution, so this here is the one-sigma error ellipse given by the covariance Q L. Now we want to know the probability of our actual measurement of the range r and the bearing angle alpha, so we'll grab this value here, and this is the likelihood that we will return. So this is our measurement c, and this is delta c.

So now all we have to do is implement those formulas, and as I mentioned, we already programmed h, the measurement function, and we also programmed capital H, the Jacobian of the measurement function, so essentially we'll have to program this part here and then put everything together. So I implemented the slam 10 b correspondence likelihood file, which serves as a framework for what you'll have to implement. Here's our particle class again, where each particle now also has a list of positions and covariances for the landmarks, so these two variables will hold all those extended Kalman filters for our landmarks. Then here is our measurement function h, which is just copied from our earlier implementation, and here is the Jacobian with respect to the landmark, and this is a subpart of what we implemented earlier: our extended Kalman filter SLAM implemented the Jacobian with respect to the state and the landmark, whereas we will now only use the last part, namely the derivative with respect to the landmark. Now here's the first function you'll have to implement: the function which computes the expected measurement for a landmark; it is given the number of the landmark and an additional constant, and it should return the expected measurement, and this is really, really easy to do, because you may use the function h which we just defined. Now the second function you'll have to implement is this, and this is a combined function: it returns the Jacobian H and the covariance matrix Q L, so it returns a tuple of those two values; using the formulas we just developed, compute H, compute Q L, and return the tuple of those two values. And finally you'll have to implement the likelihood-of-correspondence function, where you compute the likelihood that a certain measurement corresponds to an existing landmark, which is given by its number, and as usual, here is an extensive list of hints on how to do this. And here is the final function, but you won't have to implement this; it just does a loop over all landmarks of this particle, so given a measurement, it returns a list of likelihoods, one for each landmark, each value representing the likelihood that the measurement corresponds to the corresponding landmark.

Now the main part of the program is not a particle filter anymore; it consists of routines to set up some landmarks and test the output of your code. Here we define one single particle, which is placed at minus scanner displacement for x and zero for y, with a heading of zero, so this means our robot will be here, and this is the scanner displacement, so our scanner center will be in the origin of the coordinate system. And then we add some landmarks, which is done here: the first landmark is at 500, minus 500, with a standard deviation of 100 in both axes, so the error ellipse is a circle; the second landmark is at 1,000, and it has the same error ellipse; and the third landmark is at 2,000, and it has a different error ellipse, which looks somehow like this, where this is 45 degrees. And so the main code computes the expected measurements for each of those landmarks, and in the second part it sets up some measurements, where the first measurement is close to the first landmark, or landmark number zero, whereas the second measurement is at a distance of 1,500 with a bearing angle of zero, so it is exactly between those two landmarks. And if your implementation is correct, you should see the following: for landmark number zero we expect a range of 707, which is 500 times the square root of 2, and a bearing angle of minus 45 degrees, which seems to be correct, and the covariance of our measurement is 50,000 in distance and 8.85 times 10 to the power of minus 2 in the bearing angle, so I get a certain distance error and a certain bearing error. Now if we look at the second landmark, we see that the distance error is exactly the same, however the error in the bearing angle is now smaller, because the point is further away, so the point's uncertainty translates into a smaller bearing angle error. And in the third case, the interesting part is here, which means that we'll have a correlation between the range and the angle; this is quite clear, because looking at this error ellipse, if our bearing angle gets larger, our distance gets smaller.

Now in the second part we give a measurement which is close to landmark zero, so this was this measurement here, and consequently we get a likelihood of 0.02 that this measurement belongs to landmark zero, and two other likelihoods for the other landmarks which are much smaller, so this is times 10 to the power of minus 5 and this is times 10 to the power of minus 1, so it is clear that landmark zero has the largest likelihood, in fact by two orders of magnitude. Now it's somehow more interesting for this measurement, which is geometrically exactly between this landmark and that landmark: is it more likely to belong to this landmark or to that landmark? As we see here, first of all the likelihood for the first landmark is much smaller, but then, considering the likelihoods for those other two landmarks, they are substantially different: it's 0.2 for this landmark and 0.04 for this landmark, so it's twice as likely that this measurement belongs to this landmark. Why is this the case, when the measurement is geometrically exactly between both landmarks? Well, of course, because we have set the variance of this landmark larger than the variance of that landmark, and so it is less probable for this measurement to belong to the landmark with the small variance. So now please program the computation of the correspondence likelihoods, and after you have implemented this, you may check your result against this outcome, which should appear if your implementation is correct.
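The likelihood formula just described, 1 divided by 2 pi times the square root of the determinant of Q L, times e raised to the power of minus 1/2 times delta c transposed times the inverse of Q L times delta c, can be sketched for the 2D measurement case as follows. This is an illustrative sketch only, working directly on 2x2 lists rather than on the course's framework; the function name is an assumption.

```python
from math import exp, pi, sqrt

def correspondence_likelihood(delta_c, Ql):
    """Evaluate the 2D Gaussian density at the innovation delta_c.

    delta_c is (measured - expected) measurement, a (d_range, d_bearing)
    pair; Ql is the 2x2 measurement covariance as a list of lists.
    """
    (a, b), (c, d) = Ql
    det = a * d - b * c
    # Inverse of a 2x2 matrix.
    inv = [[ d / det, -b / det],
           [-c / det,  a / det]]
    dx, dy = delta_c
    # Quadratic form: delta_c^T Ql^-1 delta_c.
    q = dx * (inv[0][0] * dx + inv[0][1] * dy) \
      + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return 1.0 / (2.0 * pi * sqrt(det)) * exp(-0.5 * q)
```

At a perfect match (delta c is zero) with unit covariance this returns the peak value 1 over 2 pi, and the likelihood drops monotonically as the measurement moves away from the expected measurement, which is exactly the "grab this value" picture from the lecture.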
SLAM_Lectures
SLAM_D_03.txt
and so, interestingly, different solutions are correct. If I tell you that X is normally distributed with mu and sigma squared, and Y is normally distributed with 2 mu and sigma squared, then first of all you know that c, which is also normally distributed, has a mu vector which is (mu, 2 mu), so one thing you know for sure is that the expectation of c is at (mu, 2 mu). Now, since I gave you the distributions of X and Y, you know the two marginal distributions of c, and so you know the two diagonal elements of the covariance matrix, which are sigma squared and sigma squared, but you don't know anything about the two other values. And so it is correct that this may be a circle, in which case it would be like that; however, it could also be the case that the off-diagonal elements are sigma squared over two, and then this would be an error ellipse like that, where this axis is approximately at 1.2 sigma and this one is at 0.7 sigma. However, the other solution was not correct, because it would imply a matrix with elements on the main diagonal and zero elsewhere, and since it is not a circle, sigma 1 would have to be not equal to sigma 2, which is not the case here, as both diagonal elements are sigma squared. So solution A is correct, and C is correct too, but not B. Now let me ask you a second thing. Say you have to build a mechanism that moves a certain point with high accuracy up and down, and so you buy a linear motor which is fixed to a wall, and this point moves up and down here. But then you realize that, while the motor is very accurate, it is constrained in its maximum movement, and you can't get hold of a motor with a larger movement. And so you devise the following mechanical construction: you build a lever arm, which you fix here, also to a wall, so it can turn around this axis here, and then you add another joint here, which will then actuate your mechanics here. So if this moves by a certain amount, then this will move by a larger amount, and by construction the distance between those two joints, say, is l, and the distance between those two joints is also l. You look into the data sheet of the motor, which is supplied by the manufacturer, and it tells you that this actuator, this motor, has a noise with zero expectation, so there is no systematic effect, but with some position noise, which is given by the variance sigma x squared, so here we do have X. Now, at this joint here you're actually interested in the movement which affects the rest of your mechanical setup; you have another movement here, Y, that's also normally distributed, and so you expect an expectation of zero and a variance of sigma y squared. So we have two variables here, and they are now functionally connected by this mechanical device, so that every movement in X results in a movement in Y. So how does the error ellipse of the joint distribution of (X, Y) look? We also set up c, which is (X, Y), and we're interested in the covariance matrix of c. First, let's have a look at how the error ellipse looks. It is centered at zero, but does it look like this, a perfect circle; or does it look like this, elongated in the y direction; or does it look like that; or, finally, is it degenerate, like this ellipse, which is basically flat in one direction and just extended in the other direction? So what is the solution, A, B, C or D?
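For any linear relation c = A x between random variables, the covariance transforms as Sigma_c = A Sigma A^T, and that is the tool you need for this quiz. A minimal sketch with plain lists (the 1:2 ratio and variance value below are assumptions for illustration only, not the quiz answer):

```python
def transformed_covariance(A, Sigma):
    # Sigma_c = A * Sigma * A^T for a linear map c = A x (plain lists, no numpy).
    n, m = len(A), len(Sigma)
    AS = [[sum(A[i][k] * Sigma[k][j] for k in range(m)) for j in range(m)]
          for i in range(n)]
    return [[sum(AS[i][k] * A[j][k] for k in range(m)) for j in range(n)]
            for i in range(n)]

# Example: c = (x, 2x), i.e. the second component is a deterministic
# function of the first (assumed lever ratio of 1:2).
A = [[1.0], [2.0]]
Sigma_x = [[0.25]]                      # variance of the motor position x
Sigma_c = transformed_covariance(A, Sigma_x)
```

Note what happens to the determinant of Sigma_c when the two components are functionally dependent: the resulting "ellipse" collapses.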
SLAM_Lectures
SLAM_E_01.txt
now welcome to Unit E, which will be about the particle filter. Let's first have a look at our Bayes filter again, which takes our old belief, the control and our measurement, and from that first computes a prediction, which I will now give in a continuous form, and then the correction, which is our predicted belief multiplied by the probability of our measurement under the condition that we're in the state x_t, and then we return our new belief. Now, we have seen two different ways of implementing this. The first one was using discrete approximations of distributions, so instead of having a continuous distribution we had something like that, and the convolution was computed using a sum instead of an integral. So if this is the prediction and this is the correction, those two steps were implemented in exactly the same way, by just replacing the integral by a sum in the first equation. And then we noted that if we start somewhere and we move, then by the convolution our distribution will get wider, until in our case it approximated a bell-shaped distribution. Subdividing space in that way leads to a histogram filter, whereas our other representation was parametric: we said our belief is normally distributed, so that the entire distribution can be represented by the first and second moments, mu and sigma squared, and this led to the Kalman filter equations, which were of the form mu_t and Sigma_t with an overline, the covariance matrix, which computed the prediction step, and then a Kalman gain, our new mu_t, now without an overline, and covariance Sigma_t, and those three formulas were the correction step. Now, if we compare those two, we see that here any distribution is possible, especially distributions having multiple modes, meaning multiple peaks; on the other hand, I have to define a discrete raster, and we already discussed that I might want to make the cells small for a better accuracy, but then in turn I will have so many of them that computation becomes inefficient, so there's a tradeoff between the approximation quality and the cost. On the other hand, my Kalman filter is very efficient; it only needs to deal with the first and second moments. Also, if our distributions are indeed normal distributions, then the representation is exact. But, on the other hand, we assumed a normal distribution, so we do have only one peak. Now let's have a look at this unimodality. When we introduced the robot's position as a distribution, we said we do not know exactly where the robot is; instead of pretending it is at a certain position, we decided to represent its position by this distribution. And so, after a move, the robot will have a predicted position like that, and then, if we integrate a measurement, this will lead to a new posterior like that. However, there may be situations where the robot might be here or here, and probably you thought about that earlier, when we encountered the case where we started in this corner and moved down here, and we updated our position using a matching of the walls of the arena to our laser scan data, and that worked very well. However, the arena is a square, and if we had placed our robot here, nothing would be different in terms of the observations that our lidar sees, and so in fact it could also be placed here or here. So if we don't know our start position, we could model this as a distribution having four peaks, meaning a 2D version of this, which looks like that. Now say this is my arena and I don't have any initial information as to where my robot is. What do you think does this mean for the Bayes filter? Does it mean, A, it can't handle this case; or, B, it can handle the case and the distribution somehow needs to be a very, very flat peak; or, C, it can handle the case, but since I don't know where I am, the distribution is just a constant, so it is flat? What do you think, A, B or C?
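The discrete prediction step mentioned above, the integral replaced by a sum over grid cells, can be sketched as follows; the grid size and movement distribution are assumptions chosen purely for illustration:

```python
def histogram_predict(belief, move_dist):
    # Discrete Bayes-filter prediction: convolve the belief over grid
    # cells with the distribution of the movement outcome.
    # move_dist is a list of (cell offset, probability) pairs.
    n = len(belief)
    predicted = [0.0] * n
    for i, p in enumerate(belief):
        for offset, q in move_dist:
            j = i + offset
            if 0 <= j < n:
                predicted[j] += p * q
    return predicted

# Start certain in cell 2, then move one cell to the right
# with +-1 cell of noise: the belief widens.
belief = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
move = [(0, 0.25), (1, 0.5), (2, 0.25)]
belief = histogram_predict(belief, move)
```

Repeating the prediction makes the distribution ever wider, which is exactly the bell-shaped spreading described above.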
SLAM_Lectures
SLAM_G_01.txt
now welcome to unit G of our SLAM lecture, and this will be about particle filter SLAM; the particular algorithm that we will talk about here is also known as FastSLAM. To put that into perspective, let's have a look at what we did so far. We started with the following problem: a robot was placed in an arena with known landmarks, and it measured the bearing angle and the distance to those landmarks, and as it moved, we were interested in obtaining its position and orientation in terms of a belief, meaning we knew that position and orientation won't be error free. One way of modeling this belief was in terms of a Gaussian distribution, which we used in our extended Kalman filter. In the extended Kalman filter we represented the robot's pose by the first moment of the distribution, as a vector (x, y, theta), and the uncertainty in terms of the second moments of the distribution, that is, the variances in x, y and theta and the corresponding covariances, and we also visualized the variance in theta by a disc segment and the x-y covariance matrix by an error ellipse. So this was our three-dimensional state. This means, if this is our 3D space of possible states (x, y, theta), then for any given x, y and theta I can obtain the belief by computing the function value at that point, and representing our belief as a Gaussian means that we define the first moment of the Gaussian distribution, which will be the point in our 3D space where the peak will be, and then the second moments, which define the variances and covariances. So overall we can represent our distribution as an error ellipsoid in 3D space, and so far we depicted the marginal distribution of the two-dimensional subspace in x-y as an error ellipse and the marginal distribution in theta as an interval, which we visualized using this plus-minus one sigma disc segment. A one-sigma error ellipsoid means that, with a 68% probability, the true position and orientation of the robot will be within this ellipsoid. Again, all this was for the extended Kalman filter, where we represented our belief by a Gaussian distribution. Then, in a subsequent unit, we did the following: we replaced our representation by a set of particles, and we did so when we introduced the particle filter. So now we said, in our 3D space of possible states we use a set of hypothetical states, or particles, where each single particle is indeed three-dimensional, namely it is a hypothetical state of the robot, and this way we represented our belief by a set of particles, where a high density of particles means that there is a high probability that this is actually the robot's state. But using a particle filter, we were able to represent multimodal distributions: for example, the high density of particles here indicates that the robot's state is here; however, if we have a second group of particles, that would indicate a second possible state of the robot. And if you remember the experiments we did with our small robot in the arena, it started here, went there, made a left turn and followed the trajectory somehow like that, and here we typically had the case that our particles were here, but some of the particles took a stronger left turn and were somehow here, and shortly after that they fortunately died out. But this is exactly the situation that we see here, where we have one group of particles with a certain x-y position and heading and another group of particles with a different x-y position and a different heading. Using either the Kalman filter or the particle filter, we estimated the robot's pose, so the position and orientation, which is a three-dimensional vector, whereas we did not estimate the positions of the landmarks; the positions of the landmarks were considered to be fixed and given in advance. And this changed in the last lecture, where we had a look at extended Kalman filter SLAM, and the problem was modified as follows: our robot observes some landmarks which are not known in advance, and by observing them it inserts them into a map. But now, as the
robot's position is stochastic, and the measurements are stochastic as well, the landmark positions are now stochastic too. So when the robot moves on, it observes landmarks again, and since we observe them multiple times, the error ellipses get smaller, and ideally the robot's uncertainty in position and heading gets smaller too. Now everything is stochastic, the robot's position as well as the positions of all the landmarks, and so in extended Kalman filter SLAM we modeled this using our system state, which contains the pose of the robot but also the positions of all landmarks, and consequently our covariance matrix contained all the variances and covariances between the pose and all landmarks, so the covariance matrix was a (3 + 2N) x (3 + 2N) matrix. Now our belief looks like that: this is the space of all possible system states, and now, unfortunately, I can't draw this anymore, because the dimension is now 3 + 2N. This means, if I pick a point here, this gives me the belief that the robot is at position (x, y) with a heading of theta and all the landmarks are at positions (x1, y1), (x2, y2), and so on, until (xN, yN). And in extended Kalman filter SLAM we represented our belief by a multivariate Gaussian of that dimension, so this will be a high-dimensional ellipsoid, and unfortunately I can't draw anything else than this two-dimensional projection of a two-dimensional surface in 3D space, but just imagine this is the one-sigma surface, of dimension 3 + 2N - 1, in (3 + 2N)-dimensional space, and the probability of our system state being inside, again, is 68%. And although we can't visualize this, we did so for the marginal distributions: namely, imagine this is x and this is y, and this is the other 1 + 2N dimensions, and so the marginal distribution of this in the x-y plane is this, and we visualized it in our viewer application by drawing that error ellipse in x-y space. We also did so for the one-dimensional marginal distribution of theta, which we visualized as this disc segment, and we also did so for the marginal distributions of the landmarks, where we represented the two-dimensional marginal distributions of the landmarks by error ellipses around the landmarks. So, as you see, we were unable to visualize the distribution in (3 + 2N)-dimensional space, and so instead we drew the marginal distributions of the robot pose and the landmarks. So this was extended Kalman filter SLAM, and, as you see, when we moved from our extended Kalman filter, which estimated the pose of the robot given a map of known landmarks, to our extended Kalman filter SLAM with unknown landmarks, we just added this part to our system state, and so we not only estimated (x, y, theta) but also the positions of all the landmarks, and this, of course, has greatly increased the size of our system state. So if we want to move from extended Kalman filter SLAM to particle filter SLAM, we could proceed in exactly the same way: we represent our belief by a set of particles, each one being a hypothetical system state, so that each single particle contains our pose as well as the positions of all the landmarks, forming a (3 + 2N)-dimensional particle. So this would be straightforward: we've seen the extended Kalman filter and the particle filter, both operating in 3D, and we have seen extended Kalman filter SLAM operating in 3 + 2N dimensions, and so we could also set up our particle filter SLAM with particles which are (3 + 2N)-dimensional. Unfortunately, this does not work very well, because of the curse of dimensionality: particle filters scale exponentially with the number of dimensions, which means that, as the dimension of the space goes up, the number of particles that I need to represent the distribution in this high-dimensional space grows exponentially. And since this depends on the number of landmarks in the scene, meaning, for example, for 100 landmarks the overall dimension is already 203, this will go up very quickly, and this simple approach here is not feasible. Now, instead, we will use a
factorization of the posterior, which is given here without proof. The posterior of the full SLAM problem is the probability for the entire path of the robot, from 1 to T, and the map, given all our measurements and all our control inputs, and, interestingly, this can be factored into the probability for the entire path, given all the measurements and all the controls, times the product over all landmarks of the probability for the landmark location, given the entire path and all measurements. And this equation is not an approximation; it is exact. So what we see here is a conditional independence: if the path is known, it follows that the locations of the landmarks are independent. Since those probabilities are independent of each other, we will represent them using independent Gaussian distributions, so we will use one extended Kalman filter for each landmark. So this part is represented using independent extended Kalman filters, namely one per landmark, whereas this part is represented using particles: we represent the distribution of our robot's paths using a particle filter, and for each particle we represent the remaining part of the posterior by a set of Gaussian distributions, one distribution for each landmark. This approach is also called a Rao-Blackwellized particle filter, and again, this is not proven here; you can find the proof in the Probabilistic Robotics book by Thrun, Burgard and Fox. Now, even though we don't prove it formally, let me give you the following explanation. We had the following situation: a robot moved, so it was in different states, and while it moved, it measured some landmarks in its field of view. So from here it observed those two landmarks, from here it observed all three landmarks, from here it observed those two landmarks, and so on. Now this sequence of movements and observations leads to this dynamic Bayes network, where we have the sequence of states, the controls u1 to u3, and we perform some measurements which are related to our landmarks m1, m2 and m3. So m1 is seen from all three positions of the robot, m2 is seen from two positions, and m3 is seen from two positions as well. So this is the dynamic Bayes network that is generated by this situation. Now think about the probability for our map. The posterior of our full SLAM problem includes all the states of the robot and all the positions of the landmarks, so, for example, landmark number three is stochastic: its x and y positions are not fixed, but they are random variables following a certain distribution. Now this landmark is observed, for example, by the robot when it is at position x3; x3, on the other hand, is obtained from x2 by a stochastic movement of the robot, which follows another distribution; x2 is obtained by the same procedure from x1; and, on the other hand, when the robot is in state x1, it measures landmark number two. So this means I can go from here, via this measurement, to x3, and via the movements to x1 and to m1, and this means m3 and m1 are not independent, because they are related by those measurement equations, where m3 is measured, or m2 is measured as well, and by the state transition equations from state 1 to 2 and from 2 to 3. However, now imagine that those positions were fixed, so they are not stochastic, or you could think of them as if they were measured using an instrument with a superior accuracy, so that essentially those positions can be considered fixed. Now this means that this part here is fixed, but this also means that I can't go from m3 to m1 using this path anymore, because this path would lead over non-stochastic variables, which is not possible. Or, put in a different way, imagine you now have this instrument with superior accuracy, and so you place it here, you measure from here to here, and later on you also measure from here to here and from here to here. Certainly, all those measurements lead to some uncertainty here, but since the black positions of your surveying instrument are fixed, they are not influenced by any calculations going on here to determine the
position of the landmark. So, since they stay fixed, for example the position of this landmark, which is measured from here and here, is not influenced at all by the calculations going on here, and so indeed the outcome is the probability for the map given the path, and this is important: if all those positions are given, and of course the measurements are given as well, then this can be factored into the product we had earlier, of the probabilities of each individual map feature given the path and the measurements. So while we didn't prove it, this is some kind of explanation; we gave some indication of why it is true, namely because, if I condition the probability for the map on the path, so the path is known, those positions are fixed, and the probabilities for the map features become independent, and so I can compute the overall probability as a product of the probabilities for the individual map features. Let's have a look at this probability for the individual map features. Now, in this product of probabilities, where do we get this path from, on which we condition the probability for the map? And remember, only this conditional independence allows us to write our probability for the map as a product of probabilities of the individual landmarks. The trick is, as you've seen on an earlier slide, our full posterior is the product of those two terms: the probability of a path, given all measurements and all controls, times the probability of the map, conditioned on the path. Now, in our Rao-Blackwellized filter, this part is represented by a particle filter, so we will have particle 1, particle 2, and so on, until particle M, and each of those particles will have its own path, where we'll use the superscript to denote the individual particles. Now, this means the distribution of paths, given all those measurements and all those controls, is approximated by this set of particles. So, for one single particle, I now have this path, which is non-stochastic, because the distribution of paths is captured by the distribution of the particles, and so what remains for this particle is to express the probability of the map, which now, since I know the path, is conditioned on this known path, in which case the probabilities for the map features are independent. So here the remaining elements in the particle describe the distributions of my landmarks, where I use one distribution for each individual landmark, and in our case we will represent those distributions using Gaussian distributions. So we will have a mean and a covariance for our first landmark, and for the second landmark, and so on, until the Nth landmark, and again we will indicate that this is for the first particle, and the same holds for the second particle, and so on, until the Mth particle. So this is the first part: it uses a particle filter to represent the distribution of paths. And this is the second part: it uses individual extended Kalman filters, namely one for each landmark.
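A particle in this Rao-Blackwellized scheme can be sketched as a small data structure: one non-stochastic pose hypothesis plus one independent Gaussian (mean and 2x2 covariance) per landmark. The class and attribute names below are my own, not those of the course code:

```python
class Particle:
    """One FastSLAM particle: a pose hypothesis plus one independent
    2D Gaussian (mean, covariance) for every landmark."""
    def __init__(self, pose):
        self.pose = pose                  # hypothetical (x, y, theta)
        self.landmark_means = []          # one (x, y) mean per landmark
        self.landmark_covariances = []    # one 2x2 covariance per landmark

    def add_landmark(self, mean, covariance):
        self.landmark_means.append(mean)
        self.landmark_covariances.append(covariance)

    def number_of_landmarks(self):
        return len(self.landmark_means)

# A particle set of M = 25 particles; one of them has observed a landmark.
particles = [Particle((0.0, 0.0, 0.0)) for _ in range(25)]
particles[0].add_landmark((500.0, -500.0),
                          [[100.0 ** 2, 0.0], [0.0, 100.0 ** 2]])
```

Note that the per-landmark filters live inside each particle, so the number of landmarks may even differ between particles.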
SLAM_Lectures
SLAM_A_08.txt
there's one more thing I'll ask you to do, but fortunately it is really, really easy. So we now do have our robot, and the robot's coordinate system is like this, and, as I told you earlier, the beams go like that, and back here is an area where no measurements are returned. Now, what you just computed is ray indices that point to the cylinders in the scene, and the ranges. The rays start at index zero and end at index 659, so this may be index 400. I now want you to produce the x-y coordinates in the robot's coordinate system, so for every detected cylinder, given the ray index and range, I want you to produce the x and y in the robot's coordinate system. For converting the ray index to an angle, there's a function provided in the LegoLogfile class, which is called LegoLogfile.beam_index_to_angle, and this will convert an index, like 400 here, to an angle measured in radians. There's one modification that you'll have to apply to the range measurement. Remember, the robot measures some cylinder, so these are the rays, and from that you would expect a range measurement which looks like that: this should be this ray, and then these other rays are closer, so it should look somehow like this. But, as you probably noticed, in reality the outcome of our lidar is something like that, even with a peak in here, and in our algorithm we just computed the average of all those values, which then is here, so in that case we may get something like that. But, as it turns out, this is the correct value for the cylinder's distance; the cylinder is here, and we would have to apply an offset of the cylinder radius, which is 55 millimeters, but in addition we have to apply a correction here, and empirically I determined that the correction we have to apply is something like 90 millimeters in total. So whenever you convert one of your resulting ranges to x-y coordinates, remember that you get something like this range, and before converting it to x-y you will have to add 90 millimeters, and this is called the cylinder offset. And this is the final modification I'll ask you for. The program is find_cylinders_cartesian, and it consists of compute_derivative, which you know already; find_cylinders, which you'll just have to fill in from your previous solution; and this new function, which you'll have to program. So this function gets a list of all the cylinders that were detected in one scan, which might be 0 or more cylinders, and then, for every cylinder, it takes the beam index and range and from that has to compute an x-y coordinate; just for now, I have put (0, 0) here. Now let's have a look at the main function, which I modified a little bit. There's still the minimum valid distance and a depth jump, and there's now the cylinder offset, which is a global variable that you may use in the function you have to program. It still opens and reads the log file, but now, instead of drawing the result, it will produce a file called cylinders.txt, and for every single scan it will write one line to this file, which starts with "D C", meaning detected cylinders, and will then list all the Cartesian coordinates of the cylinders you've just found. So, in the loop, it calls the computation of the derivative, it calls find_cylinders, and then it calls your new function, compute_cartesian_coordinates.
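The conversion described above, ray index to angle, offset added to the range, then polar to Cartesian, can be sketched as follows. This is not the course's solution: the angular resolution and center index in my stand-in for beam_index_to_angle are assumptions, and in the actual assignment you should call LegoLogfile.beam_index_to_angle instead:

```python
from math import sin, cos

def beam_index_to_angle(i, angular_resolution=0.006135923, center_index=330):
    # Stand-in for LegoLogfile.beam_index_to_angle: ray index -> radians.
    # Resolution and center index are assumed values for this sketch.
    return (i - center_index) * angular_resolution

def compute_cartesian_coordinates(cylinders, cylinder_offset):
    # cylinders: list of (ray index, averaged range) pairs.
    # The offset pushes the measured range out to the cylinder center.
    result = []
    for index, distance in cylinders:
        angle = beam_index_to_angle(index)
        d = distance + cylinder_offset
        result.append((d * cos(angle), d * sin(angle)))
    return result

# A cylinder seen straight ahead at 1000 mm, with a 90 mm offset.
points = compute_cartesian_coordinates([(330, 1000.0)], 90.0)
```

The offset is added to the range before the polar-to-Cartesian conversion, which is exactly the "add 90 millimeters" step from the text.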
SLAM_Lectures
SLAM_A_01.txt
so welcome to Unit A of our SLAM lecture, and in this lecture we will get you started working with a real robot. If you have taken the Artificial Intelligence course of Sebastian Thrun, you know about the self-driving car, so here is one from the official Google blog, and what you can see here, well, that's just a standard car. And what is this here? Well, that's a laser scanner. Using that car would be great; however, we had a certain problem buying such a thing, so we built our own. As you can see here, we have two systems: one is the Google self-driving car, and this is our own robotic system. So what are the similarities? Well, on top you have, as I mentioned, this lidar, and we do have a lidar as well; instead of this bulky Velodyne, one million scan points per second, $70,000 US scanner, we have this small, lightweight Hokuyo scanner on top of our robot. We also have a different drive mechanism: here you have normal tires, whereas we have a caterpillar system with caterpillar tracks. And otherwise, well, that's just a car, so somewhere in there there's a drive train, driving probably the rear axle; we have two of those tracks here and two motors here, driving the left and the right track, and so the vehicle moves forward. Our device is actually much, much cheaper than the original Google self-driving car, while, on the other hand, there are some little drawbacks, one of them being that this car is actually a self-driving car, whereas this car is currently driven by Daniel. So here's Daniel: he designed and constructed this device, and he also built the control software, which is able to control the movements of this small robot, but which is also able to get the measurements of this laser scanner in real time. So let's have a closer look at the system. We do have the lidar up here; the lidar has an axis here and is shooting out its rays in that direction, horizontally, so parallel to the ground, and there's some area behind here which is not covered by the lidar, so that's a dead area where the laser scanner can't see anything. We do have our caterpillar tracks here; these are the driving motors here; and the other thing you can see, this is an extra battery, and this is the controller that drives the two motors. Now, the controller links wirelessly via Bluetooth, and the lidar data is also routed out via a serial interface and goes into this Bluetooth device here, so that is also sent via Bluetooth. This is our robotic car in its natural environment: what you can see here is our arena, and we placed some obstacles, which are actually not meant as obstacles; those are our landmarks. So later on we will try to find those landmarks in our scans and try out algorithms to control the movement of the robot. So how is all that set up? Our robot is here, so there's no wiring, it's all Bluetooth, and here's the control computer, and with this control computer you can tell the robot to move forward, and you get back the signals of the motors, the motor counts, and you also get back the measurements of the lidar scanner. Here you have another view, and here on the screen you can now see a live view of the scan that the robot does. So here is a top view of everything: the robot is in the bottom left corner and now starts to move, and if you look carefully, you see that the robot has a red dot, which is actually the lidar, and you also see a small red circle that follows this red dot, sometimes with a little bit of lag. You also see that the video does not move really evenly, so there are probably some time-lag issues here, but nevertheless this red circle is tracked and gives us a reference trajectory. So by this means we do have an external measurement system, not being part of the robot, that gives us a reference of where the robot is, and while there are those time-lag issues, your algorithms should nevertheless be able to cope with these.
SLAM_Lectures
SLAM_D_15.txt
now we got everything we need to implement the prediction step so once again these are the two equations and redefined RT to be V times sigma control times we transposed well the signal control waltz the variance in the left control and the variance in the right control put into a diagonal matrix and now we still have to define those two and here's our approach you're saying the variance for the left control well this depends on the total movement we do with our left trick times a factor so for example this factor may be 0.3 so we would say we'll make a 30% error when moving our left track and then watching the robot we have observed that there is especially a lob slip off the tracks on the ground when the robot turns and so we'll multiply the difference between left and right by another factor this may be for example 60% and so as variances add up quadratically we have to write this as follows and we will do the same for the right that will be right here but there will be no difference in the second term now when you implement this the Chi well that's the function Chi in our extended common filter cloth and so remember for static methods you can either call extended Kalman filter dot G meaning you take the class name and not the variable name of an instance or you can take a variable name such as KF or self-touching now we also need cheap but she is the function DG d state now this variance that is self dot covariance now the V is dget control and the alpha-1 and alpha-2 are called self dot control motion factor and self dot control turn factor and all you have to do is first set up those two equations second put them into this diagonal matrix then call this function to compute V and compute this term call this function to compute G and compute this term and then add this up to obtain the predicted covariance and then in the end update the state which is called self dot state so that the new state reflects the prediction so here's the file I prepared for you it is 
lamps MD common predict question so now the constructor has grown a little bit it now takes the initial state the initial covariance and also robots with takes this control motion factor and control turn factor and stores all that in member variables then here's the function G which I implemented for you and here you have to put your brevis code for the partial derivative with respect to the state and the partial derivative with respect to control then here's another code that I put in here which does eigen value eigen vector decomposition and converts to two by two sub matrix off the covariance into an error ellipse representation consisting of an angle of the first axis and to standard deviation along the first and second axis which are the square roots of two eigenvalues now you don't have to use this the code below will use it in order to output the error ellipse and now here's the code you'll have to write this time it's the prediction step and as I've just shown you first of all compute the new covariance and put this code here and then compute the new state put this code here and that's all there is to do and I put some additional hints into this comment section now let's have a look at the main code here again there's this constants here are our filter constants regarding our control so here for 35% slick when going straight and a 60% slit when we turn here we define our initial state and our initial covariance so in the beginning we say we know our position with a standard deviation of 100 millimeters so that's 10 centimeters in X&Y and a standard deviation of 10 degrees then we initialize the common filter here we read the log file as we did previously and then here this is our common filter loop which for now just consists of the predictions there and it's pretty much like the previous code so rereading the control by reading the motor tics and multiplying them with the ticks to millimeter factor and then we just called a common filter predict using that 
control and this will replace the old state which is stored in a member variable of the Kalman filter instance by our new predicted state and in this loop we will also store the state and the covariance in two extra lists in order to output them down here I again output the center of the scanner and not the center of the robot by using this displacement and in addition to the position we now output the error ellipse which is computed from the covariance and also the standard deviation of our heading as computed from the 2 2 element of our covariance matrix by taking the square root so now please implement the predict function
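As a summary of the prediction step described in this transcript, here is a minimal Python sketch; it is not the course's actual class, the Jacobian functions `dg_dstate` and `dg_dcontrol` are passed in as parameters, and the exact control noise model (motion factor times wheel travel plus turn factor times the left/right difference) is an assumption based on the constants mentioned above:

```python
from math import sin, cos
import numpy as np

def g(state, control, w):
    """State transition: differential-drive forward kinematics."""
    x, y, theta = state
    l, r = control
    if r != l:
        alpha = (r - l) / w
        rad = l / alpha
        g1 = x + (rad + w / 2.0) * (sin(theta + alpha) - sin(theta))
        g2 = y + (rad + w / 2.0) * (-cos(theta + alpha) + cos(theta))
        g3 = (theta + alpha + np.pi) % (2.0 * np.pi) - np.pi
    else:  # straight motion
        g1 = x + l * cos(theta)
        g2 = y + l * sin(theta)
        g3 = theta
    return np.array([g1, g2, g3])

def predict(state, covariance, control, w,
            control_motion_factor, control_turn_factor,
            dg_dstate, dg_dcontrol):
    """One prediction step: mu' = g(mu, u), Sigma' = G Sigma G^T + V Sigma_ctrl V^T."""
    l, r = control
    # Diagonal control covariance (left and right track assumed uncorrelated).
    sigma_l2 = (control_motion_factor * l) ** 2 + (control_turn_factor * (l - r)) ** 2
    sigma_r2 = (control_motion_factor * r) ** 2 + (control_turn_factor * (l - r)) ** 2
    control_cov = np.diag([sigma_l2, sigma_r2])
    G = dg_dstate(state, control, w)    # 3x3 Jacobian with respect to the state
    V = dg_dcontrol(state, control, w)  # 3x2 Jacobian with respect to the control
    new_covariance = G @ covariance @ G.T + V @ control_cov @ V.T
    new_state = g(state, control, w)
    return new_state, new_covariance
```

Note that `G Sigma G^T` transports the old state uncertainty while `V Sigma_ctrl V^T` transports the control noise, exactly as in the equations above.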
SLAM_Lectures
SLAM_A_05.txt
so now that you programmed the position tracking I'll ask you for three more modifications so first of all as you remember we had some colored cardboard on top of our laser scanner and this laser scanner was then tracked by our video tracker and you probably remember this red circle which tried to follow the laser scanner in the real world so as you can see there is more than one coordinate system involved so we track this point here whereas the robot turns around a point that is somewhere here so there's a difference between the robot coordinate system and the tracked coordinate system and this is often the case so for every sensor you have a coordinate system and for the robot as such you would have another coordinate system which often is termed the body coordinate system whereas this is the lidar coordinate system so there is a displacement and the first thing I'll ask you is to integrate this displacement in this case it means you should output the position of the lidar whereas for your motion equations the coordinate system of the robot is important so as you can see here it's really easy to do because if the robot has this heading then you just have to subtract this displacement here and then after you did your forward motion equations you have to add up again this displacement vector which then points in another direction namely the direction of the new heading I assumed that the robot's coordinate system is somewhere here and I took a ruler and measured this and I think it's something like 30 millimeters now the second modification I want is a different starting point so if you remember the video from the beginning that was our arena and the robot went from the bottom left now unfortunately that video was turned by 180 degrees so in reality in our coordinate system the robot starts here and goes down here and then does those circles and the third modification I want you to do is output to a file so instead of directly printing to the console
and making this figure using matplotlib I want you to output all positions into an ASCII file you'll have to write this second implementation for the tracking of the robot and you have to add the scanner displacement the different start pose and the output of the result to a file and so as you see we still have the same function as you did in your last implementation it's only the case that you now have to integrate the scanner displacement into this function and that's all the scanner displacement is defined down here in the main function otherwise the ticks and the robot width are still the same and there's a new start pose which I defined here with the new x y and the new heading as you can see here I open this motors file I do the filtering just as in the last solution and at the end here's the output to a file instead of the plot so I open a file and for every pose that is generated here I just output an F which is the code that we use for filtered position and then the x y and heading and this x y and heading these are exactly the three values that you should produce for every pose so after you run this code it will produce a text file which is called poses from ticks meaning you computed all the poses of the robot using just the tick count from the motors and this file will have 278 lines and you should check that you produce that number of lines and it begins as follows so here is the F it always is an F for filtered position and these are the values that are set in the program as the start pose and as we had that before you have first these 13 positions where the robot stayed in the same place and then it starts to move and so you should verify that your program gives you exactly the same numbers and let's have a look at the end of the file so at the end of the file the position should be 3 - 9 5 43 and the heading should be 4.78 and you should verify that you get the same result and there's something really cool in the set of files that you downloaded so
after you produced this file with all the positions you can use this log file viewer dot py to have a look at it so locate the file in the set of files to download for this unit and then start it either from the console or via idle or just by double clicking on it in the explorer so after you start the log file viewer it will open up a file selector like that and you can switch to text files in order to only view this type of file and then select the poses from ticks file that is the file which was just produced and after you open that you can see the trajectory all the points that you just produced and you can now travel along this trajectory with your virtual robot starting at position 0 and going to position 277 so these are the 278 positions that we have here and you can see the position as well as the heading at that point where the heading is shown as a line pointing in the direction of the robot you can also see the numerical values that you produced meaning the x y and the heading and if you want you can load additional files for example the motor information that you used to compute this trajectory that's the robot motors file now in that case that can't be visualized so you'll see the motor values down here whereas it doesn't change the display of the trajectory here
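One step of the tracking with the scanner displacement folded in might look like the following sketch; the function and parameter names are my own, and the motion model is the standard differential-drive model from the earlier units (subtract the displacement along the old heading, move the robot center, add the displacement back along the new heading):

```python
from math import sin, cos, pi

def filter_step(pose, motor_ticks, ticks_to_mm, robot_width, scanner_displacement):
    """Advance the scanner pose (x, y, theta) by one pair of motor tick counts."""
    x, y, theta = pose
    # Go from the scanner position back to the robot-center position.
    x -= scanner_displacement * cos(theta)
    y -= scanner_displacement * sin(theta)
    l = motor_ticks[0] * ticks_to_mm
    r = motor_ticks[1] * ticks_to_mm
    if r != l:
        # Turn around the instantaneous center of curvature.
        alpha = (r - l) / robot_width
        rad = l / alpha
        cx = x - (rad + robot_width / 2.0) * sin(theta)
        cy = y + (rad + robot_width / 2.0) * cos(theta)
        theta = (theta + alpha) % (2.0 * pi)
        x = cx + (rad + robot_width / 2.0) * sin(theta)
        y = cy - (rad + robot_width / 2.0) * cos(theta)
    else:
        # Straight motion.
        x += l * cos(theta)
        y += l * sin(theta)
    # Forward from the robot center to the scanner, along the NEW heading.
    x += scanner_displacement * cos(theta)
    y += scanner_displacement * sin(theta)
    return (x, y, theta)
```

For a straight move the displacement cancels out, but in a turn the scanner sweeps a different arc than the robot center, which is exactly why the subtract-move-add order matters.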
SLAM_E_02.txt
and of course in general if I do not know anything about the position of my robot then a uniform distribution will be the proper representation and so my belief will be constant well the constant is chosen such that the integral over my entire arena will be one now let's introduce three important classes of localization problems the first and easiest one is position tracking and this is the problem we have worked on so far so there's a known initial pose and typically a unimodal distribution for example a Gaussian distribution and then there's the problem of global localization and this means I don't have any indication of my initial pose and in general in order to solve that a unimodal distribution is not useful now similar to that is the so-called kidnapped robot problem where a global localization problem is assumed plus there will be the possibility that someone kidnaps the robot and moves it to another place where the robot then has to recover and determine its new global position so now in reality robots are not kidnapped so often at least so far but the practical importance of this problem is that any global localization algorithm might eventually fail and if it does so it needs to recover from this failure meaning it needs the ability to discover that the position that it might have tracked for a while is completely wrong and that therefore it needs to determine its global position again so the practical importance is that the robot is able to recover from localization failures so these are three different localization problems which are often mentioned in the standard robotics literature now let's think about global localization again so we just learned that if I don't know where I am then a uniform distribution would be the best possible choice however if I'm interested in modeling this with a Gaussian another possibility would be to place a Gaussian at the center with a very large variance so in 1D this would mean that this is my belief it is centered here but that
doesn't matter because I will set the variance to be very large and so during filtering if I integrate this with my measurement my posterior belief will be almost the same as the probability of my measurement and so even though I was unable to represent my initial belief exactly I had to replace the uniform distribution by this very flat Gaussian I eventually end up after my first measurement with a good guess for my system state now what do you think would it be possible to model the belief by a very wide normal distribution and subsequently rely on the measurements to determine our system state yes or no
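The question can also be explored numerically: fusing a very wide Gaussian prior with a narrow measurement Gaussian gives a posterior that is almost exactly the measurement. Here is a small sketch of standard 1-D Gaussian multiplication, written in Kalman-gain form (function and variable names are illustrative, not from the course code):

```python
def fuse_gaussians(mu_prior, var_prior, z, var_z):
    """Multiply two 1-D Gaussians (the Bayes correction step) and renormalize.

    Returns the mean and variance of the posterior Gaussian."""
    k = var_prior / (var_prior + var_z)        # the Kalman gain in 1-D
    mu_post = mu_prior + k * (z - mu_prior)    # blend prior mean toward measurement
    var_post = (1.0 - k) * var_prior           # posterior is narrower than both
    return mu_post, var_post
```

With a prior variance of 1e8 mm² centered far from the truth and a measurement variance of only 100 mm², the posterior mean lands within a fraction of a millimeter of the measurement and the posterior variance is essentially the measurement variance, which is the effect described above.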
SLAM_D_01.txt
and welcome to unit D which will be about the Kalman filter let's first have a look at what we will cover in this unit now if you remember in unit B we had the following we used our motor ticks and a motion model to determine the robot's position at any point in time and the result looked like that it was a pretty smooth trajectory but on the other hand it led to a position that was globally wrong if we compared that to the reference position which we obtained by using an additional overhead camera so in order to fix this we used the matching of the landmarks in the scene and the result of this looked as follows so we had here on the right hand side the detected cylinders in the scanner's coordinate system which we transformed into our world coordinate system and then matched them to the known positions of the landmarks in our scene and for matching we used a similarity transform for which at least two corresponding landmarks are required so in the end we obtained a globally good position whereas if you look at the detail the trajectory looks very jagged so overall there are two drawbacks first of all if we have only one landmark in our view then we can't apply our method because we need at least two landmarks because that gives us four observation equations which enable us to estimate the four parameters of the transformation and the other drawback was that all the noise in the measurement of our landmarks also affects our position and so in the end we get a very noisy position which results in this non smooth trajectory and now with what you'll learn today you will be able to produce a trajectory like that using a Kalman filter so what you see here is a trajectory that is on the one hand pretty smooth and on the other hand globally correct and what you also see is that at any point in time the robot maintains an uncertainty about its position which is expressed in this error ellipse and also an uncertainty in heading which you can see here as small triangles which gives
the plus minus 1 sigma angle for the heading and so as we move along the trajectory you see that the noise in the measurements does not strongly affect the trajectory and you can also see for example here that as we move along I may get fewer and fewer points to match now in this case only one so our uncertainty in position and heading is growing and also that the filter is able to improve the position of our robot even if we have only one matching pole available so in contrast to our previous solution which used the estimation of a similarity transform the Kalman filter does not need at least two matching poles and in fact here is the solution which is obtained if I disallow the Kalman filter to use more than one match so even if I allow the Kalman filter to only use this one matching pole in this case this one and then somehow arbitrarily this one I will get a trajectory that is globally good although it looks a little bit more jagged than the trajectory that is obtained if the filter is allowed to use all potential matches so the uncertainty of my position will grow as is expressed by this larger error ellipse but still it is astounding how smooth the trajectory is in general except in this case here so now let's see how this Kalman filtering works
SLAM_E_07.txt
now after you run this it will produce the particle filter mean dot txt file let's load this and now what you see here is in green all the particles for step number zero and in blue the current mean position and mean heading and as we cycle through the trajectory we see how in every step the mean is computed which gives us one single position for every filter step similarly to the Kalman filter now for example here we see a typical effect now in the curve some of the particles spread out and move the mean position a little bit to the right now in the next step those particles have died out and in consequence our position jumps a little bit to the left so this is what you see here also down here the trajectory is not very smooth but this depends on the number of particles so I ran the same algorithm with 200 particles instead of 50 let's load this and as you see the trajectory looks much smoother now although the larger number of particles does not prevent small steps in the trajectory as you can see here now let's have a look at this here we can see clearly how some of the particles spread out and take a smaller curve radius and then eventually they die out now if we compare this to the reference we see that we obtain globally a very good solution now let's have a look at the start and the end of our trajectory once more so in the beginning our robot does not move for a few steps so these are all the particles and they are spread according to the sigma in our initialization so now if we move for one two three four five steps we see that the number of unique particles decreases rapidly so that shortly before we start to move we only seem to have a few particles well we still have 200 particles here but we only have a few unique particles now as we move the situation tends to get better but in the end when the robot comes to a halt the number of particles again decreases now where does this come from so assume we have those particles now the robot moves we sample from
those distributions get those particles then we give them some weights and then we do the resampling where we typically may lose particles with a very low weight whereas we tend to draw several identical particles for particles with a high weight so we have this situation so we reduce the number of unique particles by one however if we now move again this doesn't matter so much because we move here we sample from a distribution we move the second point also here and sample from the distribution and get another point and so after picking the particles from those distributions we again have four unique points unfortunately if we don't move then this step where identical particles are spread out again due to the sampling from this distribution does not happen so if we don't move all that happens is we occasionally drop one particle so we have two here but then if we still don't move we might also drop this particle and so as you see the number of unique particles is reduced until eventually after some time we end up with one single particle and so what you see here for the extreme case of a robot that does not move at all holds in fact in general which means that if I resample too often I may lose diversity as we just have seen for the case of a robot that doesn't move at all on the other hand resampling too seldom wastes particles in low probability regions and in fact the reason for our resampling was that we wanted to drop particles which are in regions that have a very low probability and so as you see there's a trade-off here and the solution we have so far does a resampling in every filter step which may be too often and we noticed that we reduced the number of unique particles while the robot stood still and one of the simplest measures against this is just to do no resampling while the robot is static so if our control tells us there's zero movement of the left and right track we simply do not do a resampling and usually it's also a good idea to discard the
measurements so we can also integrate multiple measurements into our weights so remember the way we treated our weights was we compute the weights we do the resampling and we just discard all the weights and then in the next filter step we compute the weights for the new particles so we don't keep them however we could also keep them and so we reinitialize them to be one if the resampling took place whereas we multiply them by the probability of our current measurement if no resampling took place so when should we resample and there's a standard approach here to use the variance of the weights to determine if resampling is necessary but we won't have a closer look at this now let's have a look at this solution now implementing this solution is very easy because all there is to do is in the main function we check if our control is not equal to zero and if so we do the prediction and the correction step and if not we just leave that out and directly jump to the printout of our particles and the determination of our mean now in the code you see here I also did some other modifications and I invite you to do those modifications to your own code so this if control not equal to zero and also this modification so earlier I motivated that one of the main advantages of the particle filter is that it is able to do global localization but up to now we initialized our particles to be close to our start state which we had determined somehow by hand now let's do something different now let's initialize all the particles just using a uniform distribution between 0 and 2,000 millimeters for x and y and the heading between minus pi and plus pi so indeed our particles spread out over the entire arena with arbitrary orientation and now the hope is that even though we do not know our initial position at all we will still be able after some startup maybe to determine our global trajectory and here's the third modification you may play with so far we have computed the mean but
we can also compute the second moments so we take the mean we take every particle subtract the mean in x and y which is now called center x center y and accumulate the sums xx xy and yy and divide that by the number of particles minus one and this is the covariance matrix and for the covariance matrix we compute the eigenvalues and eigenvectors and use those to compute the orientation of our error ellipse as well as both half axes and here we compute the variance of the heading and return that as well and below in the main program here in this section we output the error ellipse and the heading variance and this uses the very same records the E records which we introduced in the last unit about the Kalman filter now one more remark if you run this you have to select a certain number of particles and so if you use 50 as we did previously then they are spread out in our arena so much that the chance that there is a particle close enough to our correct but unknown initial position is so low that probably the filter will not be able to recover the trajectory so you need a higher number of particles and I have chosen 500 here so now let's have a look so these are our particles when we start and they are uniformly distributed and now from all those particles the mean is somewhere in the middle of the arena our error ellipse looks like that so it is a circle and our heading just points somewhere with a high variance now when we start to move nothing happens this is because as long as our left and right control is zero our particle filter does not do anything so let's move on and here from step 12 to 13 we have our first move and the particle filter starts to predict and correct and so after this first move we see that only a few unique particle locations remain and obviously there are so many particles around here that the mean is already very close to our true position let's do one more step now most particles are concentrated here and now we start to move and
again we see very nicely how the particles spread out in the curve which is also reflected in the larger error ellipse and the larger variance in heading and as we go straight the error ellipse is smaller and also the variance in heading is much smaller now if we also look at the landmarks and the reference we see that indeed overall the particle filter quickly determines the correct position approximately but it still needs a while to converge to the correct position so probably this here in the beginning shows us that the variances of the error that we assumed are too small and need to be set larger in order to have a faster convergence to the true trajectory nevertheless I find it very amazing that without any knowledge of our initial state the particle filter manages to get very close to the true state very quickly there's one more thing I want to mention and this is particle deprivation now remember we have our particles and remember our case with the trajectory where the true movement is like that but some of the particles actually moved like that and eventually they died out however what might also happen is that by an unlucky series of random numbers we might pick many particles here and then from the remaining ones we unfortunately delete all particles that follow the true track which we don't know and so we end up with a lot of particles here and no particle at all on the true track now those particles would fit much better to the measurements of our laser scanner but unfortunately all particles that remain are here and the weights that I determine are not absolute but just relative so I go on weighting those particles and even though in absolute terms the weights are pretty small much smaller than the weights I would obtain here I have no chance of ever selecting again one of those particles because they all died out and so since this is all a random process it might always happen that there are particles on the true trajectory and just by the pick of the random
numbers they suddenly all die out although the probability for this to happen is very small and so particle deprivation means that an unlucky series of random numbers may delete all particles near the true state and this typically occurs of course if the number of particles is too small and one popular measure which is not really a solution to that problem is to add some random particles after resampling so it's not really a solution because the problem might still occur but this at least reduces the probability of a wipeout so in this case here we would add some random particles and due to the fact that one of those random particles is close to the real state its weight would be much larger than the other weights and so in resampling this would quickly gather many more particles and eventually this solution would die out now the downside of this is that by adding random particles I modify the distribution and so I obtain an incorrect posterior now congratulations in this unit you implemented a fully functional particle filter which is able to determine the position and heading of our small robot even if we don't have any initial information about the robot's pose and I think this is a pretty amazing result so we got to know the particle filter which we used for localization and which is also known as Monte Carlo localization and we've seen there are some caveats for example we need density estimation that is after we predict and correct all our particles we still have to extract the state of our robot because we need that for example to do planning of our trajectory and we have used a very simple density estimation which consisted of computing the mean and covariance matrix of all the particles and we also have seen the problem of particle deprivation which we didn't treat in detail however we also have seen the particle filter is very easy to implement so if you compare that to the Kalman filter in the case of the Kalman filter we had to compute several
different derivative matrices and the computation of the Kalman gain and the new state and new covariance required several matrix operations now as you remember here in the particle filter we did not need any matrix operation and in fact in our implementation we did not even include numerical Python and so since it is so easy to implement the particle filter is very popular especially in robotics so again congratulations and I hope you join me in the next unit
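The simple density estimation described above — the mean pose plus the second moments for the error ellipse and heading variance — can be sketched without any matrix library, matching the remark that this particle filter needs no numerical Python; the closed-form 2x2 eigen decomposition and the angle wrapping are my own choices and not necessarily identical to the course code:

```python
from math import sin, cos, atan2, sqrt, pi

def get_mean(particles):
    """Mean x, y and circular mean of headings of a particle set (x, y, heading)."""
    n = max(len(particles), 1)
    mx = sum(p[0] for p in particles) / n
    my = sum(p[1] for p in particles) / n
    # Average headings via unit vectors to handle the angle wraparound.
    hx = sum(cos(p[2]) for p in particles)
    hy = sum(sin(p[2]) for p in particles)
    return (mx, my, atan2(hy, hx))

def get_error_ellipse_and_heading_variance(particles, mean):
    """Second moments: xy error ellipse (angle, half axes) and heading std deviation."""
    cx, cy, ch = mean
    n = max(len(particles) - 1, 1)       # unbiased normalization as in the transcript
    sxx = sxy = syy = var_h = 0.0
    for x, y, h in particles:
        dx, dy = x - cx, y - cy
        sxx += dx * dx; sxy += dx * dy; syy += dy * dy
        dh = (h - ch + pi) % (2.0 * pi) - pi   # wrapped heading difference
        var_h += dh * dh
    sxx /= n; sxy /= n; syy /= n; var_h /= n
    # Eigen decomposition of the symmetric 2x2 covariance, in closed form.
    t = 0.5 * (sxx + syy)
    d = sqrt(max(0.25 * (sxx - syy) ** 2 + sxy * sxy, 0.0))
    e1, e2 = t + d, max(t - d, 0.0)
    angle = 0.5 * atan2(2.0 * sxy, sxx - syy)  # orientation of the major axis
    return (angle, sqrt(e1), sqrt(e2)), sqrt(var_h)
```

The circular mean of the headings matters here: a naive arithmetic mean of angles near plus and minus pi would point in the opposite direction.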
SLAM_C_09.txt
and it turns out that indeed the result is normally distributed so we obtain our prediction mu t overline and sigma t squared overline so this is our belief overline and the computation that leads us there is somewhat tedious and so if you want you can look it up for example in the probabilistic robotics book by Thrun Burgard and Fox after all these computations what comes out is that our mu is just given by a times mu t minus 1 plus our control u t and remember our motion model was a x t minus 1 plus u t so all we have to do is to apply the motion model to our mean which will give us our new mean and as for the variance sigma t squared overline equals a squared times sigma t minus 1 squared plus sigma R squared and this is also really simple we have to multiply our old variance which gives us the uncertainty that we had in position by a squared so this is kind of the slope of our motion and then add up the noise of our motion the so-called system noise so all there is to do is move forward by moving the center of the distribution and add the noise of our movement to the noise which we already had in our last position so this is surprisingly simple and so this is our overall procedure for the Kalman filter we have our previous belief which is given as mu t minus 1 and sigma t minus 1 and we have our movement control which is given as u t and sigma R and we have our measurement which is given as z and our measurement standard deviation sigma Q and so we will handle those as variances so we have two steps the prediction this was just on the last slide and then we have the correction which was also on a previous slide so in the correction we first compute our Kalman gain and then we compute our new mu and our new sigma squared and then we return our new belief which is simply our new mu and our new sigma squared and this is the very important Kalman filtering in one dimension so remember this is the old belief this is the control and this is the measurement so now you're able to program your first Kalman filter and I put
together this program for you the slam 6 f kalman versus histogram filter and this program contains two filters namely the histogram filter and the Kalman filter so it consists of the cleaned up version of the histogram filter which we had a look at previously and in addition now integrates a Kalman filter so you can directly compare the two different results so first of all I define a density while this density is just given in terms of the first and second moments of the distribution meaning mu and sigma squared so there's a mu and a sigma squared so all this class does is it holds two float variables then there's the plotting routines the histogram plot from the earlier program and now there's also the kalman plot so you don't need to worry about plotting the results there's the histogram filter step which uses a convolve and a multiply and in order for that to work you have to import up here from your last solution the convolve and multiply routines and then there's the kalman filter step and as you see the histogram filter step and the kalman filter step are identical they take a belief control and measurement and they return a prediction and correction the only difference is that in the histogram filter step these were classes with discrete values whereas now these are classes each of which contains a mu and a sigma squared now in the main function you see every major call is done twice first of all I initialize the position here for the histogram this is from the previous code meaning the position is the distribution in this case a discrete Gaussian centered at 10 with a sigma of one and now I also initialize a position with an underline all those underlines are for the Kalman filter and instead of a distribution I use a density a different class but otherwise it looks completely identical then I initialize the controls for the histogram I use the dist whereas for the Kalman filter now I use the density the only thing that changes here is that my density
always handles sigma squared so I have to square those values and for the measurements it's also identical for the histogram case I initialize using dist and for the Kalman filter case I initialize using density otherwise the values are identical and also the filtering is identical in the histogram case I do a histogram filter step which gives me the new prediction and position and in the Kalman filter case I do a kalman filter step which gives me the new prediction and position and in each case I call the appropriate plot routines and that's all there is to do so everything you'll have to do is implement using the previous formulas this kalman filter step so it takes a belief control and measurement each of which is an instance of a density class and it computes a prediction and a correction in this case I put in here the prediction is just a copy of the belief where I move the center by 10 and increase the sigma squared by 100 and the correction is just a copy of the prediction so both of those lines are not correct and you'll have to replace them with your own correct solution now please program this and this will be the last programming assignment
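For reference, the one-dimensional Kalman filter step that this assignment asks for can be sketched as follows, assuming a = 1 for the motion model x_t = a x_{t-1} + u_t; the `Density` class here just mirrors the two-float class described above and is not necessarily the course's exact definition:

```python
class Density:
    """A 1-D Gaussian, held as first moment mu and second central moment sigma2."""
    def __init__(self, mu, sigma2):
        self.mu = mu
        self.sigma2 = sigma2

def kalman_filter_step(belief, control, measurement):
    """One 1-D Kalman cycle: returns (prediction, correction), both Density."""
    a = 1.0
    # Prediction: shift the mean by the control, add the system noise.
    prediction = Density(a * belief.mu + control.mu,
                         a * a * belief.sigma2 + control.sigma2)
    # Correction: Kalman gain, then blend prediction and measurement.
    K = prediction.sigma2 / (prediction.sigma2 + measurement.sigma2)
    correction = Density(prediction.mu + K * (measurement.mu - prediction.mu),
                         (1.0 - K) * prediction.sigma2)
    return prediction, correction
```

For example, a belief of mu 10 variance 4, a control of mu 10 variance 5 and a measurement of mu 22 variance 9 give a prediction of mu 20 variance 9 and a corrected mu of 21 with variance 4.5, because the gain is exactly one half.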
SLAM_G_05.txt
so now let's put everything together and we will do this in the slam 10e correction file so as usual it starts with our particle class where each particle consists of a pose and one position and covariance entry for every landmark and then we have the function g for the state transition and the move function which we used earlier to build our prediction step then we have the function h and the Jacobian of h and then all those functions which you implemented earlier so you'll just have to copy your previous solution there and then finally here is the final function you'll have to implement and don't worry it's only very few lines of code so first of all you'll have to compute the likelihoods that the given measurement is due to any of the landmarks in the particle and you implemented the corresponding function earlier and this can be done just by calling the function above now if the likelihood list is empty which is the case if the current number of landmarks in the particle is zero or if the maximum of all the likelihoods is smaller than a correspondence likelihood threshold then here we have to call the code which adds a new landmark and we will return exactly the same threshold now in the other case the maximum of the likelihoods is greater or equal to the minimum correspondence likelihood so here you'll have to add some code which finds this maximum and also the argument of the maximum so the index of the landmark which corresponds to the maximum likelihood then here you'll have to update the kalman filter of the landmark with this corresponding index so this is very few lines of code and mostly it's just about calling routines from above now let's have a look at the fast slam class here is the constructor which is followed by the prediction function and we used that earlier so it's not modified at all now here's the update and compute weights function which has been discussed earlier so it mainly sets up an empty list of weights and does a loop over all
particles inside of which there's a loop over all measurements it accumulates all those weights and finally returns the list of all those weights now here's the resample function which we used earlier already and here is the entire correction step and this has been discussed earlier too it consists of computing the list of weights by calling the update and compute weights function which takes the measurements and then resampling the particles so that's all there is to do let's have a look at the main function it initializes all the constants and the constant which is new here is the minimum correspondence likelihood and this certainly takes some care to select and I have chosen 0.001 here and we take 25 particles copy an identical start state to all of them and initialize our filter read the log data and then here that's the entire main loop we do a prediction and a correction where this function will find the cylinders in the scan data and return a list of cylinders or measurements then all the remaining code is just about outputting all the particles outputting the mean state and the error ellipse then also outputting the landmarks now outputting the landmarks is an operation which is not quite obvious because as a matter of fact every single particle now has its own landmarks so what we do here for the purpose of visualization is the following we have computed this mean particle value which is the mean pose of all particles and we now pick the particle which is closest to this mean position and then we output the landmarks of this particle and the error ellipses belonging to the landmarks of this particle so keep in mind that while this mean here is a state that is computed from all the particles and which in general is different from any of our particles the landmarks we output are the landmarks from one certain particle which we picked from our entire set of particles so now let's see what happens if you run this now running this will produce the fast slam
correction.txt file now open this and you will see the following result so this is the trajectory which results from our fast slam filter in this case with a total of 25 particles now let's run through this trajectory in the beginning the robot stays at the same position so the error ellipses of the landmarks are getting smaller then as we move we see here that the particles spread out a little bit we see here our typical problem of a set of particles which take a stronger left turn but eventually they die out you also see the typical result that due to those spread out particles our errors in position and heading are larger than usual so as we go on we also see a typical effect here so here's an additional landmark which will stay until the end now if you go back this landmark appeared here for the first time and so it survived in some of the particles until finally it's obviously consistently present as a landmark in those particles that are picked because they're closest to the mean state so far the result looks good the trajectory looks a little bit jagged however we're also having only 25 particles so we're looking forward to taking a larger number of particles to get a smoother trajectory but we're worried a little bit about this extra landmark which is inserted somewhere and stays until the end in fact as we can see without being modified at all which is due to the fact that after being inserted for all those positions of the robot none of the observations of the robot is assigned to this landmark anymore
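the decision logic described above — compute the likelihoods, insert a new landmark when none is likely enough, otherwise update the most likely landmark's filter — can be sketched as follows, and note that the particle methods used here (likelihoods_of_correspondence, initialize_new_landmark, update_landmark) are illustrative stand-ins for the routines in the lecture code, not its exact api, and the stub class only exists to make the sketch runnable

```python
def update_particle(particle, measurement, minimum_correspondence_likelihood):
    # one measurement update for one particle: returns this measurement's weight
    likelihoods = particle.likelihoods_of_correspondence(measurement)
    if not likelihoods or max(likelihoods) < minimum_correspondence_likelihood:
        # no landmark is a plausible match: insert a new landmark and use
        # the threshold itself as the weight, as described above
        particle.initialize_new_landmark(measurement)
        return minimum_correspondence_likelihood
    w = max(likelihoods)             # maximum likelihood ...
    index = likelihoods.index(w)     # ... and its argmax (the landmark index)
    particle.update_landmark(index, measurement)
    return w

class StubParticle:
    # minimal stand-in so the sketch can be exercised without the full code
    def __init__(self, likelihoods):
        self.likelihoods, self.added, self.updated = list(likelihoods), 0, None
    def likelihoods_of_correspondence(self, measurement):
        return self.likelihoods
    def initialize_new_landmark(self, measurement):
        self.added += 1
    def update_landmark(self, index, measurement):
        self.updated = index
```

for example a particle with likelihoods 0.01 0.2 0.05 and a threshold of 0.001 would update landmark 1 and return a weight of 0.2 while a particle with no landmarks would insert one and return the threshold 0.001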
SLAM_Lectures
PP_11.txt
now let's have a look at how these few modifications actually improve the performance of the code a lot so let's place our start point here and our end point here in the opposite direction as we did earlier and we can see that it comes up immediately with a solution and as we move this point closer it does the same thing as it did earlier so it works quite well so now let's test this critical case where a relatively long trajectory is combined with different orientations of the start and end nodes now first the orientations of the start node are again handled pretty well and now the orientation of the end node takes a bit longer but it's also handled very well so let's have a look at the visited nodes so again it visits only a few nodes in this case and if the orientation is wrong it visits more nodes but still much fewer nodes than it visited earlier now let's make some practical examples say we are in a parking garage and we backed up into this parking lot so this is the parking lot in between those lines but actually we want to go into this next parking lot and so the planner makes a left turn and then a right with the minimum radius it can do and then goes into this parking lot so here are the boundaries of the other parking lots while unfortunately on the opposite side there are parking lots as well so the planner comes up with a solution to make the turn up here if there's another parking lot it actually goes around well this might not be possible because here there might be a back wall so it goes up here and does the turn up there now this may be the end of the parking deck here so here might be a wall right so it goes around the structure somehow but well this might be not possible as well because there is a wall here so it goes around the entire thing now this may be also not possible because there's a wall here so the planner searches for some empty space where it can turn the vehicle now if there is a narrow passage down here it will go down here and to here the
circle with minimum radius if this is not possible it will try to search this minimum radius somewhere else so it's pretty cool watching the planner find alternative solutions like this one which is of course also not possible if this wall blocks this maneuver now of course if there is no space whatsoever to make your turn then the planner can't do anything right it visits all those nodes but it can't do the curve because there's simply no space however if you allow forward and backward motion it will do something that really makes sense it will go out forward to the right back up a little bit and then go into the parking lot by a forward movement and if this somehow changes its shape then you will see how it will try to do other maneuvers or if there's a car it will try this one so if you switch on the visited nodes now you will see of course that the forward backward solution is much more efficient than the forward only solution which has to visit a lot of nodes now this is too narrow in order to come up with a feasible solution so congratulations if you made it through here i think by now you have learned a lot about the dijkstra algorithm the a star algorithm and finally our kinematic state space algorithm which finds paths that can actually be driven by a real car so again congratulations and this finishes this part of the lecture
SLAM_C_01.txt
so welcome to unit c which is about filtering so as you remember we started by looking at two trajectories the red one which we obtained by an additional external measurement system and not by the sensors that are available to the robot itself and the blue one which we obtained by reading out the wheel counters of the robot and applying a motion model and we saw the following problem as long as the robot goes straight everything seems fine but then when the robot turns the two trajectories diverge and we found out that the reason is probably a wrong calibration constant namely the width of the robot so we experimented a little bit with that calibration parameter but we found out we can somehow improve it but nevertheless if we just rely on the measurements of our wheel counters there will always be a drift that leads to a situation where we are in the end far far away from the correct position so we came up with the following idea in addition to the wheel counters we are also using the information that we get from our laser scanner so the laser scanner sees all those cylinders and can determine their position and by using an assignment of those cylinders to the known cylinders in the real world which are stored as a kind of map we can correct the robot's position and as we see everywhere we do have some smaller or larger jumps in the trajectory nevertheless we ended up with a globally good-looking trajectory and we can distinguish two different effects here in one case if our reference trajectory looks like that and our reconstructed trajectory looks like that then the difference that we see here is most probably not due to noise rather it is due to a wrong parameter in our movement model and so this is called a systematic effect or systematic error if on the other hand our reference looks like that and our estimated trajectory looks somehow like that then we also have differences between those two but those differences are most probably due to a random error now keeping
those two apart in modeling is not as easy as it seems so in our case in order to fix that systematic error we already had an approach namely to use another parameter for the robot's width so we can add a calibration parameter but then the world is more complex than you would think namely in our case we fixed it by setting the width to a higher value this was somehow unusual because i measured the width of the robot but in the end we obtained a good trajectory by setting it just to a different width which didn't make much sense and the reason for that is probably that when our caterpillar robot turns there's a large slip between the caterpillar tracks and the ground now we can set a calibration parameter like this width here and all works fine but then later when we put the robot on a different ground it may behave differently and the output may be something like that and so while we may be able to solve that problem in our case in general the underlying problem is always that every modeling will be incomplete so then the usual approach is you try to capture the most important effects of the real world in your model of the system and the rest of them are handled as random errors and so the hope is just that the effect of this remaining systematic error that can't be modeled can be neglected so next let's have a look at the random error first let's have a look at the movement so say this is our ground and our robot moves in a one-dimensional fashion to the left or right so in the beginning we're absolutely sure that our robot is at the position 0 m and then we tell it to move for 1 m please so that's a command and then the robot ends up here it also changes its shape but never mind so then it is here at 1 m and then we tell the robot again to move for 1 m so it ends up here pops back to its original shape and it ends here at 2 m and so far we handled this the following way we said we are absolutely sure that we are at 0 m and then we gave the command or the control that the
robot shall move for 1 m and so in the end we are absolutely sure that the robot is in the position 1 m and the same for the movement from 1 m to 2 m so meaning we assumed there's no error but we already noticed that this is not really true let's see what this means in terms of probabilities now say our one-dimensional space is rastered into small cells the robot is only allowed to stay in one of those cells the x-axis here that's our position whereas the y-axis that's the probability that the robot is at that position so with our current approach at time t0 since we are absolutely sure that the robot is at 0 m this would mean say this is the 0 m cell that the probability of being at 0 m is 1.0 whereas the probability of being somewhere else is 0.0 and after the robot moves at t1 it is exactly at 1 m and at t2 it is exactly at 2 m that was our approach so far and we saw it didn't work very well so now let's model the errors so say again this is our ground which is subdivided into discrete cells this is our position x whereas here is our probability that the robot is at position x the probability is between zero and one so say at time t0 we are here at 0.0 m but this time we're not so sure about that so we could also be here at 0.01 m or here at minus 0.01 m but say the probability of being to the left or right of our intended position is half of that of being at the intended position from that we would say the probability of being at our desired position is 0.5 whereas the probability of being to the left and right is 0.25 each and so overall this sums up to a probability of one and then if the robot moves to the next position we will move all those values accordingly and obtain this and if it moves once again you will obtain this now this is called a probability distribution and since our space is discrete it's a discrete probability distribution so for every position xi we have a probability and this probability must be larger or equal to zero and it must be smaller or equal to
one and also as we did here already all the probabilities must sum up to one where the index runs from minus infinity to plus infinity now since this is just a discrete raster we can represent it in the computer as an array so we define a certain raster width say 1 cm so this might be the cell 1 m whereas this is 1 m 1 cm and this is 99 cm and so on and with our probabilities up here we will obtain 0.5 and 0.25 and all the other values will be zero now as you can see here the problem is this array of cells extends to infinity and so we make use of the fact that every robot will have a limited space where it can operate and so using this we will just cut our array at both ends and so i implemented a class to represent this discrete probability distribution and what it will do is the following instead of storing a large array it stores only the values starting at the first one which is non-zero and so in this case it will just store an array of three values so this is the values array and it will also store the offset index which depends on our chosen raster width but in our case that might be 99 and using that class you're able to represent discrete probability distributions and so the first function i want you to implement is this move function where an existing distribution is just moved forward by a given distance now let's first have a look at the distribution class and you will find this class in the package for unit c so the distribution class has a constructor which by default just generates a unit pulse at offset zero so if you call this just without parameters it will produce a unit pulse at index zero and if you want to give it explicitly another handmade distribution you just call it assigning a distribution say with offset 99 and then a list so if you want to get the distribution from the previous example 0.25 0.5 0.25 this will put the values 0.25 0.5 and 0.25 at the indices 99 100 and 101 but we will see in a moment how we can get that distribution by calling another function now
the representation function will return a string containing the values which is sometimes useful for debugging so you can just say print distribution start and stop will give you the first offset and the offset one beyond the last valid element so that you can use those values directly in a range or xrange call normalize is useful to modify the values in a way that the sum of the values equals one so that's useful when you want to construct your own distribution you don't have to worry about normalizing constants you just call this normalize function in the end now value returns the value of the distribution and the nice thing is it will return a value for any index not only the indices which are inside the values array so if these are your values right and your array is just here starting at 99 then you can also ask for 200 and it will automatically return 0.0 then plot lists that's a kind of special function it will return two lists which can be directly used in the matplotlib plot function and so this was a way to incorporate some helper for plotting the distribution into the distribution class without having to include the matplotlib library into the distribution.py file and we will see the usage of that function in a moment so here are some distributions so you can construct the unit pulse at a given center so the center is also the only cell which is non-zero in this case and there's a simple distribution called triangle distribution and what you need to give here is the center and the half width and this is meant as follows so if you give a center of say 10 and a half width of two then it will produce the maximum value here and at plus and minus 2 from the center it will be zero so half width is just half of the width of the entire distribution but since this is discrete you will get a value here here and there and so overall we will get this distribution that we have used all the time with 0.5 and 0.25 and here you can actually see the standard trick so in order to set that up easily i just fill integer values of the counter into an array not worrying about the necessary normalization then after i did this i construct a distribution which starts at center minus half width plus one because center minus half width is here and those values are always zero so i don't need to store them so i'm starting here i'm storing those values i give the value array to the distribution and at that time it is not normalized and then i just call the normalize function there's also a gaussian distribution here and we will talk about that later and here in the end there is a very useful function that computes the sum of several distributions so say you're having several distributions like you have something like that and the second distribution like that then this sum function will compute the sum of both which may be something like that but on top it will of course normalize everything so that the sum of all values will be one and this is not very hard to do but because we store our distributions with a start index or offset and an array it's a little bit tricky to figure out the overall length of the resulting array and so this function does all that for you and in
the end again it normalizes the resulting distribution and this function has two parameters one is the distributions and you would give them in a list so you would have distribution one distribution two and so on and each of those distributions is a distribution object or an instance of the distribution class and optionally you can also give some weights so if you give 3.0 here and 2.0 here then distribution one will be weighted by a factor of three and distribution two by a factor of two and in the end don't worry everything is normalized and so you will obtain the very same result if you set this to 6.0 and this to 4.0 now here's the first program i want you to implement and this should move a distribution by a given delta so in the main program i set a list of moves so it's 20 20 20 meaning move three times by 20 to the right and we start at a position but this position is not an integer anymore it's a distribution so it is the triangle distribution that we have used before consisting of 0.25 0.5 0.25 centered at 10 and here's the trick with plot lists plot lists is a member function of our distribution class and what we do here is by giving the two arguments 0 and 100 we make sure that we get two lists back one with x values and one with y values suitable to be used directly in the plot command and then we set the line style to steps so that the discrete nature of our plot is shown more clearly and now we do just a loop over all the moves our position which is now a distribution is replaced by the old position moved by the amount of movement from the loop variable and then again we plot it using the very same command in the end ylim is a function of matplotlib and it will set the y maximum just a little bit above one in order to make distributions more visible that are exactly 1.0 though we won't need that here and in the end all the plots which go into the very same diagram are shown and if you implement that correctly you will see the following a window
opens where the original triangle distribution is at position 10 you see the 0.25 0.5 0.25 and then you move it three times by an amount of 20 so now please program this
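here is a minimal sketch of such a distribution class to illustrate the offset-plus-values representation and why the move function is so cheap — the lecture's own distribution.py has more functionality (plot lists, gaussian, sum) and may differ in details, so treat the names and signatures here as assumptions rather than the reference solution

```python
class Distribution:
    # a discrete distribution stored as the index of the first non-zero cell
    # (offset) plus the list of non-zero values, as described above
    def __init__(self, offset=0, values=(1.0,)):
        self.offset = offset
        self.values = list(values)

    def value(self, index):
        # return 0.0 for any index outside the stored array
        if self.offset <= index < self.offset + len(self.values):
            return self.values[index - self.offset]
        return 0.0

    def normalize(self):
        # scale the values so they sum up to one
        s = sum(self.values)
        if s != 0.0:
            self.values = [v / s for v in self.values]

    @staticmethod
    def triangle(center, half_width):
        # rises linearly to a peak at `center`, zero at center +/- half_width,
        # e.g. triangle(10, 2) gives 0.25 0.5 0.25 at indices 9 10 11
        values = [half_width - abs(i) for i in range(-half_width + 1, half_width)]
        d = Distribution(center - half_width + 1, values)
        d.normalize()
        return d

def move(distribution, delta):
    # moving a discrete distribution only shifts its offset, the values
    # themselves stay untouched
    return Distribution(distribution.offset + delta, distribution.values)
```

moving the triangle at 10 by 20 gives the same shape at 30 because only the offset changes not the values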
SLAM_B_03.txt
so congratulations if you managed to program this estimate transform function so this is non-trivial code and it's a really useful function which is able to compute the best similarity transform in a least squares sense between a left point list and a right point list and i forgot to mention some strange errors which occur when the number of points is very low namely 1 or 0 so let me give you an additional hint the similarity transform returns four parameters so to estimate the four parameters we need at least four observations now if you have two points in the left list and you got two points in the right list then the similarity transform is able to compute a perfect fit because the two point pairs lead to two observation equations each and so overall you have four observation equations for four unknown parameters now what happens if you have only one point then it is easy to see that it could compute the translation but you will be unable to compute the scale and the rotation matrix but your function returns those values so it is clear that the function will fail in some sense if you have this situation meaning there's only one point in the left and right list or there are many points but they're all identical so here's what you can do about this remember in this function you compute the values ll and rr so whenever there is just one point or there are many points but they all have exactly the same coordinates then the reduced coordinates will all be 0 0 and so ll which is the sum of the squared lengths will also be 0 and similarly for rr if you do have a single point or multiple identical points in the right list so you may check for those conditions and if they apply just return None and if you look down in the main function you see that there's a test if trafo which exactly tests for this None return value if you program the estimation successfully running the code will produce a file called estimate transform txt now load this file the file
contains the trajectory and also the points which were measured by the laser scanner and then in magenta it contains the best fit of the green points to the reference positions which are those gray disks using our newly programmed similarity transform estimation now as we move along the trajectory you can see the noise in those green points and you also see when we reach the lower left corner we have fewer and fewer points and then if we move further on there's only one point left and we can't estimate the transform anymore so no matched points are shown and if you go on well then we're far off the trajectory and all the other results while sometimes they still compute a match don't actually make sense now there's one thing remaining our initial intention was not to find out how well the scan points match the cylinders in our arena but rather to correct the trajectory of our small robot so we have this situation we have our robot here and from the motor ticks it tells us that it has moved now here so we get an update of the old position to the new position just based on the ticks of our robot's caterpillar tracks and now the robot scans say three cylinders and so we get the angle and distance information from that we can project those cylinders as we did into the real world and in the real world say we have those landmarks and now we ask our estimation routine to find the best fit for the scanned landmarks which gives us these magenta points what does this mean for the robot's pose well as we move those green points to the magenta points we can also move the robot's position using exactly the same transform so all we have to do is use the transformation which we obtained and apply it to the robot's x y coordinates there's one thing remaining as you remember the robot has a state which consists of three variables it is x y and the heading so we know how to correct x and y we just send it through the transformation that we obtained but what shall we do with the
heading now this is easy because the robot looked in that direction and if the entire scene is turned then the robot also has to look into a turned direction so all we have to do is to correct the heading by the angle we found when we estimated the best fit so you remember the parameters of our transformation were the scale lambda the cosine of alpha the sine of alpha and the translation and now in this case we really need alpha and in order to recover alpha you need to calculate the arc tangent of sine alpha divided by cosine alpha and if you program this remember in python it's best to use the function atan2 of sine and cosine and so after you computed this you have to modify the heading according to this angle so here is the code for this if you want to do it offline and so all you have to do is put in here your find cylinder pairs function from the previous implementation put in here your estimate transform from a previous implementation and here that's the only thing to program the correct pose function it gets an old pose and it gets the estimated transformation that maps the scanned cylinders to the reference cylinders in the arena and down here in main the only modification is here where if the transformation is valid so it is not None then the pose is updated using the correct pose function now let's have a look at how it should look like if your code runs correctly so the code will produce an apply transform txt file so load this and this is amazing now for the first time we seem to have a correct trajectory it's kind of noisy but still we're just using the motor ticks which were not very accurate and the matching of the cylinders we now obtain a trajectory that seems to be correct globally let's load the arena cylinders as well so you see the matching worked very well in the beginning then as we move along here comes a critical situation the matching does not work anymore so the robot relies exclusively on the motor tick information and here
this cylinder actually belongs down here and we're probably almost lost but in the next step we're lucky that due to a heading change those two landmarks match and so we are back on track now here we have a match by two cylinders and now if we move on by one step there's a third cylinder now there's some noise in all those measurements and so incorporating the third cylinder into our estimation makes our trajectory jump let's load the reference data as well so the reference data is also not true because it was just obtained by this uncalibrated overhead camera but what we can see is for example here the turn of the robot is too strong but then fortunately here the matching of the landmarks brings the robot closer to the correct trajectory and now if you compare this to our previous results our matching result is now globally correct but it looks pretty jagged the reason for this is that we didn't have a concept to deal with the measurement noise of our laser scanner but if you compare this to our previous result you can see the previous result is much smoother but overall it tends to bend and after a while it ends up in a completely wrong position
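to make the pose correction concrete here is a small sketch assuming the transform is given as the tuple lambda, cos alpha, sin alpha, tx, ty described above — the exact tuple layout in the lecture code may differ, so this is an illustration of the idea rather than the reference implementation

```python
from math import atan2, pi

def apply_transform(trafo, p):
    # p' = lambda * R(alpha) * p + t, with the rotation given by (c, s)
    la, c, s, tx, ty = trafo
    x, y = p
    return (la * (c * x - s * y) + tx, la * (s * x + c * y) + ty)

def correct_pose(pose, trafo):
    # send x and y through the transform, then correct the heading
    la, c, s, tx, ty = trafo
    x, y = apply_transform(trafo, (pose[0], pose[1]))
    # recover alpha with atan2 (not a plain division) and add it to the heading
    heading = (pose[2] + atan2(s, c)) % (2.0 * pi)
    return (x, y, heading)
```

for example a pure 90 degree rotation with la 1, c 0, s 1 and zero translation maps the pose 1, 0, 0 to 0, 1, pi/2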
SLAM_F_05.txt
so so far our result was not so impressive because the implementation was made to handle an arbitrary number of landmarks but we never added one so now we'll have a look at how to add a landmark to our system state now this addition of landmarks will happen later on whenever the robot observes an object in the measurements for which it doesn't find a corresponding landmark in the current list of landmarks so in the beginning the robot state is x y and theta the heading and so if we add one landmark this will be augmented to contain x y theta and x1 y1 and we will use the robot's measurement of the bearing angle and the distance to obtain this x1 y1 and of course if i add one more landmark i will obtain x y theta x1 y1 and x2 y2 and so on now what happens to the covariance matrix well this is initially a 3 x 3 matrix containing the variances and covariances and now if i add one landmark i will still have this three times three sub matrix here but it will have two more state variables so this will be a 5 x 5 matrix and how shall i initialize all those new elements we will assume that this new landmark is not correlated with the previous state and we will assume that x and y of the new landmark are uncorrelated and since we don't know anything about the initial position before our first measurement we'll assume those values to be infinity and now for the practical implementation we will use 10 raised to the power of 10 which by the way means that the standard deviation is 10 raised to the power of 5 which is 100,000 and since all this is in millimeters this is 100 meters so it is not infinity but it is much larger than the size of our arena so all you have to do now is whenever you add a landmark add two more state variables and keep the old ones and also extend the matrix by two elements for both the rows and columns set the infinity values here and zeros here put back the new state in the kalman filter's state variable put back the new covariance in the covariance member
variable and do not forget to increase the number of landmarks which is also a member variable of the kalman filter by one so now let's implement this so here's the code i prepared for adding landmarks and it is the same as the previous code so you may want to put your previous code for the prediction down here and here is the new member function you'll have to implement so all it gets are the x y coordinates of the new landmark and it should enlarge the state vector by 2 elements for x and y of the landmark and the number of rows and columns of the covariance matrix also by 2 it should increment the member variable number of landmarks and it should return the index of the newly added landmark instead of -1 that is all there is to do and i added a few other functions the get landmarks function will return a list of all landmarks and the get landmark error ellipses function will return a list of error ellipses one ellipse for each landmark in the current state now down here in the main function here is one addition since we do not yet have the code to add landmarks automatically whenever our robot sees a new one we just add one manually namely at the position 400 700 and if your code works correctly you will see this landmark at that position later on but the error ellipse will be so large that you can't see it so i also added some code to set the standard deviation in x to 300 millimeters and in y to 500 millimeters so you will see not only the landmark's position but also a suitable error ellipse now in the main part nothing changed in the kalman filter but down here i added two calls which will write out all the cylinders of the landmarks and their associated error ellipses this uses write cylinders and write error ellipses and those two functions are imported at the beginning of the code and after you implemented and ran this it'll produce the ekf slam add landmarks txt file so load this and you will see the following that's the trajectory and there's also our landmark at
400 700 with a standard deviation of 300 in x and 500 in y and of course since we do not process any measurements so far the landmark's coordinates are not influenced by the robot nor is the robot influenced by the landmark and so we obtain our previous trajectory for the robot and a landmark which is constant in position and which has a constant covariance matrix so adding observations will be our next step but first please program the addition of a single landmark to the state and covariance
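the augmentation described above can be sketched like this with numpy — the lecture code does this inside the kalman filter class on its member variables, so the free function and its return values here are just an illustration of the idea

```python
import numpy as np

def add_landmark_to_state(state, covariance, x, y, big_variance=1e10):
    # append (x, y) to the state, grow the covariance by two rows and columns,
    # put `big_variance` ("infinity") on the new diagonal entries and zeros
    # elsewhere, since the new landmark is assumed uncorrelated with the rest
    n = len(state)
    landmark_index = (n - 3) // 2            # 3 robot states, 2 per landmark
    new_state = np.append(state, [x, y])
    new_covariance = np.zeros((n + 2, n + 2))
    new_covariance[0:n, 0:n] = covariance    # keep the old block unchanged
    new_covariance[n, n] = big_variance
    new_covariance[n + 1, n + 1] = big_variance
    return new_state, new_covariance, landmark_index
```

starting from a pure robot state of length 3 the first call returns landmark index 0, a state of length 5 and a 5 x 5 covariance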
SLAM_D_04.txt
let's find this out looking at the covariance matrix which is the expected value of c minus the expected value of c times c minus the expected value of c transposed and so c is as we have defined just x y which is by the construction of our lever arm x and 2x so this is because our lever arm looked like that here was our motor and here was the other joint and this was l and this was also l so every movement here at this joint is doubled at that joint and observing that x was normally distributed with expectation 0 and variance sigma x squared and so from that we see that the expectation of x was of course zero and so from that it follows that the expectation of c is zero zero and so the formula above here becomes very simple sigma now is just the expectation of c c transposed which is the same as the expectation of x 2x as a column vector times x 2x as a row vector which is simply x squared 2x squared 2x squared and 4x squared and this is symmetric but the expectation of x squared is the variance of x so this simply becomes sigma x squared times the matrix 1 2 2 4 and so to compute the error ellipse we compute the eigenvectors of this matrix and we will now not care about sigma x squared so the eigenvectors of 1 2 2 4 and if you compute those you will obtain vector 1 is 1 2 because 1 2 2 4 times 1 2 is 1 times 1 plus 2 times 2 and 2 times 1 plus 4 times 2 which is 5 10 and so this is 5 times 1 2 and so the eigenvalue lambda 1 is 5 because if i put in this vector i get out 5 times the vector and so the second eigenvector is minus 2 1 and this leads to the following minus 2 plus 2 and minus 4 plus 4 and so this is 0 0 which is 0 times minus 2 1 and so we get our error ellipse as follows v1 is 1 to the right and 2 up v2 is 2 to the left and 1 up and so this is perpendicular our ellipse is indeed stretched in the v1 direction whereas lambda 2 is 0 so it is indeed a degenerated ellipse which is just this line and this reflects the fact that by assuming that this link here is a perfect multiplier the two variables will be perfectly coupled so if there
is some error in X then there will be just double the error in Y with no added noise and reality I will be unable to produce this mechanical components perfectly and so in reality my lever arm will add some noise which will lead then to an error ellipse like that but as we have set up the problem it is indeed a degenerated ellipse now let me ask you the following now for this mechanical setup which we just had with our ellipse being degenerated and X being normal distributed like that Y being normal distributed like that what is the variance in Y is it the same as an X is it two times the variance in X or four times the variance in X or five times the variance in X now what is the correct solution ABC or D
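To double-check this eigen-decomposition numerically, here is a small self-contained sketch; the helper name eig2x2_sym is my own and not part of the lecture code, and the common factor sigma x squared is left out, just as above.

```python
import math

def eig2x2_sym(a, b, d):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]],
    smallest first, via the characteristic polynomial."""
    tr = a + d
    det = a * d - b * b
    root = math.sqrt(tr * tr - 4.0 * det)
    return (tr - root) / 2.0, (tr + root) / 2.0

# Covariance matrix of C = (X, 2X), up to the factor sigma_x squared.
lam2, lam1 = eig2x2_sym(1.0, 2.0, 4.0)
print(lam1, lam2)   # eigenvalue 5 for eigenvector (1, 2), eigenvalue 0 for (-2, 1)
```

Since one eigenvalue is exactly zero, the ellipse degenerates to a line along (1, 2), exactly as derived above.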
SLAM_Lectures
PP_03.txt
so now that you implemented your first version of Dijkstra's algorithm let's make our first improvement and this will be just a cosmetic improvement and not an improvement of the algorithm as such so let me show you the outcome so if in this improved version I set the start and the goal it will do routing exactly as before however it will represent the actual distances using different intensities of green so zero cost is at the start node and the maximum cost is when it reaches the goal node and if I place this somewhere else then it will expand accordingly and as you see there are now larger distances but the green intensity automatically rescales so that the maximum distance is always represented by a green color with maximum intensity now if you're adventurous you might try out defining a maze for example by drawing those obstacles with the left mouse button and then you can see nicely how the algorithm explores its way to the goal and using the right mouse button you can of course punch some holes into your maze and so you will get different results so now let's modify our previous algorithm to get this nice visualization and the required modifications are quite minimal so remember when we discussed how the algorithm proceeds on a graph we did two things when a node was visited first we crossed out the node and the second thing was at the very moment we marked this node as visited we also fixed its cost of 4.4 or whatsoever now in our case we have this raster an array called visited where we started from our start node all the array cells were initialized with zero and as soon as we visited one node we replaced that value by a one so as the algorithm proceeds it will replace all those zeros by ones we now also want to record each node's cost so we would need a second array where the cost of the start node is zero and so whenever we mark a node as visited we write down the cost of this node in this second array now here comes a trick instead of using a second cost array we will just use a single array where we will record the cost and whenever the cost is larger than zero we know we have visited this node so we will just keep our old visited array and instead of marking the nodes with zeros and ones we will mark them with the actual cost so this new array of visited values will contain all zeros in the beginning just as we did earlier for our visited array and then whenever we visit a node we will replace this zero by the actual cost now we have one little problem here so remember that our start node actually had the cost zero if we visit this node we will put this cost value of zero into our visited array meaning it will look as if the start node was not visited yet so we make a little trick here we set this value actually to a very small number so when the start node is visited this small number is put in here which is then larger than zero and by using this trick we can use a single visited array to record all the visited nodes and at the same time record all the cost values so let's have a look at the code you'll have to implement and the changes are really minimal so the code now is p1b and let's have a look at the main part now in general we started with one version of the Dijkstra algorithm and we will modify this constantly but I won't show you the solution so what you'll need to do is you'll need to start with your previous implementation and copy all the things you did in the previous implementation to this new version where the comments here stay the same so it is easy to identify the places where you have to put in your previous solution now first of all you have to modify this cost instead of 0.0 use a small number it must be positive and it should be substantially smaller than one the unit distance so that it won't have an effect on the algorithm itself and the second and only other change you'll have to do is instead of marking the visited node with one put in the actual cost of the node which is just the variable cost that has been determined from the element we picked from the front so these are the only two changes you'll have to do and after that you should be able to see the distances visualized as different intensities of the green color
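The single-array trick can be sketched in a few lines of self-contained Python; this is not the lecture's p1b code (the function name and grid layout here are my own), but it shows the mechanism: cost values double as visited flags, with the start getting a small epsilon instead of 0.0.

```python
import heapq

def dijkstra_costs(obstacles, start, goal, extents):
    """Grid Dijkstra where the visited array doubles as the cost array.
    A cell value of 0 means 'not visited yet'; any value > 0 is the
    final cost of that cell (the start gets a tiny epsilon, not 0.0)."""
    w, h = extents
    visited = [[0.0] * h for _ in range(w)]
    front = [(0.001, start)]           # epsilon cost so the start reads as visited
    while front:
        cost, (x, y) = heapq.heappop(front)
        if visited[x][y] > 0:
            continue                   # already visited: its cost is fixed
        visited[x][y] = cost           # mark visited AND record the cost
        if (x, y) == goal:
            break
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in obstacles \
               and visited[nx][ny] == 0:
                heapq.heappush(front, (cost + 1.0, (nx, ny)))
    return visited

# A 4x1 corridor: costs grow by one unit per cell, start holds the epsilon.
v = dijkstra_costs(set(), (0, 0), (3, 0), (4, 1))
print(v[0][0], v[1][0], v[2][0], v[3][0])
```

A visualization would then simply scale the green intensity by cell value divided by the maximum recorded cost.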
SLAM_Lectures
SLAM_C_04.txt
now before we go on let's again have a look at the output so if you run the program we get the following result and the initial distribution is a unit pulse which may be seen as a special discrete form of our triangle distribution with a half width of one and the outcome of the first convolution that's a triangular distribution and the outcome of the second convolution that looks almost like a triangular distribution but if you remember the probability values that we computed earlier with 1/16 4/16 6/16 you see that this is not a triangle anymore so let's see how that evolves so let's make our arena larger and instead of three times we just do 50 times and then we run it again so this is now the outcome let's zoom in here's the beginning and as we move on our distributions lose this triangle shape until in the end they become pretty much bell shaped now that's quite interesting because we have a certain distribution and using that convolution we lose the shape of our original distribution even though it was kind of a nice mathematical shape we started with also of course as we move on the distributions get wider every time and so they start to overlap each other so that the robot is at position 960 in this iteration with the same probability as it is at position 960 in the next iteration so how far did we get with our attempt to model the uncertainty what we have so far is we start at a position then we move and when we end up in another position then we move again and so on so what we have so far is first of all we are able to model the uncertainty regarding our current position so we use a distribution to describe our current position and also for this move here we use another distribution to model the movement now what is still missing is the measurement so let's have a look at that so say there's a manufacturer of a laser scanner and before selling his products he will do a calibration so he will mount it somewhere right and then he will have some kind of a fixed wall and he will shoot rays at this wall and the lidar will tell him distances here say the fixed distance which is calibrated very accurately will be 5 m but the laser scanner measures a little bit more and then he will measure again and it will be less he'll measure again it'll be more again and so on and so he will put together all those measurements and do something like a histogram which will end up in something like that so in the end the manufacturer will publish a data sheet telling you that this scanner has an accuracy of something like plus/minus 4 cm meaning that here might be 4 cm and now the peak of this curve should be on the actual value otherwise there's a systematic deviation and the manufacturer will try to remove those systematic deviations so that just the stochastic part remains so now for our case this means the following if there's a wall and I have a laser scanner but I don't know where it is but I shoot a ray and the laser scanner tells me it's 5 m then my conclusion would be that I'm here 5 m away from the wall however incorporating that inaccuracy that is well known from the data sheet of the scanner I have to put that here of course so this means this uncertainty in measuring translates to an uncertainty in my current position so what does this mean for our robot again here's our scene and here's our robot and what we did so far is we modeled the uncertainty in position and in movement and so we know the robot is here but as we know it's not really accurate we say this is represented by this distribution so it's here with the highest probability but it might be also somewhere else with an accordingly lower probability now we put this laser scanner on top and the laser scanner measures the distance to a wall and it tells us it's 5 m so now we measure back from the wall those 5 m and we end up here we find out that we should be here but we are not exactly here of course as we know we have here the error of our laser scanner so as we see the motion tells us you're most probably here but you could also be here of course that fits our measurement it could be as well the case that in reality I'm really here and the measurement is wrong so I can choose either I have to go into the lower probability range of my position or into the lower probability range of my measurement and now this is called the prior because this curve is known before I do a measurement and then we have here the distribution or the probability of the measurement and now what I want to know is what is the result of this so if I put together the prior distribution and the measurement distribution what will be the final distribution what shall I do shall I add those distributions or subtract or multiply or divide or do I have to do again a convolution
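The convolution experiment described earlier can be reproduced in a few lines; this is a sketch independent of the lecture's plotting code, using the triangle distribution 1/4, 2/4, 1/4 as the movement error.

```python
def convolve(dist, kernel):
    """Discrete convolution of two probability distributions given as lists."""
    out = [0.0] * (len(dist) + len(kernel) - 1)
    for i, p in enumerate(dist):
        for j, q in enumerate(kernel):
            out[i + j] += p * q
    return out

move = [0.25, 0.5, 0.25]               # triangular movement error
position = [1.0]                       # unit pulse: position known exactly
position = convolve(position, move)    # first move: the triangle itself
position = convolve(position, move)    # second move: no longer a triangle
print(position)                        # 1/16, 4/16, 6/16, 4/16, 1/16
```

Repeating the last line many times shows the distribution widening and turning bell shaped, just as in the 50-step run above.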
SLAM_Lectures
SLAM_A_04.txt
so I want you to implement this motion model and for that I have prepared some code for you so the code consists of a main function and the filter step and you will have to implement this filter step the filter step gets the old pose which consists of x y and the heading and it gets the motor ticks left and right and it has to implement the motion equations for the two cases the first case is the motor ticks left and right are the same in that case the robot just drives straight and if not the robot drives along a curve segment and you'll have to implement that case too and these are the formulas we just discussed and I prepared a main function for you there's the constant we just discussed the ticks to millimeter conversion factor another constant is the width of the track in millimeters and the main function does all the opening of the file and the processing for all the motor ticks so here it opens the log file it reads in the motor ticks then it starts with a given pose so since I do not know the pose I just say it is at x equals 0 y equals 0 and with a heading of zero and then I start to construct the list of the filtered poses for all ticks in the motor ticks I just call this filter step function the one up here which you'll have to implement the filter step function takes the pose takes the ticks and computes the new pose and then the new pose is just appended to this list then in the end I do two things I print out all the positions and orientations and in addition I also do a plot so now let's have a look at what the program does after you implemented the filter step function so just run it and this is what happens the program prints a long list of x y heading values so if I scroll back these are the first values at the start of the experiment the robot stands still so it is at 0 0 with heading zero and at the beginning the motors are not turning and for the first 1 2 3 4 5 6 7 8 9 10 11 12 13 for the first 13 time steps the robot stands still and then it starts to accelerate and it goes straight since the heading is along the x axis it goes into the x direction and it accelerates and then as you remember with the third value here the left and right ticks are a little bit different so it starts to turn a little bit so that the heading here switches to a value of something close to 2 pi so when you implement this make sure that you have those 13 zeros here and then the same values as you can see here but the program does not only output those values it also draws a figure so here's the figure it draws and you can see the robot starts at 0 0 with heading along the x axis theta equals zero and then it moves and does a left turn and so you see for the first time the trajectory that the robot is moving in the arena so now try to implement this have a look at the graphics and make sure that you get the correct values as they are shown here
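For reference, here is one possible sketch of such a filter step, written directly from the formulas discussed in this unit; it is not the prepared solution code, and the argument names are my choice.

```python
from math import sin, cos, pi

def filter_step(old_pose, motor_ticks, ticks_to_mm, robot_width):
    """One step of the motion model: straight drive if both tracks
    moved equally, otherwise a circular arc around the curve center."""
    x, y, theta = old_pose
    l = motor_ticks[0] * ticks_to_mm
    r = motor_ticks[1] * ticks_to_mm
    if motor_ticks[0] == motor_ticks[1]:
        # Both tracks moved the same amount: straight drive.
        return (x + l * cos(theta), y + l * sin(theta), theta)
    # Curve: turn by alpha around the center of the circular arc,
    # which lies at distance R + width/2 perpendicular to the heading.
    alpha = (r - l) / robot_width
    rad = l / alpha                  # radius of the left track's arc
    cx = x - (rad + robot_width / 2.0) * sin(theta)
    cy = y + (rad + robot_width / 2.0) * cos(theta)
    theta = (theta + alpha) % (2 * pi)
    x = cx + (rad + robot_width / 2.0) * sin(theta)
    y = cy - (rad + robot_width / 2.0) * cos(theta)
    return (x, y, theta)

# Equal ticks: straight drive along the current heading.
print(filter_step((0.0, 0.0, 0.0), (100, 100), 0.35, 150.0))
```

Opposite ticks make the robot turn in place: the arc center then coincides with the robot's own center, so x and y stay put while theta changes.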
SLAM_Lectures
PP_09.txt
now while we achieved pretty good path planning globally we still have to admit that locally the generated path is not drivable by a normal car so for example for this situation we may get a path like this which could be driven by a robot for example our two track robot which we used in the SLAM lecture which would first drive straight then move one track forward the other backwards so make a turn in place and then drive straight again make a turn in place and then arrive at the goal however a normal car has the constraint that it can only go straight or it can go along a circle where the radius of the circle is determined by the steering wheel so what we want now is to adapt our search for a possible path such that the trajectory which we generate can be driven by a standard car which obeys those constraints so we assume that our car can go either straight or make a turn to the right or to the left where those turns are made of circular segments so this model is more realistic than what we had earlier but you won't find it on real roads because if you build a real road like that consisting of a straight line segment then a circular segment and then again a straight line segment this would mean that a car coming from here with a certain speed the steering wheel in this position would have to turn its steering wheel at infinite speed at this point to make this right turn and then turn it back at infinite speed at this point which is unrealistic so that if you built streets that way you would have a lot of accidents right here and probably also here however this kind of navigation would be possible at slow speeds for example on a parking lot where the cars drive slowly then stand still turn their wheels and then go straight into the parking lot imagine our car is here looking along the x axis so now we do have x y and heading angle which together form the pose of our car now assume this is the start point and we want to end up here with that orientation and all the way from here to there should be made up of either straight line or circular segments so in this case the solution is easy we would take straight line segments until we arrive here in our goal state now if our goal state is actually this then we would probably start the same but the last segment would be replaced by a curved segment and so we see starting from here if we make the simplifying assumption that we always will go for a unit length we will go straight make a right turn or left turn and for further simplification we only allow three curvatures namely a curvature of zero a positive curvature and a negative curvature both of which are constant then we end up in the following situation in the first step we can go straight left or right starting from this position we can go straight again or left or right but also starting from these positions we can go straight left or right and so as you see just as earlier the set of solutions spans a tree but now this is in the kinematic state space of the car instead of being in a raster as was the case in our previous implementations and so our plan is as follows starting from our start pose we will try to reach our goal pose by exploring this tree so to simplify matters our standard length will be five units and the allowed curvatures will be plus one tenth zero or minus one tenth and so we start by expanding the start node and then we will similarly as in our A star implementation compute the cost as the distance from the start plus the remaining direct line distance and we will do so for the other nodes as well now the driven distance in all those three cases is five and so this node will be the closest to our goal and we will expand this node next and it is our hope that we will finally end up with a trajectory say like that which is able to combine straight line and circular segments and again what we explore here is the state space in terms of the controls that the car can change in this case the steering angle whereas earlier we have explored the space of possible positions in the plane without worrying about whether it is possible for a real vehicle to set its controls in a way that it can actually follow this path now there's a small adjustment we have to make with regard to our previous implementations namely in the goal state since we allow only discrete lengths and only discrete curves in general we will be unable to hit the goal state exactly and so instead of testing if we reach the goal state exactly as we did earlier we now have to test if our current state is within a certain tolerance in position and heading angle from our goal state and if so we will accept it so this is not unrealistic because usually after having planned the path we might still be able to fine-tune this trajectory and use in detail a slightly different curvature to optimize the path and end up in the correct position with the correct orientation so now let me show you the implementation of this which you'll find in the prepared code so the code looks pretty much like our previous A star code with some modifications that's the usual stuff in the beginning and then we have here an extra class which does the following given a start pose a curvature and a length it computes the end pose so if you give it this start pose a length and a curvature it will compute this end pose where this is the length and the radius is 1 divided by the curvature so this method end pose is called all the time when we try to generate the tree variants starting from an existing pose the second method down here is actually only for the visualization what it does is given again start pose curvature and length and in addition a delta length it will generate for the exact same situation as above a set of intermediate points at a spacing of delta length and this is used in the graphical user interface to draw the final trajectory so now let's go down to the main implementation first as
previously we define our movements those movements now are not in terms of x and y but they are in terms of car controls so they consist each of the curvature and the distance to drive and we have a positive curvature zero and a negative curvature for in this case a total of three possible movements here we have the distance function again this is the same as in the previous versions and here we have this function which we discussed earlier which determines if two states are close so in this case they are considered to be close if the heading difference is less than 15 degrees and the distance is less than two and remember each time we drive a segment we actually drive a segment length of five so now here comes our explore state space function which we modeled similarly to the A star search which we used in our previous program so first we put the start node into the front now keep in mind that this here now is not x y positions but it is poses so it is x y and heading and so in the beginning we put in here the distance between the start pose and the goal pose which is the Euclidean distance between the x and y parts of those poses we put in as cost a small positive cost and the start pose and then we put in two more items namely the previous pose and an index and both of those are only needed in order to reconstruct our path later on so as in our previous version we have a set of visited cells just to visualize which positions were explored by our algorithm and also we keep track of all generated states and this is similar to the came from data structure which we used in our A star implementation now here comes the main loop and it's pretty short so I integrated a kind of a timeout so if the front heap gets too large you may play with that number in my case if it has more than 500,000 elements I will just break the search and print out that there's a timeout and it will return without giving a path from start to goal so that is just a safety measure otherwise as in A star I pop the smallest item from the heap I mark this item to be visited and here's a small modification because my pose now is x y and heading where x and y are now floating point numbers I have to convert them into integers in order to use them as indices into my array now after having marked this I remember that I generated this pose and when I generated it I came from a previous pose with the move number move and this move number is 0 1 or 2 for left turn go straight and right turn and this is exactly the index into this list of possible movements so then I check if I reached the goal and as mentioned earlier I cannot just test if my pose is now exactly identical to my goal pose so I check if those states are close and then in the end there's our usual loop over all possible movements so for i in xrange of max movement id which is three so i is 0 1 or 2 I get this movement from the list of movements so I get a curvature and a length and I compute a new pose using my curve segment end pose function which we just discussed and after having this new pose I check if it is within the bounds of my world area if not I will skip the rest of the for loop and then as usual I check if there is no obstacle and if there's not I compute my new cost as the old cost plus the length of the segment and I compute the new total cost as this new cost plus the distance to the goal and then I push this new element consisting of the total cost the new cost the new pose my previous pose where I came from and i which is the number of the movement that has led to this entry in my heap so this is all pretty much the same as what we did in our previous A star implementation and finally down here there is the backwards unwinding of the path which is then returned together with the visited cells now when you run this and you place the start position by shift left clicking then you have another option namely if after you click you just drag the mouse then you can also define the heading and not only the
position so say we start going to the right and then we want to end up down here looking to the left now switch off the visited cells for a moment then we get a trajectory like this which makes perfect sense so we drive in that direction drive down here and then end up here in the correct orientation now let's see what happens if that distance here gets closer then yes the curve gets narrower the radius decreases until we end up in a situation like that where we see that we already have reached the maximum curvature at which our car can go so if you force those two points to be at an even smaller distance it plans differently it first does a left turn then does the minimum radius right turn and then does a left turn again to end up in the correct orientation at that point so that is pretty cool now also it is able to come up with pretty interesting solutions for example if you have to go down there but with the same orientation as the original orientation it will make a loop up here now if there is an obstacle it will make the loop down here if there's an obstacle there it will take a while and then it will go around it so that is pretty cool now let us have a look at an interesting effect which happens when we go over longer distances so first of all everything looks okay we go from here take a left turn go up here take a right turn we end up slightly off but that is the tolerance we allow and if you show the visited nodes we see it is pretty straightforward so these are possible turns that we visited and these are the five unit distance increments it goes up here and everything is all right now if we change the orientation of our start point say we go up this time it works as well we have a little bit of an extra cost up here if it goes down it's fine if we go left it's fine too so let's go right again now if we try the same with our end point so the orientation should be downwards then we see it takes a long time and in this case it tells me there's a timeout so what happens well if you change the orientation of the start point it seems to be not so bad but if you change the orientation of the end point it takes very long and doesn't come up with a solution we also see the huge difference between the end point oriented to the right where only a few cells are visited and the end point in a different orientation where many cells are explored in some cases so many that it does not come up with a solution and so what happens is that our A star like implementation actually tries to move to the goal as quickly as possible but then when it arrives at the goal it has a completely wrong orientation and so in order to have the correct orientation it needs to do a huge turn so something like this it starts here with the correct orientation then it is dragged towards the goal but then here it has to start this huge turn in order to end up in the correct orientation here so as early as here it has to plan for ending up in the correct orientation up here and this induces a huge number of states to be explored so what do you think is this a major flaw in our approach or do you think that we just need a faster computer with probably also more memory to fix this what do you think
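The end pose computation at the heart of this search can be sketched as follows; this is my own minimal version of the idea, not the class from the lecture code, and the tuple-based pose format is an assumption.

```python
from math import sin, cos, pi

def end_pose(start_pose, curvature, length):
    """Pose reached after driving 'length' with constant 'curvature'
    (curvature = 1 / radius; 0 means a straight segment)."""
    x, y, theta = start_pose
    if curvature == 0.0:
        return (x + length * cos(theta), y + length * sin(theta), theta)
    # The arc center lies perpendicular to the heading at distance
    # radius = 1 / curvature (to the left if curvature > 0).
    radius = 1.0 / curvature
    xc = x - radius * sin(theta)
    yc = y + radius * cos(theta)
    theta_new = (theta + length * curvature) % (2 * pi)
    return (xc + radius * sin(theta_new), yc - radius * cos(theta_new), theta_new)

# A straight segment of length 5, then a left arc with curvature 1/10:
p = end_pose((0.0, 0.0, 0.0), 0.0, 5.0)
print(end_pose(p, 0.1, 5.0))
```

Negative curvatures work with the same formulas because the radius simply comes out negative, which flips the arc center to the other side of the heading.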
SLAM_Lectures
SLAM_D_07.txt
okay then what is the dimension of B
SLAM_Lectures
SLAM_D_17.txt
now it's time to produce our final result the extended Kalman filter and you've seen all the equations in several parts of this unit but let me put those together on one slide so we have the prediction step where we compute our predicted mu and our predicted Sigma so this is the predicted state and the predicted covariance matrix where in our case we computed this system noise from the covariance of our control noise and we have the correction step where we first have to compute our Kalman gain but we have to replace every occurrence of the C matrix we used earlier by our Jacobian matrix of the measurement function and from that we obtain our new system state and our new covariance and these are the three equations for the correction step so you already implemented the prediction step and the last thing you'll have to do is to implement this correction step there's one final hint I want to give you regarding the correction step now in this step you will compute the new state as shown on the previous slide so this here is the innovation which is a two vector of the range measurement minus the predicted range measurement and the measured angle minus the predicted angle now the difference in r will work as you'd expect but the difference in angle might give you wrong results which lead to bugs that are really hard to track down now say the measurement tells you that your angle is this so z is almost pi namely pi minus some small delta whereas the predicted position is very close so h alpha is almost minus pi namely minus pi plus some small epsilon and so just subtracting those values you will obtain 2 pi minus delta minus epsilon and that is not the same as minus delta minus epsilon which you would expect and so what you have to do here is make sure the final value of the alpha component is within minus pi to pi although in our case if you normalized z correctly and if you normalized h correctly then this case might probably not occur however in general it is good to treat this case correctly
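This normalization of an angle difference can be sketched in a few lines; the helper name is mine, and the assignment may organize it differently.

```python
from math import pi

def angle_difference(a, b):
    """Difference a - b of two angles, normalized into (-pi, pi]."""
    d = (a - b) % (2.0 * pi)      # now in [0, 2*pi)
    if d > pi:
        d -= 2.0 * pi
    return d

# The example from above: measured angle near +pi, predicted near -pi.
delta, epsilon = 0.1, 0.05
print(angle_difference(pi - delta, -pi + epsilon))   # about -0.15, not about 2*pi
```

The same helper can be applied to the alpha component of the innovation before multiplying by the Kalman gain.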
so always watch out when you take differences of angles which may introduce an additional plus or minus 2 pi offset and still one more thing when you compute the Kalman gain this formula contains Q which is the measurement covariance matrix now we will assume that Q contains the variance of the range measurement and the variance of the angle measurement which are uncorrelated and so in the program sigma r is called measurement distance standard deviation and sigma alpha is called measurement angle standard deviation and keep in mind that Q contains the variances that is the squared values of the standard deviations so here's the final programming assignment so this is the familiar extended Kalman filter class with its constructor which has grown by two additional elements the measurement distance standard deviation and the measurement angle standard deviation then here is g our state transition function and here you have to put in the derivatives with respect to the state and to the control here you'll have to put in your prediction function here you have to put in the derivative of the measurement function with respect to the state in which case I gave you a hint where to find this and here this is the new part this is the correction step of the Kalman filter and don't be afraid your final function will be much shorter than the comments I've put here but those comments will make your life easier and so down in the main function we have our familiar robot constants and now we also have the constants we used in our unit B to extract the cylinders from the scan and to match them to the cylinders in the map we have our filter constants our motion and turn factor and as well now here we set our measurement error in range and in angle and you see I don't trust my hardware very much so I say the measurement error is 20 cm and the angle error is 15 degrees we start with an initial position and heading and an initial covariance matrix of 10 cm in x and y and 10 degrees in heading and after
initializing our filter class we read all the data so now we need the motors but also the scan and the landmarks which are here converted to the reference cylinders list and here this is our entire Kalman filter loop so it does the prediction this is exactly the same step that we used earlier and it does the correction now unfortunately in the correction there are many things to do so we need to evaluate our scan data find the cylinders project them find the closest matching cylinders from our map and then return the measurements from our scanner and the positions of the reference cylinders and this is returned in observations so observations is a list of pairs where the first element is the measured range and angle and the second element is the x y coordinates of the corresponding landmark now as we are at a certain position we may see more than one landmark and this is solved as follows we just present the landmarks one after the other to the correction step of our Kalman filter so in each step we will give just one observation to our correction step and so our correction step will get a measurement range and alpha and the landmark x and y and will update the current state and covariance and after having presented zero one or more observations to the filter we start again and do the next prediction step also for logging purposes we put all the states into a list all the covariances into another list and for visualization we also append all the matched reference cylinders from the used observations and later on we use this code to output all the positions all the error ellipses and standard deviations in heading and all the matched cylinders now please program this function above here and don't forget to replace all those put your method here placeholders by your actual code
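The correction step itself boils down to the three matrix equations from the slide; here is a generic sketch with numpy, which is not the assignment's exact function signature and which omits the angle normalization of the innovation discussed earlier.

```python
import numpy as np

def ekf_correction(mu, Sigma, measurement, h, H, Q):
    """Generic EKF correction step:
    K = Sigma H^T (H Sigma H^T + Q)^-1
    mu' = mu + K (z - h(mu))
    Sigma' = (I - K H) Sigma
    where h is the measurement function and H its Jacobian at mu."""
    K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + Q)
    innovation = measurement - h(mu)
    mu_new = mu + K @ innovation
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu_new, Sigma_new

# Tiny 1D check: state = position, measurement = position, equal variances.
mu = np.array([0.0])
Sigma = np.array([[1.0]])
H = np.array([[1.0]])
Q = np.array([[1.0]])
mu2, S2 = ekf_correction(mu, Sigma, np.array([2.0]), lambda m: m, H, Q)
print(mu2, S2)    # estimate moves halfway to the measurement, variance halves
```

With equal prior and measurement variance the gain is one half, so the estimate lands midway between prediction and measurement, which is a handy sanity check for your own implementation.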
SLAM_Lectures
SLAM_G_03.txt
now let's have a look at Part B where our task is to initialize a new Landmark so the situation is our robot is here it measures a distance and bearing angle so that's our measurement C to a cylinder and it determines by Computing the likelihoods that this is a new Landmark that should be incorporated into its list of current landmarks so all we have to do is using the known position compute the position of the landmark and of course compute the appropriate Co so as for the landmark position we know that the measurement can be computed from our current pose and landmark coordinates using our measurement function H so what we want now is the inverse of this we want to get our Landmark from the current pose and our measurement so with regard to M and C this is the inverse function so this is already the first step we use the inverse function the current pose and the known range and bearing measurement to compute the L Mark position M now in our python code we'll have to compute the scanner pose because it is different from the robot's pose and we may use the legal lock file do scanner to World function to compute the world coordinates M of our new Landmark from the XY coordinates in the scanner's coordinate system so this is what we'll use in the code now let's have a look at the co-variance Matrix so we know that the Jacobian of H is a capital H that's a derivative of H with respect to the landmark and we used that already to compute the likelihoods now we would have to take this at the current state and the landmark coordinates X and Y which we just computed now this H Matrix is the Jacobian of this H function which translates from our Landmark coordinates to our measurement so this age gives us the information how our Landmark noise translates to a measurement noise but now we have the inverse problem our robot measured the range and bearing and this noise translates to the noise of the landmark so we need the Jacobian of the inverse of H which is of course the 
inverse of this H matrix. and so in order to compute this covariance which we're looking for, all we have to do is take the measurement noise, which is the variance in range and bearing, and use error propagation, now with the matrix H inverse. and after this we'll have to append our new landmark coordinates to our landmark positions list and our new covariance matrix to the landmark covariances list, and that's all there is to do. so in summary: we compute the landmark's coordinates, mathematically by computing h inverse of the state and our measurement, and practically by computing the scanner pose and using the scanner_to_world function. then we compute the Jacobian, which we take at the current state and at the landmark coordinates which we just computed, we invert this, and compute our landmark covariance by this error propagation. now I have prepared the slam 10c new-landmark file for you. as usual, it contains our particle class with a constructor and a utility function. it also contains the derivative of h with respect to the landmark, which we just had in our previous exercise, and then here, this is the function you'll have to implement, and here in the first part I gave a hint how to compute the scanner pose given the particle state. and then remember, after you computed the position and covariance matrix of the landmark, insert them into the particle's lists of positions and covariances. so that's all there is to do. let's have a look at the main function now. the main function again places the scanner at the origin of the coordinate system. after setting up the particle, three landmarks are measured, and those measurements are given in the scanner's coordinate system. so the first measurement is at x = 1,000 relative to the scanner, which happens to be aligned with our world coordinate system in this case, so this is at (1000, 0). the second measurement is at (2000, 0), and the third measurement is at (1000, 1000) divided by the square root of two, which means it will be here. and after you run
this, you should see the following result. so for landmark zero, the position is initialized as (1000, 0), so that's correct, and the landmark covariance matrix contains 40,000 and 68,000 on the main diagonal, which corresponds to the error ellipse here. so it means 200 along the axis at angle zero and 260 along the other axis, so it's slightly elongated. now for the first landmark we get (2000, 0), so that's correct too, with a covariance matrix which corresponds to an error ellipse which is 200 in one direction again, so it's the same, but 523 in the other direction, which is about double the value that we had earlier, so this is times 2. and the reason is of course that this error depends on the range accuracy, and so it's the same in both cases, but the error along the other axis depends on the error of the bearing angle measurement, and so it increases linearly with the distance. and finally for the third landmark, landmark number two, again we get the correct result for the position, which is here. now our covariance matrix looks much more complicated, but as we see, in terms of the error ellipse we'll have exactly the same as in the first case: it's 200 along one axis and 261 along the other axis. that's exactly what we have here: the first axis is at -45 degrees, so the axis is like that, we have 261 along this axis and 200 along the other axis, so slightly elongated. and of course we see it's exactly the same error ellipse as the red one, because this point is measured at the same distance, the only difference being that this ellipse is turned by 45 degrees. so now please program the code to initialize a new landmark
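The initialization step just described can be sketched in a few lines. This assumes the standard range-bearing measurement function h (range and bearing from the scanner pose to the landmark); the function name and the exact Jacobian layout are illustrative, not the course's exact code:

```python
from math import sin, cos, sqrt
import numpy as np

def initialize_landmark(scanner_pose, measurement, Q):
    """Initialize a new landmark from one (range, bearing) measurement.

    scanner_pose: (x, y, theta) of the scanner in world coordinates.
    measurement:  (r, alpha), range and bearing in scanner coordinates.
    Q:            2x2 measurement covariance diag(sigma_r^2, sigma_alpha^2).
    Returns (landmark_position, landmark_covariance)."""
    x, y, theta = scanner_pose
    r, alpha = measurement
    # Inverse measurement function h^-1: pose + measurement -> landmark (x, y).
    m = np.array([x + r * cos(theta + alpha),
                  y + r * sin(theta + alpha)])
    # Jacobian H of h with respect to the landmark, evaluated at m.
    dx, dy = m[0] - x, m[1] - y
    q = dx * dx + dy * dy
    H = np.array([[ dx / sqrt(q), dy / sqrt(q)],
                  [-dy / q,       dx / q     ]])
    # Error propagation through the inverse mapping: Sigma = H^-1 Q H^-T.
    Hinv = np.linalg.inv(H)
    Sigma = Hinv @ Q @ Hinv.T
    return m, Sigma
```

For the first landmark of the main function (scanner at the origin, measurement at range 1000 straight ahead), this yields the position (1000, 0) with the range variance along the axis toward the landmark and the bearing-induced variance across it.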
SLAM_Lectures
SLAM_A_06.txt
now let's talk about sensor data. why do we need sensor data at all? so if we now look at our solution, we could be happy with it, it's pretty smooth, but we don't know if it is correct. so in order to find that out, let us load the reference trajectory, and this is called robot4_reference.txt. now let's look at this. so in red you now have the reference trajectory that was obtained from tracking the robot via the overhead camera. so let's go back to the start. well, in the beginning, the reference point and the trajectory that you computed from the motor ticks are the same, that's because I have given you the start position, which I just grabbed from the reference trajectory. and as we move along, things are pretty good as long as we go straight, but then at a certain moment we see that as the robot starts to turn, there is a deviation. it seems the curve it takes has a too small radius, and this leads to a deviation, and this goes on, so the next curve also has a too small radius, and as you see everything is bent here, and after a while our robot is in a completely wrong position. so as you remember, I have measured the width of the robot, I've given you a width of 150 mm, but it was not quite clear where the middle of those robot tracks was. so one thing you can do: in your filter-motor-file exercise you can change the robot's width, so let's change that to, say, 160 mm and then just run it. now we can go back to our visualization and just press reload all, and as you can see, right now the trajectory moved a little bit, it's better now. the turns it takes are still too narrow, but it's better. so let's try another value, let's try say 173, go back and reload, and as you see, now we are pretty good. so by setting a good parameter value we obtain a trajectory that is much better. but still, this should leave you somehow worried, because the width of the robot is now just assumed to be 173, we never measured that, it just seems that this is a good value. so what we do here is
actually a calibration of the values. but the problem with this is: it might work very well now for our current robot and for our current ground, but if we use another ground we still may be off, because we will have a different slip of the robot on the ground. so we need another solution, and this will be the measurement of our landmarks by using our LiDAR. now let's have a look at the LiDAR data. so open up the logfile viewer, and in your directory for this unit locate the robot4_scan.txt file. this file contains all the scan data from our robot's LiDAR. just open this, and now on the right-hand side you see the robot's coordinate system. so the things you saw so far were on the left-hand side of our viewer, which is in world coordinates, and on the right-hand side we have the scene shown in the robot's coordinate system. now let's move the slider, and you see this is the LiDAR data that the robot sees. so as the robot travels through the arena, it measures those points. the robot moves forward, and forward is its x direction, and then the scanner scans from here to here, all those beams, it's a total of 660 beams, and then behind the robot there's a zone where it can't see anything. now what are those spikes? you remember those landmarks in the scene? well, every landmark leads to one of those spikes here. so if the landmark is here, then the laser rays that go along here hit this landmark, and the landmark casts a shadow. so currently, if you look here, we have 1 2 3 4 5 landmarks overall, and as I go through the scene this varies. so in the starting position I do have 1 2 3 4 5 6 landmarks, and remember, these were all the landmarks that we had in our scene. and as I move a little bit, for example here to step 17, you see one of the landmarks disappears in the shadow of the other landmark, and so we only have five of them. and this goes on: when we are back here, we only see two landmarks, we go around here, at this position we only see one landmark, and then we go back here
again, here's the shadowed landmark, now it moves out of the shadow, and so on. this is our entire scan scene from the robot's perspective. now, as all the other data, the scan data is just a text file, and so if you look into the directory for this unit, you will find the robot4_scan.txt file which we just opened in our logfile viewer. this file contains one line per scan position. you can see here's the first line, it starts and it wraps around here, it goes on and on, so it starts with an S, it goes on and on, and then here is the next line, and in between, all those values are measurement values. it goes on like that in the entire file, these are our 278 scans, each one having 660 range measurements. so how is one record of one scan stored? well, as always, we have a code that starts the line, in this case it is an S for scan data, and then there's a timestamp, 3150 milliseconds, then there's a count, meaning there are 660 values which follow now, and then this is just all the values, so this is 660 entries, and those values are the depth values of the scanner. now let's plot the scan data. in the files you downloaded there's a python plot-scan file, and it's very short and simple: it imports from pylab for plotting, then it imports everything from lego_robot, then it loads the logfile, in that case it loads the scan data, and then it just plots the scan data. now logfile.
scan_data means this is the list of all scans, and so if we take the seventh scan here, we just have to index that by a seven. now let's have a look, and this is how it looks: these are our 660 scan values, down here, those are the indices, and this is the depth that the scanner delivers. so you see the robot stands close to a wall of its arena, and this causes the depth values to go up here, and then there's one of those spikes, which means here's a landmark. we can see more of those spikes, so there's one two three four five six, so all of our six landmarks are visible in this scan. now let's have a look at another scan. I now switch to scan number eight, let's run it, and the outcome is this. and since the robot didn't move very much, we still have our 1 2 3 4 5 6 spikes, but we also have an artifact here. so if you look at that, you can see this is a spike going down, it's an error in the measurements, and it's going down to the value of zero, well, almost, let's have a look, it's not really zero, it's something like 15. so in order to filter out the bad values, we will assume a threshold of 20, meaning we will not use any measurement value that is closer than 2 cm to the robot's scanner. now let's think about the strategy to find the cylinders in the scan. as you have seen, the scan data looks like that: you have a certain depth, and then there's a cylinder in the foreground, which means there will be a depth jump in our scan data, it scans across the cylinder, and then it will jump back to hit the background wall. in the real world, this will look as follows: there will be a cylinder, there will be some wall in the background, and the scanner now will shoot its ray, it will hit the background, then it will start to hit the cylinder, these are these values here, and after it went across the cylinder, it will go back here and go on hitting the wall. so this will somehow go on, go around the corner here maybe, and then it will hit another cylinder, and so on. so now, what's our strategy to find those spikes?
well, as we can see, there's a strong negative slope at the beginning of a cylinder and a strong positive slope at the end of the cylinder. let's think about how the derivative looks. it's going up here, so it will be positive, then there's the strong negative peak, so the derivative will be like that, and that's flat actually, so it will be zero, and there will be a strong positive peak here, and then it will rise here for a little bit, it will switch to a slight negative slope, then again there will be this strong negative peak and this strong positive peak. so our strategy will be to set up a threshold and just say: whenever the magnitude of the derivative is larger than the threshold, then we will detect this as a falling or rising edge. so in this case, because it is strongly negative, this will be a falling edge, or the left edge of a cylinder, this will be the right edge of a cylinder, and again here we will have the left edge and the right edge. so how do we determine the derivative? now from image processing you know we can do this using discrete masks: the derivative at a discrete position i is approximately the function at position i+1 minus the function at i, divided by the step, which is in this case one. that's also termed the difference quotient. but this mask introduces a phase shift, so we'll use another one, the central difference: the function at i+1 minus the function at i-1, and since the difference in step is now two, I have to divide by two. so now let's implement this, and I prepared this scan-derivative question file for you. let's have a look at this. there's the main function down here. it sets up a constant which is 20, which is the minimum distance that we will assume to be a valid measurement value, so any distances below 20 mm are considered to be an error. and then it just loads the logfile, it picks out a scan, like number seven, and you are encouraged to try out your implementation with different scan numbers. then it just assigns the logfile scan to scan, and then it computes the derivative using the function
that you'll have to implement up here, and it just plots all the values, it plots the scan as well as the derivatives. let's have a look at this compute_derivative function. so it gets the scan, and the scan will be a list of depth values, so let's say this will be something like 100 and 110 and so on, and in the end the last value might be 550. the first item will have index zero, the second will have index one, and the last one with our scanner will be index number 659, because there will be 660 scan values overall in our scan. remember, I told you to compute the derivative using this formula, so if i is zero, you will access element minus one and plus one, and of course you don't want to do that. actually, python will let you do that, but for the index minus one it will give you this value here, assuming that the list is cyclic, and this is not what we want. so the solution here is: I start with index one, and I run until index length-of-the-scan minus 1, which means the last index is actually the length of the scan minus 2, so this will be the last index that is being accessed, and this will be the first index. so now, since we only compute these values here, we will have a total of 658 values in our final list, but I want the final list to have exactly the same length as the original list, so what I do here is I start by adding a zero and I append a zero in the end, and you will have to replace this append here by appending your computed derivative value. for now I placed there a funny function, just to remind you to fill that out
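The compute_derivative function described here can be sketched as follows. Treating readings below the minimum distance as invalid, and setting the derivative to zero there so that error spikes do not trigger edge detection, is one reasonable choice, not necessarily the official solution:

```python
def compute_derivative(scan, min_dist):
    """Central-difference derivative of a scan, skipping invalid readings.

    scan:     list of depth values (660 per scan for our scanner).
    min_dist: values below this threshold are treated as measurement errors.
    Returns a list of the same length as scan, padded with zeros at both ends."""
    jumps = [0]  # pad so the result has the same length as the scan
    for i in range(1, len(scan) - 1):
        left = scan[i - 1]
        right = scan[i + 1]
        if left > min_dist and right > min_dist:
            # central difference: (f(i+1) - f(i-1)) / 2
            jumps.append((right - left) / 2.0)
        else:
            jumps.append(0.0)
    jumps.append(0)
    return jumps
```

A cylinder then shows up as a strong negative value (its left, falling edge) followed by a strong positive value (its right, rising edge) in the returned list.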
SLAM_A_09.txt
now finally I'll show you what you just programmed. so open up the logfile viewer and look for the text files. we'll start by loading the scan, so we had a look at the scan before, and you can travel through all scan measurements using the slider. but now let's load those cylinders as well: the last program you've written produces this cylinders.txt, and if you load this, you will now see these small green disks, which depict your detected cylinders in the robot's coordinate system. so now let's have a look at this: we detect all six cylinders in the beginning, then, as we move along, one of the cylinders moves into the shadow of another cylinder, and correctly you omit this cylinder while still detecting the cylinder in the foreground. now we can move through our entire scan. sometimes we don't see cylinders, for example we are unable to detect this cylinder here, probably due to our threshold. let's load the reference trajectory in addition, so now you can compare the position of the robot and its scan data. now let's load the absolute positions of the cylinders in the arena as well, so this is this file, and now you can see: as the robot was here, there were two landmarks to its right and four landmarks to its left, and it stood right between two of those landmarks, and as it moves through the trajectory, you can see how the situation that you see from above fits the measurement data. now there's one more cool thing: remember the positions that you produced in the beginning using the motor ticks. if you open that as well, then what the logfile viewer will do is it will use your Cartesian coordinates that you produced in the scanner's coordinate system, and it will use the filtered position that you produced, in order to project the detected cylinders into real-world coordinates. so as you move along, you see sometimes they fit quite well to the reference coordinates, and sometimes they're pretty much off, especially when the robot turns. and that will be our next task: to use the
differences between the measured coordinates and the known locations of the cylinders in the world to correct the trajectory of the robot. so congratulations: you have programmed code that takes the motor ticks of the robot and transforms them into a real-world trajectory, and you also managed to determine the locations of the cylinders in the scanner's coordinate system, and putting that together, you can see how well the locations of the cylinders fit the known positions in the world
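The projection the logfile viewer performs, from the scanner's coordinate system into world coordinates using the filtered pose, boils down to a 2-D rigid transform. A minimal sketch (the function name is illustrative; the course's lego_robot module provides an equivalent helper):

```python
from math import sin, cos

def scanner_to_world(pose, point):
    """Transform a point from scanner coordinates to world coordinates.

    pose:  (x, y, theta), the scanner's position and heading in the world.
    point: (x, y) in the scanner's coordinate system.
    Rotates the point by theta and translates it by the scanner position."""
    dx, dy = point
    x, y, theta = pose
    return (x + cos(theta) * dx - sin(theta) * dy,
            y + sin(theta) * dx + cos(theta) * dy)
```

This is why an error in the filtered pose, especially in the heading during turns, shifts all projected cylinders away from their known world positions.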
PP_04.txt
now, after this improvement of the visualization, let us make some real algorithmic improvement. as you noticed already, the algorithm is pretty slow. so what's the reason for this? as you remember, we start from a node and then we expand all the nodes around it, so at some stage of the algorithm we have this set of visited nodes, which are connected to the outside by edges, and all the nodes that can be reached directly from the set of visited nodes are contained in our front set. so in our implementation, our front was this list of elements, starting with zero cost for s and so on, and at a certain stage in the algorithm we have already visited s, so this is not in front anymore, and so on, and there are remaining elements, and at a certain moment of the algorithm all those elements are in front, and they are in this set, which we actually implemented as a python list. now the first step in the while loop of the algorithm is to find the element in front with minimum cost. so how can we find the minimum element in this list of nodes? well, essentially, since this list is completely unsorted, we will have to step through every element and check if it is the minimum. so say the number of elements in front is k, then looking for the minimum in the unsorted list comes at an algorithmic cost of O(k), we have to step through all current elements in front. now this is bad, because after we find the minimum element, we will process it, we will have a look at its neighbors, so the front will be extended locally, and overall the new front will be very much the same, except for the one deleted element and the few elements that have been added by this step. and so, after processing one single node, with only minor modifications to the entire front list, we get back to the beginning of the loop, and we again search through all elements for the minimum cost. now this looks like quadratic cost. so we could improve as follows: instead of searching through all elements, we could sort the list and
pick the minimum element. so if this list is sorted, then the minimum element is just the first one, and we can get it in O(1). however, sorting has a cost of O(k log k) if you use comparison of keys, and so sorting this list in each step is even more inefficient than just searching for the minimum. so another idea would be to keep the list sorted, which is trivial in the beginning, because with the start node there's only one node in the list, so it's trivially sorted, and later, say if there are some elements in the list and we have to insert a new one, we will just check where it fits so that the entire sequence stays sorted, and this would be similar to insertion sort. so again, picking from the sorted list would be O(1), but unfortunately, for each of those new nodes, the insertion into the list will also be O(k). and so this is actually an ideal situation for a data structure known as a heap, and you probably know this from the heapsort algorithm, which uses a heap to sort elements. so without going into detail, a heap keeps its elements in a binary tree where the root is the minimum element, so whenever I'm interested in the minimum element, I can get it in O(1). however, then the heap has lost its root, and so it has to be repaired, which is possible in logarithmic time complexity, so popping the minimum element is O(log k). and for the second operation, when I want to insert a new element, I typically put it at the end and swap it with its parent in the tree until it finds its correct place in the heap, and this is also of logarithmic time complexity. so overall, a heap allows us to do all those operations, finding the minimum and adding new nodes to the front, in logarithmic time complexity. now let's have a look how this influences our algorithm. so here is our algorithm as we programmed it earlier, and there are two places where we interact with our front in the main loop: first of all, here we get the node with minimum cost, now this is the place
where we so far searched the entire list for the minimum node, and down here we add a neighbor to front, and so far we just appended it to the end of the list. now the only modification we'll have to do is to replace this get-node-by-searching-the-entire-list by another operation, namely popping the node with lowest cost from our heap, which is front, and down here we will not add the node to the end of the list, but we will insert it properly into our heap, so instead of appending, we have to push m onto our heap, which is of course front. so these two operations will very much improve the performance of our algorithm. so where can we get this abstract data structure heap from? as it turns out, in python there's a built-in heapq module which can be easily used for that, and so the only modification we have to do, to switch our implementation from a list-based inefficient solution to a heap-based efficient solution, is to change two lines of code. so let's have a look at the code you'll have to implement, it's now pp1c, and it imports two functions from heapq, namely heappush and heappop, and so down here in the main algorithm there are only two changes to be made: replace your minimum search and remove by a single call to heappop, and replace your call to append by a call to heappush, and that is all there is to do. so now please implement this
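Putting the two changes together, a heap-based front looks roughly like this. The graph interface (a neighbors callback yielding (neighbor, edge cost) pairs) is illustrative, not the course's exact code:

```python
from heapq import heappush, heappop

def dijkstra(start, goal, neighbors):
    """Dijkstra search with the front kept in a heap of (cost, node) tuples.

    neighbors(node) yields (neighbor, edge_cost) pairs.
    Returns the cost of the cheapest path from start to goal, or None."""
    front = [(0, start)]                   # heap, ordered by cost
    visited = set()
    while front:
        cost, node = heappop(front)        # was: linear minimum search + remove
        if node in visited:
            continue                       # stale duplicate entry, skip it
        visited.add(node)
        if node == goal:
            return cost
        for m, edge_cost in neighbors(node):
            if m not in visited:
                heappush(front, (cost + edge_cost, m))  # was: front.append(...)
    return None
```

Both front operations are now O(log k) instead of O(k), which removes the quadratic behavior described above.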
SLAM_C_08.txt
and to answer that, let's have a look at the density of the normal distribution, which is also termed the Gaussian distribution, and which is defined as 1 divided by the square root of 2 pi times sigma, times e raised to the power of minus one half times ((x minus mu) divided by sigma) squared. and this now is a density, meaning the integral over all possible values of x is 1, and this is commonly denoted as N of x, which is this parameter, but it also has two other parameters, namely the mu and the sigma, which is typically given as sigma squared, and this is termed the mean and this is the variance, where sigma then is the standard deviation, which is the square root of the variance. and this density plays a major role in modeling probabilities, in this course as well as in general. now let's first have a look at the second step in our Bayes filter. as you remember, our new belief is some normalization constant times the probability of measuring z when we are at x, times our predicted belief, it has an overline, that we are in x. and some people now model all those beliefs using normal distributions, so in that case our belief-overline will be a variable which is normally distributed, where the normal distribution is given by mu-overline, which is the mean, and sigma-overline squared, which is the variance. and so this will look like that: the robot will be somewhere at mu-overline, and we will have a single peak with a standard deviation of sigma-overline, and this density will be the result of a previous move. now as for this other part, this will be normally distributed as z, given the mean at c times x and a measurement noise which we denote as sigma Q squared, and so this here will be sigma Q. and we will add here a multiplication factor c, which in our explanation used to be 1, but in general, think of it as a factor that is needed to transform from your state space, the x, to your measurement space. for now, you could just assume that your robot's position is in centimeters whereas your measurement is given in millimeters, and
so you need this additional factor. this factor will become interesting later on, when we deal with multi-dimensional distributions. and so now all we hope for is that our resulting belief will be normally distributed as well, with a mean of mu and a variance of sigma squared. and if that is really the case, so that our final belief is also normally distributed, all we have to do is to compute our new mu and our new sigma squared as some function of our predicted mu and sigma squared, and our measurement, and our measurement noise, the variance sigma Q squared, and other values, as for example our multiplier c. so this would mean: instead of having to deal with all those approximations of distributions using arrays, and using multiplication of those arrays, all we have to do is to compute those two scalar values, which are the first and second moments of our distribution, from the other first and second moments and some other parameters. now let's find out if this is really normally distributed, and this is easy to see. let's write down our equation again. for this back here, we have to write e raised to the power of minus one half times ((x minus mu-overline) divided by sigma-overline) squared, and for this we have to write e raised to the power of minus one half times ((z minus c x) divided by sigma Q) squared, and now those two distributions have to be normalized, and this is done by some normalization factor. now, instead of multiplying, we can also add the exponents, so let's write this as minus one half times the sum of ((z minus c x) divided by sigma Q) squared plus ((x minus mu-overline) divided by sigma-overline) squared. and now we see the following: this term here is quadratic in x, so all we have to do is rearrange those terms in order to end up with an exponent that looks like ((x minus mu) divided by sigma) squared, which is also quadratic in x. so since we know we can transform any quadratic function in x into this form, we know that our resulting x will be distributed as a normal distribution, and so our first conclusion is: the belief of x is normally distributed, and all we have to do now is to find those mu and sigma
values by rearrangement of those terms. now remember, if I have any quadratic function, I can write it in the form one half times a times (x minus b) squared plus c, which means that plotting it will result in a parabola which has its minimum at b, and the minimum value itself is c, and this is for a larger than zero. and if I compute the derivative of this function, I will get a times (x minus b), and if I set this to zero, I will get x equals b. so by setting the first derivative of any quadratic function to zero, I will get this b. now if I take the second derivative, what remains is a. so, meaning, if I have any complicated quadratic function which contains some x squared, x, and constants, I can just compute the first derivative, set it to zero, and I obtain the b above, and then I take the second derivative with respect to x, and this will give me the a up here, and we will use this trick to rearrange our exponent. so now our function, quadratic in x, is one half times 1-over-sigma-Q-squared times (z minus c x) squared, plus one half times 1-over-sigma-overline-squared times (x minus mu-overline) squared. so we'll just form the derivative, which is 1-over-sigma-Q-squared times minus c times (z minus c x), because the derivative of the inner part with respect to x is minus c, plus 1-over-sigma-overline-squared times (x minus mu-overline), and this should be 0, so we're looking for the solution x for which the first derivative becomes zero. so we group it according to x, and we obtain the solution: x equals (c z divided by sigma Q squared, plus mu-overline divided by sigma-overline squared), divided by (c squared divided by sigma Q squared, plus 1 divided by sigma-overline squared), and this here is the b we were looking for. now we also compute the second derivative, and this is easy to see here, because we grouped this according to x: the second derivative just is c squared divided by sigma Q squared, plus 1 divided by sigma-overline squared, and this is our a. and now we're basically done, because now our representation is lambda times e raised to the power of minus one half times a times (x minus b) squared, plus some constant, and this should be some other alpha times e raised to the power of minus one half times (x minus mu) squared divided by sigma squared, and so we see that a equals 1 divided by sigma squared, whereas b is
simply mu. and so with the formulas from the previous slide, we get the result: our new sigma squared of our posterior distribution is 1 divided by (c squared divided by sigma Q squared, plus 1 divided by sigma-overline squared), and then our new mu is this here: sigma squared times (c, the constant, times our measurement z, divided by sigma Q squared, plus mu-overline divided by sigma-overline squared). this is what we initially wanted: we now know our belief of x, that is a normal distribution of x with mu and sigma squared, and those here are the values. now, conventionally, those formulas are given in a different way, so let's reformulate them. so this is what we obtained, now let's write that differently. this is c divided by sigma Q squared, times z, and now we just subtract c mu-overline and we add c mu-overline, so this is what we do here, that's just zero plus this part, but now we group it a bit differently, so we obtain: sigma squared times c divided by sigma Q squared times (z minus c mu-overline), plus the second part here, which is sigma squared times (c squared divided by sigma Q squared times mu-overline, plus mu-overline divided by sigma-overline squared). now this second part, that is (c squared divided by sigma Q squared, plus 1 divided by sigma-overline squared) times mu-overline, and as we have seen, this part here is actually 1 divided by sigma squared, so this cancels out with the sigma squared, and so we obtain: sigma squared divided by sigma Q squared, times c, times (z minus c mu-overline), plus mu-overline. and now we define this here to be K, the Kalman gain, and so, writing it with this K, we obtain our new mu as the old mu-overline plus the Kalman gain times (z minus c mu-overline), and this is the formula which is conventionally given. now let's have a look at this formula once again. so this is our predicted state, and as you remember, c is the factor which converts from our state space into our measurement space, so c times mu-overline is the predicted measurement, and in the most simple case, c is just 1, and so the state is identical with the measurement that is predicted. now this is the actual
measurement, and so the difference here, that's the difference between what we measured and what we expected to measure, and that is also called the innovation. and so the innovation is multiplied by the Kalman gain and then added to the predicted state, so this is the same predicted state as here and here. so say, for the sake of this example, c is just 1, and now let's explore the two cases. say K is 0: then it would follow that our new mu is our predicted mu-overline plus 0 times something, which is just our predicted mu. that means: if my gain is 0, my posterior will completely ignore any measurement, and it will just take my predicted mu as my corrected mean. whereas if K is 1, then it follows that the mu is mu-overline plus (z minus mu-overline), which is z. so if the gain is 1, then what comes out is my measurement, so in this case the computation will completely ignore my predicted mu and will just take the measurement value as my new state. now, conventionally, the Kalman gain is also given in a different form. so we defined it as sigma squared divided by sigma Q squared, times c. now if I substitute the sigma squared here, I obtain c divided by sigma Q squared, divided by (c squared divided by sigma Q squared, plus 1 divided by sigma-overline squared), and this is the same as this. and now, if we multiply the numerator and the denominator by sigma Q squared times sigma-overline squared, we obtain: K equals c times sigma-overline squared, divided by the sum of c squared times sigma-overline squared plus sigma Q squared. this is the conventional form as it is given for a Kalman filter, and you can see: the larger our measurement noise, so the larger the variance down here, the smaller will be our Kalman gain, and if you remember the explanation on the last slide, if our Kalman gain is smaller, we will take our measurement less into account. now, since we have defined our K, we can also try to express our new sigma squared in terms of K. so what we obtained so far was: sigma squared is 1 divided by the sum of c squared over sigma Q squared plus 1 over
Sigma overline squared and if I multiply the numerator and the denominator both by Sigma Q square times Sigma 1 square they obtain this and this is the same as 1 minus C square Sigma overline square divided by C square Sigma over loin square plus Sigma Q square times Sigma reline Square and this seems to be much more complicated but now remember from the previous slide this year was K and so we obtained the really simple result our Sigma square is 1 minus common gain times RC times Sigma Overland Square and this is the third formula we need so this formula makes perfect sense now we had the following situation we where somewhere he moved forward our Sigma will become launcher and then we hit the measurement and after the measurement our Sigma became smaller again and you can see that perfectly here so this blue curve corresponds to this predicted Sigma value and this is multiplied by something if our common gain is 0 then it follows that our corrected Sigma is the same as our predicted Sigma because that here will be 1 whereas in other cases it follows that our corrected Sigma because I subtract this from 1 will be smaller than our predicted Sigma and it is this smaller here which we see in this curve so overall for our correction step our belief was given by that formula and we said this will be normal distributed according to our prediction and this will be normal distributed according to our measurement and now we have proven that this will be normal distributed also with mu and Sigma square remember we are looking for this big function containing new overline Sigma over line and so on we have this function now we have to compute our common gain and from that we compute our new mu which is our predicted mu plus the common gain times C the measurement minus the predicted measurement and our Sigma square equals warm - Kalman gain times C times Sigma over line square and this is the formulas for the correction step of our common filter and so we can compute our new 
μ and σ² of our new distribution. So instead of having to deal with all those distributions in some discrete form, all we have to do is compute μ and σ² from the other given values, and so all the filtering reduces to just maintaining those two variables, the first and second order moments of our distribution.

Now the prediction is still missing. As you remember, the predicted belief, bel-bar of x_t, equals a sum, and now, since we moved to the continuous case, the sum becomes an integral: the integral over the probability that we end up in x_t if we have been in x_{t−1} previously and received the control u_t, times the belief that we were in x_{t−1} previously, integrated over x_{t−1}. This is the continuous version of the discrete sum which you certainly remember. And again this belief will be modeled as a normally distributed variable, so it is a normal distribution of x_{t−1} with mean μ_{t−1} and variance σ²_{t−1}; this is clear, it represents the position at a certain point in time. Regarding the probability of ending up in x_t when we were in x_{t−1} and were given the control u_t, we define a motion model. Say we have been in x_{t−1}, here, before, and now we moved, and this movement shall be given by a linear part a·x_{t−1} plus the addition of the control u_t; together that is actually an affine transform. After this movement our probability will look like that, and so for any given value x_t we can look up the probability here: our distribution will be the normal distribution of x_t with center a·x_{t−1} + u_t and variance σ_R², the variance of the movement, also termed the system noise. That means we have to compute the integral over e^(−1/2 · (x_t − (a·x_{t−1} + u_t))²/σ_R²), this is the first part, times e^(−1/2 · (x_{t−1} − μ_{t−1})²/σ²_{t−1}), the second part, integrated over x_{t−1}. Now again this is a product of two exponential functions, each of which has an exponent which is quadratic in x_{t−1} and also quadratic in x_t, and even if we sort out all the x_t² and x_{t−1}² terms, we still have to integrate. Now, what do you think: will the result after integration be normally distributed or not?
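As a small sketch (function names are my own, not from the lecture code): `correct_1d` implements exactly the three correction formulas derived above, and `predict_1d_numeric` evaluates the prediction integral numerically on a grid, so you can inspect the resulting density and check your answer to the question above for yourself.

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

def correct_1d(mu_bar, var_bar, z, c=1.0, var_q=1.0):
    """1D Kalman filter correction: fuse prediction N(mu_bar, var_bar)
    with measurement z, where z = c*x + noise of variance var_q."""
    K = c * var_bar / (c * c * var_bar + var_q)   # conventional Kalman gain
    mu = mu_bar + K * (z - c * mu_bar)            # predicted mean plus gain times innovation
    var = (1.0 - K * c) * var_bar                 # never larger than the predicted variance
    return mu, var

def predict_1d_numeric(mu, var, a, u, var_r, lo=-50.0, hi=50.0, n=4000):
    """Return the predicted belief bel_bar(x_t) as a function, computed by
    evaluating the integral over x_{t-1} with a simple Riemann sum."""
    dx = (hi - lo) / n
    xs = [lo + i * dx for i in range(n + 1)]
    def bel_bar(xt):
        return sum(normal_pdf(xt, a * x + u, var_r) * normal_pdf(x, mu, var)
                   for x in xs) * dx
    return bel_bar
```

For example, `correct_1d(0.0, 1.0, 2.0)` fuses a prediction N(0, 1) with a measurement z = 2 of variance 1 and returns mean 1.0 and variance 0.5, halfway between the two and more certain than either.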
[SLAM_Lectures / SLAM_F_07.txt]
now let's have a closer look at the results. In the beginning our robot starts here at an arbitrary position and orientation, and its uncertainty in position and heading is zero, because we just defined that the map it will construct has its origin at the initial robot position. So let me explain the colors: this is the robot's position in blue, the uncertainty ellipse of the position, and the plus/minus one sigma uncertainty in heading, all for the robot. At this position the robot sees those four landmarks relative to its own coordinate system; they are depicted in green in the robot's coordinate system, and they are also depicted in green in the world coordinate system. These green dots are simply the measurements transformed into the world coordinate system according to the robot's current position and orientation. What you see in magenta are the estimated positions of all our landmarks; those positions are now part of our system state, and in addition to the estimated x y values, the covariances are depicted by corresponding error ellipses.

So now let's go back to the beginning. In the beginning the robot's uncertainty about its position and orientation is zero, and, standing still at the start position, it immediately observes all six cylinders, so all six landmarks, in a laser scan. All six landmarks are added to the state vector and the covariance matrix using the function you implemented: their coordinates are taken directly from the measurements, and their covariance matrices are set to infinity. However, what we see here is not the result we obtained right after we added the landmarks, but after we added them and then observed them for the first time. This is why the error ellipses do not start with a radius of infinity, which in our case was 100 meters along both the major and the minor axis, but rather start with relatively small values of, say, less than a meter here: this is because they have already been observed by the robot once.

So now if we move on: our robot stands still, and as it stands still its uncertainty in position and heading stays at zero, so the measurements of the landmarks accumulate and the error ellipses of our landmarks get smaller. Now you see something interesting as well. When we measure a cylinder using our lidar, our error model includes a range error and a bearing error, and you see, for this landmark which is close to the robot, the range error has a larger effect on the x y coordinate of the landmark than the bearing error. However, if you look at a landmark that is far away, the range error is approximately the same, because we modeled our error in distance measurement independent of the absolute distance, which is just an approximation, but you see that the error in bearing now translates into a much larger error ellipse along that axis.

Now, if we start to move, the uncertainty of the robot's position starts to grow, and now watch the robot: it gets larger and larger, and as usual it is especially large back here in our left turn, because we do not observe many landmarks at this point. If we go on, it gets smaller again, and so in the end we seem to reach a stable state, just as in the case of the extended Kalman filter with known landmarks, where the error introduced by the movement and the decrease in error resulting from the observation of the landmarks are in balance. Now look at the landmark uncertainties. The error ellipse of this landmark is small, and this one is similarly small, but the error ellipses of those landmarks are larger, and the reason is that our coordinate system is actually anchored here, in this point: this point has zero variance in position and orientation, and the further away a landmark is, the larger its error will be, and even though we drive around for a while, this stays the same.

Now let's look at this landmark. In the beginning we observe it for a while, so its error ellipse gets smaller. As we move on, we still observe that landmark, but its ellipse doesn't shrink as fast as it did previously, because we observe it from our robot's position, which has now also accumulated some uncertainty. But watch the landmark and watch the error ellipse: as long as we have this green point close to our landmark, that is, as long as we have an observation that is tied to this landmark, the error ellipse gets smaller. Now we don't have observations anymore, and the size of the error ellipse stays the same; the landmark might move a little bit, because when the landmarks in its vicinity are moved, the position of this landmark will move as well, but the error ellipse stays the same. Now we start observing it again, and the error ellipse gets smaller; now it is out of our field of view and it stays the same; we observe it again and it gets smaller. So what you see here is that whenever we have an observation, whenever there is a green point, we add information, and as our information increases, our uncertainty decreases.

Now let's run a few experiments. First, I claimed that the initial position will have no effect at all, so let's check this. Let's just set it to 0 in x, which is the left border of our arena, and 1000 in y, which is vertically in the middle of the arena, set the heading to zero, then run this and load the result. As you see, we get the same result; the only difference is that the visualization is not as nice as it used to be. Now let's change that back. Remember, earlier we set our initial covariance according to the error that we assumed we made when we measured our initial state; now let's try to do this here. These are the errors we used: 10 centimeters in x and y and 10 degrees in the heading. Let's run this and reload it, and now we start at our initial position with an uncertainty of 100 millimeters in x and y and 10 degrees. Now let's have a look at what happens: after we drive for a while, we will see that the
size of the error ellipses simply doesn't get smaller anymore, so in the end both the sizes of the error ellipses of the landmarks and the size of the error ellipse of my robot are larger than in the previous case. Why is this the case? Simply because our overall error is tied to the error of this point: if we have a large variance in the beginning, we will never be able to obtain small variances for the landmarks, because no matter how many relative measurements we obtain from our robot's lidar, the absolute position will always be tied to the initial position. So in effect the error ellipses of the landmarks won't get small, and since the robot uses the landmarks in the correction step of the Kalman filter, the error ellipse of the robot doesn't get small either. If I make this even larger, you will easily see the effect: we start now with a larger variance, and as we move, the larger variance of the start position translates into a larger variance of the landmarks and of the robot's position, no matter how many measurements we integrate during our trip. So let's undo this.

Finally, let's have a look at the maximum cylinder distance. I had set this to 500 millimeters, that is 50 centimeters; now let's set it to 40 centimeters and reload the result. When we start, it is exactly the same result as previously, so the setting of the maximum distance for the association of landmarks to their closest measurements does not have any effect at all. However, after a while, here in this step, the robot has the wrong heading, and so the landmark that it observes here is actually this landmark, but it is still close enough, according to our threshold, to be assigned to this landmark. But then, two steps later, the observed cylinder is too far away from any of our landmarks, and our algorithm sets up a new landmark with its variance set to infinity; then the observation equation is applied and we obtain this error ellipse. As we move on, the robot observes more landmarks again and so it corrects its position, and what we see here is that after a while our newly inserted landmark is moved very close to the old landmark, and we finally obtain this state. So what we see is that our landmark assignment is pretty simple, and it is also brittle, because, let's say, we make this threshold even smaller: then we obtain the following. We start perfectly as in the previous case, but now, as we move around the corner, here we create a new landmark, and even one more landmark here, and later on we add a few more, so that in the end we obtain ten landmarks instead of six. This shows us how brittle this landmark assignment process actually is, and obviously we could try to improve the algorithm here: we could think about unifying landmarks which turn out to be close after observing them for a while. But there are other techniques as well, which I'll show in a moment. First, let me ask you a question. In our case we had an arena containing six landmarks, and I want to ask you, in general, regarding the number of landmarks in our world: is it better to have many of them, or only a few? Now, what is better?
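The assignment step discussed above, taking each observed cylinder, finding the closest landmark in the current state, and accepting the pairing only if it lies within a maximum distance, can be sketched as follows (function and variable names are illustrative, not the lecture's reference code):

```python
import math

def associate_cylinders(observations, landmarks, max_distance=500.0):
    """Greedy nearest-neighbour data association, millimetres.
    observations, landmarks: lists of (x, y) in world coordinates.
    Returns (pairs, unmatched): pairs maps observation index -> landmark
    index; unmatched lists observations that should become new landmarks."""
    pairs, unmatched = {}, []
    for i, (ox, oy) in enumerate(observations):
        best_j, best_d = None, max_distance
        for j, (lx, ly) in enumerate(landmarks):
            d = math.hypot(ox - lx, oy - ly)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is None:
            unmatched.append(i)   # too far from every landmark: set up a new one
        else:
            pairs[i] = best_j
    return pairs, unmatched
```

Shrinking `max_distance` makes spurious new landmarks more likely, which is exactly the brittleness shown in the experiment above, while growing it risks wrong pairings whenever the heading estimate is off.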
[SLAM_Lectures / SLAM_A_00.txt]
so welcome to my class on simultaneous localization and mapping. My name is Claus Brenner and I am a professor at Leibniz University Hannover in Germany. What you see here is our small robot, which we fitted with a laser scanner, or lidar, and which will run on the ground and detect landmarks such as this one, and using data from this robot you will learn about techniques such as extended Kalman filter and particle filter SLAM. The class is completely self-contained, and you won't need anything else than a Python installation and the code snippets which are provided in this class. So now let's have a look at what you learn in each unit.

In Unit A we will introduce our robot, the lidar scanning sensor, and our arena containing cylindrical landmarks. We will steer the robot through this arena and record the motor control and lidar scans. We will then set up a motion model and determine the trajectory of the robot from the motor control data. Then we will turn to the robot's lidar data: we will analyze each scan in order to find the cylinders in the scene, and we will develop an algorithm for that. Finally, we will reproject the detected cylinders into the scene and compare them with the cylinder positions given in our map.

In Unit B we will assign the landmarks in the map to the cylinders we detected in the scans, based on their proximity. Then we will use those point pairs for a least squares estimation of a similarity transform in the plane, for which we will give a non-iterative solution, and based on that we will correct the trajectory of the robot. The result shows that while our approach works, it yields a pretty jagged trajectory, since the number of observed features, or cylinders, is sometimes too small. We will then have a look at another, featureless technique, where we assign points on the fence of our arena to the nearest possible partners and determine the transformation using an algorithm called iterative closest point. This leads to a better looking trajectory, although at the expense of a more costly algorithm.

In Unit C we start to look at filtering, by modeling the uncertainty of the robot using probability distributions. We first study the effect of robot movement on the distribution, and we will determine that the robot's uncertainty grows as it moves. Then we will have a look at measurements, where we will conclude that they have the opposite effect: they reduce the uncertainty in the robot state. Combining a movement, or prediction, step with a measurement, or correction, step, we introduce a filter called the Bayes filter. We then have a look at what happens if we use a specific probability distribution, namely the Gaussian or normal distribution, and derive the so-called Kalman filter for the one-dimensional case.

In Unit D we first look at the multivariate normal distribution, and then we find out how the Kalman filter from Unit C generalizes to the multi-dimensional case. We then recapitulate our robot motion equation, and since this is nonlinear, we introduce the extended Kalman filter. In order to use the Kalman filter, we need to compute the derivatives of the motion equation with respect to the state, and we also need the derivatives with respect to the control. After we compute all of this, we obtain the robot's trajectory based only on the prediction step, and we see that the error in position and heading grows unboundedly. Now, the second step in the Kalman filter is the correction, or measurement, step, and here we also have to compute the derivative of the measurement equation with respect to the state. After we implement this second step, we finally obtain the Kalman filter trajectory, prediction and correction, where the error is not unbounded anymore.

In Unit E we will have a first look at the particle filter, which represents the distribution by a set of hypothetical states, or particles. The implementation of the prediction step is rather easy, and as expected we obtain a set of particles which diverge. We will then have a look at the correction step, which involves the computation of an importance factor, or weight, and an importance sampling of particles based on this weight. Having implemented all this, we see that the particle filter is able to recover the trajectory of the robot even if it does not know the initial state.

In Unit F we will start with mapping. Now no map of landmarks will be given in advance; instead, the robot will simultaneously localize itself and produce a map. We will first treat this by extended Kalman filter SLAM, where the locations of the landmarks become part of the filter state. Based on the extended Kalman filter of Unit D, we modify the equations and derivatives for the prediction, or movement, step, and since the robot now produces the map, there must be a mechanism to modify the robot state when landmarks are added. We will also modify the measurement equation, which describes the relationship between the pose of the robot and the position of a landmark, which are now both part of the state. The final result then shows how the positions and errors of both the robot and the landmarks evolve while the robot explores the arena.

Unit G is about particle filter SLAM. We will see how a factorization trick allows us to split the posterior into a term for the robot's pose, which we will represent using particles, and another term for the positions of all landmarks, which we will represent by individual extended Kalman filters, where each particle holds a set of filters, one for each landmark. In the measurement update we will either initialize a new or update an existing landmark, and we will also add code to remove spurious landmarks which are not consistently observed. This will finally lead to an implementation of the particle filter SLAM algorithm. So again, welcome to this class, and I hope you'll join me for Unit A right away.
[SLAM_Lectures / SLAM_D_02.txt]
now let's first have a look at the normal distribution again. We said x is normally distributed with the parameters μ and σ² if its density function is like that: a normalizer times e^(−1/2 · (x − μ)²/σ²). This was in 1D, meaning that the probability density function looks like that: we have μ here, and then we have this Gaussian bell-shaped curve with inflection points here and here, which are at μ − σ and μ + σ. If I draw a random variable, it lies with 68% probability inside this area: for plus/minus 1 sigma it is 68%, for plus/minus 2 sigma it is 95.5%, and for plus/minus 3 sigma it is 99.7%. And if you look at the exponent, (x − μ)²/σ² equals 1 if and only if x equals μ ± σ, which is exactly this range.

Now, to generalize this from 1D to two dimensions, we do the following: the exponent becomes (x − μ_x)²/σ_x² + (y − μ_y)²/σ_y². We have seen that if this expression is 1, it means in our 1D world that we are either here or here, whereas if this expression is 1 in our 2D world, where we have μ_x here and μ_y here, it means that we are somewhere between here and here, but also between here and here: this condition is fulfilled by the points lying on this ellipse. Now we can express this formula in a different way. We can write it as a quadratic form, where we have a matrix, in this case a 2 × 2 matrix with 1/σ_x² and 1/σ_y² on the diagonal and 0 everywhere else, times (x − μ_x, y − μ_y) as a column vector multiplied from the right, and the same as a row vector multiplied from the left, because the matrix times the column vector gives a vector, and the row vector times that gives the first element times the first plus the second element times the second, which is exactly the formula above.

So now let's have a look at this matrix. This matrix in our quadratic form is our covariance matrix raised to the power of minus 1, the inverse of the covariance matrix, because if the covariance matrix is diagonal, I can obtain its inverse by element-wise inversion of the diagonal. And so we see: if our covariance matrix Σ looks like that, then our probability density function will be some normalizer times e^(−1/2 · (x − μ)ᵀ Σ⁻¹ (x − μ)), where we now use vector form, x = (x, y) and μ = (μ_x, μ_y), and since we use column vectors, we need a row vector here, so we have to transpose the first factor. For a probability density function it is a factor times this expression. In 3D it will look like that: here is x, here is y, and this is our PDF; this is the first component of μ, this the second, and our peak is over here. This is our bell shape in 3D, and here you have the ellipse of inflection points, which we will usually draw in 2D; the PDF extends in all directions to infinity, and this ellipse is called the one-sigma error ellipse.

So now let's have a look at that once again. In our case our covariance matrix was this, and we obtained an error ellipse like that: apart from the main diagonal, we didn't have any nonzero entries, and this is the uncorrelated case, where the random variables x and y are independent, because x doesn't tell me anything about y. In the general case our covariance matrix would look like this: we still have the two variances on the main diagonal, but we also have the covariances off the diagonal, which are nonzero, which means that x and y are now correlated. So our ellipse will, for example, look like that: it will now be turned. We still have this length of the major axis, which is now here, and the length of the second axis, which is now here, but these do not coincide anymore with the min and max points along the coordinate axes. If you want to draw this ellipse, what is normally done is an eigenvalue and eigenvector decomposition of Σ, which is a real and symmetric matrix: we obtain a matrix V times a diagonal matrix containing the eigenvalues times Vᵀ, where V consists, in the 2D case, of two column vectors, which are the eigenvectors of Σ corresponding to the two eigenvalues. And if we draw this, then v₁, the eigenvector belonging to the larger eigenvalue, is this, and v₂ is the eigenvector belonging to the smaller eigenvalue.

So let me test your intuition here. You might have a bivariate distribution of x and y which is normally distributed, with a mean μ, which is a vector, and a covariance matrix Σ. Now I tell you that, in addition, I know that x is normally distributed with some scalar mean μ and some σ², and y is normally distributed with the same μ and the same σ². And I want to know: for this bivariate distribution, how does Σ look like, that is, σ_x², σ_y² and σ_xy, and what does the corresponding error ellipse look like? Does it look like that, a perfect circle? Or does it look like that, with axes parallel to the coordinate axes, but elongated in one direction? Or does it perhaps look like that, tilted by, say, 45 degrees, with one half axis a little bit longer and one a bit shorter? A, B or C: what do you think is correct?
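To draw the one-sigma error ellipse of a general 2 × 2 covariance matrix, you can use the eigendecomposition just described; for a real symmetric 2 × 2 matrix it has a simple closed form. A small sketch (the function name and the example covariance are my own choices, deliberately not the one from the quiz):

```python
import math

def error_ellipse(sxx, sxy, syy):
    """One-sigma error ellipse of the covariance [[sxx, sxy], [sxy, syy]].
    Returns (angle, half_axis_major, half_axis_minor): the half-axis lengths
    are the square roots of the eigenvalues, and the angle is the direction
    of the eigenvector belonging to the larger eigenvalue."""
    mean = 0.5 * (sxx + syy)
    dist = math.sqrt((0.5 * (sxx - syy)) ** 2 + sxy ** 2)
    lam1, lam2 = mean + dist, mean - dist           # eigenvalues, lam1 >= lam2
    angle = 0.5 * math.atan2(2.0 * sxy, sxx - syy)  # orientation of the major axis
    return angle, math.sqrt(lam1), math.sqrt(max(lam2, 0.0))
```

For example, `error_ellipse(9.0, 0.0, 1.0)` gives half axes 3 and 1 at angle 0, an axis-parallel ellipse; a nonzero σ_xy tilts the ellipse away from the coordinate axes.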
[SLAM_Lectures / SLAM_D_10.txt]
what is the dimension of Q, the measurement noise?
[SLAM_Lectures / PP_05.txt]
so after you implemented this, interaction with the user interface actually becomes pretty cool: you set a start node, you set an end node, and very quickly it comes up with the set of all visited nodes, and the user interface reacts pretty quickly. It is also more fun now to draw some obstacles and have a look at what happens, or to make a really complex maze and then watch how the algorithm finds a solution. Now, since there is a hole, it goes through there, so let's fix that, and there are holes here, too; I may now block this path, and I may also block this path, but then it is unable to reach the solution. But remember: by pressing the middle or right button you can still punch holes into the entire structure and have a look at how this affects the overall search behavior. So that is pretty cool, but there is one thing which you will probably have missed all the time: where is the actual path from the start to the goal? We are interested in the path, so let us add this next.

Now, how do we get the path from the start to the goal? Remember, at a certain point in the execution of the algorithm we obtained this set of visited nodes, which is connected by edges to those nodes which are part of our front. We search for the node with minimum cost, and say we get this one, so we enter this node into the set of visited nodes, and we look for its neighbors and insert them into our front. Now let's have a look at the front. There are some elements in the front, and we have this node; say these are a, b and c. We have the node a with a certain cost, we remove it from the front, and we insert the other nodes b and c. At the very moment we take out this node and insert those nodes, we know that we actually came from a when we inserted b, so we can remember this: here, we came from a, and also here, we came from a. Of course, when we inserted a, it also had some predecessor, and so on, so that when I finally reach the goal, it is easy to reconstruct the path from the goal back to the start, and so the only thing I will have to do, after I reconstruct this path, is to reverse it, to get the final path from start to goal. Now there is still one thing we have to solve. We keep this previous element in the records that we have in our front; however, in the end the front may be empty: if we processed all nodes, then finally no elements remain in the front, and so all this information regarding the previous node would be lost. So at the very moment we add a node to our visited set, when we pick it from the front, we have to record this relationship, that d is the predecessor of a, in some extra data structure. So we will not only have to modify visited, as we did previously, but we will also need a second structure, which I will call came_from, and in the case where we add a, I have to record that we came from d, and this came_from data structure later allows us to go from the goal to the start by just following came_from of came_from of came_from, repeatedly.

So now let's see how we have to modify our current code, and as usual, only quite minimal changes are necessary. The first change is here: each tuple in our front now needs to consist of three components, namely the cost, the node, and the previous node, the node we came from. However, the start node itself doesn't have a previous node, so you should add None as the third component of this tuple, so that later, when we unwind the path backwards, we know that this is the end of the path. The second change is here, and I already wrote that down for you: we use this data structure came_from, and in this case we don't use an array, but rather a Python dictionary, which makes the implementation very easy; just leave this line as it is. Here you will also have to add this previous, because the element which you pop from the heap now contains three components: the cost, the position, and the previous node. Leave this line as is, and here you will have to add a line where you record that you came from previous when you entered
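Put together, the scheme described above looks roughly like this, as a self-contained sketch on a plain adjacency function with unit edge costs (the lecture code works on a grid with a GUI, so the names and details here are illustrative):

```python
from heapq import heappush, heappop

def shortest_path(start, goal, neighbors_of, cost=lambda a, b: 1):
    """Uniform-cost search that records, for every settled node, the node
    it was reached from, so the path can be unwound from the goal."""
    front = [(0, start, None)]        # tuples of (cost, node, previous node)
    visited, came_from = set(), {}
    while front:
        total, pos, previous = heappop(front)
        if pos in visited:
            continue
        visited.add(pos)
        came_from[pos] = previous     # record the predecessor when pos is settled
        if pos == goal:
            break
        for new_pos in neighbors_of(pos):
            if new_pos not in visited:
                heappush(front, (total + cost(pos, new_pos), new_pos, pos))
    # unwind: follow came_from until we hit the start's None marker
    path = []
    if pos == goal:
        while pos is not None:
            path.append(pos)
            pos = came_from[pos]
        path.reverse()
    return path
```

Because `heapq` compares tuples element-wise, the cost must be the first component of each tuple; if the goal is unreachable, the front simply runs empty and the empty path is returned.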
[SLAM_Lectures / SLAM_A_07.txt]
So this was really easy, and here's my solution: all you have to do is grab the left value and the right value, and if those values are both larger than the minimum distance, then you compute the derivative, which is actually the difference quotient, and you append this. If either one of those two is smaller than the minimum distance, we don't care about the real derivative at that position, so we'll just return zero in that case, because that means later we will not detect the start or the end of a cylinder. Now let's run this, and here's the result for scan number 7. You see we have those six spikes going down here, each representing one cylinder in the real world, and our derivative produces a negative peak for every falling edge and a positive peak for every rising edge. If we have a look at the range of those values, we can see that the peaks are stronger than plus or minus 100, so we will just use a threshold of plus/minus 100 to detect the falling or rising edge caused by a cylinder. Let's run this for another scan: for scan number 235 we have the situation that there is one of those peaks of erroneous measurement values, so in the measurement data there's the signal for the cylinder, and shortly before that we have one of those peaks going down to 15 or so. If you look at our derivative, we see that our detector worked perfectly: it ignored this strong peak, but it responded to the falling edge caused by the cylinder. So the next thing we'll have to do is write a program which detects those signals and determines the start and the end of every cylinder in the scan. But there's one caveat with that: remember there were situations in our scan where there was one cylinder in the foreground and a smaller cylinder which was partially occluded in the background. In that case our scanner hits the wall, then it starts to hit this background object, then the foreground object, and then it goes back to the wall again. So in terms of measurement data, we'll have this background measurement, then a negative slope for the first cylinder, then a second negative slope for the second cylinder in the foreground, and then it goes back up again. In this case our method will indicate a left start, then again a left start, and then here the right end of the cylinder, meaning we have to cope with situations which are unusual: instead of obtaining left, right, left, right, and so on, we might also get left, left, right, or we might get left, right, right.

Here's my strategy to solve that problem. Say our original signal is like this: you have a cylinder in the background, then another cylinder, and then it goes back up again. Our derivative will then have a negative peak here, another negative peak here, and a positive peak here, and our threshold will indicate a left, a left, and a right edge. Now these points are all discrete, so I might have those five points here, then two points here, and four points here. What I'm interested in is this point: the average depth and the average ray, which will indicate where the cylinder is. My strategy is as follows: I have one variable called on_cylinder, where I memorize if I'm currently on a cylinder, and as long as I'm on a cylinder I add up the rays and the depths, so I have a ray counter, a sum of the rays, and a sum of the depths. When I start, I'm not on a cylinder, so this will be false, and I just walk through those values. Say this is value 0, and at value 5 my derivative indicates a depth jump, so I switch on_cylinder to true and I initialize the number of rays to zero, the sum of rays to zero, and the sum of the depths to zero. Now for this value 5, say the depth would be 50, so I add this up: I now have one ray, which is ray number 5, and the sum of the depths is 50. I go over to number 6, and as long as on_cylinder is true I add up those values, so I have two rays, the sum of rays is 5 + 6, and the sum of the depths is 50 + 50. Now when I have another depth jump, I just discard all the measurements I made so far: on_cylinder stays true, but I reset everything and start over again. Now I have the points 7, 8, 9 and 10, and it goes on like that: it sums up the rays — one ray, say the depth here would be 40, so the sum of the rays is 7 and the sum of the depths 40 — and so on. When I'm here, I will have counted four rays, the sum of the rays will be 7 + 8 + 9 + 10, and I will have four depth values of exactly 40 in this example. Now if my derivative indicates a positive edge, we just compute the average ray, which is (7 + 8 + 9 + 10) / 4 = 8.5, and the average depth, which is 40, and I store this as a cylinder, indicated by average ray and average depth. There's one caveat with that: remember, when you sum up the rays, you might also have those erroneous measurements, and you have to make sure that you don't incorporate these, meaning in that case don't count up the rays, don't add up the sum of rays, and don't add this value to the depths. In the end, when the signal jumps back up again, I set on_cylinder to false, and as long as it is false I do not add up anything, and I start over again when I detect the next falling edge.

So I want you to program this. You'll find the file find_cylinders_question, and in this file I prepared everything for you already: you have the compute_derivative function, which is just the function that we computed previously, you have find_cylinders, which you'll have to implement, and down here there's the main function, which is almost identical to the previous main function. In addition to the previous file, we now also have a depth_jump variable, which indicates the threshold we are using for finding a depth jump edge in our scan data, and as last time we have the minimum valid distance. I load the log file, I pick the scan as in the previous code, I compute the derivative, I compute the cylinders — this is what you have to implement — and then I plot the scan and also the cylinders. Now let's have a look at the function you'll have to implement. Here's find_cylinders: it gets the scan, the derivative, the threshold for detecting the jump, and the threshold for invalid range measurement values, and it should produce a cylinder list. I start by assigning on_cylinder the value false, and the sum of rays, the sum of depths and the ray counter are initialized to zero, and then there's a loop over every point in the scan derivative, and this is where you'll have to implement the strategy we just looked at, so you'll have to replace this. Just for fun, I'm generating some cylinders in this list, which then show up in the graph that this program produces. When you run it, you'll see that result: this is the original scan, and the red dots are the detected cylinders — these are not the correct results. After you've implemented the correct version, you will get a result like this: here's the original scan data, and here are the points which indicate the cylinders and their exact locations.
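The strategy just described can be sketched in code. This is a reconstruction from the description above, not the original solution file: the signatures follow the stubs named in the lecture (compute_derivative and find_cylinders), and the ray index stands in for the scan angle.

```python
# Reconstruction of the two functions discussed in this section:
# compute_derivative is the padded difference quotient that returns zero
# near invalid rays, and find_cylinders implements the on_cylinder strategy.
def compute_derivative(scan, min_dist):
    jumps = [0.0]  # pad so the result has the same length as the scan
    for i in range(1, len(scan) - 1):
        left, right = scan[i - 1], scan[i + 1]
        if left > min_dist and right > min_dist:
            jumps.append((right - left) / 2.0)  # difference quotient
        else:
            jumps.append(0.0)  # near an invalid ray: don't detect edges here
    jumps.append(0.0)
    return jumps

def find_cylinders(scan, scan_derivative, jump, min_dist):
    cylinder_list = []
    on_cylinder = False
    sum_ray, sum_depth, rays = 0.0, 0.0, 0
    for i in range(len(scan_derivative)):
        if scan_derivative[i] < -jump:
            # falling edge: start collecting (a second left edge restarts)
            on_cylinder = True
            sum_ray, sum_depth, rays = 0.0, 0.0, 0
        elif scan_derivative[i] > jump:
            # rising edge: store (average ray, average depth) if we have rays
            if on_cylinder and rays:
                cylinder_list.append((sum_ray / rays, sum_depth / rays))
            on_cylinder = False
        elif on_cylinder and scan[i] > min_dist:
            # accumulate, skipping erroneous too-small measurements
            sum_ray += i
            sum_depth += scan[i]
            rays += 1
    return cylinder_list
```

On a toy scan with one dip, this returns a single (average ray, average depth) pair, exactly as the averaging example above computes by hand.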
SLAM_Lectures
SLAM_C_02.txt
So this last programming assignment is a true member of the set of shortest programming assignments in the world, because it can be solved in just one line of code. All you have to do is this: since this is the set of values of our distribution, and we've chosen to store any finite subset by giving this subset and a start index, which is called the offset, all you have to do in order to move that distribution to the left or right is modify that offset — you don't have to touch those values. So you construct a new distribution, you leave the values as they are, you just add the delta to the existing offset, and then you give those two arguments to the constructor of the Distribution class. Now, we've come only halfway with our task of modeling the error: so far our position is not a single point anymore, it's a distribution, and we move that distribution — it ends up here, and here, and so on — meaning the only thing we model is the error in the initial position. In reality it is rather like this: we start in a position, and it may even be the case that in the start position we know exactly where we are — for example the robot might sit in its docking or charging station and go from there — so this will be a single point with probability 1.0. Now we move, and during this movement we have some uncertainty, and this is modeled by a distribution, so the error of the movement may be given by a distribution such as this one, and then after the movement this error in movement translates into an error in position. This makes perfect sense, because there's no inherent error of position that the robot carries around with itself; rather, the error in position is due to the fact that the robot moves itself using some machinery that is not exact. Now this first movement is quite easy to understand, producing this distribution from that distribution, but what happens if we go on — how does this figure look? That's rather easy; let's look at this: say in the picture we knew our position is at zero meters, and we told our robot to move for 100 centimeters, and it ended up here at position 100 with a probability of 0.5, but we might also have an undershoot, so the robot ended up at 99 centimeters with a probability of 0.25, or at 101 centimeters with a probability of 0.25. So now we have three different positions where the robot can be, corresponding to this distribution. Now we tell the robot again to move for one meter, so it will end up here around 200 centimeters. I had three possible positions here — tell me, how many positions will I have down here?
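The one-line move discussed above can be sketched like this; the Distribution class here is a minimal stand-in for the one in the lecture's code (the real class stores the values plus the start index, the offset, and has more methods):

```python
# Minimal stand-in for the lecture's Distribution class: a finite list of
# probability values plus the index (offset) where the first value sits.
class Distribution:
    def __init__(self, offset, values):
        self.offset = offset
        self.values = values

def move(distribution, delta):
    # Shifting a distribution left or right only changes the offset;
    # the values themselves stay untouched.
    return Distribution(distribution.offset + delta, distribution.values)
```

Moving a distribution by delta = 5 yields a new distribution with the same values and an offset increased by 5, which is the entire assignment.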
SLAM_E_05.txt
Now let's see what the correction step looks like. Remember, in the general formulation of the Bayes filter, our new belief was computed by weighting our predicted belief by the probability of the measurement given the state, and we had to normalize this. In the particle filter this is implemented as follows: for every particle we give the particle a weight, which is the probability of our measurement given that our state is exactly our predicted particle — so this is the same as that — and this weight is also called the importance factor. Now, after this step we have computed a weight for each predicted particle, but finally our distribution has to be represented by particles, not by weighted particles, and so here comes what is known as importance sampling. We set up a new set of particles, so we start with the empty set, and then we run a loop in which we draw M new particles, and the trick here is to draw an index, say i, with a probability proportional to the weights of the particles. So in each iteration we draw one index and then we add the corresponding particle to our new set of particles — we pick the particle with index i from our set of predicted particles — and drawing i is done with replacement. The effect of this is as follows: say we have three particles, and they all move, so they are all sampled, and then we have some measurements, and according to the measurements, say, the probability for this particle is very high, so it gets a high weight, whereas in the other cases the measurements do not fit very well to what we expect, so those two cases get low weights. Then in the importance sampling step we will have the weights w1, w2 and w3, where w1 will be large and the others will be small, and now we will sample three times. Maybe we'll get those values, and so we picked index 1 two times, meaning we will have two particles which are identical, we will pick index 2 as well — one particle — and we will drop the third particle. Now, the only complicated step is the importance sampling.

So once again: importance sampling draws M new particles, with replacement, from the set of our predicted particles, with probability proportional to the weights. Before the sampling, our particles approximate our predicted belief, but after sampling they approximate our posterior belief. In general the result will contain duplicates, and since we keep the number of particles constant, this also means that particles with a low weight will likely disappear, and this is necessary because it ensures that we do not waste particles in areas where our belief is very small. Now, importance sampling is a general technique where the weights have to be selected proportional to our target distribution divided by the proposal distribution, and in our case this means our target distribution is our belief, whereas our proposal distribution is our predicted belief, and so, as we know, this is alpha times the probability of the measurement given the state times the predicted belief, divided by the predicted belief. And so we see our weight can be chosen to be the probability of the measurement given the state, and the factor alpha is of no importance, because the weights don't have to be normalized.

Now let's compute this probability of a measurement given a state. Say our robot is here, looking in that direction, and from its scan it sees a landmark here, at bearing alpha and distance d — these are the measurement values that are obtained from the laser scanner measurements. Now say the real position of the landmark is here, and so, according to the state of the robot, the correct bearing should be alpha prime and the correct distance should be d prime. And so we say the probability for our measurement, given our state x, is the product of the probability for the distance difference and the probability of the difference in angle, and we model both by normal distributions, so we use the probability density function for both of these probabilities, where this is the Gaussian probability density function which we used earlier. We will have to do this for every landmark, so since we have six landmarks in our arena, we may have up to six probabilities of landmark measurements. These formulas actually compute the probability of one landmark measurement, and we will assume independence of the probabilities of those landmark measurements, so that finally our probability of all the measurements, given the current pose, is just the product of the single probabilities. So in order to compute the overall probability of the measurements for a fixed pose — so for a certain particle — all we have to do is loop over all detected landmarks, and for each detected landmark compute the probability of the measurement, which in turn is decomposed into the probability for the distance measurement and the probability for the angle, and then we form the product of all those probabilities. So this is all there is to do.

Now, when implementing this, the first step is to detect the cylinders. In our robot's coordinate system, the robot does this measurement and, using some thresholds, computes the alpha and distance values that we need to compute our weight. Fortunately we have programmed all that functionality before, because we used it already, for example, in the Kalman filter in our last unit, and so all this is done by the function get_cylinders_from_scan, which is in the slam library of this unit. Independent of the number of particles we have, we only need to do this once in every iteration of our filter. Then, in the second step, we need to compute the weights, and this is done in a function you'll have to implement: it gets the cylinders and the landmarks — the cylinders are computed by this function — and it is a loop over all particles. Now, regarding our robot in the real world, we have this measurement, and we need to assign this measured position of a landmark to the closest landmark in our arena, and fortunately we've programmed something similar earlier for our Kalman filter, and this function is provided here as the function assign_cylinders. It takes the cylinders, the pose and the landmarks, and it returns a list which contains all assignments, so in our case this list contains between 0 and 6 elements, and each element contains the measured distance d, the measured bearing angle alpha, and the x, y position of the assigned landmark in one tuple — it contains up to six of those tuples. Now we need to get the predicted alpha prime and d prime, so the predicted angle and distance, and fortunately you programmed that earlier too, because this is our function h, which takes a state and a landmark — so this landmark, where the state is the particle — and returns the distance d prime and the angle alpha prime: this is the predicted measurement. Now, for each single assigned cylinder we have to take the actual measurement and the predicted measurement and compute the weight for the single cylinder, which is computed according to this formula, and this is done in the function probability_of_measurement, which you'll have to program as well. Then, remember, this loop is over all particles, but down here you'll have to do a loop over all assigned cylinders, and for each assigned cylinder you get one probability of the measurement given the state, which is your weight for that cylinder, and you'll have to form the product of all those probabilities to obtain your overall probability, which then is your overall weight for the corresponding particle.

Now, how does the resampling work? We need to draw an index i with probability proportional to the weight of particle i, which we just computed, and we can do this in the following way. Say these are all the weights, and we number them starting from zero, for compatibility with your later implementation. Then I can draw an index with a probability proportional to the weights by drawing a random number between 0 and the sum of all weights, and after drawing this number I just have to find out the corresponding index, which is 2 in this case. In order to find out the correct index, I can store the cumulated weights as an array of increasing numbers, and after I pick a number I can use binary search to find the correct index, and so it becomes clear I can draw M numbers in an asymptotic time of M log M. Now we will have a look at a different technique, termed the resampling wheel. Imagine I subdivide a wheel according to the weights. First I pick an index randomly between 0 and M minus 1 — that is, in this case, between 0 and 3 — say I would pick index 1. Then, in addition, I pick an offset randomly from a uniform distribution between 0 and 2 times the maximum weight that I have on this wheel, which in this case would be w2. So I pick a random value, and I just find out the corresponding index: I first check if I am within w1, and if not, I increase my index and subtract w1 from my offset, so what remains is only this part. Then I check if this part is within w2, which is the case, and so I would pick index 2, and I'm still at that offset. Now I pick the next random number from my uniform distribution, which may be this; I check if I'm within w2, which is the case, so I pick index 2 again. Then I pick my next random number from my uniform distribution and add this again to my old offset, and say this would go around here. Then I check: is it within w2? It isn't, so I increase my index to 3. Is this remaining offset within w3? It is not, so I increase my index modulo the total number of particles, so it's back to 0. Is it within w0? Yes it is, so my third pick would be index 0. To give you a hint, the algorithm looks somehow like this: you first determine the max weight, then you take your first index randomly between 0 and M minus 1, and then you pick M indices — you first add a random number to your offset, then there's a while loop in which you have to increment the index as long as your offset is outside the current weight on the wheel of weights, and after you incremented the index zero, one or more times, you have to add the new particle.
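That hint, sketched in code — a reconstruction, not the lecture's solution; particles can be any objects, and the weights need not be normalized:

```python
import random

# Sketch of the resampling wheel: start at a random index, advance a random
# offset drawn uniformly from [0, 2 * max_weight) for each of the M picks,
# and step the index forward (modulo M) until the offset fits into the
# current weight on the wheel.
def resample(particles, weights):
    new_particles = []
    m = len(particles)
    max_weight = max(weights)
    index = random.randint(0, m - 1)
    offset = 0.0
    for _ in range(m):
        offset += random.uniform(0.0, 2.0 * max_weight)
        while offset > weights[index]:
            offset -= weights[index]
            index = (index + 1) % m
        new_particles.append(particles[index])
    return new_particles
```

Particles with large weights get picked often (possibly several times in a row, since the step per pick is at most twice the maximum weight), while low-weight particles tend to be skipped, which is exactly the behavior described above.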
So this is the main structure of the resampling using the resampling wheel. Now, here's the program I prepared for you for the particle filter correction step. It combines the prediction you programmed in the previous program with the correction, which you'll have to fill in here. On top you can see that it now imports from the slam library those two functions we talked about earlier, and it also imports the normal distribution, which we need later for the probability density function. The constructor of the particle filter now also takes the standard deviations of the distance and of the angle measurement. Here is our function g, here's the predict function — you'll have to insert that from your previous program — and here's the measurement function, which takes a state, or in our case a particle, and the position of a landmark, and computes the expected measurement; this function is a plain copy from our Kalman filter of the previous unit. Then here's the probability_of_measurement function, which you'll have to implement: it takes a measurement and a predicted measurement and returns the weight, and here are some hints — in particular, watch out for a proper normalization of the difference of the two angles; measurement and predicted measurement are both tuples containing a range and a bearing. Now here's the compute_weights function, which is a loop over all particles: for each particle it assigns the cylinders obtained from the current laser scan (which are the same for all particles) to the landmarks, given the current pose, which depends on the particle, and given an assignment you'll have to do a loop over all measurement–landmark tuples in this assignment and compute the appropriate weight, which you'll then have to put into a list, which is returned here. And here's the final function you'll have to implement, the resample function — it is up to you what you do here, but for example you can use the resampling wheel which I just explained. Then this is the correction step, which is now very simple: first we compute all weights, and second we do a resampling based on those weights, which gives us new particles, which we then assign to the set of our particles, meaning our old particles are replaced by our new resampled particles. And here's the print_particles function, which we already had in our previous code.

Now let's have a look at the main function. We now also need the cylinder extraction parameters, and we need to set the standard deviations of the distance and angle measurement — these values are just copied from our settings of the Kalman filter in the previous unit. Then we have to define the number of particles and initialize them; that is the same as in the previous code. We set up the particle filter; that's also the same as in the previous code. We have to read more data, because we now also need all the scan data, and we need the landmarks in the arena. And then finally here's our loop — it's not much more complicated than our previous loop: here is our control, here is our prediction, and then this is the entire correction step. We extract the cylinders from the scan, which we need to do only once and not per particle, and then we call the correct method of our particle filter, given those cylinders and the reference cylinders, which computes all the weights and does the resampling. Finally, we output all the particles. So now please implement the probability_of_measurement function, the compute_weights function and the resample function, and don't forget to put in the prediction function from your previous implementation up here. Now, when you run this, it will produce a file with the corrected particle filter results; let's load that. What you see is a set of particles, and when we move forward in time they become less spread out — we will have a look at this later on. Those 50 particles move along the trajectory and around the corner and so on, so it's probably best to load the reference trajectory as well. The particles are placed around our initial point, and then they move along the trajectory, and here something interesting happens: first of all, we see the set of particles gets wider, and we also see that some of the particles seem to take a stronger left turn than the others. We encountered that problem earlier, when we noticed that a too strong left turn at this position might lead to a wrong match of the landmarks, and in turn to a completely wrong trajectory from that point on. What we see here is that some of the particles take a stronger turn, but they eventually die out. We also see that after a while the set of particles is less spread out, and it is more spread out when we don't drive straight, which makes sense, because we've modeled our noise in a way that differences in left and right motor control lead to a higher noise. If you run your code again, you will obtain a different but similar result — again some particles taking a strong left turn and then dying out, and the particles spreading out when we take a turn. You may also try running this with more particles; for example, here's a run with 200 particles — you just set the variable for the number of particles in main to 200. Let's have a look again: the situation with the left turn, the spread when we turn, and again we manage to find a globally correct solution.
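To make the weight computation concrete, here is a sketch of the probability_of_measurement function described in this section, assuming the measurement model from the lecture (a product of two Gaussian densities) with hypothetical parameter names sigma_d and sigma_alpha for the standard deviations; the angle difference is normalized via atan2, as the hints suggest.

```python
from math import exp, sqrt, pi, atan2, sin, cos

# Gaussian probability density, as used for both factors of the weight.
def normal_pdf(x, sigma):
    return exp(-0.5 * (x / sigma) ** 2) / (sqrt(2.0 * pi) * sigma)

# Weight of one landmark observation: the product of the density for the
# range difference and the density for the (normalized) bearing difference.
# measurement and predicted are (range, bearing) tuples as in the lecture.
def probability_of_measurement(measurement, predicted, sigma_d, sigma_alpha):
    d, alpha = measurement
    d_pred, alpha_pred = predicted
    # normalize the angle difference into (-pi, pi] so that bearings near
    # +pi and -pi compare correctly
    delta_alpha = atan2(sin(alpha - alpha_pred), cos(alpha - alpha_pred))
    return normal_pdf(d - d_pred, sigma_d) * normal_pdf(delta_alpha, sigma_alpha)
```

The overall weight of a particle would then be the product of this value over all assigned cylinders, as described above.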
SLAM_F_02.txt
Now let's transfer this. Think about the following: say there are some landmarks, and you observed them from a certain position. Then essentially what we do is set up a link between the positions of our landmarks, where we do not directly observe the distance between two landmarks, but rather we observe them through an intermediate point, which happens to be the position of the robot at that point in time. Now if we move for a while, then through all those measurements the distances between our landmarks become very accurately known, and so the random error of those relations between the landmarks becomes smaller and smaller. You can think of this as some kind of elastic bands that allow those points to move freely, and with every measurement those bands become less elastic and the distances become fixed. Now, does this mean that the error ellipses of all the landmark points go to zero, or at least become very small? This is actually the problem of the anchor — or, in geodesy, it's also called datum definition — which means that although I know all those distances very accurately, I don't know the position of this entire structure in space. As we have set up the problem, everything is tied to my coordinate system, which happens to be at the center of my robot's initial position, and if the error at this initial position is assumed to be large, then this internal structure can move by that amount, as it is tied to this initial position. So if you set the error in the initial position to be large, then the errors of all the landmark positions will never become zero or close to zero, no matter how many measurements you take.

Now let's have a look at the dynamic Bayes network which describes the state transitions of our robot. The x's are our states, and here is the transition, which is controlled by our control input u. In each state we do a measurement, and the value of this measurement depends on the current position of the robot and on our map. What I just described is also termed the online SLAM problem, namely: we want to know the probability for our current state and for our map, given all our measurements and all our control inputs. When I explained this, I explained it for the case of modeling this probability using a multivariate Gaussian distribution, and so this means in the dynamic Bayes network I'm interested in this state and in the current map. Now the problem with this is that all the information here is subsumed into this last state. If you don't do this, the problem is called the full SLAM problem, in which I determine the probability for all states — that is, over all time steps — and the map, given all my measurements and all my controls, so in this case I determine the map and all the states, and this is more desirable, although usually it is infeasible. Now, how can I compute those probabilities? There are two principal problems which make the SLAM algorithm computationally expensive. There is a continuous component, which we already outlined: we have to compute the x, y and theta of the robot, but also the x1, y1 position of the first landmark, x2, y2 of the second, and so on until the N-th landmark, and this is just for the online SLAM algorithm — so, as you see, the number of those continuous variables may become very large. And there is a second problem, which is the discrete component of the SLAM algorithm, and which involves the following: my robot observes some landmarks, then it moves a bit, where this movement is noisy, and then again it observes some landmarks. Now, how do we know the correspondences? This might be this landmark and this might be that landmark, whereas this one is new — but it might also be the case that this landmark actually belongs to here and this one to here, and the shifts that we seemingly see were just introduced by a large error in the movement of the robot. This is actually a very tough problem, because in every step of our state transitions we have to introduce those measurement values in order to correct our state. If you remember, we had a predicted belief and a corrected belief, but in our case this correction involves assignments of landmarks, and so if we do a wrong assignment, we may either shift the robot's position in the wrong way because of that wrong assignment, or, on the other hand, we might miss a correct assignment and introduce a new landmark, although it is a landmark that we already observed, which means we would not correct the robot's position even though we should. These assignments, which are discrete in nature, might introduce errors which much later on might lead to large errors of our state, which we then can't correct anymore, because we don't keep track of those assignments: the assignments we make here lead to the posterior x, and then all this information is lost — this posterior belief is the only information that is kept. So this is the discrete component, which is the correspondence of objects to previously detected objects. In general, calculating the full posterior is usually infeasible due to those two problems: because of the high dimensionality of our parameter space, and because of the large number of correspondences in our correspondence assignment. Now, before we go on, let me ask you a question: say we drove with our robot for a while, say from t equals 1 to t equals 200, and while we drove, we observed 50 landmarks, m1 to m50. The question is: for the online SLAM problem, how many variables would be in our state?
SLAM_E_06.txt
Now, talking about the solution: where is the robot, actually? We have all those particles, but still, in order to make some decisions, we need to derive one or more actual states of the robot, and this is called density estimation. For example, if this is our world and here are our particles, you could do a histogram over state space: we've got two particles here, one particle there, one here, three here, four here, say two here and one here, and so we could say this is the estimated state of our robot, because this is the peak in the histogram — this is the histogram-over-state-space method. We get a smoother result if we use a method called kernel density estimation: we place a kernel on top of each particle, and adding those up we get an estimator of the overall density, which is more or less smooth depending on the width of the kernel that we selected. But in general we are interested in one or more positions, so we could do a k-means clustering, which in this case would probably identify this as a cluster center, and this, and this. In 2D this could mean: if we have some particles which describe a curve with a small radius, as was the case in our last filter results, then we would get two different cluster centers here. Or, in the simplest case, we could just assume that the samples are indeed drawn from a simple distribution, such as a Gaussian distribution, in which case we just determine the mean and the standard deviation from the set of samples. Now, this is particularly easy to compute; on the other hand, it only makes sense if the distribution of our particles is unimodal. Thanks to its simplicity, this last option is very popular. In our case, our state consists of position and heading, and so finding the mean position is very easy: our estimated mean is just the sum of all the x or y positions over all particles, divided by the number of particles. Regarding the heading theta, we again have to be careful, since due to the modular characteristic of the angle we might have, for example, the following: the robot is looking in that direction, and so the particles will be like this. Now, if our angles go like that, from zero to plus pi and to minus pi, then some of the headings of the particles will be close to plus pi and some close to minus pi, so if I just sum them up, I will have — say for four particles — plus pi plus pi minus pi minus pi, which is 0, and divided by the number of particles it's still 0, but the correct heading would be either plus pi or minus pi. So what I propose here is, instead of computing the mean angle, we compute the mean heading vector: we compute the vector (vx, vy), where vx is just the mean of all cosines of theta_i, the heading angle of particle i, and vy is the same for the sine. After we compute this, our final estimate for the heading angle is the arctangent of vy divided by vx, and you see, as we divide anyhow, we don't have to normalize this vector — and as usual, in the implementation this will be the atan2 function. So all there is to do: assuming a Gaussian distribution, we compute the mean of the x, y position and the mean heading for our robot. Now let's program this, and here's some code I prepared, the density_estimation_question, and it is a really small programming assignment. Up here, just put in the prediction code, and the probability_of_measurement, compute_weights and resample code you did earlier, and all you have to do is implement this get_mean function, which is a method of the class, and it should return the mean x, mean y and the mean heading — that's all there is to do. Down there in main, I modified the loop, which now not only prints the particles but also gets the mean and then prints out the x, y and heading, where the x and y are, as usual, corrected for the scanner displacement, and this writes an F record for filtered data, which we used earlier, so the log file viewer is able to understand this. So now please program this get_mean function.
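A sketch of that get_mean computation, assuming a particle is a plain (x, y, heading) tuple here (in the lecture's code it is a pose inside the filter class):

```python
from math import sin, cos, atan2

# Mean position is the arithmetic mean of x and y; the mean heading is
# computed via the mean heading vector (vx, vy) so that angles near +pi
# and -pi average correctly instead of cancelling to zero.
def get_mean(particles):
    n = len(particles)
    mean_x = sum(p[0] for p in particles) / n
    mean_y = sum(p[1] for p in particles) / n
    vx = sum(cos(p[2]) for p in particles) / n
    vy = sum(sin(p[2]) for p in particles) / n
    return (mean_x, mean_y, atan2(vy, vx))
```

For two particles heading almost exactly backwards (one just below +pi, one just above -pi), the naive angle average would be 0, while the heading-vector average correctly comes out near plus or minus pi.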
SLAM_Lectures
SLAM_E_04.txt
now to see what happens, remember how our Kalman filtering in the last unit worked. so we had our landmarks, and in every time step we extracted from our laser scan measurements the locations of landmarks, which were somewhat noisy, but we used the assumed position and orientation of our robot to map those detected cylinders into the world coordinate system, and then we looked for landmarks close to these projected positions, closer than a given radius, and then we assigned those, and each such assignment led to an observation equation in the correction step of our Kalman filter. and so if I don't know where the robot is and just say it is centered in the middle of the arena, looking say in the x direction, then even though I express my uncertainty about this position by large variances, the landmark assignment is based not on the second moments but on the first moments, on my estimated position and orientation. and so from our scanner I will get those detected poles, and then the procedure will do some assignment of landmarks in the vicinity, and it will probably even assign this cylinder here, which is a completely wrong match, and based on this wrong match the Kalman filter will compute the correction, and this will most probably lead to a completely wrong trajectory. so even though in general it would be okay to model my uncertainty in this way, the problem in our case is that the observations which I need in the correction step are not absolute in nature; I obtain them based on my current estimate of the position and orientation of my robot. now here's an idea to overcome this problem: if I don't know where I am, what if I assume some random position and orientation? then I could do the landmark assignment for each of those hypothetical poses of my robot. so I would not only try this here, but also this, and then eventually this position here would lead to the best match between detected landmarks and landmarks in the map. so by starting with many, many such poses instead of just one,
there would be the chance that one of those poses is close to my real pose, and so the landmark association would give the best results, so that ultimately I would be able to identify the correct pose among all those hypothetical poses. and so this is one of the basic ideas behind the particle filter. now in a particle filter we do have particles, and so we represent our belief by a set of random samples. so this is an approximate representation, it's non-parametric, and it is able to represent distributions with multiple modes. so each of those particles is a hypothetical state, and our belief is represented by the set of particles, where M is a large number, for example M may be 1,000. and so if this is our true belief that we want to represent, our particles may look like that: here maybe one, here, here maybe a few more, here's the peak, so there should be many particles here. now the density of those particles approximates our true belief. so now if we have a simple distribution and want to obtain the particles that represent this distribution, for example a normal distribution, then we would just sample them according to the distribution and return the set of samples. on the other hand, if we have a set of particles, we can compute the first and second moment: our estimated mu will be 1/M times the sum of all samples, so this is the mean value, and an estimate for our variance would be 1/(M-1) times the sum of (x minus the estimated mean) squared. so assuming that our particles are, for example, sampled from a normal distribution, we can estimate the mu and sigma of that normal distribution. well, of course, the more particles we have, the better our estimate will be. however, if the particles do not represent a normal distribution, then we still get this first and second moment, but the distribution represented by our particles will be different from the normal distribution that is defined by our estimated mean and variance. so now let's have a look at the particle filter prediction step, and I want to
compare this with a discrete Bayes filter, which we had earlier. so in the discrete Bayes filter, the update step was given as follows: for all x_t, we computed our predicted belief using that sum over all x_{t-1} of the probability of ending up in x_t when we were at x_{t-1} and given the control u_t, times the belief of x_{t-1}. that's all there was to do in the update step, and that was a convolution. so if our old belief looked like that, then for every discrete value we multiplied that value with this probability, which also was given at discrete raster positions only, and so for example we placed this here, then we had this value, placed this here, and so on, and in the end we added all this up and obtained something like that. and so we had five discrete values here, convolved with three values, and we obtained seven discrete values here. so the result of the convolution is a widening of our distribution, or we could say, non-scientifically, it's a smearing by convolution. and now in the particle filter our distribution is represented by the set of particles, and the update step looks pretty similar. we now do, for every particle, the following: we sample a particle for our predicted belief according to the distribution of this probability, which is the probability that I end up in x_t if my previous state was exactly the m-th particle of my particle set and the given control was u_t. so again, say this is my old belief, but this is now not represented by this curve but rather by a set of particles. now in this loop I take every single particle, say for example this particle here, which may be particle i, and I take this probability, which is the same as here, and for this particle the probability of the new state will look like that. and so I move this particle to here, but not exactly to the center: I now sample from this distribution, say I pick this point here. and I do so for every point: say this is my next particle I want to look at, then this is the probability I will sample from, say I pick this
point, and so on, for every single particle. and now you see: what we achieved there by a convolution with those probabilities is achieved here by sampling from this distribution. so again, non-scientifically, we could say we do a smearing by sampling, but the smearing is controlled by exactly the same term in the particle filter and in the discrete Bayes filter. now to give an example in 2D: if my robot is here, the particles would look like this, and if my control would move the robot like that, then I would have to append this vector here, but I would have to apply noise, and the distribution would be like that; then I would sample from that distribution, and I would apply the same vector here, get this distribution, sample from it, and so on, so this would be my new particle set. so let's think about this sampling step. how do we get this probability? what we do have so far is: we know if our robot is somewhere and we execute some control u_t, consisting of left and right motor ticks, we end up in a new position, and in our last unit we already implemented this formula: our new position x' is a function g of our old position, or state, and our control. and now here this is our particle x_{t-1}^m, this is our control (l_t, r_t), and this is our new particle, x_t overline m. now this is an exact function; however, the movement according to the control is inexact, and so we implement the formula above in the following way: given l_t and r_t, we assume that l_t and r_t are normally distributed, and so we sample l_t' according to a normal distribution centered at l_t with the variance sigma_l, and we sample the right control in the same manner, and after we sample this, we compute the new particle by the exact formula, using the sampled control. so as you see, the only difference to the exact formula is that the left and right control is not taken as is, but is sampled according to a distribution which is centered at the left and right control. so how do I determine the variance?
and fortunately we don't have to think about that, because in the previous unit, you remember, we set up those two equations for the left and right variance, namely (a factor alpha_1 times the left control) squared plus (alpha_2 times (left minus right)) squared, and the reasoning was that the variance depends on the driven distance and also on the difference of the left and right track, and the same for the right variance. and so this is all there is to do: compute the left and right variances, use those to sample the left and right control, and then compute the new particle from the old particle by applying the exact movement formula using the sampled control. so here's the code for the particle filter, and many things will look very familiar, because they are very similar to the Kalman filter code which we had in the last unit. so this is the particle filter class. the constructor doesn't take a state and covariance anymore; instead it takes a number of initial particles. otherwise it is the same as the constructor in the Kalman filter class: it takes the robot constants, width and scanner displacement, and the control motion factor and control turn factor, and it stores all that in member variables. so down here is the function g for the state transition, and this is just copied from the Kalman filter class, with the exception that down here I return a tuple instead of a NumPy array, so that we don't have to import numpy this time. now here comes the prediction function you'll have to implement. it takes the control, which is left and right, and then here you have to program the steps we just discussed, and I've put some additional hints as comments here. in particular, take care: if you call the function random.gauss, it takes the standard deviation as the second argument, and not the variance. here's another function I programmed which prints out the particles, using a small header PA, so this also goes into the log file, and you will see shortly that the log file viewer now is able to plot all the
particles that we output here. now let's go to the main function. as usual, here is the initialization of some robot constants and of the control motion factor and control turn factor, and these are exactly the same values as those that we used in the Kalman filter. and now I need to generate some initial particles. so in this case I use 300 particles, and here's my measured state and my standard deviations for x, y and the heading, and then I just do a loop: for all 300 particles I append one particle, which is sampled in x, y and heading, where the distributions are centered on the elements of the first tuple and the standard deviations are picked from the second tuple. so after that I have 300 particles, and I hand them over to the particle filter class, together with all those constants. and now down here is the main loop. it reads all control data and then loops, and in the loop we have our usual conversion of the motor ticks to millimeters, and then we just call predict. so this call replaces the old particles in the particle filter by a new set of particles, which are then printed out, and then we take the new control and again replace the old particles by new particles, and so on. so now you'll have to implement this predict function up here. after you implemented it, run it, and it will write a log file called particle_filter_predicted. load this file, and you will see the following: here is our initialization of our 300 particles in the upper right corner. now as we move along in time, the distribution gets wider and wider, until after a while it seems completely random. however, let us load the reference trajectory. now you can nicely see how the particles are kind of centered around the reference trajectory for a while, until the distribution gets so wide that no structure is visible anymore. this is not surprising, since for now we just implemented the prediction, and so we still have to implement the correction step
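as a sketch, the predict step just described could look as follows, assuming the motion model g from the previous unit and the variance model stated above; the function and parameter names here are illustrative, not the exact ones from the prepared file:

```python
import random
from math import sin, cos, pi, sqrt

def g(state, l, r, w):
    """Exact motion model from the previous unit (w is the robot width)."""
    x, y, theta = state
    if r != l:
        alpha = (r - l) / w
        rad = l / alpha
        g1 = x + (rad + w / 2.0) * (sin(theta + alpha) - sin(theta))
        g2 = y + (rad + w / 2.0) * (-cos(theta + alpha) + cos(theta))
        g3 = (theta + alpha + pi) % (2 * pi) - pi
    else:
        g1 = x + l * cos(theta)
        g2 = y + l * sin(theta)
        g3 = theta
    return (g1, g2, g3)

def predict(particles, control, alpha_1, alpha_2, robot_width):
    """Sample the left/right control for each particle, then apply the
    exact motion model g to the sampled control. The variances follow
    the model (alpha_1*control)^2 + (alpha_2*(l-r))^2. (In the actual
    assignment, this is a method that replaces self.particles.)"""
    left, right = control
    sigma_l2 = (alpha_1 * left) ** 2 + (alpha_2 * (left - right)) ** 2
    sigma_r2 = (alpha_1 * right) ** 2 + (alpha_2 * (left - right)) ** 2
    new_particles = []
    for p in particles:
        # random.gauss takes the standard deviation, not the variance.
        l_sampled = random.gauss(left, sqrt(sigma_l2))
        r_sampled = random.gauss(right, sqrt(sigma_r2))
        new_particles.append(g(p, l_sampled, r_sampled, robot_width))
    return new_particles
```

so every particle gets its own sampled control, which is exactly the "smearing by sampling" described above.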
SLAM_Lectures
SLAM_F_04.txt
now let's have a look at the extended Kalman filter SLAM, and fortunately we have implemented the extended Kalman filter already. so first, let's have a look at the prediction step. as we know, in the prediction we computed our new mu from the possibly non-linear function g, which took the old mu and our control, and then we computed our predicted covariance matrix as G, where G was the Jacobian of g, times our old covariance matrix, times the transpose of G, plus the noise R from our control, which was a matrix V, the partial derivative of g with respect to the control, times the covariance Sigma_control of the control, times the transpose of V. and this covariance was actually set up from the variances of the left and right control, so we assume the left and right noise to be independent, and the covariance matrix actually only contains the two variances. now this function g which we have set up was dependent on x, y and the heading of the robot, and on the left and right control, so this is the state mu, whereas this is the control u, and this was the old x, y and theta plus those terms. and fortunately we programmed all of that already, and from those terms we obtained the derivative with respect to the state, which was a three times three matrix with ones on the main diagonal and two terms here, and we had two cases: if the left control is different from the right control, or if left and right control are equal, and those two cases led to different values here, but still the structure of the matrix was the same. and in the same manner we computed V, the partial derivative with respect to the control, and this turned out to be a three times two matrix of non-zero values. now let's have a look at what happens to g if we augment our state with the positions of our landmarks. well, then our new state will be x, y, theta, and the first three lines will be exactly the same as previously, but in addition we have x1 here, and y1, and x2 and y2, and so on. because as our robot moves, the state of the robot is changed,
but the state of our landmarks will not be changed: they will not move just because the robot moves, unless of course the robot bumps into one of them. so in general, our control moves the robot, but it does not include any change in the position of the landmarks. so what does this mean for our matrix G? well, G is the partial derivative with respect to the state, and the state now is this, whereas this is the control. now in our case G will be a seven times seven matrix, and so for the first entry we will have to compute the derivative with respect to x of the first component of g, and this is, as we see, one. now the derivative with respect to y is zero, and the derivative with respect to the heading is non-zero in general, and it's exactly the value that we had on the last slide. and so for the first three derivatives we get exactly the same values which we had earlier for the case of the extended Kalman filter without estimation of the landmarks. so this three times three matrix is our old matrix G, and we computed that already, and we even programmed it already. now what happens here? well, the partial derivative of the first component with respect to x1 is zero, because x1 is not present in this formula, so there will be zeros here, and in fact also here and here, so this is all zeros. now what happens along the main diagonal? well, the partial derivative of this component with respect to x1 is one, so we will have ones along the main diagonal and zeros elsewhere, and this block will of course also be zero. so in order to augment our state with all the variables for the positions of the landmarks, we have to add those equations, and those equations just tell us to keep the old values constant. so as we see, we do not have to modify g at all: the function g just modifies the first three values of our state. and now for our Jacobian matrix G, the derivative with respect to the state, this means we have to place our old three times three matrix G into the upper left corner, and we have to fill the lower right sub-matrix
with ones on the main diagonal, and everything else is zero, and so this is a pretty simple modification of our algorithm. now we still have to check what happens with R. for R we have the following: R equals V_t times the covariance of our control times the transpose of V_t, and as we know, the matrix V holds the partial derivatives of g with respect to the control. so now remember, our g was x, y, theta, and in addition x and y of landmark 1, landmark 2, and so on, plus those terms we had earlier, with the control l and r only influencing the first three terms. so consequently this Jacobian is non-zero for the first three terms, and zero below, because l and r do not influence the lower four components here, and so this is a seven times two matrix. now Sigma_control is the covariance of the left and right control, and V_t transposed is just the transpose of this matrix, and so if we multiply this out, we see this is a seven times seven matrix, but only the first three rows and the first three columns are non-zero in general; the remaining elements are all zero, and this part of the matrix is actually our old matrix V_t Sigma_control V_t transposed, "old" meaning the matrix we obtained for our extended Kalman filter without the estimation of landmarks. so this is our matrix R_t, and this is a pretty comfortable result. so our old g is replaced by our new g, but there's nothing to do: the trick is just to apply the old g to the first three components. and our Jacobian matrix G, which was three times three, with a main diagonal and non-zero values here and here, is replaced by a (three plus two times the number of landmarks) times (three plus 2N) matrix, where this original matrix is up here, and as you see, this has ones on the main diagonal, and we have ones on the main diagonal down here, so this is three elements and this is 2N elements, and the same holds here of course. and our R, which was V Sigma_control V transposed, a three times three matrix, has to be replaced by a (three plus 2N) times (three plus 2N) matrix, where again the first three times three block is the old V Sigma_control V_t transposed, and everything else is zeros, and so again this is three and this is two times the number of landmarks. now let's implement this. so I prepared the SLAM prediction file for you, and this is basically the extended Kalman filter which we programmed earlier. in the beginning there's the constructor, and it is exactly the same as our previous implementation, with the exception that we have an additional variable, which is the number of landmarks, which we set to zero down here. now here's the function g we implemented earlier, here is the partial derivative of g with respect to the state, which we also implemented earlier, and here is the partial derivative of g with respect to the control, which we also implemented earlier, and as we know, from those methods we can build our prediction step, which is here. so in the prediction step we set up the matrix G, and since the call to this function now constructs the three times three matrix G, I call this G3. and then from the left and right control we construct the control covariance matrix, which is a diagonal matrix containing the two variances; we construct the matrix V, which holds the partial derivatives of g with respect to the control, and from that we obtain our matrix R, and exactly as above, I termed this R3, because again this is just the three times three matrix R, which is exactly the same matrix that we computed earlier in our Kalman filter, but it is not the matrix that we need when our system state is augmented by the additional unknowns required for all the landmark positions. now if you look down here, I modified these two lines to contain G3 and R3, and I call the function g on the state to predict the new state, but both of those lines will have to be modified to handle the full state, which consists not only of the three variables for the position and orientation of our robot, but also contains all the unknown positions of our landmarks. and here are some hints: the number of landmarks
is the member variable self.number_of_landmarks, and the NumPy function eye(n) will return an n times n identity matrix, zeros will return a matrix where all entries are zero, and if you need sub-matrices or sub-vectors, you can obtain them using indices like that. so if you look at the rest of the code, here's the helper function get_error_ellipse, which will convert a covariance matrix into an error ellipse, and we programmed and used that earlier. and down here is the main function. as usual, we initialize our constants. now, the initial state: this should actually be 0, 0 with a heading of 0, but since I can choose it arbitrarily, I have chosen other values here, and those have been chosen just to make the resulting trajectory fit into our viewer later on. so again, those values are not meaningful; I've chosen them just for visualization purposes. then here I set the initial covariance to zero. this is what we discussed earlier: I'm very sure about my initial position and orientation, simply because I define my map to have its origin and orientation according to my initial robot position. then I set up the extended Kalman filter SLAM, I read the motor tick data, and then here, this is the common SLAM filter loop. so all it does is read the control and call the prediction step of the filter, and as usual the rest of it outputs the robot's position and orientation, and it also writes the error ellipse. now this is the part you should program. so after you programmed this, run it, and it will produce the extended Kalman filter SLAM prediction text file. load it, and you will see the following: this is our trajectory, which we obtain by just executing the prediction step, and this is not different from our previous results, because so far, even though you programmed it now to handle landmarks as well, we do not have any landmarks, because we did not integrate any laser scanner measurements yet. there are two differences to our earlier solution. first of all, we now start down here, and this is an arbitrary
start position that has been chosen in a way that the visualization is useful, and I didn't use our usual start position just to remind you of the fact that in SLAM we do not have an initial map, so we do not have any useful information regarding the start point of our robot. and the second difference is that our covariances in the start position are zero, and so as we move along, they start to grow. and we have chosen zero, which you can see also here, because this gives us the anchor for our map, or, in geodetic terminology, this defines the datum. so now please program the prediction step
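to illustrate the hints above, here is a sketch of how the three times three matrices G3 and R3 could be embedded into the full system using NumPy's eye and zeros; the function name and signature are assumptions for illustration, not the prepared file's exact code:

```python
import numpy as np

def predict_full_state(state, covariance, G3, R3, g3_of_state):
    """Expand the 3x3 matrices G3 and R3 to the full (3+2N) x (3+2N)
    system, where N is the number of landmarks. g3_of_state is the new
    (x, y, theta) that the old function g computed from the first three
    components of the state."""
    N = (len(state) - 3) // 2          # number of landmarks
    dim = 3 + 2 * N
    # Full Jacobian: old G3 in the upper left, identity for the landmarks.
    G = np.eye(dim)
    G[0:3, 0:3] = G3
    # Full R: old R3 in the upper left, zeros everywhere else.
    R = np.zeros((dim, dim))
    R[0:3, 0:3] = R3
    # Covariance update: Sigma' = G Sigma G^T + R.
    covariance = np.dot(G, np.dot(covariance, G.T)) + R
    # g only modifies the first three components; landmarks stay constant.
    state = np.copy(state)
    state[0:3] = g3_of_state
    return state, covariance
```

note that because G is the identity outside the upper left block, an efficient implementation only needs to update the first three rows and columns of the covariance; the full multiplication above is just the straightforward version of the formula.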
SLAM_Lectures
SLAM_C_06.txt
now let's again have a look at the multiplication of two distributions. so we had one distribution centered at 400 with a half width of 100, and another one centered at 410 with a half width of 200, meaning something less accurate, and we multiplied these two, and the outcome was as follows. and so we already noted that even though the second distribution is less accurate, meaning it has a lower peak and is wider, the result has a higher peak than our prior distribution: the posterior has a higher peak than the prior. now let's have a look at what happens if we move our measurement distribution. so let's move it to 500; then the result looks like this. since the green distribution moves to the right, the resulting red distribution somehow leans towards the green distribution, and clearly this shape isn't a triangle anymore. now let's move a bit more, to 550. now it's very obvious that the resulting posterior isn't a triangle distribution anymore, but we also see: even though those two distributions tell us that the robot is at different positions, one telling us it is at 400, the other telling us it's at 550, we end up with a final distribution which has a higher peak than either of those two distributions. so what we see here is that by adding this information from our laser scanner we have an information gain, no matter if the measured value differs from our previously assumed position. so now let's put together the two steps that we've learned so far. first we had the motion, and you remember, we have this tree for explaining this, with three different outcomes, and then for every movement we again had three different outcomes, so we ended up in this tree with five different outcomes. and so the probability for a certain outcome was: P of x equals the sum of all P of y times the probability that I go from y to x, summed up over all y, because in order to end up in this x I can go either this way or that way or that way, and this is summed up here. so these are the
probabilities of y, and these are the probabilities of P of x given y, and this was a convolution. and in the second step we made a measurement, and we found out that the probability for position x given a measurement c is a normalization constant times the probability of measuring c when we are at x, times the probability of x, and we solved that using a multiplication, and we called this one the prior and this one the posterior. and now, as our robot moves along, those two steps are always iterated: we move, we measure, after measuring we move, and indeed normally these are interleaved, so in reality we measure while moving. now, as we move on like this, we produce a dynamic Bayes network. so we come from a certain state, and then we are in a certain state, which I denote as x_{t-1}, and then in this state we perform a measurement, say c_{t-1}, then we move on to the next state x_t, and doing so requires us to give some input to the robot, for example the speed settings for the motors, and this is the control. the control is normally expressed as u, so this is u_t, the control that we set in order to get to our state x_t, and in x_t again we measure, and so on. so if we are in a certain state at time t, for example, we will denote the probability of being in this state as our belief: the robot has a belief of being somewhere, so the probability of x_t is also termed the belief, and this is our belief after we moved and after we have seen the measurement. and there's a second belief, namely if we have moved but didn't see our measurement yet, and this is termed our belief with an overline. so now we'll have to use time indices here in order not to get confused which x is which in all those equations. now let's write that down again with indices. so in a movement step we compute our belief overline of x_t using the total probability, namely the probability that we end up in x_t given that we were previously in x_{t-1}, and also given (this is different from the previous slide) the control u_t, times
the probability that we were in the state x_{t-1}, which we now denote as the belief of being in x_{t-1}, and this is summed up over all x_{t-1}. and in the next step we compute our belief after incorporating the measurement, which is a normalization factor times the probability of the measurement c_t given that we are at position x_t, times this belief that we just computed. and these are the two steps, so all we have to do is to iterate those two steps. now we need to compute that for all x, so if we write it that way, we also have to add a for loop over all x_t, and this entire thing now is called a Bayes filter. so this is our procedure, the Bayes filter, and it has three parameters, our old belief, our control, and our measurement, and it returns our new belief. this is very important: this is the basic algorithm for any filter we will cover here, so for the histogram filter, which we actually already did, for the Kalman filter, and also for the particle filter. and now when we want to program this, we don't have to do very much, we already did this: this line here ends up being just a convolution, so programming this is just belief overline equals the convolution of our move distribution and our old belief distribution, while this step is just a multiplication of the belief which was computed and our measurement. now in the histogram filter file you will find a program that implements this Bayes filter in the form of a discrete Bayes filter, also termed a histogram filter. if you copy over your convolve and multiply functions up here, then it's ready to be used. now let's look at what happens in main. so the program is very short. here it sets a start position of 10 and puts a unit pulse there, as we did before. the controls: these are "move 10 times to the right by 20", so this will be 30 and so on, and the measurements are generated assuming perfect measurements, meaning this somewhat elaborate code here just produces a list of 10, 30, 50 and so on, just the
cumulative list of this list here plus an added start point of 10. now this down here is the main loop, and as you see it's very short: we have a movement step and a measurement step, and both are identical except for the operation that they use: the movement uses a convolution, whereas the measurement uses a multiplication. now for the movement we set up our control distribution, which means: from a movement of 20, set up a triangle with center at 20, and I use a half width of 10, and after the movement I plot the resulting belief overline in blue. then after that I model my measurement: the measurement tells me that I am now at, say, position 15; it's also a triangle distribution, and I'm using the same half width of the triangle. then I multiply the two distributions, and I plot the resulting belief of the next step in red. so now, running it, this is how it looks: we start with our unit peak, and then during the move we propagate this into this triangle, which is then sharpened by the update, so in the update we get a higher peak: the robot is more certain about its position. now let's have a closer look: starting from this triangle, this bell-shaped curve evolves, and we always have the same pattern: our movement widens our distribution, from this red one to this blue one, and then again, by the measurement, we narrow our distribution and get a higher peak. now here we used a half width of 10 for both the control distribution and also the measurement distribution. what happens if we set the control distribution much, much wider? then we get the following result: after our first movement we get a very wide distribution, and this is corrected by the measurement, and then again, what we gained by the measurement, we lose by the next movement, and so on, and so essentially these triangle distributions, the distributions of our movements, since they are so inaccurate, play no role. now if we try the opposite, we get a similar result to the first one, however
the peaks of our distributions are lower so if they reduce our measurement accuracy then the estimated accuracy of our robot is reduced to now remember earlier when we didn't have the measurement steps our accuracy decreased continuously now let's do a longer simulation so let's multiply our width of Tarina by 10 and also the number of control inputs by 10 let's run there I see here this is a situation without measurements we start with a unique pulse and then our distribution quickly gets wider and wider and the peak get smaller and smaller now if we switch on our measurement we'll see this is not the case anymore so we start with the peak then it goes down but then essentially the information loss that we have by our movement isn't balanced with the information gain that we get from our measurement and so if the overall situation does not change that is our robot always moves by the same amount and there's always a measurement to one object with a specified accuracy then this value will be stable so after short startup faced our filter will become stable under those circumstances now remember in our Bayes filter everything was formulated in a discreet way so in order to compute our belief over line we compute this sum over all x t minus 1 and then we computed our new belief and multiplications of the probability of the measurement gain the position times our belief over line now seeing this discrete formulation this belief is represented by some kind an airy and in our Python programming it found a way of doing so by using a class that stores those arrays in an efficient manner in terms of storing is start and only those values which are nonzero so then here's Luke and here's a loop - and there's two loops where our convolution Paris and the computation of the posterior there's no loop but there's a larger loop so there's multiplication and the outer loop this was our multi education function and so you see this is quadratic in the elements that have to be 
touched, whereas this is linear. But unfortunately, as our distributions get wider and wider, they might eventually cover the entire world our robot operates in. So what shall we do? We might think, well, then we use large cells so we need only a few of them. There are certain problems with that regarding accuracy, because if our robot is here we can't distinguish this position from any other position in the cell; so if we give it a movement command which is so small that the robot doesn't leave its cell, it appears as if the movement didn't have any effect. In order to circumvent these accuracy problems we would have to make small cells, but then, as you can imagine, you need lots of them, and sooner or later we get a storage problem. And so it is a question of efficiency whether we are able to represent this belief in a different manner, so that we can replace all this by an integral and return a belief in the end, also conforming with our original representation. In order to explore this I have provided you with the slam 06 histogram filter, a cleaned-up version, and it is essentially the same program as our last one, but I rearranged the code so that it is simpler. First of all there are some helpers; all the plot routines are in this histogram plot module, so they won't disturb us in the main code. Now scrolling down to the main code, it is the following: our arena has a width of 200, and here is the distribution; we can set a different distribution to be used throughout the entire program, and this time we use a triangle as in our examples before. Then we start at a certain position given by this dist, so this means this is a Distribution.triangle centered at ten with a half width of one, so it's a unit peak. And here are our controls and measurements. The controls are two distributions: the first one meaning we moved by 40 forward and our distribution has a half width of 10, then we move again by 70 forward, again with the same half width of 10. And the measurements tell us that after we've
been at 10 and we moved forward by 40, we should actually be at 50, but our measurement tells us we're at 60, with a half width of ten; and then after we move again we are at 120, but our measurement tells us we are at 140, and this time the accuracy is lower. And our main loop is as simple as that: we just call the histogram filter step with the old position, our control and our measurement, and we obtain the new position. In this case the histogram filter also returns the prediction; this is not really necessary for the filtering, it is just for plotting it down here, and this is the call to the plot routine. So what does the histogram filter step do? Let's look here; it is as simple as that. The only thing it does is one prediction, which is a convolution of our previous belief and the control, and then a correction, which is just the multiplication of our prediction, which we just computed, and the measurement. That's all there is to do. Now let's run this. This is what we get out for the two steps; now let's zoom in. This is the first filter step: the blue curve is our discrete distribution that we obtain after our first movement, and as expected it is centered at 50. Then we also get our measurement, which is a triangular distribution centered at 60, and so our resulting posterior is this red distribution, which has its peak exactly in the middle between our two other distributions, because those distributions have the same width; the robot's belief is as accurate as the robot's measurement, and so the result is exactly in the middle. So let's move on one more step. That looks like this: in blue again is our predicted belief, so after the movement, and now we get our measurement, and our measurement is still a triangular distribution, however now we have chosen a width which is twice as large, and so our resulting posterior is here, close to our prior. Now why is this the case? Because our prior is more accurate than our measurement, and so our posterior moves towards our prior. Now let's
also have a look at the shape of those curves. Now we see we started with triangular distributions and ended up with something else: we moved, and after the movement our curve pretty much looked like a bell-shaped curve. And this is no surprise, because if you remember, this curve is computed by a convolution, and repeated convolution of triangular distributions leads, as with the binomial distribution, to something resembling binomial coefficients, which means that this curve approximates a bell-shaped curve; so it looks pretty much like a Gaussian curve. Now let's try all this using Gaussian distributions; all we need to do is change this to Gaussian and then run it again. Now here's our new result; again let's zoom in. So this is after the first iteration, and this very narrow distribution we start with is transformed after our first movement into this Gaussian shape; then we have a second Gaussian shape, which is our measurement, and the outcome is something which looks pretty much like a Gaussian shape too. Now as we move on, our next distribution also looks like a Gaussian shape, and after we multiply it with our measurement, the result also looks like a Gaussian shape. So we have now seen: if we start with a triangular shape, our outcome is not a triangular shape anymore, but it seems that if we stick with the Gaussian shape, then both the convolution, to obtain the blue curve, and the multiplication, to obtain the red curve, seem to give us again a Gaussian-shaped distribution. Now what do you think, is this true? So this here is a Gaussian, and multiplying it with another Gaussian again gives us a Gaussian; and then in the next step, when we move, we have this convolution, which then is an integral, and this will lead to a Gaussian as well. What do you think, is this true?
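The closing question can in fact be checked numerically. Below is a small sketch (grid values and variable names are my own choice, not from the lecture code): it verifies that the convolution of two Gaussians is again a Gaussian whose mean and variance are the sums of the inputs' means and variances, and that the product of two equally wide Gaussians is a narrower Gaussian centered exactly in the middle.

```python
import numpy as np

x = np.linspace(-50.0, 150.0, 2001)   # sample grid, spacing 0.1
dx = x[1] - x[0]

def gauss(x, mu, sigma):
    """Gaussian density N(mu, sigma^2) evaluated on the grid x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Convolution (the move step): N(20, 5^2) convolved with N(30, 6^2)
# should give N(20 + 30, 5^2 + 6^2): the means add, the variances add.
g = np.convolve(gauss(x, 20.0, 5.0), gauss(x, 30.0, 6.0)) * dx
x_conv = np.linspace(2 * x[0], 2 * x[-1], len(g))
conv_error = np.max(np.abs(g - gauss(x_conv, 50.0, np.sqrt(5.0**2 + 6.0**2))))

# Product (the measurement step): two Gaussians of equal width multiply
# to a narrower Gaussian whose mean lies exactly in the middle.
p = gauss(x, 20.0, 5.0) * gauss(x, 30.0, 5.0)
p /= p.sum() * dx                      # renormalize to a density
product_mean = np.sum(x * p) * dx      # expected: 25
product_var = np.sum((x - product_mean) ** 2 * p) * dx  # expected: 5^2 / 2
```

The convolution error comes out numerically zero, which confirms that both filter steps map Gaussians to Gaussians; this closure property is exactly what the Kalman filter will exploit.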
[SLAM_Lectures / SLAM_D_18.txt]
so congratulations if you made it that far; you really developed some non-trivial code. Starting from a real robot, you modeled the motion and the measurement and derived all the derivative matrices that were necessary, and then implemented the prediction and the correction step of a full-fledged Kalman filter. And so whenever you have to do this for another robot, or another system in general, you may just follow exactly the same steps: you set up the model for your system transition, you set up the model for the measurement, and then you derive all the Jacobian matrices which are required for the prediction and the correction step of the Kalman filter. In practice, after you have set everything up, the problem is usually to find good values for the various covariance matrices in the filter. Now when you run your code it will produce this kalman prediction and correction text file, and when you load that you will notice several interesting things. For example, in the beginning our robot sits here and it doesn't move for the first few steps, but it sees those six landmarks; as it doesn't move, the system noise is zero, and so in the beginning it will accumulate all the measurements once again, which will then result in a smaller error ellipse. As we move on, you can see we have more or fewer landmarks, until we have only two here, or even none at this position; and you can see at this position our uncertainty in heading and position is relatively large. This gets smaller as we have more landmarks in our view, and as you see we end up with a pretty smooth and globally correct trajectory. Now let's do the following modification to our code: in the Kalman filter loop I put in this line, which means that for a certain position, where I may get zero, one or more observations, I'll just keep the first observation in the list. After we run this we reload the trajectory, and now we obtain the following: the trajectory looks more jagged, and as we move on we see we always take just
one observation if we have at least one, and this results in our error ellipse being larger than in the previous case; but nevertheless our trajectory looks astonishingly good, and certainly better than what we achieved in Unit B using our trajectory correction based on an estimation of the similarity transform. Now if we take the last observation instead of the first, we will obtain this result, which doesn't look as good anymore. Especially in the beginning we have this movement, either due to the fact that this landmark is relatively far away and our scanner has maybe some systematic error, or due to the fact that this landmark's reference position is actually not correct. So globally we are not doing as well as in the previous case. Also notice, for example here as we move along, how the measurement to this landmark influences how my ellipse is oriented. So once again, this is our final solution, and congratulations if you made it successfully through this unit of our SLAM lecture; see you in the next unit
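The modification described above, keeping at most one observation per filter step, amounts to a one-line slice in Python. A minimal sketch (the function and list names are my assumption, not taken from the course code):

```python
def keep_first(observations):
    """Keep at most one observation: the first, if any (a sketch;
    the list name is assumed, not from the lecture code)."""
    return observations[:1]

def keep_last(observations):
    """The variant discussed next: keep at most the last observation."""
    return observations[-1:]

# Works also for steps where no landmark was observed at all:
keep_first([("lm1", 3.2), ("lm2", 1.1)])  # -> [("lm1", 3.2)]
keep_first([])                            # -> []
```

The slice form is preferable to indexing with `observations[0]` because it degrades gracefully to an empty list when there are no observations, instead of raising an IndexError.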
[SLAM_Lectures / PP_02.txt]
so Dijkstra's algorithm can be applied to any problem where we have a graph of nodes connected by edges, the edges have associated costs or weights which are non-negative, and we want to find a minimum cost path from a start node to a goal node. And so, although we imagine this to be a representation of a road network, where the costs associated with the edges are distances or travel times, it is a general algorithm that can be applied to any problem which can be expressed as finding a minimum cost path in a graph. Now we are interested in navigation in empty spaces. So if this is your start and this is your goal, then the optimal path would of course be the direct connection of start to goal with one straight line segment; however, if there is an obstacle, like another car, you would have to go around this obstacle, and finding that path in a continuous space of all possible paths from start to goal may be pretty complicated. However, think of the following: if you make a raster of cells where navigation is only allowed between the centers of the cells, then this will implicitly define a graph, and then we may use the algorithm of Dijkstra to find a route on this graph; and if there is an obstacle we will just disallow those graph nodes, which will then lead to a different route. Now I just said that we subdivide the world into a number of raster cells and, by only allowing movement between the centers of the cells, we define a graph. I did not mention that we have an option with regard to what edges we actually allow. So if we only allow the four neighbors for each cell, meaning every node is connected by four edges to neighboring nodes, and these edges all have the cost one, then the following cost structure results on a grid. Say again we want to go from start to goal, so start is the node with zero cost. Now since this is connected to its four neighbors, we can reach those neighbors with cost one. Going on in our Dijkstra algorithm, we will pick a node with cost one next and check which
neighbors can be reached; these will be those three plus s, but this has been visited already. Then we pick this node, it will add those elements, and so on; then we pick a node with cost two and it will add nodes with cost three, and now you can see this pattern of increasing node cost traveling along a front line. So now if you connect all the nodes with cost five you will see this front line: when you use a four-neighborhood, the expansion of the cost front starting from s will look like this. Now alternatively you may also use an eight-neighborhood, which includes the four neighbors we had previously, each with cost one, plus the diagonal neighbors, which now have distance square root of two, so approximately 1.4. If we apply this to our search on the grid, we will get one for the horizontal and vertical neighbors of s and 1.4 for the diagonal neighbors. Then we will pick a node with minimum cost, which is for example this one here; we will get a cost of two here and a cost of 2.4 going along this diagonal here, and the same will happen here and here and of course here. Now the next element with minimum cost is actually this 1.4 here, which can go down the diagonal for a cost of 2.8; the next minimum element is two, for which we'll get those neighbors, then 2.4, for which we'll get this neighbor, and 2.8, for which we'll get this neighbor. Then three is the minimum cost node; we will get four and 4.4, and I will now fill in all the remaining ones. As you see, the node G, which had previously been reached with a cost of five, is now reached with a more realistic cost of 4.4, whereas its Euclidean distance is actually, four to the right and one up, the square root of 4 squared plus 1 squared, which is approximately 4.1. So you see the distance which we obtained by using the eight-neighborhood, 4.4, is still not the correct Euclidean distance, but it is better than the distance we obtained using the four-neighborhood. And now if you try to draw the front line, you will see, say for a constant cost of four,
you can go here, you also can get almost there, and of course you can go here, and so you see the front line will now have this shape: we get an octagon. So if we use the eight-neighborhood, our search space will have a shape like that when it expands into free space. So now let's try to program this. Before we start programming, I will show you the outcome: you will implement the Dijkstra algorithm, and your implementation will be integrated into a simple graphical user interface which I have provided for you. This works as follows: clicking your left mouse button and dragging it will allow you to define obstacles, whereas clicking the right or middle mouse button will allow you to delete the obstacles again, and clicking clear will delete all obstacles altogether. Now when you press shift while clicking the left mouse button, this will allow you to set the start point, and pressing shift while clicking the middle or right mouse button will allow you to define the end point; and as soon as both points are defined, the routing algorithm will start to run, and it will visualize the set of visited nodes. So what you can see here is exactly the octagon which we have seen on the slide before and which results from a graph defined by the eight-neighborhood structure. If you start to define obstacles here, those will disallow the search in this area, but of course the search will go around those obstacles, and especially if you block the path from start to goal, the search will go around this blockage, which comes at an extra cost, and so you will see that the overall search space expands when the direct path to the goal is blocked. You will also realize that in this present implementation the algorithm is pretty slow, so if you place start and goal at a large distance you will have to wait for a long time; we will fix that later. So here we go, this is the path planning 01a file, and this will be the first programming task. Now let me quickly go through this program. First of all,
make sure that you have installed all the additional libraries needed for this assignment, because otherwise you will get import errors and the graphical user interface will not show up properly. Then let's have a look at the first line here: this is the world extents, which means that all our path planning experiments will take place on a 200 by 150 grid, and if you like you can set this to different numbers; if you make it larger then also the display area will get larger, but in general making it larger will just make it slower and will probably not give you extra insights. This is our map of obstacles, with the following convention: if there is an obstacle, meaning a blocked raster cell in our environment, we will put in the value 255, and if it's free space we will put in a zero. I could have also chosen one here, but instead I use the maximum value of an unsigned 8-bit integer, for reasons that will become clear later on. We will also have a second array with all the visited cells, which will be set later, and this second array will show up in the graphical user interface as the green area which you saw earlier, where green means the pixel has been visited and black means it has not been visited yet. After running the algorithm we will also get the optimal path from start to goal, but we didn't talk about this yet, so we will postpone the discussion until later. Here are a few functions which handle the graphical user interface; they are not important, so if you're interested in what's going on then have a look, but otherwise don't be worried about them. The main thing that happens here is that there's an update callback function which finally calls your implementation of the Dijkstra algorithm, saying it wants to go from start to goal, and it gets this obstacle map. Now let's go down to the main part of the algorithm. First of all, here some movements are defined: this entry means go plus one in x direction and go zero in y direction, and this move is associated with a cost of one, so this
goes to the right and costs one; this goes zero in x and one in y, so this goes up and also costs one; this goes left; and this goes down. Those four possible movements together make up the four neighbors, and here are the remaining four elements for the diagonal neighbors. So this means: go one right, one up, and the cost is s2, where s2 is set to be the square root of two, so approximately 1.4; this means go left and up at cost 1.4, and so on. If you have fun playing with those movements, you may comment out this line, for example, and then the algorithm will be constrained to four neighbors, and you will get a different shape of the search space exploration, as you have seen earlier when we executed the algorithm by hand. So now here is the part you'll have to implement, and for your convenience the main elements of the algorithm are already there, so the main structure is given and you'll have to implement the parts where there is a change: whenever it says CHANGE, you'll have to implement those parts. Watch out for those comments; they are based on the algorithm which was given earlier and should help you to understand the overall structure of the algorithm. So let's go through this from top to bottom. First of all, front is just the start node. Now I can't assign the start node directly, because front should be a collection of elements, a set; well, in this case we use a list, and we put into this list one single element, which is a tuple that contains the cost, which is zero for the start node, and the start itself; in this case start consists of two elements, namely x and y. In the second part the visited array is initialized; this calls the NumPy function zeros, so it's initialized to all zeros, and I need to know the rows and columns of this matrix, which I obtain in the previous line by getting the number of rows and columns of the obstacles array. So I use the dimensions of the obstacles array to define the visited array, and the
visited array is all zeros. For reasons that become obvious later, I use a data type of float, whereas currently it would be sufficient to use some integer, because we just want to mark the visited cells, so we only need zero and one; or we could use a Boolean array and mark it with true and false. So here comes the main loop: while our front is not empty, which we can write conveniently in Python as "while front". First of all we need to get the smallest item from front, which is this list, and after we have found the smallest item we'll have to remove it; since front is a list, you'll have to look up which methods of list allow you to remove an element. Then we check if this has been visited already, and remember that the element we pick from front actually consists of a cost and a node, or a cost and a position; so I just assign this element to the two variables cost, which is now a float, and the position, which is the x y position. Now first of all, after this is assigned, you will have to skip the rest of the loop if the visited array shows you that this position has been visited already, in which case its value will be larger than zero. The next thing to do, if it has not been visited, is to mark it as being visited, so you set it, for example, to the value of one. Now here's a part which you can keep as is: after you have checked all this and marked this node, you just check if this position that you have reached is already the goal, in which case we break this while loop and we are finished, and we return all the visited nodes. But if not, then you will continue, and first of all you will have to check all the neighbors, so you make a loop over all delta x, delta y and delta cost in the list of possible movements; remember the movements were up here, and if you did not comment out anything, these are the eight possible movements which you will consider in the loop down here. For each possible movement you have to compute a new position in x and y, and then
you have to check that this new position is still within the bounds of our world: it is not allowed to get smaller than zero, and it is not allowed to be larger than or equal to the horizontal or vertical extents of our world; the extents are stored in extents zero and extents one, and these are the extents we obtained up here, so these are the bounds of our array. If the point is outside our world, then you skip, which means in Python you use the continue statement to skip the rest of this for loop. Now if it is inside, for convenience I put the new x and new y into a tuple, because we can use that down here, and then we have to check: if visited is zero and our obstacles are not 255 at this new position, then we will append the tuple to our list front. The tuple will be the cost, where the cost is the cost of the element that we picked here from our list plus the delta cost, that is, the cost of the edge connecting our position and our new position; so we have to add up those two values and enter this as the new cost for the neighbor node, together with the position of this new node. So this is the tuple that has to be appended to the front. For example, in the beginning, when we get the smallest element, this will be the start element, the cost will be zero, the position will be the start, and this loop here will run over all the eight neighbors of the start element, and if there is no obstacle it will actually add eight elements to the front list. Below here there's the main program, and you don't have to be concerned about this: it sets up a callback mechanism which allows you to control the graphical user interface, sets up an extra button, the clear button, which allows you to clear all obstacles, and then it initializes and starts the graphical user interface. So now go ahead and program this version of the Dijkstra algorithm
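Putting the pieces just described together, here is one possible sketch of the solution. The variable names follow the walkthrough above, but this is my reconstruction, not the original course code, and it omits the GUI; as noted in the lecture, using a plain list as the front keeps it simple but slow.

```python
from math import sqrt
import numpy as np

s2 = sqrt(2)
# (dx, dy, cost): four axis-aligned moves plus four diagonal moves.
movements = [( 1, 0, 1.0), ( 0, 1, 1.0), (-1, 0, 1.0), ( 0,-1, 1.0),
             ( 1, 1, s2 ), (-1, 1, s2 ), (-1,-1, s2 ), ( 1,-1, s2 )]

def dijkstra(start, goal, obstacles):
    """Expand nodes in order of increasing cost until the goal is popped.
    Returns (cost to reach the goal or None, array of visited cells)."""
    extents = obstacles.shape
    visited = np.zeros(extents, dtype=np.float32)
    front = [(0.0, start)]
    goal_cost = None
    while front:
        element = min(front)        # get the smallest item ...
        front.remove(element)       # ... and remove it from the front
        cost, pos = element
        if visited[pos] > 0:
            continue                # expanded already, skip
        visited[pos] = 1.0          # mark as visited
        if pos == goal:
            goal_cost = cost
            break
        for dx, dy, delta_cost in movements:
            new_x, new_y = pos[0] + dx, pos[1] + dy
            if not (0 <= new_x < extents[0] and 0 <= new_y < extents[1]):
                continue            # outside the world
            new_pos = (new_x, new_y)
            if visited[new_pos] == 0 and obstacles[new_pos] != 255:
                front.append((cost + delta_cost, new_pos))
    return goal_cost, visited

# In free space, going 4 cells right and 1 up takes 3 straight moves
# plus 1 diagonal, so the cost is 3 + sqrt(2), about 4.414.
cost, visited = dijkstra((0, 0), (4, 1), np.zeros((10, 10), dtype=np.uint8))
```

The `min`/`remove` pair on a list makes each extraction linear in the size of the front, which is why the interactive demo feels slow; replacing the list with a heap (the `heapq` module) is the standard fix applied later.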
[SLAM_Lectures / SLAM_D_08.txt]
so what is the dimension of C
[SLAM_Lectures / SLAM_C_05.txt]
so in order to answer that question, let's do the math. What I'm interested in is the probability that I am at the position x given that I did some measurement c. According to Bayes' rule this is the probability of having the measurement given that I'm at the position x (so as you can see, using Bayes we can flip those two variables), times the probability of x, divided by the probability of the measurement. And as you know, this is the same as the sum of the numerator over all possible x values; in order not to get confused, the loop variable here is x prime. So the denominator is just the sum over all possible numerators, and hence just a normalization constant. The result is this, and actually we did something like that earlier when I told you how the manufacturer calibrates the scanner. There I had two figures: one was the manufacturer calibrating the scanner by setting a known x of five meters and then looking at the probabilities that he measures certain values, whereas we are interested in where the scanner is if it gives us a certain measured distance. And so this means for our result: if this is our p of x, which is the prior, and this is our measurement, which is the probability of measuring a value given x, then we can compute p of x given c by multiplication; so this is p of x, we multiply it by p of c given x, and this gives us p of x given c. In the end this curve might look like this, and it is called the posterior because it is the probability of being at x after we incorporate our measurement; and remember we always have to normalize this. This becomes clear because if you are at this position and you measure a distance that leads to this probability distribution, then, since those distributions have approximately the same width, your final solution should lead to something like that. And that is somehow astonishing if you look at those distributions, because you may think something like this will be the result,
right? But that's not true, because this peak here lies in the tail, the very small values, of this other peak, so the multiplied values are very small; and the same holds here: because this peak is in the tail of that curve, the multiplied values are very small. And so, what is hard to believe when you see it for the first time: the values here, even though they are very small, are much larger than those others they have to be compared with, and after multiplication they actually will give you the peak. So you see, here the normalization is very important, because these are small values that in the end lead to a large value; and that is just because in the entire result, the entire distribution, all values are very small, so summing them up gives a very small value, but if you normalize each value by that sum, we will still get a peak, and of course the sum over the entire distribution will be one again. So now let's implement this. What I prepared for you is a main function that is really very simple: it sets a position and a position error and then generates a position distribution, namely in this case a triangle distribution, using that position and error, and then this is plotted. The very same thing is done for the measurement, which now is not 400 but 410, and the measurement error is twice as large as the position error; for this also a triangle distribution is used, using the measurement value and the measurement error. Then finally the multiply function, which is the function you should implement up here, is called; it is given both distributions and returns a position-after-measurement, or posterior, distribution, and this is plotted as well. The position is plotted in blue, the measurement in green and the resulting distribution in red, and if you implement that correctly you should see the following: this is the prior and this is the measurement; the prior is centered at 400 whereas the measurement is slightly off, here at 410, and the measurement is twice as wide
as the prior. Now if you look at the posterior, you see the following: even though the measurement was less accurate than the prior, the posterior is even more accurate than the prior, leading to a higher peak in probability. That's interesting. Now first, please program the multiplication of distributions
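The multiply function asked for here boils down to a pointwise product followed by normalization. A sketch using plain NumPy arrays instead of the course's distribution class; the setup mirrors the example (prior at 400, measurement at 410 and twice as wide), but the concrete half widths and names are my assumption:

```python
import numpy as np

def triangle(center, half_width, x):
    """Discrete triangular distribution on grid x, normalized to sum 1."""
    p = np.maximum(0.0, 1.0 - np.abs(x - center) / float(half_width))
    return p / p.sum()

def multiply(d1, d2):
    """Pointwise product of two discrete distributions, renormalized.
    This is the Bayes update: posterior proportional to p(c|x) * p(x)."""
    p = d1 * d2
    return p / p.sum()

x = np.arange(320, 481)
prior       = triangle(400, 20, x)   # position distribution
measurement = triangle(410, 40, x)   # measurement, twice as wide
posterior   = multiply(prior, measurement)
```

Even though the measurement is twice as wide, the posterior peak ends up slightly higher than the prior's, exactly the effect shown in the plot, and its mean lies much closer to the prior than to the measurement.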
[SLAM_Lectures / SLAM_A_02.txt]
now that we ran our robot through our arena, let's look at motor control. So this is our robot; here is our LiDAR, the laser scanner, which will scan somewhere in that range, and we'll have an invisible area here; and these are the two motors, and the two motors drive these axes, meaning if they go at a certain speed the robot will also go at this speed, and if one of them goes faster, then the robot will somehow make a turn. Now these motors do have wheel encoders, meaning there are encoders that count the number of turns; actually not single turns, they count many ticks for a single revolution. This tick count is sent to our control computer, and our control computer just logs all those values. Here is one of those log files, and this log file is also in the set of files that you have for the current unit; it is called robot4_motors.txt. We made some code at the start of each line, so here we're writing M for motor information, and then all the data. From this data I actually don't need very much: this is a timestamp, so this means at millisecond 204 the motor readings were as follows; this number here is the encoder value for the left motor; we don't need these three values; and this is the right motor wheel encoder. There's actually a third motor, which would be this one, but which we are not using, so all the other values are not interesting. If you want to read this file, just keep in mind what we need: we have to check if this is an M, so this will be the motor readings; we could use the milliseconds here; then we need this value and that value, meaning if we count, this is zero, 1, 2, 3, 4, five and six, so we are actually just interested in the value two and in the value six. Also, if you look at this, the encoder values do not start at zero; they start at a certain number, and then, when the robot moves forward, they start to increment at a certain point in time. So remember: left, that's the second column, and right, that's the sixth column we are
interested in. So now let's try to read out the motor file. In this file we have those lines which are M, then a timestamp, then the position left, then three other values which we are not interested in, and then the position right, both in ticks. First of all, let's open the file; the file is robot4_motors.txt, so we open it, and first of all let's just print out all the values: we say for line in file, print the line. Then we run it, and you can see it prints out those lines that we've just seen on the last slide. So now let's read out the left and the right motor ticks. I start by making a left list which is empty and a right list which is empty, and then I have to split up the line into its columns, so I say sp is a split of the line; in fact, let's just print sp to see what happens. The split command splits up the string into parts, and it will have the M at column number zero, then column number one, 2, 3 and so on, and so we can just grab the columns we'd like to have. So we take the left list, but actually we're interested in the integer and not in the string, so we convert it to an integer; and we append to the right list the sixth column. Let's try this; it works, but we can't see anything, so let's just ask for the left list: these are all the values of the left encoder. Now that we have this, let's just plot it; in order to plot, we import pylab, and then we just plot both lists: plot left list and plot right list. Let's run it, and that's the outcome: these are the values of the two incremental encoders. The left one starts at a little bit more than 20,000, the right one starts at more than 15,000, and then the robot starts to move and they just increment; and we can see that the left one starts at a higher value but in the end ends up at a lower value. So this means the right wheel encoder has more ticks along the entire trajectory than the left one, so the robot does a left turn. Now the absolute tick numbers are not really so meaningful, and so it's hard to interpret the drawings
that we just made, so let us build the differences of ticks. Now we could just program this, but I'll show you something else: there is a class, in the package that you downloaded for this course, which is called LegoLogfile; you can import that, and the purpose of this class is to import all the different kinds of records that we will produce throughout this class. In order to read a file you can just say logfile equals LegoLogfile and then just read it like that, and if you run that, now we have this class, and, for example, we can ask for the motor ticks, and those ticks are the differences from one time step to the next. So let's just print out the first 20 motor ticks, and here they are. As you can see, in the beginning the robot does not move, and then after a while it starts to move, it starts to accelerate, it gets faster; but then here's a little bit of a difference: the left motor goes one tick faster than the right motor. And then we also have something else here: we do have some time lag, and due to this time lag we have zero values spread across our measurements. If I make that longer, you will see this occurs more often; at about every five movements or so we have a zero. Now let's plot those values in order to see what's going on, so I'm adding the import from pylab, and then it is as simple as just adding the plot command down here; now let's see what happens. In this plot we can see the incremental values for the motor encoders, and we see those spikes going down to zero, which is not what we actually want; but we can also see the robot drives straight, and then the right motor gets faster whereas the left motor gets slower, so the robot makes a turn; then it goes straight again, and so on. You also can use this to have a closer look
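The parsing steps described above can be condensed into a small function. This is a sketch of what the LegoLogfile class provides via its motor ticks; the helper name and the sample log values below are made up for illustration, only the column layout (M, timestamp, left encoder in column 2, right encoder in column 6) follows the file format discussed.

```python
def read_motor_ticks(lines):
    """Extract left/right encoder readings from 'M' records and
    return the per-step differences (tick increments)."""
    left, right = [], []
    for line in lines:
        sp = line.split()
        if sp and sp[0] == "M":
            left.append(int(sp[2]))    # left encoder, column 2
            right.append(int(sp[6]))   # right encoder, column 6
    # Differences between consecutive readings, one pair per step.
    return [(left[i] - left[i - 1], right[i] - right[i - 1])
            for i in range(1, len(left))]

# Fabricated sample records in the same column layout:
sample = [
    "M 204 20100 0 0 0 15300 0 0",
    "M 307 20102 0 0 0 15303 0 0",
    "M 410 20106 0 0 0 15309 0 0",
]
read_motor_ticks(sample)  # -> [(2, 3), (4, 6)]
```

Working with the differences rather than the absolute encoder values removes the arbitrary startup offsets (20,000-plus and 15,000-plus in the real log) and directly yields the per-step motion used later for the motion model.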
SLAM_Lectures
PP_01.txt
now welcome to this lecture, which covers some basics of path planning. You will learn about the algorithms of Dijkstra and A*, and finally a path planner that produces trajectories which can be driven by a real car. Before we start, let me show you some of the results. There is a graphical user interface where you can explore the results of all your algorithms. It allows you to set a start node and an end node and then watch how the algorithm finds the path from the start to the end. You may of course change the start or end nodes, and you can interactively draw obstacles, so the path has to go around those obstacles, and as the path length increases, you will see that the number of nodes that are visited also increases. So this is Dijkstra's algorithm, and we will develop this first, and then we will modify it to obtain the A* algorithm, where fewer nodes are explored, so in general the search space is smaller, and it is so fast that you can build up and solve complicated mazes at interactive speeds. Then we will use a potential function to keep the path away from obstacles, or to find a path in really cluttered environments in such a manner that it goes between the objects and not along the edges of the objects. And finally we'll come up with a path planning algorithm which plans trajectories in such a way that a car which can go straight or make a left or right turn with a certain minimum turn radius can actually drive those paths, and we will see how this algorithm is pretty creative in finding alternative ways in case there are obstacles preventing a given maneuver. We will also allow it to go forward and backward, and as we will see, it is then able to make pretty complicated maneuvers, and those maneuvers are not very different from what we would do if we encountered a narrow blocked road ahead and had enough space to turn the car at a nearby parking lot, or, if this is not possible, if we can only go forward and backward, even with a car right behind us. So let's first have a look at
the algorithm of Dijkstra, which is named after the famous Dutch computer scientist Edsger Dijkstra. Now suppose we have a graph consisting of nodes and edges, and we want to find a path from start to goal with minimum length. Concerning the length, we assume that the edges of the graph are annotated with costs, such that for example this edge might have a cost of 40, this one a cost of 50, and this one a cost of 20, and all the other edges have associated costs as well. These edges may be directed or undirected. For the moment you may assume that this is a kind of road network, where the nodes are intersections, the edges are road segments connecting the intersections, and the annotated costs are the distances between intersections, or the time required to travel between two intersections, or some mixture of both. Now the task is to get from the start to the goal with minimum cost. We know that the cost of going from start to start is zero, so the cost of reaching the start node is zero. Now what is the cost of reaching the goal node? Well, in order to reach the goal from the start, we have to pick a path through the graph. Which edge should we pick first? Intuitively, we should pick the edge with the lowest cost first, being this one, and since the edge cost is 20, we reach this node with a cost of 20. Now we know the final cost of reaching start is zero and the final cost of reaching this node is 20. Why do we know this? Because if there were any alternative path in the graph, which we would only detect later on, then the cost of this alternative path would not be smaller than 20. How can I know this? Well, because the first edge in this path already has a cost of 40. And now you will notice this argument only holds if negative edge costs are not allowed. For example, if it were allowed that this edge cost is -10, this is also -10, this is also -10, and this is -5, then the total cost along this path would be 40, 30, 20, 10, 5, so it would be possible to discover a path later on
with a smaller cost than 20. But if negative edge weights are not allowed, then the total cost along this path would be larger than or equal to 40, and so the trick is: once we pick this node here, which has the minimum cost among all nodes that I can reach from start, I know its final cost, and so I never have to look again at the cost of this node. Since this is the case, I can imagine the following: in the beginning I have a set of nodes which I have visited, and this set contains in the beginning just the start node. Also, I know the distance to the start node is zero, so this set is not only my visited set, it's also the set which I know to be optimal. Now, searching for the node which can be reached from the optimal set and has the smallest cost, I obtain this node here, and once I pick it, I know it will be optimal too; it cannot change its cost later on. So my optimal set just gets larger and now also contains this node. Now, in general, the situation at an arbitrary point in the algorithm's execution is as follows: there is a visited set, the start node is part of that visited set, and I have already added zero or more other nodes to this visited set, and all those nodes which I have added are already assigned their optimal distances. Now some of those nodes may have outside connections, that is, there is a direct edge from those nodes which are in the visited set to other nodes which are not in the visited set. Since I know the cost of all those nodes, and I also know the cost associated with the direct edge to a node outside of the visited set, I can also compute the cost of those nodes that can be reached by a direct edge from a node in the visited set. Now what happens next is exactly the same as what we did in the case where our visited set consisted of only the start node: we will look among all the nodes that can be reached from any node in the visited set for the node with the minimum cost, and we will add this node to the set of visited nodes. Then the following will happen: first of all,
we know the final cost of this node is indeed 100. Second, we will have to look at all the neighbors of this node, because it may be the case that now we find nodes that can be reached from our newly enlarged visited set by a direct edge. For example, in this case the cost of this node is 100 and the cost of the edge is 10, so this node, which couldn't be reached by a direct edge previously, can now be reached with a cost of 110. But we have another important case too: this node, that was previously reached with a cost of 200, can now be reached via the newly added node over this edge with a cost of 30, making a total of 130. So we have two important cases here. The first one is the case where we discover a new node which was not previously connected to our visited set, and so we change its cost from infinity to 110. In the other case, we found this node earlier and determined that we can reach it from within the visited set using this edge with a cost of 200, but then this node doesn't get picked right away, and so, while other nodes are being inserted into the visited set, we may discover that the distance gets smaller, until at some point the node is picked and also added to the visited set. So keep in mind those two cases: discover a node for the first time, and decrement the cost of a node that has been discovered earlier but has not been added to the visited set yet. Now, if you look at how the algorithm proceeds, we find that there are actually three sets involved. We first have the visited set, which contains all nodes for which we know the correct final cost already; we have another set which is kind of unknown, consisting of all the nodes which we did not reach so far; and we have this set in the middle, which is the search front of our algorithm, and for all nodes in this set we already know their cost — however, it is not necessarily the final cost, which is only fixed when a node is taken from the front and added to visited. Now keeping the nodes in two separate
sets, visited and front, makes a lot of sense, because the search for a node with minimum cost only happens in the search front set, so we neither have to look at the nodes that are in visited already, nor do we have to look at the nodes which are unknown. Now let's write down this algorithm, and I will do so in a quite informal way to keep it simple. We have a set called front, and initially this contains just the start node, and then we iterate: while there are elements in our front, we get the node n with minimum cost, where "get" means we pick the node in front and then delete it from the front, and then we put it into our visited set, so we say mark the node n as visited. And now, as you remember, I have to update all the direct neighbors of n, so inside this while loop there is a for loop: for any direct neighbor m of n, so any direct neighbor of the node n which we picked. And it doesn't make sense to look into a node which has been visited already, so we make sure that m has not been visited so far, and for any such neighbor m we have to add m to the front. And as you remember, this was only the first case: if it is already part of the front, we have to check if, now that we added n, it has a lower cost. So: either add m, or adjust its cost if m is already in front and its new cost is lower. This is the algorithm due to Dijkstra. Now let's make an example to see how this algorithm works. Say I have this simple graph consisting of five nodes, where this is my start node and this is my goal node, and these are the edges with associated costs of 1, 1, 4, 1, 2 and 4; so this is start, this is goal. Now how does the algorithm proceed? We have our set front, and in the beginning this contains only the start node, and since we also know the cost of the start node, I will add this to my one and only element in front. So this is the situation after initialization of the algorithm. Now comes my loop: while the front is not empty — well, it is not empty, so I pick the node with minimum cost from front, which is the only node,
with cost zero. So I pick it and remove it from the front, then I put it into the set visited, which for the moment we will visualize by crossing out the node as being visited already. Now I have to do a loop over all direct neighbors of the node just added, so let's give them names. One direct neighbor is A; A can be reached at a cost of one, so put that into front. Another direct neighbor is C, and it can be reached at a cost of four, and there are no other direct neighbors. So we jump back to the beginning of the loop, select the element with minimum cost, which is node A, and mark A as visited, and at that very moment we also know the final cost of A, which is one. Then check all direct neighbors which have not been visited yet. Well, one direct neighbor is S, but S is already crossed out because it is in the visited set. The other direct neighbor is B, so I can go to B, and what is the cost? Well, the cost of reaching A is one, and the edge between A and B has a cost of one, so the total cost is 1 + 1, which is two, and there are no other neighbors of A. So we jump back to the beginning of the loop and pick again the minimum element, which is B with a cost of two. So we cross out B and check all neighbors that have not been visited yet. A has been visited already, but G has not; the cost of B was two, and the edge between B and G is four, so for a total cost of six we can visit G. The third neighbor of B is C, and C can be reached via this edge with a cost of one. So B has a cost of two — by the way, I should mark this here, the edge has a cost of one — so I can reach C with a cost of three. But C is already in the front, so what happens is that this entry in the front gets updated, and so we can now reach C with a cost of three. Now we pick the next minimum element of front, which is C, so we cross it out, we know its final cost, and we check its neighbors which have not been marked yet. The only neighbor that is left is G; since C has a cost of three and the edge has a cost of two, G can be
reached with a cost of five. So we see here again we update an element in front, and this is also the final element, which gets picked in the last execution of the loop. So G is marked and can be reached with a final cost of five. If you now look at the optimal path, we can go up here for a cost of one, one, down here, so the total cost is 2, 3 and 5, and this is the optimal path from start to goal, whereas there are a number of other paths, for example 1 2 6, or 4 6, or 4 5 9, and thus we can verify that indeed this path is the one with the lowest cost. Now, in our programming exercises we will actually use a slightly modified version of this algorithm, which is not more efficient but is easier to implement. So let me first write down our previous version. As you remember, down here we have the alternative: add m to the front, or modify the cost of m if it is already an element of front and the new cost is lower. I want to get rid of this complicated part down here, so I'll just leave it away, meaning that if I added it to front already with a higher cost, and later I determine there is a different path to the node m with a lower cost, then I add it again to my front. And I think that doesn't do any harm, because then, when I later scan through front looking for the node with minimum cost, I will pick the element with the lower cost first. But now I have to be aware that, after I pick this, there may still be other copies in front with higher cost, and so I can't be sure that a node occurs only once in front. So, as a tradeoff for the amount of work I have saved myself down here, I have to make sure up here that I handle each node only once, so I have to skip the further execution if node n, which I just picked, has been visited already. This is the tradeoff: down here I don't have to search whether this element is already in front and, if so, whether I have to modify its cost, but up here I have to add a line that prevents me from looking at nodes twice. So now let's have a look at how the execution of
this algorithm works on our previous graph. So again, this is our graph, this is our start, and this is the goal we want to reach, and it has those edges with costs 1, 1, 4, 1, 2, 4. How does our front look? Well, again we start with cost zero and node S. We pick the minimum element in front, cross it out and note the final cost. Then we check the direct neighbors, which are A with a cost of one and C with a cost of four. We pick the minimum element, know its final cost, and check the neighbors of A which have not been visited yet: S has been visited, B has not been visited yet and can be reached with a cost of two. So far our algorithm proceeds in exactly the same way as our previous version. Now let's have a look at B; that's the minimum cost element, so cross it out, the cost is two. We check the neighbors that have not been marked yet, which are G, which can now be reached with a cost of six, and C, which can be reached with a cost of three. Now here is a difference to our previous version: we now also enter this element into the front. So the next thing that happens is we pick C, mark it as visited, and we know its final cost. We check all neighbors and find that G can now be reached with a cost of five. Now we pick again the minimum element from the front, and we find C. Here our additional line of code prevents us from checking the neighbors of C again, because, as we see that C has been marked as visited already, we will skip the rest of the loop. So we will directly go back and pick the minimum element in front, which is now G with a cost of five. So we will mark G as visited, we know the final cost, which is five, we search all neighbors, but they are all marked as visited. Now we go back to the top of the loop and we find there is still one element. We pick this, and again we see, as we marked G already, we will skip the rest of the loop, and our front is empty. So, as you see, we arrived at the very same minimum cost of five; however, we did not have to treat the case where a node is already in
front, where we would have to find it in front and then check whether its cost gets lower via the new path. So that makes it simpler; however, this comes at an extra price, and you see that a total of seven elements were put into our front, whereas in the earlier version of our algorithm we only put five elements in there, namely each node of the graph was put exactly once into front and was taken out exactly once from front
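Both versions above can be sketched in a few lines of Python (a sketch under assumed conventions — the graph is a dict mapping each node to a list of (neighbor, edge_cost) pairs; these are not the course's exercise templates):

```python
import heapq

def dijkstra(graph, start):
    """First version: front holds at most one entry per node,
    and costs are adjusted in place when a cheaper path appears."""
    front = {start: 0}                 # node -> best known cost
    visited = {}                       # node -> final cost
    while front:
        n = min(front, key=front.get)  # get node n with minimum cost
        cost = front.pop(n)
        visited[n] = cost              # mark the node n as visited
        for m, edge_cost in graph.get(n, []):
            if m in visited:
                continue
            new_cost = cost + edge_cost
            # add m to front, or adjust its cost if the new one is lower
            if m not in front or new_cost < front[m]:
                front[m] = new_cost
    return visited

def dijkstra_simplified(graph, start):
    """Modified version: duplicates are allowed in the front, and a
    node that was visited already is simply skipped when popped."""
    front = [(0, start)]               # heap of (cost, node) pairs
    visited = {}
    while front:
        cost, n = heapq.heappop(front)
        if n in visited:               # skip nodes handled already
            continue
        visited[n] = cost
        for m, edge_cost in graph.get(n, []):
            if m not in visited:
                heapq.heappush(front, (cost + edge_cost, m))
    return visited
```

On the five-node example graph from the lecture (edges S–A 1, A–B 1, B–C 1, S–C 4, B–G 4, C–G 2), both versions return a final cost of 5 for the goal G, and the simplified one pushes seven entries into the front in total, matching the count observed above.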
SLAM_Lectures
SLAM_D_12.txt
and finally, for the computations, we need the Kalman gain. What is the dimension of the Kalman gain matrix?
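As a quick shape check on that question (a sketch with assumed dimensions matching this course's localization filter — state (x, y, θ), measurement (r, α) — and a placeholder Jacobian, not the course code), the gain K = Σ Hᵀ (H Σ Hᵀ + Q)⁻¹ must map 2-dimensional measurement residuals to 3-dimensional state corrections:

```python
import numpy as np

state_dim, meas_dim = 3, 2
Sigma = np.eye(state_dim)             # predicted state covariance, 3x3
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])       # placeholder measurement Jacobian, 2x3
Q = 0.1 * np.eye(meas_dim)            # measurement noise covariance, 2x2
K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + Q)
print(K.shape)                        # (3, 2): state dim by measurement dim
```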
SLAM_Lectures
SLAM_F_06.txt
now let's have a look at the observations. Our basic setup was as follows: a robot stood somewhere with a certain heading, so this is the robot's position as reflected by the state, but the scanner has a certain offset, and so if the heading angle is theta, the position of the laser will be XL, YL, and XL, YL simply will be X, Y, the robot state, plus the scanner displacement times the cosine and sine of the heading angle theta. Now the robot observes a landmark, and the bearing angle relative to the heading will be alpha, and the distance, as a readout of the laser scanner, will be r. This way we obtained the two measurement equations: r, where XM, YM is the landmark's position, is the square root of (XM − XL)² + (YM − YL)², and the bearing angle alpha is the arctangent of (YM − YL) divided by (XM − XL), minus theta, the heading angle. Those two equations make up our measurement function h, which was dependent on X, Y and theta, so it is a function of three variables which computes the two measurement values. And so we computed the partial derivatives of h with respect to the state, which we denoted as capital H. This was a two-by-three matrix consisting of the partial derivatives of r with respect to X, with respect to Y, and with respect to theta, and the same for alpha. Those computations were based on the assumption that our landmarks are fixed, so the coordinates XM and YM of the landmark are assumed to be constant, and so they are not part of the arguments the function h takes. But now we have a different situation: now our landmarks become unknown as well, and so our function h changes, and it is now a function of X, Y and theta as well as the landmark's x-coordinate and y-coordinate. So it is important to understand that we do not modify our observation equation at all; the only difference is that previously we thought of XM and YM as being constants, and now we think of them as being variables which are part of our state. So that means we have to augment
our Jacobian matrix by the derivatives with respect to XM and YM, and so we get the following: we get our old matrix, and in addition we get the partial derivatives with respect to XM and YM for both components of our function. So now we need to compute these. First let's have a look at r. r is the square root of the squared differences in x and y. Now let's compute the partial derivative with respect to XM, and this is quite similar to what we did earlier in the Kalman filter unit. Let's denote the term under the square root as q, so let's just define q as (XM − XL)² + (YM − YL)². The derivative will then be one divided by two times the square root of q, times the derivative of the term under the root, which is two times (XM − XL), so this is simply (XM − XL) divided by the square root of q. Now, if we do the same for YM, you see by the symmetry of those terms we obtain a similar result with X replaced by Y, and so these are the first two terms we need to implement. Now let's do the same thing for alpha. As you remember, alpha was the arctangent of (YM − YL) divided by (XM − XL), minus theta, the heading angle. The partial derivative with respect to XM starts with the derivative of the arctangent — remember, this is one divided by one plus x squared — and so we have one divided by one plus the argument squared, times the derivative of the argument, which is minus one divided by the denominator squared, times the numerator, times the partial derivative of the denominator with respect to XM, which is one. And so we obtain −(YM − YL) divided by q, and by a similar computation we obtain for the partial derivative with respect to YM: (XM − XL) divided by q. And so these are the other two results we need. So now let's put together our matrix H. Remember, this will be two by five in the current case, and let's set the following abbreviations: ΔX is XM − XL, ΔY is YM − YL, and q will be the same as in our previous definition, namely ΔX
squared plus ΔY squared. And so our matrix has to be two by five. These are the components regarding r, and these are the components regarding alpha. The partial derivative of r with respect to X is something you can look up: we obtained this value when we computed the H matrix for our extended Kalman filter, and we found this is −ΔX divided by the square root of q; this was ΔY divided by q; and here we obtained −ΔY divided by the square root of q, and −ΔX divided by q. Now the partial derivatives with respect to theta are a bit more complicated: for r it is d, the displacement, divided by the square root of q, times (ΔX sin θ − ΔY cos θ), and for alpha it is −d divided by q times (ΔX cos θ + ΔY sin θ), minus one. Let's have a look at the partial derivatives with respect to XM: this is what we just computed, it is ΔX divided by the square root of q, and for alpha it is −ΔY divided by q; and with respect to YM we have ΔY divided by the square root of q, and ΔX divided by q. So this is the two-by-five matrix H. Now, fortunately, we have already implemented this entire part when we implemented our extended Kalman filter, so the only additional thing we have to implement is this. But now, if you take a close look, you will see the following: this part is actually the same as this, times minus one. So all we have to do in our new implementation is to call the old code, grab that two-by-two submatrix, multiply it by minus one, and put it here. Now, when you implement this, keep in mind that we do not have only one landmark; probably we have more of those, and so our entire matrix H will be two by (three plus two times the number of landmarks). So the matrix will look like this in general: we will have the derivatives with respect to X, Y and theta here in the first three columns, and we will have all the landmarks here, so for every landmark there will be a two-by-two block, and so, say, landmark j will be somewhere
here. And so, if we have an observation between the current state and the landmark j, we will have to compute all those values involving X, Y and theta of the current state and XM, YM of the landmark j, and we will have to put that two-by-three submatrix here, as usual, and copy that two-by-two submatrix here, times minus one — so this entire block times minus one gives this two-by-two submatrix. And remember, in Python indices start at zero, so this will be at index 3 and this will be at index 3 + 2j. So, in order to set up this matrix: compute those values, basically by calling the old function for computing the matrix H, make a new matrix of the required dimension, then copy this part here, copy the negative of this part to the appropriate index, and make sure all the other values are zero. Now let's implement this. I prepared the SLAM correction question file, and as usual we start with a class. I have made two modifications to the constructor: now we also get the standard deviations for the distance and the angle as parameters, and we store them in the corresponding member variables. Then here is the function g we had earlier, the derivative with respect to the state, the derivative with respect to the control, and all this is unchanged, of course. Then here we have the prediction, and you will have to put in here your previous code for the prediction and your previous code to add a landmark. Now here comes the interesting part: this is our measurement equation, and this is just copied from our extended Kalman filter, and this is the partial derivative of our measurement function with respect to the state, and this returns our original two-by-three Jacobian matrix, which we denoted by a capital H. So now here comes the first interesting part, the correction function, and this needs the measurement, which is the range and the angle, and the landmark. However, the landmark is not constant anymore but rather is part of our state, so instead of giving the function fixed landmark coordinates, we now give it
the index of the landmark that is involved in the measurement. And so here in the first line we get the landmark coordinates simply from our state, by grabbing the x and y values from the appropriate indices of the state, and then we call the function that returns the Jacobian matrix. But as this is just the two-by-three matrix and not the full matrix, I have denoted it H3. Now your task is to set up the full H matrix, as we just described, and then, amazingly, all the rest of the code is just copied from our extended Kalman filter without any modification. Now here are the helper functions, and I introduced all of them earlier. Let's have a look at the main program: we have the robot constants as usual; now we have added the constants for the cylinder extraction, and I have set a relatively large radius for the maximum cylinder distance. For our filter constants, these are the errors for the movement, and now I added the standard deviations for the distance measurement and the angle measurement, and if you remember, those used to be 200 millimeters and 15 degrees, and I set them much larger, because then it is much easier later on to understand how the filter works. Here we have our usual start position, here the covariance matrix, and both are no different from our last settings. Here we initialize our extended Kalman filter, now with the added values of the distance and angle standard deviations. Here we now have to read the scan data as well, and not only the motor data, and then here is our Kalman filter loop: first the prediction step and then the correction step. Now, for the correction we have to do the landmark assignment, and this is all done in the get_observations function, which is imported at the beginning of the module. This uses the scan data, the thresholds for detecting the cylinders, the Kalman filter data, and the maximum distance of a cylinder. What it does is it takes each cylinder measurement, determines the coordinates in world coordinates, and checks whether there is a cylinder
close to this point, and obtains the cylinder coordinates from the list of all cylinders that we have instantiated so far, which is in our current state. Now, if this is the case, if there is a cylinder closer than the maximum cylinder distance, it will return the index of that cylinder, and if not, it will return minus one. So, for each cylinder that is detected in our measurements, it will return an observation, which is a tuple that contains the original measurement, so the range and angle, the cylinder in world coordinates, the cylinder in scanner coordinates, which is actually only used for visualization, and the cylinder index, which is either the true index or minus one. In any case, it will return this observation tuple. Now, if the cylinder index is minus one, we call this function, which you programmed previously, to add a new cylinder, and we use the world coordinates returned as part of this observation tuple to instantiate the x and y of this new cylinder, and we get back the cylinder index, which we then use to call our Kalman filter's correct function. So either we get the index of an existing cylinder, then we will use that directly here, or we get minus one, then we will take the index of our newly created cylinder and give that to the correct function of our Kalman filter. Now, down here, we have already introduced all those functions: we write the position and orientation of the robot, we write the covariance matrix of the robot, we write the positions of all landmarks that are currently part of our state and also their error ellipses, and — this is a new thing — we also write the cylinders that we get as part of the observations, in the scanner coordinate system; this is just for visualization. Now, after you implement this, when you run it, it will produce the extended Kalman filter SLAM correction text file. I load this, and you will see the following: the robot is here, here are all landmarks which were already observed from our start position, and then, as we move,
the uncertainties in our landmarks get smaller and smaller, and we get a really smooth trajectory. But let's have a look at this in a moment; please first implement the correction step of the extended Kalman filter SLAM.
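The construction described above — compute the old two-by-three Jacobian, then place it and the negated two-by-two landmark sub-block into a 2 × (3 + 2N) matrix — can be sketched as follows (function names and the plain-list matrix representation are my own, not the course code; the state is assumed to be [x, y, θ, x_m0, y_m0, x_m1, y_m1, ...]):

```python
from math import sin, cos, sqrt

def dh_dstate(state, landmark, scanner_displacement):
    """2x3 Jacobian of h = (r, alpha) w.r.t. the pose (x, y, theta)."""
    x, y, theta = state[0], state[1], state[2]
    x_m, y_m = landmark
    d = scanner_displacement
    x_l = x + d * cos(theta)               # laser position
    y_l = y + d * sin(theta)
    dx, dy = x_m - x_l, y_m - y_l          # Delta X, Delta Y
    q = dx * dx + dy * dy
    sqrtq = sqrt(q)
    return [[-dx / sqrtq, -dy / sqrtq,
             d / sqrtq * (dx * sin(theta) - dy * cos(theta))],
            [dy / q, -dx / q,
             -d / q * (dx * cos(theta) + dy * sin(theta)) - 1.0]]

def full_jacobian(state, landmark_index, scanner_displacement):
    """2 x (3 + 2N) Jacobian: the 2x3 pose block goes into columns 0..2,
    the negated 2x2 position sub-block into columns 3 + 2*landmark_index."""
    n = (len(state) - 3) // 2              # number of landmarks in state
    j = 3 + 2 * landmark_index             # column of this landmark's x
    landmark = (state[j], state[j + 1])
    h3 = dh_dstate(state, landmark, scanner_displacement)
    H = [[0.0] * (3 + 2 * n) for _ in range(2)]
    for row in range(2):
        H[row][0:3] = h3[row]              # copy the 2x3 pose block
        H[row][j] = -h3[row][0]            # d/dx_m = -(d/dx)
        H[row][j + 1] = -h3[row][1]        # d/dy_m = -(d/dy)
    return H
```

Note the sign trick from the lecture appears directly in the last two assignments: the landmark columns are just the negated first two pose columns.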
SLAM_Lectures
SLAM_G_07.txt
now if you run your code and you're worried about the smoothness of the trajectory, keep in mind that we used only 25 particles, so if you don't like that, modify this number, which is set in the main function. Here is a result which I obtained using 200 particles. As you can see, the trajectory is now much, much smoother and looks much more like the desired result. In the beginning all the particles are the same, but shortly after, the 200 particles spread out nicely, and we can see the typical effect in the left turn, where some particles take a stronger left turn. Even though we have more particles, we still have the effect that this landmark is dropped, since it is not observed, as it is occluded by this landmark; shortly after, it is inserted again. And we also see, of course, spurious landmarks in other places; they usually disappear shortly after they are introduced. So this is our final FastSLAM result of this unit. Let me make two additional remarks with regard to the efficiency of our FastSLAM. The first is about the proposal distribution, and here the problem of the presented FastSLAM approach is that our proposal is made only based on the given control, and this may lead to inefficiencies, because if our control is very noisy, this may lead to a wide spread of particles, and only a few will survive in the end, because all the others do not fit very well to the measurements of our robot. In the approach which was presented here we do not care about this, and in the book of Thrun, Burgard and Fox this is also called FastSLAM 1.0, and you will find a different solution, called FastSLAM 2.0, in the book, which modifies the proposal distribution so that the measurements are taken into account. The second remark is with respect to map management. Now, you have seen we take M particles, and say we have up to N map features or landmarks. If you now check our implementation, you will notice our time complexity is in the order of M times N for
resampling: you will find a copy function which copies an entire particle, including its map, in case the particle is picked. Now let's have a closer look at this. We have our particles, where each holds its own position and also the estimated position and covariance of every landmark. In our resampling step, whenever we pick one of the particles, we copy it, so we may have two or more identical copies of the same particle, and so overall, since in resampling we have to pick a total of M new particles, and each particle requires copying all N landmark variables of our map, our implementation is indeed in the order of M times N. Now let's see what happens to our two identical copies here. First of all, during the prediction step they will move differently, which will not affect the map, and then, when a measurement occurs, they will be updated differently. Let's assume that this measurement relates to an existing landmark, so it will affect this mu_k, Sigma_k of this particle, and it will also affect one filter, say mu_l, Sigma_l, of the other particle. Now, if these particles are close, then probably k will equal l, so as you see, we will modify the list of landmarks only in one place, whereas all the other entries will stay the same. Now let's think about putting all the landmarks into a balanced tree. So mu_1 and Sigma_1 would be stored in a leaf of the tree, as well as mu_2, Sigma_2, and so on, and we would use a binary tree, and this would be mu and Sigma, and say this is our particle k, so we would put that into that tree; we would have a pointer pointing to that tree, and whenever we need some element with a certain index, we would now have a logarithmic time complexity for accessing it. So far that's not an improvement, but now say during resampling we picked this element twice: instead of copying it, we just make a second pointer pointing to the root of the very same tree. Now, what happens if we get a measurement? Then, as we know, one of the elements will be updated, whereas all
the rest stays the same now say our measurement would influence this filter of our particle so this needs to be modified where is all the remaining Kalman filters will stay the same so instead of copying everything in order to just modify one element we will do the following we will a new route for a particle and since our element to be modified is here we don't have to modify the right part of our tree so we will keep a pointer to the unmodified right part but you'll have to modify the left part so we insert a new node here as well but here the left subtree also stays the same so we keep a pointer to the existing subtree whereas the right subtree is modified so we insert a new node and then again the left element is unmodified where's the right one is our new element and so as you see instead of copying all the elements which will cost us in the order of n you only have to modify those elements where you can see it's one modification per level of the tree so it's only the path that we'll have to modify and so instead of taking all of em we now have on the order of the logarithm of n and so our conclusion regarding the efficiency is that we can move from o of M times n to complexity of M logon which is much much better as the number of features in the map crows so remember that's the number of particles and that is the number of map features or landmarks there's another issue here namely if this is our particle and if you get a new measurement we have to find out the likelihood that this measurement corresponds to any of our features in the map so in this case we need to make sure that we don't have to compute the likelihood for all of our map features but only for a small subset so that picking the correct landmark in our list of landmarks does not take time linear and the number of landmarks now this is indeed possible so that overall we get an asymptotic time complexity of M log n which is a very good result now this brings us to the conclusions so in this unit 
you understood and programmed a particle filter version of SLAM, and I really have to congratulate you if you managed to program that, because even though the particle filter SLAM is conceptually not very hard to understand, it needs quite an amount of code to realize all the required functionality. In the particle filter SLAM, each particle is one path and one map, so keep in mind: even though usually we only keep the last position and orientation of our robot, each particle stands for all the positions and orientations along the particle's path. Now, one of the most important features was that our map features, the landmarks, are independent given the path, meaning, of course, that the random variables describing the locations of our map features are independent when the path is given, and this has led to our solution to use one independent Kalman filter per feature, which meant that, in contrast to the extended Kalman filter SLAM approach, we have no correlations. As you remember, those correlations were responsible for our covariance matrix of size (3 + 2N) times (3 + 2N), which is quadratic in the number of landmarks. While this is good, on the other hand there's also a drawback: since those dependencies are only implicitly captured by the FastSLAM algorithm, in terms of the diversity of our particle set, the FastSLAM algorithm may not perform very well, especially with respect to loop closing, because due to particle deprivation, after a certain amount of time the particles in our filter will have a common history, and so the dependencies that were captured through the diversity of the particles will be lost. If you're interested in this, have a look at the Probabilistic Robotics book by Thrun, Burgard and Fox. Now, another big plus of FastSLAM is that each particle uses its own data associations, which is in contrast to other methods, for example the extended Kalman filter SLAM: there we had to decide, in each step, on the one and only data association, whereas with our particle filter, each particle has its own individual data associations along the path. And this is a big plus, because it means that our FastSLAM algorithm not only samples across different continuous parameters, but it also samples across different discrete landmark associations. And finally, remember that FastSLAM amazingly solves both problems, the offline and the online SLAM: since each particle represents the entire path plus the map, the set of particles represents the full posterior; on the other hand, we're usually not interested in the entire path, so we just keep the last position and orientation in each particle, and so we used the FastSLAM algorithm as a filter, and this is the solution to the online SLAM problem. Now this concludes our unit about the FastSLAM algorithm. I hope you had as much fun as I did in implementing and running your very own FastSLAM algorithm.
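The path-copying tree from the efficiency remarks above can be sketched in a few lines. This is a hypothetical illustration, not the course code: integer leaves stand in for the (μ, Σ) Kalman filters, and "copying" a particle during resampling just means copying the root pointer, while updating one landmark rebuilds only the path from the root to that leaf.

```python
class Node:
    """A node of an immutable balanced binary tree over the landmark filters."""
    def __init__(self, left=None, right=None, leaf=None):
        self.left, self.right, self.leaf = left, right, leaf

def build(landmarks):
    """Build a balanced tree over a non-empty list of landmark estimates."""
    if len(landmarks) == 1:
        return Node(leaf=landmarks[0])
    mid = len(landmarks) // 2
    return Node(build(landmarks[:mid]), build(landmarks[mid:]))

def get(node, i, n):
    """Return leaf i out of n leaves below node, in O(log n)."""
    if node.leaf is not None:
        return node.leaf
    mid = n // 2
    return get(node.left, i, mid) if i < mid else get(node.right, i - mid, n - mid)

def update(node, i, n, value):
    """Return a NEW root with leaf i replaced; unchanged subtrees are shared."""
    if node.leaf is not None:
        return Node(leaf=value)
    mid = n // 2
    if i < mid:
        return Node(update(node.left, i, mid, value), node.right)
    return Node(node.left, update(node.right, i - mid, n - mid, value))
```

After `new_root = update(old_root, i, n, new_filter)`, the old particle still sees its old map through `old_root`, the new particle sees the updated map through `new_root`, and only O(log n) nodes were allocated, which is exactly the M log N argument from the lecture.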
SLAM_Lectures
SLAM_B_02.txt
now after we have the assignment of points, we're left with the following problem: we have a left list of points, which are the detected landmarks, and we have a right list of points, and we already know the assignment between those points. We now want to find the transformation which maps all the points from the left list to their partners in the right list, and what we want to allow is a shift, a rotation, and a scale factor between those two lists; this is called a similarity transform. So given a left point, we want to rotate this point, we want to apply a scale, and we want to apply an offset, a shift, so that this point is moved to its right partner. Now, the scale, this is a scalar value from R, the real numbers; the rotation, in this case in 2D, has four values, so it's a rotation matrix, and we may use the following matrix, which is well known, and we see the only parameter in this rotation matrix is the angle α; and the translation vector is a vector from R². So overall we have the scale λ, the rotation α, and our translation vector t_x and t_y, and we need to find four parameters overall. Now, in reality the transformed left coordinates are not identical to the right coordinates, because we have noise, so in reality we'll have to optimize the difference between those two vectors, which is often done in a least squares sense. So what we want to do is optimize this sum, which means we want to minimize the squared lengths of the remaining vectors between the left and right set of points after I have transformed the left set of points. Now, how can we determine this? Well, λ is a scalar, but here in R, as we know, we have those cosine and sine functions, and we have the translation, and this formula tells us that this is a nonlinear problem. So the typical solution would be to linearize and then iterate a search for a minimum, and this would mean we need start values, we have to iterate, and we don't know if we will find a global minimum. This is why I show
another method here. So first of all, let us compute the center of mass for each of the two point clouds, so that the left center of mass is 1 divided by the number of points times the sum of all points, and the right center of mass is similar. After having computed the centers of mass, let's subtract them from the original points; this will lead to reduced coordinates, and we will denote them by using a prime. Now, this essentially means that previously we had those coordinates in some global coordinate system, so the points, interpreted as vectors, look like that; now, after we computed the center of mass, our new vectors will look like that, and of course now, if I do the sum over all reduced coordinates, this will be zero, because we moved the set of points so that their new center of mass coincides with the origin, and the same for the right set of points. Now let's go back to the original formula that we had to minimize, and let's put in our new reduced coordinates. If we rearrange this, we obtain this: the λ R l'_i goes here and the r'_i goes here, and what remains is the second part, multiplied out: the means of r and l and t. So if we just denote that as t', then what remains is the following formula, and in order to minimize this original value here, we might as well minimize this sum, which uses our reduced coordinates. So let's minimize this sum. Now, this sum has a plus here, and so this reminds us of the binomial theorem; if we apply this to our formula, we obtain this result: this is the a², this is the 2ab, and this is the b², and the only tricky thing is, since these are vectors, the 2ab is 2 times our translation vector, transposed, times the vector that is obtained by summing this up, and behind here, that is m times the squared length of the translation vector. And here this sum gives us λ times the sum of the R l'_i minus the sum of the r'_i, but the sum of the r'_i, that is zero, because these are reduced coordinates, so the center of mass is in the origin; and the same holds here, because the rotation does not change the sum: this is the same as λ R times the sum of the l'_i, and this also is zero, so the entire term here is zero. So what remains is this term, which consists of the first part and the last part, and we still have to minimize this. Now, if you look at this, this is a sum of squared values, so it is greater than or equal to zero, and the same holds for the second term. If I have a sum of two terms which both must be greater than or equal to zero, then I may obtain the optimum by selecting them to be zero where possible, and in the second case I can do this: t' here is not part of the first term, so I can minimize the overall sum by just selecting t' to be zero. Now, t' being zero means, according to our definition of t', that λ R l̄ minus r̄ plus t, our original t, is zero, and so we obtain t as the center of mass of the right points minus the center of mass of the left points after it has been rotated and scaled. So we already obtained t, and all we have to do now is obtain R and λ. What remains to be optimized is this term; we don't have to worry about the translation anymore, all we have to do is determine the scale and the rotation. Now we'll do this multiply-out trick again: this looks like (a - b)², which is a² - 2ab + b², and I will now do a trick. Instead of optimizing this, I will optimize the following, which means that instead of scaling up the left point cloud and leaving the right as it is, I will scale up the left point cloud only by a factor of the square root of λ, and I will scale down the right point cloud by a factor of 1 over the square root of λ; this makes sure any error that I have in l and r is treated symmetrically. So let's multiply that out. By using this theorem, I obtain λ times the sum of the squared lengths of all the left vectors after they are rotated, and as we know, rotation doesn't change the length, so this is the same as the sum over all squared lengths of the left set of points; then the second part, which does not involve the scale; and the third part. Now my formula is of
the form λ·a + b + (1/λ)·c, and it turns out that to minimize this I have to select λ² = c/a, and b, this term in the middle, does not play a role. So in our case this means λ² equals the sum of the squared vector lengths of the right point cloud divided by that of the left point cloud, or λ equals the square root of the quotient of the squared right vector lengths divided by the squared left vector lengths. So this is our second formula, and remember, since the b here didn't have an influence on the value of the scale, we can determine the scale independently from the rotation and the translation. Now all that remains is this part here, and in order to minimize the sum, as there is a minus here, we have to maximize this part. So now the task is: the sum of the right transposed vectors times our rotation matrix times the left vectors should be maximized. There's no translation anymore, there's no scale anymore; all we have to do is find the rotation matrix. Let's multiply this out for a single point, and let's just drop these primes for a moment. We have (r_x, r_y) transposed, so as a row vector, times the rotation matrix, times (l_x, l_y) as a column vector, and if we multiply this out, we get this vector times (r_x, r_y); let's multiply that out too. So overall we get this scalar value, since it's a matrix multiplied from the right by a column vector and from the left by a row vector, so we get a scalar. Now let's group this: we have the cosine times (r_x l_x + r_y l_y), and we have the sine times (-r_x l_y + r_y l_x). Now let's remember we have to sum that up, so our overall sum, which was the sum over r_i transposed times the rotation matrix times l_i, is the same as the cosine times the first sum plus the sine times the second sum. Now we have to find the cosine and sine so as to maximize this entire sum, but this is really easy, because this (cosine, sine), that's a unit vector, multiplied by those sums. Let's think about this: this is a vector, and we're looking for this unit vector here that maximizes the scalar product with this vector. You see, the scalar product is this times the length of the green vector, and we need to find a unit vector, and as is easily seen, the vector that maximizes the scalar product points in exactly the same direction as the green vector. And so we find out that the cosine and sine we're looking for, this is just the vector here divided by the length of the vector: we have to take this vector, compute its norm, and just divide by the norm of this vector. So this is the solution to the third part: in order to find the cosine and sine of my angle, all I have to do is build those sums and normalize the two values by the length of the overall vector, and if I want, by using that identity I can recover the angle α. So here's the recipe. You are given two point sets, left and right, with point indices, say, between 1 and m. Then compute the following: compute the center of mass for the left and the right point set, then compute reduced coordinates, the l' being (l'_x, l'_y) and the r' being (r'_x, r'_y), and then set up the following sums, with the index running over all points: compute the cosine sum, which is the first element in the vector that we just built, and the sine sum, and also compute the lengths of the vectors, which are actually the squared lengths of the vectors, of course all sums starting at zero. Now use all the formulas that we had to compute the scale, rotation and translation. The scale equals the square root of rr divided by ll; remember, those were the squared lengths summed up for the right and left point clouds, and that's exactly what we're computing here, so this is the first result. The cosine and sine, which we just defined, well, these are just the cosine sum and the sine sum divided by their normalizer, so this is the rotation. The translation, remember, this was r̄ minus λ R l̄: the right center of mass minus λ times the rotation matrix, which is (cos, -sin; sin, cos), times the left center of mass, which is the third formula. Now, as you see, to compute this I need the
right center, which I computed right in the beginning, and the left center, which I computed right in the beginning; I need the λ, but I computed that already here; and I need the rotation matrix, consisting of the cosine and sine, which I computed here; so all I need is there. This is our third formula, and the function you'll have to implement should return a tuple made of the scale, the cosine, the sine, the translation in x and the translation in y. I just see I forgot all the primes here, so don't worry. Now, sometimes, instead of a similarity, you are interested in a rigid body motion, which means that you don't want a scale factor to be applied, so you want λ to be equal to one, and you can achieve that easily, because instead of computing λ this way, you can just replace this by λ equals 1. So in the final implementation I ask you, depending on the flag, to just not compute this value but instead set λ equal to 1, and we'll still estimate the best transformation in the least squares sense, however for a rigid body motion instead of a similarity transform. So now let's implement this. I prepared some code for you, which is slam_04_c_estimate_transform, and it is based on the slam_04_b code. First of all, this function finds the cylinder pairs, and that is what you have implemented in your previous code, so just insert your previous code here. Then, down here, there's the function estimate_transform, which you should implement, and it gets a left list, a right list, and a flag which is called fix_scale; if this is true, then you should set the scale factor of your estimate to one instead of estimating it using the formulas from the previous slides. As you remember, in order to estimate the transform you need to compute the center of mass for the left and right list, and I implemented that for you already: this function compute_center gets a point list, just sums up the x coordinates and the y coordinates, and then returns the sums divided by the number of elements, and here I called this already, so you can just use those two lines. Now, after you compute everything, remember to return a tuple of five values, which is the scale factor λ, the cosine of the rotation angle, the sine of the rotation angle, and the translations in x and y, and this estimated transform should turn the left list into the right list, and not the opposite. Down here I implemented a function which applies a similarity transform: given the transform, which is a tuple made of scale, cosine, sine, t_x and t_y, this function will transform a point p by just applying the scale, cosine and sine to the coordinates, adding the translation, and returning the resulting (x, y) tuple. In the main function, only a few things have changed down here: we still update the pose, we compute the coordinates of the cylinders in the scanner coordinate system, transform them to the world coordinate system, and then find the cylinder pairs. Then here we call your new function estimate_transform, which gets the world cylinder points and the reference cylinder points, and these list comprehensions here are made to pick only the points that are part of a pair that was found in cylinder_pairs. So if you have six reference cylinders, and maybe you found four world cylinders, but in these cylinder pairs you only get three pairs, because the fourth one didn't find its partner, then this construction will build two lists with three points each. And here I'm setting fix_scale to true, because I do not want the transform to scale the result. After you find the transformation, we transform the world cylinders using that transformation, so that we can see the final position of the detected cylinders in the world coordinate system after applying the transformation that fits them best, in a least squares sense, to the reference cylinders. So now try to implement this function up here, and don't forget to insert your code to find the cylinder pairs from your previous solution.
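Putting the whole recipe together, here is a sketch of what estimate_transform could look like. The variable names are my own and are not necessarily identical to the course skeleton; points are assumed to be (x, y) tuples with left_list[i] paired to right_list[i].

```python
from math import sqrt

def estimate_transform(left_list, right_list, fix_scale=False):
    """Least squares similarity transform (lambda, cos, sin, tx, ty) that
    maps the left point list onto the right point list; a sketch of the
    recipe from this unit."""
    m = len(left_list)
    if m < 2:
        return None  # rotation is not determined by fewer than two points
    # Centers of mass of both point clouds.
    lcx = sum(p[0] for p in left_list) / m
    lcy = sum(p[1] for p in left_list) / m
    rcx = sum(p[0] for p in right_list) / m
    rcy = sum(p[1] for p in right_list) / m
    # Sums over reduced (primed) coordinates.
    cs = ss = rr = ll = 0.0
    for (lx, ly), (rx, ry) in zip(left_list, right_list):
        lx, ly = lx - lcx, ly - lcy
        rx, ry = rx - rcx, ry - rcy
        cs += rx * lx + ry * ly      # cosine sum
        ss += -rx * ly + ry * lx     # sine sum
        rr += rx * rx + ry * ry      # squared lengths, right
        ll += lx * lx + ly * ly      # squared lengths, left
    la = 1.0 if fix_scale else sqrt(rr / ll)   # scale lambda
    norm = sqrt(cs * cs + ss * ss)
    c, s = cs / norm, ss / norm                # cos and sin of alpha
    # t = right center - lambda * R * left center.
    tx = rcx - la * (c * lcx - s * lcy)
    ty = rcy - la * (s * lcx + c * lcy)
    return la, c, s, tx, ty
```

As a quick sanity check, feeding in a left cloud and the same cloud scaled by 2, rotated by 90 degrees and shifted by (1, 2) should recover exactly λ = 2, (cos, sin) = (0, 1), t = (1, 2).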
SLAM_F_08.txt
and there's no simple answer, because, as you've just learned, if you have only a few landmarks, this is good for the landmark assignment, because with only a few it's hard to confuse them; on the other hand, it means for the localization that with only a few landmarks we will have fewer observations, so our accuracy is low, or, if there are too few, we will have parts in our trajectory where there are no observations at all, and as we then only run the prediction, our position error will grow quickly. Conversely, if we have many landmarks, then our localization is good, however it's easy to confuse them. Now, in order to have many landmarks for a good localization and still not confuse them, we need distinct landmarks. Remember that in our case all landmarks were the same, because in the real world they actually have exactly the same dimensions, but in practice they may be different in dimension or, for example, in color. If we had used a camera in addition, we would have been able to detect the different colors of our landmarks, and then we could have added the color as a component in the description of our landmark. And this does not need to be a color, it can be anything: it can be the dimensions of the object, it can be a detector that detects whether a surface is flat, or detects the curvature of a surface, or whether there is an edge, or even a corner. Likewise, it may also be that our camera takes an image of the object and we are able to detect feature points for which we can derive a high-dimensional vector of descriptor values. So in general the idea is to use landmark signatures, so that our landmark is described by its position and also its signature, and if our robot sees a new landmark, and the question is whether this should be assigned to an existing landmark, it does not only use x and y but also the signatures to decide this. Now, when do we actually assign landmarks? Our algorithm worked in the following way: the robot scanned and detected a cylinder, and then we looked within a certain radius for the closest landmark, and if we found one, we assigned it, and if not, we generated a new one. Now, this assignment used a fixed radius, and in our case we experimented with a radius of 50 cm and also 40 cm, and we found out that the algorithm is brittle with respect to this parameter. So let's think about this fixed radius: does it actually make sense? Imagine the following: the robot is here, and it measures this cylinder and that cylinder. Now, here there's a landmark in our map which we have observed very often, and consequently the error ellipse for this landmark is pretty small; on the other hand, say there is also a landmark which is a bit further away from this measured cylinder than this distance, but the uncertainty of this landmark is really large. Obviously, in this case I should not assign the landmark to this measurement, whereas in this case I should assign it, and in detail, upon deciding this, I should also take the uncertainty of the robot into account. It is pretty easy to do so, and it's called maximum likelihood landmark assignment, and it uses the following: z is our measurement, and our function h, our measurement function, takes the state and predicts a measurement, so the difference of these is the difference between the real measurement and the predicted measurement, and this is a measure of how well the landmark fits our measurement values. Now, I don't use this directly, because that wouldn't take into account those variances; rather, I form this product: z minus h, transposed, times the inverse of a matrix P, times z minus h without the transpose, where P is a covariance matrix which I obtain using the following formula. It is H, the Jacobian of our measurement function, times our predicted covariance of the state, times H transposed, plus Q. It's easy to see why this makes sense, because Q is the measurement error, captured in this covariance matrix, and this is our error in the state, so if I multiply from the left by H and from the right by H transposed, I just propagate this error through the function h; this is error or variance propagation, and so what comes out is the error in the measurement that is due to my error in the robot's position and orientation, and I add those up. So basically this part is due to my error in the state, whereas this part is the error which results from my measurement; I bring those two together, compute this value, and then use a threshold to decide when to accept the measurement based on this error, so this should be smaller than a threshold. This is also called the Mahalanobis distance, and you have seen that earlier, because in the normal distribution, as you remember, the probability density function was something like a constant times e raised to the power of minus one half, and here we had our (x - μ) transposed times the inverse of our covariance times (x - μ), so thresholding this value essentially means setting a threshold in a Gaussian distribution and accepting all values which are in this range. So this is called the maximum likelihood landmark assignment. There's one more thing I want to mention regarding landmark assignment: this is the provisional landmark list, and it works as follows. Say our robot is here, and now it sees a landmark for the first time; now we will not put this landmark into our state vector as a standard landmark, rather we will say it's in a provisional landmark list, but still we will give the landmark its associated covariance matrix. If the robot moves on and observes the landmark again, then our covariance matrix will decrease, but we will take the landmark into our final landmark list only after several measurements indicate that we have found the landmark consistently, so in this case we will put it into our final landmark list in the state vector. Now, how can we handle that in practice? This is actually relatively easy, because the idea is: the landmark is still put into the state vector, but the equation is modified, so that, for this landmark in the provisional list, the
observation equation that is set up only handles the position of the landmark as an unknown, but not the robot's position and orientation. So in this case the robot will influence, by subsequent measurements, the accuracy of the landmark, but the landmark won't influence the robot's position as long as it is in the provisional list; it is handled just in the same way as a final landmark, only we keep a flag, and due to this flag we use a slightly different observation equation. So this brings me to the conclusions. What we have learned this time is SLAM, the technique that gave the lecture its name. We learned there's a full SLAM problem, where we compute the posterior over all states and assignments of landmarks, and we learned this is usually computationally infeasible. So we learned about the online SLAM, which is easier to handle, and this uses a deterministic computation of landmark assignments: instead of providing the full posterior over all states and assignments, we assign on the fly using a deterministic algorithm, and this is also its main drawback, because, as we saw, it's brittle with respect to landmark confusion. Also, even though the online SLAM is less complex than the full SLAM, it still has a huge update complexity that grows with the number of landmarks, because the size of the state vector grows with the number of landmarks, so it is potentially unbounded. Nevertheless, online SLAM has been used by many groups with considerable success, and as you have seen, for a small problem it has worked out perfectly. So congratulations! This time you have learned about the online SLAM, and you have implemented the extended Kalman filter version of the online SLAM, which for the first time solved the problem of localizing our robot without having a map in advance. So now, for the first time, we can just put our robot somewhere, and it will build up a map and localize itself in the map it has just built. Congratulations again: you made the step from localization to mapping. We will elaborate on this topic further next time, and I hope you join me for the next unit.
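The maximum likelihood assignment test described in this unit can be sketched as follows. This is an illustrative sketch, not the course code: the function and variable names are my own, z is the measurement, z_predicted is h applied to a candidate landmark, H is the measurement Jacobian, Sigma the predicted state covariance and Q the measurement noise.

```python
import numpy as np

def mahalanobis_dist(z, z_predicted, H, Sigma, Q):
    """Squared Mahalanobis distance of the innovation z - h(x) under
    P = H Sigma H^T + Q, i.e. pose uncertainty propagated through the
    measurement function plus the measurement noise."""
    P = H @ Sigma @ H.T + Q
    innovation = z - z_predicted
    return float(innovation @ np.linalg.inv(P) @ innovation)

def assign_landmark(z, predictions, H, Sigma, Q, threshold):
    """predictions is a list of (index, z_predicted) for the candidate
    landmarks.  Returns the index with the smallest Mahalanobis distance,
    or None if even the best candidate exceeds the threshold, in which
    case we would start a new (provisional) landmark."""
    best_index, best_d = None, threshold
    for index, z_predicted in predictions:
        d = mahalanobis_dist(z, z_predicted, H, Sigma, Q)
        if d < best_d:
            best_index, best_d = index, d
    return best_index
```

Thresholding the squared distance corresponds to cutting the Gaussian at a fixed probability level, so a well-observed landmark (small P) gets a tight acceptance region while an uncertain one gets a wide region, which is exactly the behavior the fixed 50 cm radius could not provide.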
PP_06.txt
we now have a pretty cool implementation of Dijkstra's algorithm: set the start, set the goal, and now you will see not only the visited nodes but also the actual path that is taken from start to goal. Let's set a different goal, and now let me show you something interesting. Say we go from here to there, then we have this total cost, but as you see, Dijkstra's algorithm not only expands towards the goal, it also expands, of course, the same amount in every direction. And so if I put an obstacle here, this will not change anything regarding the total cost; it changes the search expansion down here, of course, but if you watched here, this frontier didn't actually change. Now, this changes when I start to force the minimum path to get longer: then not only is this path longer, but also, as you have seen if you watched carefully, the set of visited nodes expands in all directions. So this means it gets very costly if the path to the goal gets longer; especially, say you have a kind of dead-end road, then you see the search space expands, so that for this slight detour here we have to expand a huge number of cells in this left area. Also, let me show you something else which is quite interesting: say our goal is here, then we get this path; now, if I block this path, you would assume that the set of visited cells gets larger in all directions, but it doesn't. This is because, due to our distance measure, all those paths have exactly the same distance, until we hit this upper bound here, in which case the set of visited cells expands again. Now, just for fun, let's do our maze again, and indeed, as you see, the algorithm works and shows us the shortest path from the start to the goal. Now let's come back to this situation where we go from the start to the goal, and we see that the algorithm expands the nodes not only in the direction towards the goal but also in the opposite direction. This seems somehow strange, because by the very moment you're almost here, you also look at nodes which are, for example, here, and it seems not possible that, when you're almost at the goal here, another node which is expanded at the same time back here will still have a chance to come up with a path that reaches the goal, when this node is so far away; in fact, the direct distance from here to here would be about two times the distance from start to goal. So how can we improve our search so that nodes are expanded in the direction of the goal and not so much in the direction away from the goal? Let's have a look. Let's first look at the Dijkstra algorithm once more. At a certain point in time, when we started at the start node, we arrive at a situation like this, where this is the set of visited nodes, and all the nodes which are close to the boundary have a similar cost. Now, if we ask ourselves whether we should add this node next to the set of visited nodes, we check for the minimum cost; well, the cost in this case is the cost of the path from that node back to the start. So, using Dijkstra's criterion, we will have a cost of this node, which we will call g for the moment, which is the cost from the start to this node, and this is kind of backwards looking, because if this is the goal, then obviously the algorithm is expanding in the wrong direction, which is due to the fact that the distance to the goal is not part of our computation here. But it makes sense to use this cost, because it is actually computed by the algorithm and is exactly known, whereas I don't have any information regarding how far it is from this node to the goal, because I didn't see the goal yet. So now let's think about a different solution; let's make a greedy solution. Say I start here, and my goal is to get here, with nodes and edges being around here everywhere. What about the following: when looking at the direct neighbors of s, I pick the one which gets me closest to g, and then, starting from that node, I check its neighbors, and again I pick the one that gets me closer towards g; again I check the neighbors, pick the closest one, and again pick the closest one, and I reach my goal. Now, if you look at the boundary of my set of visited nodes, you see that we expanded far fewer nodes than we would have expanded had we used the Dijkstra algorithm. So in the greedy case we use the following cost, which we'll just call h; well, that's the distance to the goal, but we don't know the distance to the goal, so it is an estimated distance to the goal, and what we used here is the direct line, without knowing how the underlying graph actually looks. Now, we expanded far fewer nodes here, but, on the other hand, is the result still optimal?
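The two frontier orderings can be compared in a small sketch. This is an illustrative toy, not the course code: a 4-connected grid of 0 (free) and 1 (blocked) cells, where "dijkstra" orders the frontier by g, the exact cost from the start, and "greedy" orders it by h, the estimated (here Manhattan) distance to the goal.

```python
import heapq

def search(grid, start, goal, mode="dijkstra"):
    """Returns (cost of the path found to goal, number of expanded nodes).
    Greedy typically expands far fewer nodes, but its path is not
    guaranteed to be optimal."""
    rows, cols = len(grid), len(grid[0])
    def h(n):  # estimated distance to the goal
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    g = {start: 0}           # exact cost from the start
    frontier = [(0, start)]  # priority queue of (priority, node)
    visited = set()
    expanded = 0
    while frontier:
        _, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        expanded += 1
        if node == goal:
            return g[node], expanded
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g[node] + 1
                if (nr, nc) not in g or new_g < g[(nr, nc)]:
                    g[(nr, nc)] = new_g
                    prio = new_g if mode == "dijkstra" else h((nr, nc))
                    heapq.heappush(frontier, (prio, (nr, nc)))
    return None, expanded
```

On an empty 5x5 grid from one corner to the other, Dijkstra expands essentially every cell before popping the goal, while the greedy ordering marches almost straight towards it.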
SLAM_C_07.txt
so let's find that out. Let's define a function f(x), which equals e, the Euler constant, raised to the power of minus one half times ((x - μ)/σ)², where μ is the mean and σ is the standard deviation. Now, if we plot this function, we will have the following: (x - μ)/σ is zero, independently of σ, when x equals μ, so here you have x and f(x), and here at μ our exponent is zero, and since there's a minus here, this is the largest exponent which we can get, so it is e raised to the power of 0, which is 1, and so here at μ we have the value one. If we draw the values to the left and right, we get this typical bell shape, and remember, those values are never zero, so we have tails that go to plus and minus infinity. And there's another important point, which is x = μ ± σ, because for those values the squared term (x - μ)² divided by σ² is 1, and this happens here, at the inflection point, where the curve goes from a left turn into a right turn, and again here, where it goes from a right turn into a left turn, so at those points x is μ ± σ. And now here's the question, what do you think: in order for f to be a distribution, the following needs to be fulfilled: the integral from minus infinity to plus infinity of f needs to be one. Now, what do you think, is this true?
SLAM_Lectures
SLAM_D_05.txt
Now let me ask you a second question. This is our degenerate error ellipse, and we know that this half-axis here is zero. I would like to know the extent in the other direction: is it (A) σₓ, (B) √2·σₓ, (C) √4·σₓ, or (D) √5·σₓ? Please choose the correct solution from A, B, C or D.
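One way to explore a question like this numerically is to compute the eigenvalues of a covariance matrix directly, since the half-axis lengths of the error ellipse are the square roots of the eigenvalues. The sketch below assumes a specific degenerate setup — a fully correlated pair y = x, which is one way an ellipse collapses to a line — so treat it as an illustration of the method rather than the lecture's exact construction:

```python
import math

sigma_x = 3.0
a = c = sigma_x ** 2  # var(x) = var(y)
b = sigma_x ** 2      # cov(x, y) for y = x (full correlation)

# Eigenvalues of the symmetric 2x2 covariance matrix [[a, b], [b, c]].
mean = (a + c) / 2
radius = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
lam1, lam2 = mean + radius, mean - radius

# Half-axis lengths of the error ellipse are the square roots of the eigenvalues.
half_axes = (math.sqrt(lam1), math.sqrt(lam2))
print(half_axes)  # one axis collapses to 0, the other is sqrt(2)·sigma_x
```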
Introduction_to_Robotics_Princeton
Lecture_8_Princeton_Introduction_to_Robotics_Randomized_motion_planning_RRTs.txt
All right, I think we can get started. Here's a quick reminder of the concepts we covered in the past two lectures. The main set of ideas has to do with motion planning, specifically motion planning in discrete spaces. So far in this module we've covered four algorithms, all of them graph-search algorithms. Two lectures ago we looked at BFS (breadth-first search) and DFS (depth-first search); these give us feasible motion plans — by feasible I mean they get you from point A to point B without colliding with obstacles. In the previous lecture we described two more: Dijkstra's algorithm and the A* algorithm. These give you optimal motion plans: you define some notion of optimality — some cost function — and these algorithms give you both feasibility and optimality, a way to get from start to goal without collisions that is also optimal, however you define optimal. These are extremely powerful, and hopefully you saw that power in the examples from the previous lectures. But they have a pretty important shortcoming, and the main one is the assumption that everything is discrete. If you're thinking about motion planning for an actual robotic system like a drone, you're operating in a space that is inherently continuous, not discrete, so for a continuous planning problem we need to discretize our space somehow. Does anyone see what the issue with that might be, say for a drone? In principle you could discretize and apply these algorithms — but what's the challenge?

Exactly right — there's a term for this: the curse of dimensionality. Imagine you have some d-dimensional space and you discretize each axis — each dimension — with B bins, i.e. B points along each axis. What you end up with is B^d points in your graph-search problem, which is exponential in the dimension, at least with this naive discretization scheme. A two-dimensional space gives B², a three-dimensional space B³, and so on. So for even a moderately high-dimensional continuous planning problem, the number of vertices in your graph-search problem gets extremely large — at some point these numbers get silly, larger than the number of atoms in the observable universe. That's the main challenge.

And actually the problem is worse than it might initially appear. You might say: what's the big deal — we don't care about particularly high values of d; my drone lives in the real world, the real world is three-dimensional, so why do I care if it scales poorly for large d? The reason the curse of dimensionality bites even for a drone that nominally operates in a three-dimensional environment is the geometry of the robot. So far, in both lectures, we've made the assumption that the robot is a point — it has no physical extent. That was the assumption in the previous two lectures when we discussed graph search and discretization. Things get much more interesting when you take the geometry of the robot itself into account: in reality your robot has some non-trivial shape — it's not a point, it may not be a sphere, it might have some complex geometry.

Here's an example that will help motivate these concepts. Imagine your robot is operating in this room — a bounded environment — and there are three obstacles in it. The starting point for the robot is over here, and the goal is over here. The robot is a rectangle — not a particularly complicated shape, but it turns out even this is pretty challenging. The robot can translate and also rotate to change its orientation, so it has three degrees of freedom: the center-of-mass location (x, y) and the rotational degree of freedom, the orientation. Let's think about how the robot might plan to go from the starting configuration to the ending configuration, with the center of mass at point B; the orientation at the end doesn't actually matter, so let's just say it wants to reach point B with zero orientation.

Here's a simple idea that sometimes works — in this case it doesn't, but it's instructive to think about. Approximate your robot with a circle (more generally a sphere, but we're working in 2D): the smallest circle that encloses the shape of the robot, and give its radius a name, say r. What does this allow us to do? We can inflate the obstacles by a margin equal to r. Take one of these rectangular obstacle shapes and make it larger: extend every point of its boundary outward by the margin r. You can then think of this inflated obstacle as an obstacle for a point robot: we no longer think of the robot as having physical extent — the robot is a point, and the obstacles are inflated by r.

What's the advantage, and what's the problem? The advantage: we're back to the setting of a point robot, which is something we can handle — the assumption we've made so far. The disadvantage, of course, is that we've made the problem infeasible. The motion planning problem as drawn here is actually feasible: the robot can move like this, take a horizontal orientation, squeeze through the gap, and reach the goal. But if we inflate each of the three obstacles by the radius r, we just end up with a wall, and there is no way for a point robot to get from the start to the goal.

Let me make sure the idea is clear — any questions? In this case it doesn't work, the way I've drawn the picture, but it turns out this is a pretty powerful and widely used idea, and it's something you'll use in the next lab when we do motion planning for the Create robot. It can work if the obstacles are relatively well separated; but if you have obstacles that are close together, then you really need to think about the actual shape of the robot to solve the motion planning problem.

OK, so here's a better, much more important idea: configuration space. The configuration space is pretty much what it sounds like — the space of configurations of the robot. In this example, with this three-degree-of-freedom system, the configuration space corresponds to all points (x, y, θ): x and y are the center-of-mass location of the robot, θ is the orientation, and any particular (x, y, θ) specifies a particular configuration of the robot. The configuration space C is just the space of all these tuples (x, y, θ).

One note here: in this example C is not the same as R³ — it's not three-dimensional Euclidean space. What's the main difference? Exactly — θ is what makes the configuration space different; the topology of this space is not the same as R³. Specifically, the angle wraps around: zero orientation is the same as 2π, so as the rotation keeps increasing, at some point the robot is back at its original orientation — which would not happen if you were just moving in three-dimensional Euclidean space. The point is that the configuration space is interesting even before we think about obstacles; we're just defining the space of all possible configurations the robot could be in. Any questions on the definition so far?

OK, let's think about how obstacles fit into the configuration space. Let's define a set we'll call C_obs ("obs" for obstacles): the set of all (x, y, θ) such that the robot, in configuration (x, y, θ), is in collision with some obstacle. If I visualize the configuration space with axes x, y, θ, each point corresponds to a particular configuration of the robot, and for every point we can visualize where the robot is and ask whether there is a collision with one of the three obstacles. C_obs is the set of all configurations where the robot is in collision with some obstacle — in general a subset of the configuration space C. Questions? [Student: so (x, y) represents the position of the center of mass, and θ the orientation?] Exactly — every point in this three-dimensional space has an instantiation as a physical configuration of the robot, and we're defining the subset of all configurations that result in a collision with an obstacle. Good — other questions?

OK. We're going to visualize what these obstacle sets look like in configuration space, and it turns out — maybe you can roughly intuit this — they look very complicated, even in these relatively simple two-dimensional examples. But what's the advantage of thinking about things this way? We've again reduced our problem — motion planning while accounting for the extent, the geometry, of the robot — to planning for a point robot. The idea is to find a motion plan, a path, for a point in configuration space such that the path avoids C_obs. Say this is the configuration space and this is some C_obs (it won't actually look like this — I'll show you what it really looks like in a bit): the robot is in some starting configuration, which we can call A, and wants to get to some goal configuration, which we can call B. This is now just a motion planning problem for a point: find a path in configuration space that avoids the set of configurations resulting in a collision with an obstacle. This is a really important, maybe slightly subtle idea, so let me pause — any questions? [Student question about the earlier example where the final orientation didn't matter.] Good — the more general version of the planning problem has the goal being not a point but a set. Maybe this is an exercise: in that example where I said the orientation doesn't matter, what would the goal set look like in configuration space?

[Student: a line.] Yes — a line that goes through the goal (x, y) and is vertical: any θ is allowed. That's exactly what it is. Good — other questions on this idea of configuration space and how we reduce the problem of thinking about the geometry of the robot to planning for a point? [Student: earlier we said the robot has knowledge of the environment.] Yes — C_obs encodes that knowledge: the obstacle locations are given to us, and in principle we could construct C_obs given knowledge of exactly where the obstacles are. So far we're still assuming we know the obstacle locations.

All right, so in principle we could just discretize: discretize the configuration space and apply graph-search algorithms like A* or any of the others. But this is challenging to do in practice, and there are two main challenges I want to highlight. Conceptually things are nice — the picture is nice — but actually implementing what I just said runs into two pretty serious problems.

The first is that the configuration space can be high-dimensional — potentially very high-dimensional. Even though the ambient space your robot operates in is two- or three-dimensional in the examples I'm drawing here, the configuration space can be much higher-dimensional. Imagine an n-link arm — a robot with n links; I've drawn one with five links here, and there are some obstacles. The motion planning problem is to get the arm from this initial configuration to some final configuration — maybe there's an object here that the arm wants to grasp, so it tries to squeeze through the gap between the obstacles and the ground to reach that object. So: what is the configuration space for this example — too easy or too hard a question? [Student: S¹ × … × S¹.] Exactly — n copies of the circle S¹, because we're looking at angles: each joint corresponds to one degree of freedom, each joint angle goes from 0 to 2π around the circle, and every joint is a separate degree of freedom, so you have n copies. [Student: since the link lengths are fixed, you don't need (x, y) for each link?] Right — maybe the confusion is because I didn't describe the setup clearly: you can control each joint separately, but the link lengths are fixed, so the n joint angles fully specify the configuration. You still need all n angles — but that's n numbers, not something exponential. By S¹ × … × S¹ I mean n copies of the circle; another way to write it is [0, 2π] × [0, 2π] × …, the Cartesian product taken n times. (Geometrically the resulting n-torus is not exactly the same shape as other familiar surfaces, and these things are hard to visualize — the easiest way is just to think of [0, 2π] producted with itself n times.)

The point is that this can be pretty high-dimensional: a seven-link arm has a seven-dimensional configuration space, and in general n dimensions for an n-link arm. So that's the first challenge: if you want to discretize this space, you hit the curse of dimensionality — the number of points in your discretized graph-search problem is exponential in the dimension of the configuration space, and you end up with a gigantic graph-search problem that's not feasible to solve in practice. The second challenge is that constructing, or even computationally representing, the set C_obs — the subset of configurations that result in a collision — can be extremely hard. It's not even clear what kind of representation you should use to describe C_obs, because this set can be really complicated, and I'm going to show you some pretty cool visualizations of that.

To set this up: the robot up there is just a triangle — again a three-degree-of-freedom system; I was drawing a rectangle before, so just switch that in your head to a triangle, same setup. (x, y) is the centroid location — the black dot on the triangle — and θ is the orientation. Initially we'll just visualize the configuration space with no obstacles, and under pure rotations you move along the vertical direction — sorry, we don't have the audio — okay.
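As an aside, the discretization blow-up described above is easy to put numbers on: with B bins per joint angle, an n-link arm's grid has B^n cells. A tiny illustrative sketch (B = 100, i.e. 3.6° resolution per joint, is an arbitrary choice):

```python
# Discretizing each of the n joint angles in [0, 2*pi) with B bins gives B**n
# grid points -- exponential in the number of links.
B = 100  # bins per joint angle (3.6 degrees of resolution)

for n in (2, 3, 5, 7, 10):
    print(f"{n}-link arm: {B ** n:.1e} configurations")
```

Already at seven links the grid has 10^14 cells, which is why naive discretization of the configuration space is hopeless.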
yeah so that's showing the physical configuration that's showing on the left is showing the uh the point in the the configuration space or points in the configuration space so as the robot rotates uh the X Y location is fixed and so you're going up right so the the up Dimension the vertical Dimension is the orientation at Theta here's a more complicated path as here X and Y are changing and Theta is changing as well and depending on which way let me just play that again depending on which way the orientation is changing you're either going up or or down in the configuration space so here there's like one kind of point of the triangle that's fair um and the triangle is kind of rotating about that point so X and Y uh are going in a circle and Theta is constantly increasing uh and so the the path that you get in configuration space is a helix right so if you just looked at it from the top you you'd see a circle but if you take into account the changing at Theta as well you get a helix yeah I guess questions on those before we look at the obstacle ones let me just play the Helix again pair of rotations the vertical Direct yeah I think Helix maybe the clearest one too uh clear signal she will want to visualize foreign so the next clip is going to show the obstacles as well and that's where things get more geometrically complicated even in this relatively simple scenario where everything is cleaner all the motion is in the plane and the the obstacles are just like two-dimensional obstacles so the portion that kind of shaded and red on the left in the configuration space uh I think that's where the robot is not fully installation and then when it gets its collision and that's kind of the middle portion um like over here and then back here again like the robot is not in Collision but it's just like touching the the edge of the crops set the set of configurations where it is in collisions thank you yeah here's another example uh so robot is just kind of on the boundary 
right so it's just always just like one point that's touching it then at some point uh it does go into the as the outset and then it comes out again yeah so this is showing the full uh the arms um for just one obstacle right so there's only this one like pulling middle Optical on the on the right and this is the the set of all configurations uh that have the robot in collision with that obstacle so I guess the goal here is not to get you to visualize necessarily like exactly why it looks like precisely like that but just the point here is that it looks complicated right and like even constructing it uh or like representing it like how would you represent them it's not convex it's not a polygon it's just like a complicated uh kind of shape uh like maybe you could uh approximate it with a union of polygons or unions unions of like spheres or something but it's not quite uh natural and here's the most interesting one this is a bunch of obstacles there's not even that many right it's like fire or something but the the set of configurations uh like this the offset uh yes there's like very very complicated all right yeah I guess questions on um so you can imagine now doing this for like an end link arm right yeah so this was just a three-dimensional configuration space if you have a 10 dimensional configuration space obviously these sets are going to be even more complicated and hard to represent and if we discretize that space uh solve a graph search problem uh on that disk drive space that's going to be just a gigantic Mass computationally okay so yeah I guess what's the questions on this good I'm just confused back through the bottom was that just for the sake of the video or mathematically what it like yeah so here because the the why sorry the the Z Dimension uh the vertical Dimension is the orientation uh so the robot goes um yeah actually I guess the question is whether let me see so why was that that was it was going back oh the one before okay yeah like over 
here with this one yeah yeah um so why does it go oh why does it go back up uh or go from up to to down so I think the top point is like 2 pi and then the bottom point is is zero and so those are the the same uh configuration um so yeah if you want to go uh so let's say we're going from zero like to 30 degrees like so on uh like up to like 360 degrees and then you wanted to go beyond uh then yeah you're kind of like going up and then you can say you're going back down again which is at the same point and then up again so that yeah that's how it's been visual is good other questions all right so yeah I guess what's the the solution so like I said conceptually um things are nice right we can like mathematically like we find these sets we can say that we're going to do some planning on the the configuration space but actually doing a computationally uh is uh is challenging uh so that's where this algorithm called the rapidly exploring randomized tree rrt algorithm uh comes in so this is one of the most popular uh algorithms or variants of this basic algorithm that I'll describe today are like among the the most popular emotional planning algorithms for doing motion planning in continuous spaces for pretty complicated systems um so our views essentially get around both of these challenges so definitely the the challenge of uh representing the configuration space and to some degree the dimensionality challenge as well um so this is a kind of historical note they were introduced around the year 2000 so a little more than 20 years ago and they became popular like very very quickly so within just a few years people were using it for uh all sorts of different emotional planning problems and part of the appeal is that they're very simple to implement I guess as you'll see in the the next homework assignment um yeah they're like relatively straightforward to uh like Implement and kind of understand in terms of the algorithmic uh procedure and they get around the technical 
challenges um with the approach I was describing where you discretize your configuration space um just one uh note I guess is that what I'm going to describe here today is the the most like basic versions of kind of the vanilla like version of the rrt uh there are I think probably hundreds of of different variants of RTS so when I was a grad student uh like at that time when he went to conferences there were like whole sessions on like RT like blah like RTV apps just like hundreds and hundreds of variants of authorities um and I'll say a little bit about some of the the important variants uh but at least for now we're going to focus on the most kind of basic uh vanilla uh variant of the the RT okay um and just I guess one final Point um so unlike the algorithms that we've uh described so far which are these Graph Search algorithms uh rites uh operate kind of directly in the continuous space so they don't discretize at least initially so there's some kind of crystallization that happens but at least initially you can think of the emotional planning algorithm operating in the continuous configuration space okay so I guess this is a reference if you're interested I'll write it down here as the chapter 5.5 in the uh the planning algorithms book by Steve Miller is actually the like one of the creators of the RT algorithm he's also the one who wrote the planning algorithms books if you're interested in learning more I guess look at chapter 5.5 of his book okay so I'm gonna go through the steps of the RT algorithm uh pictorially and then I'll write down the the algorithm a bit more formally so the setup here is that you have two obstacles those are the the stats that are shaded in red uh you have a starting configuration which I'm going to call Q subscript a and a goal configuration Q subscript B um so you should really think about this um like visualization as corresponding to uh configuration space um I guess for the sake of visualization I'm gonna describe the 
algorithm for a point robot uh that's um that's trying to like navigate through the space uh but in your head you should think about configuration space and I'll make that connection a bit more precise once we understand the basic structure of the algorithm okay so the first iteration of the algorithm the first step in the algorithm uh you just sample some random configuration um so ignoring the obstacles just some random configuration so we're going to call that Q subscript or indeed random you then look at the line segment uh that connects uh QA the starting configuration uh to Q rank so this is the line segment again in configuration space and then you extend along this line segment so you start from QA you extend by some like parameter D that let's say it's just like fix this is a step size that we're going to fix before like running the algorithm um you then check whether Qs is in a collision or not so when you do this extension that extended point we're going to call Qs we're going to check whether it's in Collision or not if it's in Collision we're gonna throw it away um and then we're going to kind of repeat if it's not in collisions and that's that's how it is in the pictures is not in collision with any of the obstacles then you keep that extended point um and so what we're going to do is basically incrementally legitimately build a tree so particular kind of graph that doesn't have any any Cycles um so right now we've just built a tree that has two vertices so QA and Qs and an edge that connects uh like those two words okay good so for now um let's just say that we're checking that the point uh Qs is in Collision uh but you're absolutely correct what we really should be doing I'll say a bit more about that later what we really should be doing is checking whether that whole line segment is in Collision or not but just yeah for Simplicity imagine that we're checking just just whether Qs is in Collision question sure yes yeah definitely let me just go 
through it again I'll go through a few more iterations that because it should become clearer but yeah Q ran there's just some random configuration um and I guess by random I mean you've picked some probability distribution here you can just think of like uniformly random over the space corresponding to the slide so you look at the line segment that connects uh QA to qram um and then you just extend along that line segment by some distance uh that we're calling D that distance is this one like tricks parameter in our algorithm um and so that's that's how we got the Qs questions so Q ran and could be in an obstacle Q Rands have been an obstacle yeah uh so we're not gonna check whether Q Rand is in an obstacle or not and the thing we're checking is whether Qs like once we do the extension uh whether that was an obstacle or not okay so yeah I guess let's look at some uh some more iterations so we repeat this process so iteration two of the algorithm uh again we sample some random configuration uh ignoring the obstacles again we look at the line segment except what we're going to do now is we're going to look at the point in our existing graph which right now just has these two uh two points um we're going to look at the closest point closest to Q ramp um so right now it's like this vertex over there that's the closest to Quran so we're gonna call that closest point Q near we're going to look at the line segment again that connects Q near to Q ran and then we're going to extend by that same like parameter d uh and that extended point we're going to call it Qs question yeah yeah so we're uh sampling this random configuration we're going to look at the the closest point in our existing tree to that random integration we look at the line segment that connects this Q near the closest point to a q ran we do this extension operation again we check whether Qs like this extended point is in Collision or not if it's not in Collision we add that to our graph we add that to our 
tree. So that's where we are at the end of the second iteration. Yes: for the purpose of drawing pictures I'm visualizing everything in Euclidean space, but really you should think of this as happening in the configuration space. There's one thing I haven't discussed in much detail yet, which is how we do the collision checking. If everything is planar like this we can just check whether a point is in collision with any of these shapes. In general there's a collision-checking problem that needs to be solved: Q_s defines some configuration of the robot, and we need to check whether that configuration is in collision. I'll say more about that, but first let's just understand the basic mechanism of the algorithm; I'll discuss the implementation details in a few slides. Okay, so this is the end of iteration two. We now have three vertices in our tree and two edges. We do this again: we sample a random configuration, we look at the nearest point in our existing tree to that random configuration and call it Q_near, we look at the line segment that connects Q_near to Q_rand, and we do the extension. At this iteration Q_s is in collision, so we just throw it away; we don't do anything and revert back to the tree we had before this iteration. Questions on that step? Okay, let me do a few more. Another random configuration, look at the nearest point, extend, not in collision, so we keep it. Like I said, Q_rand could be in collision; we're just ignoring the obstacles when we randomly sample a configuration. No, go ahead, sorry. When you chose Q_near, was it that second one, or the one closer? Because they're kind of close. Yes, maybe the way I've drawn the picture, yeah, it's a
little bit tight. I think the one I called Q_near here is actually the nearest one, but I was just eyeballing it, so it could be that whichever one is truly nearest gets used; I think it's okay. Even Q_A? Yeah, even Q_A, anything in our existing tree. I think I might have an example in a bit. Okay, here it is: this is a new Q_rand, and now the closest point is the one over here, so that's the one we call Q_near, and then we extend again, check whether it's in collision, and add that to our tree. Question: how is Q_s chosen? Why not just pick a Q_rand and then go directly to it? That would be a potentially really large step. What's happening here is that Q_rand, in a sense, is just helping us pick a direction to explore in, and then we explore from the closest point on the tree in that general direction, while keeping the amount by which we grow the tree small. Otherwise you'd be taking really large steps, the path would end up looking like a bit of a jumble, and you might not even find a feasible motion. So the distance by which we do the extension is constant here, at least in the way I've described this vanilla RRT so far. Okay, that's all the iterations I wanted to discuss explicitly. Good questions. Yes? Yeah, I'll come to that in just a second. That was the same question as before, but I have it explicitly on the slides in a bit, so let me just go through it there. Maybe questions on the steps of the algorithm, other questions? All right, so the first step is you initialize your graph, which is really going to be a tree, with
just one vertex: the starting vertex, the starting configuration. Then you do this while loop until you get close to the goal. You're not going to hit the goal exactly, because we're just sampling random configurations, but at some point we'll grow the tree enough that some point on the tree is close enough, by some threshold, to the goal configuration. The steps are what I described: Q_rand is a randomly sampled configuration, just some point in configuration space; Q_near is the vertex in your existing tree that's closest to Q_rand; we have this extension operation where we look at the line segment that connects Q_near to Q_rand and extend along it. If Q_s is in collision you throw it out; if it's not in collision you add that vertex to your vertex set, to your tree, and you keep going until you get close to the goal configuration. Here's a visualization of what this looks like when you just grow the tree on its own, without any obstacles. People have studied theoretically what these trees do, and there are tons of papers on this: they have this space-filling type property. Right at the beginning you see that it explores in a whole bunch of different directions, and then it starts getting more detailed; it starts filling in the gaps. You can formally prove that, with high probability, it covers the large open regions in your configuration space first and then covers the smaller open regions. Sorry, yes, go ahead. Is this the same form of RRT that we've talked about? Because it seems like it's exploring different parts of the space
simultaneously, whereas in our example it seemed like we were exploring one branch at a time, but here there are many branches. Let me see. I think the visualization is happening quite quickly here, so it might look like it's doing a bunch of exploration simultaneously, but that's just because each iteration happens very fast in the video. It is the same version that I described. The reason it looks like it's exploring in different directions simultaneously is just because of where Q_rand gets sampled; based on what the closest point is, that's the direction you explore in, and if you do that quickly enough it looks like it's exploring in lots of different directions. Question? Yeah: we do know what Q_B is, right? So what happens if the tree makes a straight line toward it and just keeps extending that way? Well, the way I've described the algorithm, Q_s would have to be close to Q_B, the goal configuration, for you to terminate. Like I said, there are many different variants; one modification you could make is to check whether the line segment between Q_near and Q_rand is fully collision-free, and if Q_rand is close to the goal, then you can just say you're done. Yeah, that's definitely one modification you can make. Here's a more interesting example: this is the RRT actually exploring in a space with obstacles, and you can see it found a path pretty quickly; it's the red curve over there. Can we play that again? Yeah, within just a few iterations it finds a path from the start to the goal. All right, so now, yes, good. Yeah, this is one of the modifications you could make. I think the step size, oh, on this one?
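Putting the steps from the last few paragraphs together, the vanilla RRT loop might be sketched like this. This is my own sketch, not code from the course: the function names and the use of NumPy are assumptions, and, matching the simplified version described above, only the extended point q_s is collision-checked (not the whole segment).

```python
import numpy as np

def rrt(q_a, q_b, in_collision, sample, d=0.1, goal_tol=0.2, max_iters=10_000):
    """Vanilla RRT: grow a tree from q_a until some vertex is within
    goal_tol of q_b.  `sample()` returns a random configuration and
    `in_collision(q)` is the user-supplied collision checker."""
    vertices = [np.asarray(q_a, dtype=float)]
    parent = {0: None}                          # child index -> parent index
    for _ in range(max_iters):
        q_rand = sample()                       # obstacles ignored when sampling
        i_near = min(range(len(vertices)),      # nearest existing vertex to q_rand
                     key=lambda i: np.linalg.norm(vertices[i] - q_rand))
        q_near = vertices[i_near]
        step = q_rand - q_near
        dist = np.linalg.norm(step)
        if dist == 0.0:
            continue
        q_s = q_near + (min(d, dist) / dist) * step   # extend by at most d
        if in_collision(q_s):
            continue                            # throw q_s away and resample
        vertices.append(q_s)
        parent[len(vertices) - 1] = i_near
        if np.linalg.norm(q_s - np.asarray(q_b)) < goal_tol:
            return vertices, parent             # close enough to the goal
    return None                                 # gave up: the RRT is not complete

def path_to(vertices, parent, i):
    """Recover the start-to-goal path by walking parent pointers from vertex i,
    the bookkeeping mentioned later in the lecture."""
    path = []
    while i is not None:
        path.append(vertices[i])
        i = parent[i]
    return path[::-1]
```

For instance, in an empty unit square (`in_collision` always false) with uniform random sampling, a few hundred iterations are typically enough to reach a corner goal.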
Okay, yeah, they might be decreasing the step size at every iteration, and that could make you explore more quickly initially and then take smaller steps later on. Yes, let's talk about some of the different variants. So this is the step-size question: we need to specify what the step size is, and you can think of that as a hyperparameter of the algorithm. Like we were just discussing, one possibility is to make the step size change with the iterations: initially you take larger steps, later on smaller steps. There are many different ways of playing around with the step-size parameter. And going back to your question, well, two people asked this: we ignored the fact that points along the line segment connecting Q_near to Q_s could be in collision. You could have a case where both Q_near and Q_s are not in collision, but somewhere along the line segment there's a point that is. So really what we should be doing is checking that the entire line segment is collision-free. A simple way to do this is to discretize the line segment into a whole bunch of points and check that those are all not in collision. In certain cases you can exactly check whether the entire line segment is in collision or not, and you'll see that in the next assignment: when you implement the RRT we'll do the good version, the correct version, where we check that the entire line segment is not in collision, not just whether Q_s is. All right, so here's the point I really want to emphasize; this is the reason the RRT has worked so well. RRTs don't require an explicit construction or description of the C_obs set, the obstacle set in configuration space. If you
think about the steps in the algorithm, all we need to do, the most fundamental operation, the most basic operation, is to check whether the robot is in collision at some particular configuration. We're sampling this Q_rand, we're generating this Q_s, and then we're asking: at this Q_s, is the robot in collision or not? It turns out that if I give you a specific configuration of the robot, and the obstacles have some specific geometry and locations, then you can do that collision check very efficiently. Actually, a lot of the algorithms for efficient collision checking came from the video game industry. You can imagine lots and lots of video games have points for not colliding with obstacles and penalties for colliding with things, so a lot of effort went into really fast collision checking for video games; even simulating the physics in video games requires collision checking. So there's lots of software and lots of really clever algorithms that work very efficiently for collision checking given a specific configuration of the robot and a specific configuration of the obstacles, which is precisely what the RRT needs. We're not coming up with some explicit polygonal or other construction of C_obs; we never represent that set explicitly. We just rely on this ability to do collision checking. Questions? Yes: so we just have the Euclidean representation of the obstacles, and when we're exploring a new point in the configuration space we only need to transform that point into the Euclidean space? Yeah, and the reason this is simpler is because it's just one point, one specific configuration, whereas with C_obs you're thinking about all the configurations that
result in a collision, and that's a much more complicated set. Other questions on that point? I think that's the main thing I really want to highlight. Here's a video that maybe illustrates it. We're back to the piano movers problem: someone implemented the RRT for literally moving a piano in this apartment, and I think this is showing the steps in the RRT algorithm. For visualization purposes earlier, I was just drawing points and everything was planar; here actual configurations of the piano are being sampled, and we're checking whether things are in collision or not, which, again, you can take my word we can do pretty efficiently. And that's the path it found at the end: a path that gives you the configurations the piano must take to get from some initial configuration to a final configuration without colliding with obstacles. I think this makes sense, but just to confirm: this algorithm gives you the physical path, but not necessarily the timing? Yes, exactly: this is just doing feasible motion planning. I'll mention a version that does a kind of optimal planning as well, but this one is just giving you a feasible path, and we'll see that feasible paths can be very weird and very far from any notion of optimality. Let me show you what one can do with RRTs. This is a group at MIT that was using RRTs for motion planning for a drone in these indoor environments. This is the path the RRT found, actually some variant of the RRT, and then you'll see the drone actually executing that trajectory. These are pretty cluttered spaces, and as you can imagine the geometry of the drone really does matter. And I'll emphasize again, there's an assumption
here, which is that the obstacle locations are known. We haven't come to the version yet where obstacle locations are unknown, but still, it's pretty impressive: if I tell you where the obstacles are, the RRT algorithm finds a feasible plan and you can get your robot to execute it. Just a couple more details. We can get back the actual path from A to B by keeping track of parents as you grow the tree, the exact same process as in BFS, DFS, Dijkstra, and A*. And there was a question about optimality: the paths you get from an RRT can be pretty jagged, because you're randomly sampling directions and growing the tree in these different directions. Here's an example. This is a PR2, a kind of humanoid robot, and I think the motion planning problem here is just to get the arms to something close to grasping the mug that's on the table; on the table there's also the planning-algorithms book. So this is what it does. All right, so definitely feasible, it got there eventually, but it's not really optimal. Yes, question: could you just look at the jagged edges and try to smooth them? Yes, and that's something you can definitely do: you find a rough plan with an RRT and then maybe fit a spline or apply some other kind of smoothing to smooth it out. There's a different variant of the RRT, and like I said there are probably hundreds of variants, but one of the really important ones, maybe the one you would want to use in practice, is called RRT*. It's kind of analogous to A*: RRT* returns approximately optimal paths, in contrast to the standard version of the RRT that I described, which just gives you a feasible path. And nowadays there are
really good software libraries implementing many of these algorithms. I think OMPL, the Open Motion Planning Library, has implementations of the RRT, RRT*, and beyond. In this course you should implement it yourself and make sure you understand it, but beyond this course, if you're interested in using RRTs or RRT*, that's something I would recommend looking into. So, approximately optimal; approximately because all of these algorithms have some random component, and as you take more and more iterations you get something that gets closer and closer to optimal, under certain assumptions. Another important variant is what's known as the bidirectional RRT. In the version of the algorithm I described, you grow the tree from the starting configuration, but you could imagine growing two trees, one from the starting configuration and another from the goal configuration, and terminating when they connect somehow. This is just to give you a flavor of the different possibilities, some of the other variants of the RRT. Okay, so RRT*: this is back to our PR2 planning problem, and this is what RRT* does. I think this video is actually from the original paper on RRT*. That's kind of what you'd expect: it just gets to the goal. And here's a comparison of RRT with RRT* on the two-dimensional planning problem; you can see that RRT* is slowly refining its path to get more and more optimal. Here the optimality criterion is just distance: we want to find the shortest-length path from start to goal, and the vanilla RRT is obviously not giving you something optimal. And then here's the side-by-side comparison on the PR2 planning problem, RRT* versus RRT: you can see the RRT is just doing this random dance thing, but it gets there eventually. Questions on this? Go ahead. With the RRT, is
it possible that the right hand gets there much after the left? So it depends on exactly how you set up the problem. If you just say you want both hands to be close to the mug, then it could be that one hand gets there quickly and the other hand later; that's definitely a feasible plan, and it's something the RRT could find. Other questions? Yes: why was it still exploring all these spaces after it found a path? Because it seems like after you find a path you would want to just refine that path. Not necessarily. There could be multiple qualitatively different ways of getting to the goal: say there are two ways around an obstacle, one to the left and one to the right, but one is slightly better; maybe going left around the obstacle is slightly shorter, maybe there's a slight asymmetry in where the obstacle is placed. In that case, because of the randomness of the algorithm, you might first find a path that goes right around the obstacle, but later on you might find another one that goes the other way, and you refine that instead. So that's the reason you keep searching as the algorithm iterates. The last thing I'll talk about, we have five more minutes, is this notion of completeness, which is really central to motion planning and something people have thought about for decades in the motion planning literature. Here's a concrete question: suppose there exists some path from the initial configuration A to the final configuration B; will an RRT find it? Is it guaranteed to find it? Sorry, let me just define this notion first and then we can discuss it. This thing is called completeness: a planning algorithm is complete if it finds a path from start to
goal, and does so in some finite amount of time, whenever such a path exists; and if the path doesn't exist, the algorithm terminates in a finite amount of time and returns failure, telling you there's no way to get from start to goal, as opposed to just running forever. So what do you think: is the RRT complete or not? Because Q_rand is chosen, let's say, uniformly at random, you're not going to sample something you've sampled before; that has probability zero of being exactly the same. You might sample something close, but not exactly the same. Yeah, so is the RRT algorithm complete, or does it run forever? Yeah, exactly: it's going to keep trying, and it's not going to know that a path doesn't exist. The discrete search algorithms we described in the past two lectures, BFS, DFS, Dijkstra, A*, those are complete, and you can prove they're complete for the discrete version of the problem, the graph search problem: if there's a path, they'll find it in a finite amount of time; if there isn't one, they'll terminate in a finite amount of time and tell you there's no path. But RRTs are not complete: they may, like you said, run forever if there's no path from start to goal. There's a different notion of completeness, which goes by the name of probabilistic completeness, which says that if a path exists from start to goal, then the probability that the RRT algorithm finds that path converges to one as the number of iterations increases. So as you iterate through the algorithm, the probability that you find a path, if one exists, converges to one. This is under some technical conditions; I haven't stated what the conditions are, but under some conditions, which you'll get to explore
in the next assignment, where you think about the RRT under those assumptions, you can prove probabilistic completeness. So it's not always the case that the RRT is probabilistically complete, but if you make some relatively mild assumptions, assumptions that have to do with the separation of obstacles, then you can prove that the RRT is probabilistically complete. All right, questions on this? Go ahead. So the RRT was not complete because if there's no path it might just run forever, right? It's not going to terminate in a finite amount of time. But I thought completeness was about when a path exists? No, it's both of these conditions: a planning algorithm is complete if it finds a path from start to goal when one exists, and also terminates with failure when one doesn't. Okay, good. I'll see you next week.
Introduction_to_Robotics_Princeton
Lecture_22_Princeton_Introduction_to_Robotics_Convolutional_neural_networks.txt
all right, let's go ahead and get started. We're going to continue our discussion of deep learning; today is going to be the last lecture on supervised learning, and the next lecture, based on the poll that we took, is going to be on reinforcement learning. In the previous lecture we continued our discussion of multi-layer neural networks, and we used the TensorFlow Playground to see the kinds of rich nonlinear functions that multi-layer networks allow us to efficiently parameterize. Then we talked about how we can train multi-layer networks using stochastic gradient descent, and we looked at overfitting: what happens when you just minimize your training loss. You can do that pretty well, but potentially at the cost of not doing so well on the thing we actually care about, which is the test loss. And then I discussed two classes of approaches for dealing with overfitting. We talked about regularization in the previous lecture, where you augment your training loss with an additional loss, a regularizer, which sort of encodes Occam's razor: you try to find functions that are simple in some sense while also minimizing your training loss. And I mentioned that in this lecture we're going to talk about a slightly different approach, which is to be more clever about the neural network architecture that we pick, one that has fewer parameters; that's another kind of structure, another kind of simplicity, that potentially overcomes this challenge of overfitting. Recall the multi-layer neural network architecture we were looking at: we have some input, which is a vector, so you can think of it as an image, and then we have a bunch of these intermediate representations; at each layer, at each level of processing, we apply this canonical form of a function, sigma of Wx plus b, where W is the weight matrix, b is a bias term, and h is the hidden
representation that we get. Then we do this a bunch of times until we get to the output layer, and that's our predicted label. To see one of the challenges with this, imagine that your input x is an RGB image of pretty low resolution; here I'm assuming it's 200 by 200, which is really, really low resolution (the Crazyflie cameras are higher resolution, something like 360 by 480). But even with this low resolution, the input dimension is 120,000: 200 times 200 times 3, that's the dimensionality of x. The weight matrix W1 for the first hidden layer is going to have dimensions D1 by 120,000, where D1 is the dimension of the first hidden representation, and we're probably going to want many layers. So the total number of parameters you end up with, when you parameterize multi-layer neural networks the way we've done so far, is really, really large; with larger images or deeper networks it only gets worse, and even if D1 is something reasonable, say 100, the first layer alone already has 100 times 120,000, about 12 million, parameters. The kinds of neural networks we've discussed so far are called fully connected: fully connected because each neuron, each dimension of each hidden representation, potentially depends on all the dimensions of the previous layer. If you think about the arrows in this picture as corresponding to dependencies, what does each dimension of a hidden representation, each neuron, depend on? Everything depends on everything, going left to right, input to output. What we're going to talk about today are convolutional neural networks, and the motivation for CNNs, convolutional neural networks, comes from the fact that these kind of fully connected
networks that we've dealt with so far don't really exploit the structure of the image. If you're looking at computer vision applications in particular, and x is an image, we're just flattening it: we're taking x and thinking of it as belonging to some vector space, as some large vector, and we're getting rid of the spatial, two-dimensional nature of the image. Convolutional neural networks take advantage of the fact that the input is not just some arbitrary vector; the input is really an image, and they preserve the spatial structure of images, as I'll discuss in a couple of minutes. This is the picture I want you to have in your mind: you have an image, which you can think of as a volume, where the depth dimension is three if you have an RGB image and one if you have a grayscale image, and the width and height are what you would expect, the width and height of the image. CNNs are going to process these volumes: you have an input volume, you process it into different representations, until eventually you get to the predicted label. I should mention that a lot of the visualizations here are from the Stanford course CS231n; it's a really nice resource, and if you're interested in learning more about CNNs and other related concepts, I definitely suggest checking it out. Okay, so again at a conceptual level (we'll get into the details in just a bit), there are basically two major differences between convolutional neural networks and fully connected neural networks. The first difference, like I said, is that we're not just treating x as some arbitrary vector; we're going to think of it as a bunch of neurons arranged in three dimensions, so we're going to think of it as a tensor. And then the second
really major difference is that neurons in a given layer (when I say neuron, I mean each dimension of a hidden representation) only depend on a small number of dimensions in the previous layer. I'll get into the details in just a bit, but those are the two main distinctions: with a fully connected layer, each dimension of a hidden representation depends on all the dimensions of the previous layer; with CNNs, that dependence is going to be much more sparse. Okay, so this is the basic operation that happens in a convolutional neural network. We start with some input volume; in this case I'm going to work with 32 by 32 RGB images, so you can think of this as a tensor, a multi-dimensional array, with dimensions 32, 32, 3. Each layer in a convolutional neural network transforms a volume, a multi-dimensional array, a tensor, into some other three-dimensional volume, and then we keep doing this, like I said, until we get to the output dimension. I want to make sure this idea is clear: we're not just thinking of the input as belonging to some 32 times 32 times 3 dimensional vector space; we're thinking of it as a 3D volume, a multi-dimensional array. You can index into the input with three indices, i, j, and k: i corresponds to the x dimension, the width; j corresponds to the y dimension, the height; and k corresponds to RGB, which color channel you're looking at. Any questions on that representation? Okay. All right, so the main operation that does this transformation from 3D volumes to 3D volumes is what's known as a convolution, a convolutional layer. I'll describe that operation here and then we'll talk about how it gets composed when we have multiple
layers. The convolution operation operates on this 3D volume, the 32 by 32 by 3 image in this case, and what we're going to do is take a kernel, a filter; in this case the kernel is five by five by three, so another, smaller volume of elements. We're going to convolve this kernel, this filter, with the image. What that means is that we take the filter, place it at a certain location in the image, calculate the dot product between the filter weights (the elements of the filter) and the elements of the input in the region we're looking at, and then do this at all the different locations in the image. That's the high-level picture; I'll give a visualization in just a second. And convolutions are something we've seen previously, back in the very first lecture on computer vision, when we were looking at edge detection. One of the first ideas we had for how to do edge detection was using convolutions: if I have a one-dimensional grayscale image, to find edges I can apply this kernel, minus one, one, zero, to the image, i.e., convolve the kernel with my one-dimensional image. If I take the dot product at the very first location I get, let's say, minus one times zero (I pad the image on both sides with zeros), one times five, and zero times seven. I keep doing this: five times minus one, seven times one, six times zero, and that's two. And here: seven times minus one, six times one, one times zero, so that's minus one, and so on. I do this at every location in the image, and I get a bunch of numbers at the end: I started with a one-dimensional image and I end up with a one-dimensional array as well. Question? Yeah, the three is RGB. So if it's a
grayscale image, that would be 32 by 32 by 1. Yep. Okay, so we're going to apply convolutions in the same way as we did for the one-dimensional case, except now instead of one-dimensional arrays we have three-dimensional arrays, multi-dimensional arrays, with dimensions 32 by 32 by 3. We take this, in this case five by five by three, kernel or filter (I guess those terms are used interchangeably) and we do the convolution. The way this works is we take the five by five by three kernel and place it at a certain portion of the input space, so you have two five by five by three arrays that are overlapping with each other, and you just take the dot product, and potentially you add a bias term. So we have these two five by five by three chunks, we do element-wise multiplications, and then we add up all the element-wise products, so you end up with a 75-dimensional dot product, if you want to think about it that way, and then maybe you add a scalar bias term if you like. Let me just pause here and make sure this operation is clear, any questions on the dot product, what it means? Okay. And you do this at every location: you take the kernel, the filter, and you slide it along the image, to every location of the image, and for every location you get one number, the number that you get when you take the dot product. At the end of this operation, what you end up with is a 28 by 28 by one array. So 28 by 28: you're losing dimension in this case because we're not padding the image, we're not making the image larger, we're just taking this five by five by three patch and placing it wherever the image actually exists, and so we lose some dimension. We went from 32 by 32 to 28 by 28.
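To make that concrete, here is a minimal numpy sketch of the operation just described. The image and filter are random stand-ins (not real data or learned weights): the dot product at one location gives a single number, and sliding the filter over all valid locations of the 32 by 32 by 3 input gives the 28 by 28 activation map.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32, 3))   # input volume (stand-in for a real image)
kernel = rng.normal(size=(5, 5, 3))    # one 5x5x3 filter (stand-in for learned weights)
bias = 0.1

# Dot product at a single location: element-wise multiply the two 5x5x3
# chunks, sum the 75 products, and add the (optional) bias term.
patch = image[0:5, 0:5, :]
one_number = np.sum(patch * kernel) + bias

# Slide the filter over every valid location: 32 - 5 + 1 = 28 per side,
# since there is no padding.
activation_map = np.zeros((28, 28))
for i in range(28):
    for j in range(28):
        activation_map[i, j] = np.sum(image[i:i+5, j:j+5, :] * kernel) + bias

print(activation_map.shape)  # (28, 28)
```

The double loop is just for clarity; real libraries implement the same sliding dot product with much faster vectorized kernels.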
And this thing is called the activation map: the bunch of numbers, the array that you get after you do this convolution operation. And then you can do this with a different filter. If you pick a particular kernel, a particular five by five by three array, to do the convolution with, you get one particular activation map; if you take a different five by five by three array and do the convolution, you get a different 28 by 28 by one activation map. And you can do this multiple times. Let's say you have six different filters, six different kernels that you do convolutions with: each one gives you a 28 by 28 by one array, and if you have six of them, you have 28 by 28 by 6, and that's what's visualized over here. So I guess this is the main operation. Like I mentioned, in convolutional neural networks you start off with some block, some multi-dimensional array, 32 by 32 by 3, and then you process it to get another block, another multi-dimensional array, in this case 28 by 28 by 6.
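The full layer, six filters stacked into a 28 by 28 by 6 output, might look like this rough sketch (again with random arrays standing in for a real image and learned kernels):

```python
import numpy as np

def conv_layer(x, filters):
    """Valid (no-padding) convolution of an H x W x C input with a bank of
    k x k x C filters; output is (H-k+1) x (W-k+1) x num_filters."""
    h, w, c = x.shape
    n, k = filters.shape[0], filters.shape[1]
    out = np.zeros((h - k + 1, w - k + 1, n))
    for f in range(n):                       # one activation map per filter
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j, f] = np.sum(x[i:i+k, j:j+k, :] * filters[f])
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 32, 3))
filters = rng.normal(size=(6, 5, 5, 3))      # six 5x5x3 kernels
out = conv_layer(x, filters)
print(out.shape)  # (28, 28, 6)
```

Passing this output through an element-wise nonlinearity and then through another bank of 5 by 5 by 6 kernels would give 24 by 24 maps, which is exactly the layer composition discussed here.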
So we can do this again. Maybe just as a question: if we think of this as our intermediate representation, our hidden representation, 28 by 28 by 6, how would you do a convolution again to get some other multi-dimensional array after this? Let's say I wanted a 24 by 24 by 3 block after this, what is the convolution operation that I would need to do? Good, yes, exactly: five by five by six. If you take that as your kernel and convolve it with this 28 by 28 by 6 array, the six needs to match, right, the last dimension of the kernel has to match the last dimension of this array, and then we lose dimension again: we went from 32 to 28, and we go from 28 to 24 if we look at five by five by six patches. If you want to maintain the dimension, then you can pad: you can pretend the image is actually larger than it really is, by doing zero padding for instance, or by extending the image with the values that are on the boundary. And then you can keep doing this: you do one of these convolutional layers, transform the input volume 32 by 32 by 3 into 28 by 28 by 6 in this case, then 24 by 24 by 10, and so on, until you get to the output layer. One important thing to point out is that the convolution operation is linear, right, we're just taking a dot product. So to end up with functions that are nonlinear, we need some activation function, some nonlinearity. The way convolutional networks are structured is that you do one of these convolution operations, you apply the convolutional layer, and then you apply an activation function. The activation part is identical to what we discussed previously: it's an element-wise nonlinearity. Let's say you're working with the sigmoid function: you take each element of this 28 by 28 by 6 array and
you pass each element through a nonlinear function, so you end up with another 28 by 28 by 6 array, and then you do another convolution, another nonlinearity, another convolution, and so on. That's basically the computation, the sequence of computations, that happens to go from input to output, and because we have these nonlinearities, convolutional networks are able to represent nonlinear functions, similar to how we were able to represent nonlinear functions with fully connected layers, with multi-layer perceptrons. Any questions on these operations? All right. So why are we doing this, why are convolutional networks a good idea? There are basically two main reasons. The first is that we have significantly fewer parameters. With a fully connected layer, as I mentioned a few slides ago, we potentially have a really large number of parameters, the weight matrices and bias vectors that we're trying to learn. With a convolutional network we significantly reduce the number of parameters we're learning. The things that we're learning are the filters, the kernels, with which we do the convolutions, so each convolutional layer has a number of parameters equal to the number of elements of its filters. The first filters we used were five by five by three, and I said we have six of them, that's where the times six comes from, so that's five times five times three times six, only 450 parameters. Compare that to the slide over here: if you're doing things with fully connected layers, we had 120,000 times d1 parameters, which is significantly more than what we have with a convolutional network. So the only thing that we're learning,
maybe let me go back here, the only thing that we're learning are the weights over here, the elements of this filter, and if we have multiple filters then we're learning the parameters corresponding to all of them, but that's still a tiny number compared to what you get with a fully connected layer. Okay, so that's one reason: we're significantly reducing the number of parameters that we're learning. The other reason is that convolutional networks exploit spatial invariance. If you're doing object detection, let's say you're trying to determine whether there's a cat in the image or not, then these visual features are spatially invariant: having a cat in this part of the image is kind of the same as having a cat in that part of the image. And similarly with other, lower-level features like edges or corners: an edge in one part of the image is kind of the same as an edge in a different part of the image. That's one of the justifications for convolutions. You can think of these convolution operations as doing some kind of feature detection, like we're asking, is there an edge here? That's what we did with the one-dimensional array, right, the three by one convolution; these are more sophisticated versions of that. We're asking, is there a particular feature here, is there a particular feature there, and we're using the same feature detectors, the same kernel, in every portion of the image, and this makes sense in vision because of the spatial invariance. Question? Yeah, so the size: there are some rules of thumb, like five by five by three or something like that is usually what you would pick. Part of it depends on the scale of features that you're trying to extract. So typically, I guess, this is showing visualizations
of the features. Typically what happens is that the features you're looking for at the beginning are low-level features, things like edges and corners and so on, and as you go deeper into the network you start getting features that are more complicated, until at the end you get quite complex features. There's no real theory to tell you what size of kernel to pick, but the intuition is that if you pick something too small, then you're focusing on tiny portions of the image, asking is there some feature here, is there some feature here, and so on, and you're not getting that much information by zooming in on a very tiny portion of the image. If you make the kernel size too large, then you're looking at almost the entire image, or a significant portion of it, and there's a bunch of stuff happening inside that you're missing out on: there might be a bunch of edges or corners in a large portion of the image, and you're not necessarily sensitive to changes in those. So something like five by five, or maybe seven by seven, is typically pretty good for computer vision applications. Good, other questions? Good, exactly, yeah. So what this is visualizing is basically what features of the image a particular layer is sensitive to. Representations near the beginning of the network are sensitive to low-level features like corners or edges, and as you go deeper in the network they become sensitive to more complex patterns. These are for trained networks; I think this is for a convolutional neural network trained on ImageNet, these are the features that end up
being learned after you do stochastic gradient descent. Yeah. Okay, other questions on convolutions? Okay. So convolutional networks have been around for a long time. Early CNNs were proposed in the 1990s by Yann LeCun and his collaborators. There was a bit of a deep learning winter that lasted all the way to about 2012, or maybe a little earlier, 2010 or so. I mentioned this before: the big thing that happened in 2012 was AlexNet. This was work from Geoff Hinton's group where they used convolutional neural networks to beat the state of the art in the ImageNet image recognition competition by a massive margin, and that got people really excited about deep learning and convolutional nets. And of course things have gone kind of crazy since then, in the last 10 years, and there have been many, many variants. What I described is the most basic version, the vanilla version, of a convolutional neural network. There are a lot of bells and whistles you can add that can make a big difference in practice, and some of these things tend to stick around while others are really popular for a year or two before people move on. So VGG networks were really popular in 2014, there were other architectures, GoogLeNet and Inception, also popular in 2014, residual networks were popular in 2015, those are actually still around, and there are lots and lots of variants. I'll say a bit more about how to choose an architecture, but there's no great methodology: some researcher has some intuition for what structure, what bias in the architecture, might improve performance, they try it out, and if it works well they write a paper,
and if it works even better then other people use it, and so on. There's no deep theory here; it's very empirical. But there are a few things, actually, maybe I can add a bit more to that. There are a few things that people try to do with architectures. One is: can you encode inductive biases? By that I mean, can you somehow exploit the structure that's inherent in your data? The way I think about CNNs is that they're exploiting spatial invariance: like I said, some object or some feature in one portion of the image is the same as in some other portion, so things are spatially invariant. If you have those kinds of intuitions for what structure might be present in your data, and you find a way to bake that into the architecture, that can make a big difference in practice. The other big thing is: can you somehow make the learning process, the stochastic gradient descent, the optimization process, more efficient, the computations that go in there? That allows you to soak up lots and lots of data. So if you have an architecture that has the right structure while also being able to be optimized pretty efficiently, that's a win. That's maybe a rough intuition for what people are trying to do when they come up with new architectures. One thing people have found generally is that increasing the size of the network, and in particular the depth of the network, can be massively effective, but increasing the depth can also make the optimization with stochastic gradient descent harder. So that's been one direction: especially with the residual networks, ResNets, people found a way to have pretty deep networks, many layers of processing, while at the same time making the optimization
with stochastic gradient descent not super prone to getting stuck in local minima. And state-of-the-art architectures can have many, many layers. I put greater than 15 layers, but that's kind of arbitrary; these numbers keep increasing year on year. This is showing the top-1 accuracy for image classification on ImageNet, compared against the number of operations. You can think of the x-axis as roughly corresponding to the number of parameters in your neural network, and you typically find that as you increase the number of operations, as you increase the complexity of the network while keeping the basic architecture, the convolutions, constant, you get better and better performance. So somehow, training larger networks on more data gives you better performance. One question is what architecture you should choose in practice; I mentioned there are lots of variants of convolutional networks, or even other architectures. One rule of thumb is just to see what is currently state of the art for ImageNet, if you care about computer vision, or if you care about something else, natural language processing for instance, there are well-established benchmarks at this point: see what is doing really well, what's the state of the art today, and that gives you a sense for what architecture you might want to use. In practice you'll find that this gets you most of the way, 80 or 90 percent of the way, but then there's a little bit of extra performance you can squeeze out by being even more clever, by exploiting more structure that's specific to the learning problem you're interested in. So if you have some domain knowledge, you can try to bake that in somehow. Nowadays there's actually been an interesting convergence towards Transformer architectures, so
I won't have time to go into them, but these were developed in the context of natural language processing. If you've heard of GPT, GPT-3, and the many other large language models, Transformers are at the heart of those, and they're becoming more and more popular for non-language applications as well. For computer vision, it's not the case that Transformers are necessarily state of the art; there's a bit of a battle, where someone comes up with a Transformer architecture for vision and then someone says, look, convolutions are still useful, and there's no consensus. But I think what's interesting is that there is a bit of a convergence. Before, I guess on this slide, I had a whole bunch of things before 2017, every year there were two or three things that people were excited about; since 2017 it seems like the deep learning community as a whole is moving more and more towards Transformers. I think for vision, convolutions still make sense, because they're exploiting the spatial invariance, but combined with these Transformer architectures they seem to give really good performance. Questions? Yeah. So Transformers are for data that are inherently sequential, like language: if you have a sentence, there's a sequence of letters or words, and Transformers handle that. The way Transformers get applied to non-sequential data like an image (an image is not inherently sequential) is by tokenizing: you basically treat the image as a sequence, you chunk up the image into different parts, and you can think of that as a kind of sentence if you like, or just a sequence, and then you pass that to the Transformer. It seems like a very weird thing to do, it seems a bit hacky, that you're taking this
thing that's not necessarily a sequence and basically treating it as a sequence, but somehow, surprisingly, that sort of thing seems to work pretty well. If you're interested, there's a paper on Vision Transformers, ViT, which is essentially the state of the art, as far as I know, for using Transformers for vision, and they are at least competitive with convolutional neural networks for vision. For language tasks, it's by far the complete consensus right now that Transformers are what you want to use. If you're interested in Transformers, actually, the original paper from 2017, this is from Google, Attention Is All You Need, is a really good resource. It's often the case that the original paper is not super readable and maybe not the best resource; that's not the case here, this paper is really nicely written, and if you're interested I would maybe start there. There are a bunch of blog posts as well that do a good job of explaining Transformers. Okay. The other thing you might want to consider is using pre-trained networks. We kind of did this for the most recent lab, with the object detection. If you're looking at computer vision tasks, you could take a neural network that's been trained, let's say, on the ImageNet data set to do image classification, and then maybe you're interested in a different task: you don't care about image classification, you care about something else. You can still reuse most of, or parts of, the network that was trained to do image classification. The idea is that you take that network, you keep the first few layers, or most of the layers, and you basically retrain just the last one or two layers, maybe even one layer, for your particular task or for your particular data
sets. This is called fine-tuning, and the intuition is that features that are useful for object recognition, which is what this original network was trained for with ImageNet, are also useful for your particular application. Being able to detect edges or corners or other, more complex visual features is generally useful: it's not just useful for object recognition, many other computer vision tasks also benefit from being able to detect those visual features. Okay, so to make all of this work there are a bunch of different software libraries; these are just five of maybe the most popular ones: PyTorch, TensorFlow, JAX, Caffe, and Keras. These software libraries do a lot of different things, and I guess the availability of really mature software has led to a lot of the progress we've seen in deep learning over the last few years. These packages, these toolboxes, allow you to pretty easily specify different architectures: if you want to play around with different variants of convolutional neural networks, they allow you to build those variants pretty easily. They allow you to load existing architectures: you can just say, build me a Transformer, or build me a convolutional neural network with this number of layers and these dimensions for the hidden representations, and they'll do that for you. You can choose different optimization schemes, and different parameters within the optimization schemes, for example the learning rate, or different variants of stochastic gradient descent; I mentioned Adam in particular, that's the most popularly used one, and there are a couple of others that people also use. You can load different data sets: you can say, load the MNIST data set, that's the handwritten digits, or load ImageNet, or load any number of other data sets that are freely available
online. And importantly, they also allow you to interface with GPUs and cloud computing resources, so you basically don't have to worry about how the computations get spread across GPUs, or across multiple CPUs if you're working with a server; the software libraries handle that kind of multiprocessing for you. So CNNs are, like I said, pretty much state of the art for all or most computer vision tasks: image classification; semantic segmentation, so looking at each pixel and saying, is it a road or a pedestrian or a bicycle and so on; image captioning, though I guess image captioning is a bit different because there's a language component, and nowadays, like I mentioned, Transformers are taking over there; pose estimation, which is important for robotics; and many other computer vision applications. What I want to do for the rest of the lecture, or for part of the lecture, is to talk about applications beyond computer vision. So far we've been motivating almost everything with vision tasks, we've talked about x, our input, as corresponding to an image, but neural networks are not just for computer vision and can be used for lots of other things as well. One of the most direct applications of everything we've discussed so far is imitation learning, so let me play a video and then I'll say a bit more about imitation learning. This is a video from 1989.
Years ago, who would have thought that cellular phones would be as common as they are today? Well, we may be thinking the same thing someday about our next story. CMU's Robotics Institute is developing a car, and it's already being tested on the road. Phil Flanagan went along for the ride. Todd Jochem is behind the wheel of this Army ambulance, but the CMU graduate student isn't driving: a computer is. The machine watches the road through a video camera perched above the windshield and steers the vehicle itself. We've had it up to 70 miles an hour, and it's driven autonomously for over 90 miles straight without the human ever touching the wheel. That's right, this robotic vehicle has driven to Erie and back on I-79. It's the result of eight years of military research at CMU's Robotics Institute. Scientists here don't teach computers how to drive; instead they use a computer process called a neural network to create a machine that can learn to drive. Dean Pomerleau is the project director: we don't tell it anything except steer like I do, learn to steer the way I'm steering right now. Sometimes it keys off of the road edges, sometimes it looks for the painted lane markings down the center of the road, so it varies, and it learns in different conditions what the important features are on that road type. At slow speeds the Navlab can use a laser rangefinder to spot and avoid objects in the road, but to date it's too nearsighted to do it at highway speeds. It may be a while before you'll be able to buy a robotic car that can run on down to the convenience store on its own, but perhaps not so long for robotics technologies pioneered here at CMU to begin showing up in state-of-the-art cars and trucks. The Robotics Institute recently received a more than two million dollar grant from the U.S. Department of Transportation to find ways to use the military technology to improve highway safety. Chuck Thorpe is the principal investigator: there are 13,000 people killed every year just by running off
the road, single-vehicle accidents. One potential lifesaver: a warning system that could sense when a vehicle is drifting off the road and awaken a sleeping driver. The Robotics Institute is well on its way to creating one. Phil Flanagan, KDKA-TV Eyewitness News. Yeah, I guess it's interesting to see what things they were right about and what things they were wrong about. The alert system, that's something we do have nowadays, right, that they were right about. I guess they were also right when they said it might be a while until we see autonomous vehicles that just go down to the grocery store; we still don't have those. But what's interesting here is that they were using imitation learning. Basically you have a human that shows the robot how to drive: the human provides lots and lots of data of just driving around in different road conditions, and then you can think of this as analogous to image recognition, this is a supervised learning problem. At every time step the robot receives some image, some input, some sensor observation, and we've told it what the correct label is, what the correct action is. So we have this data set of images, or sequences of images, and labels, steering outputs, and we can just fit a neural network to this data using the techniques we've discussed in this course. Imitation learning is really powerful and still being pretty heavily utilized for modern-day autonomous vehicle system development. This is a video by Wayve, I think it's a couple of years old now, but let me just play it. This is in the UK, that's why it's on the other side of the road. It's all imitation: the only thing that the neural network is learning to do is imitate the human. And I guess they're saying that over
here, somewhere at the end over here: no hand-coded rules are specified, all behavior is learned from the data. Which I think gets you some of the way, but some of the rules we might want to bake in, right: stop at a red light seems like a good thing to bake in rather than to learn just by imitating human data. But imitation learning gets you part of the way there; it's pretty impressive what you can get with imitation learning, and with not a lot of data, right: they were using, I forget what it was exactly, something like 30 hours of human driving data, which is not a massive investment in data, and if you have more data then you can get better performance. Questions? Yeah, so that is a major challenge. With the other side of the road you could probably hand-code some transformation, maybe pretend everything is flipped or something like that and then use the same neural network, but there are other things that are much more subtle, like the width of the roads, or visual features that appear, or even conventions. Even different states within the US have different rules, right: I think in New York, turning right on a red light, or how you merge into a roundabout, stuff like that, there are a lot of local conventions. I think that's one of the reasons you see a lot of vehicle companies mostly focusing on a couple of cities. So Waymo started off in Chandler, Arizona, did a bunch of years of testing and deployment there, and has recently moved beyond, to San Francisco and I think to LA as well, if I remember correctly, all cities with nice sunny weather for the most part. But going
from there to New York City, or to a different country, is pretty challenging, and something that's still a major open research challenge. Yep. Good, other questions on this? Go ahead. Yeah, it's tricky. I mean, you can go into your code and add if statements: you could say, if you see a red traffic light, stop; else, deploy the imitation learning policy, something like that. But it's not super obvious, and I guess that's one of the appealing things of using imitation learning, or just pure learning, without trying to hand-code things: you don't need to worry about how to take rules and bake them in. At the same time, it seems a little strange to learn everything; things that you can hand-specify, really hard rules, you should probably bake in. But I guess one of the lessons from learning has been that you have to be careful about exactly what you bake in. As an example, you might want to say, never cross the double yellow line in the middle of the road, but that's something humans do all the time. If there's a car parked in front of you, and that were truly a hard-coded rule, then your autonomous vehicle would just sit there and do nothing, just wait for the person, whoever is double-parked, to come back and take the car away. What humans do is wait for the oncoming traffic to stop, cut across to the other lane a little bit, and go back. So I think the number of things you want to hard-code is probably relatively small, but some things, like stopping at a red traffic light, seem good to hard-code, and turning right on a red traffic light, those kinds of things seem like they should be hard-coded too. Yeah, definitely not. So people, I
guess tried doing this in the the earlier days of uh autonomous vehicles so uh uh DARPA like that's the the defense agency that uh like funds a lot of this or funded and funds a lot of This research had some competitions and uh I guess back in the day then like 2005 seven or so uh there was much more like hand coding so it was like pretty uh like the modern like deeper like deep learning Revolution uh but yeah I guess they found that the cars would just do silly things um in the like for the sake of following the rules exactly like 100 of the time uh and so yeah I guess humans like break like all the traffic laws traffic rules right like we break them uh like all the time uh and I think for for good reason uh or most well I guess mostly for many times often let's say for good reason or not always for bad reasons good other questions okay so I guess imitation learning is not just for autonomous vehicles here's another just a two minute video uh showing invitation learning in the context of robotic manipulation infants are born with the ability to imitate what other people do here a 10-minute old newborn sees another person stick their tongue out for the first time in response C6 his own Tanya imitation allows humans to learn new behaviors rapidly we would like our robots to be able to learn this way too we've built the proof of concept system trained entirely in simulation we teach the robot a new task by demonstrating how to assemble block towers in a particular way which in this case is a single stack of six virtual blocks previously the robot has seen other examples of manipulating flux but not this particular one our robot has now learned to perform the tasks even though its movements have to be different from the ones in the demonstration okay so all right so I guess this is kind of the the end of our discussion or almost the end of our discussion on uh supervised learning so as I said we'll talk about reinforcement learning and some deep reinforcement learning 
in the next lecture. Given the limited amount of time, I've only provided a very brief intro to deep learning — there are massive numbers of people working in the area, and there has already been a massive amount of impact, in the last ten years in particular. So here are some resources if you're interested in diving further. There's the Deep Learning textbook by Ian Goodfellow, Yoshua Bengio, and Aaron Courville — written around 2016; these things tend to get outdated pretty quickly, but some of the core ideas in that book are definitely still relevant. Deep Learning with Python is another good one — there's a bunch of actual code in there that might be worth checking out. And there are a couple of courses: theoretical foundations of deep learning, which gets taught every now and then, and foundations of RL, which is also taught periodically in the ECE department. So what I want to do towards the end here is zoom out again — we tried to do this a little bit with some of the other modules — and think about some of the broader implications of machine learning, and deep learning specifically, as they apply to fairness. We've adopted a technical perspective on learning so far, right: we said we have this training data set and we're going to find some neural network — maybe by adding a regularizer, but essentially by minimizing the training loss — trying to fit our training data as well as possible. So what could possibly go wrong? It turns out lots of things can go wrong, and one of the things that people have realized, over the last six or seven years in particular, has to do with fairness — or unfairness — that comes about from this kind of learning scheme. One of the most well-known examples of this kind of algorithmic bias
comes from recidivism. Recidivism is a legal term — the risk of reoffending. If someone has committed a crime, what's the probability that they're going to commit another crime? That sort of thing determines bail amounts, or whether someone gets bail, and so on. Many states in the US actually use data-driven, learning-based algorithms for predicting recidivism, and one piece of software in particular received a lot of scrutiny: the COMPAS system. It basically has a data set of people with labels associated with them — whether they recidivated, whether they reoffended — and you train something to make predictions and use those predictions on new individuals. What people found is that these automated systems can be significantly biased; specifically, there were studies — I'll point to one on the next slide — showing a significantly higher false positive rate for black defendants. There's some debate about the exact false positive and false negative rates, but there's general consensus that they're significantly different, at least. So this is a study — the fine print is over there — from 2018, studying this COMPAS system, and they found that this commercially developed software that uses learning is basically no more accurate at predicting recidivism than a random, untrained person making predictions. One implication is that there's a bias — in this case a racial bias — that untrained humans have, and this COMPAS system learns that bias too. The middle block of bars shows the false positives — a defendant is predicted to recidivate but does not, so a false prediction of
recidivism — and the other side shows the false negatives, where a defendant is predicted to not recidivate but in fact does. There's a pretty significant difference based on race: a higher false positive rate for black defendants and a higher false negative rate for white defendants, as they found in this study. And this problem is not restricted to recidivism. There were other studies in the context of facial recognition — this is Joy Buolamwini, who was, or is, a researcher at the MIT Media Lab; she and her group found that there are racial biases in facial recognition algorithms that are also based on machine learning. There are different reasons for this, and many other examples as well — and it's not just race; there are other demographic features that can be discriminated against, and these show up in other settings: automated systems for deciding whether someone gets a loan, examples in healthcare, and almost any application where we're using learning to make predictions about people. This bias, this unfairness, can come from different sources. A major source is the training data: if your training data itself is biased and you're just minimizing your training loss, then the thing you learn is also going to be biased. There are potentially other sources of bias as well — the learning algorithm, or model evaluation and selection: at the end, a human decides "should I deploy this model or not?", and that can also introduce bias into the system. So there's consensus around fairness in the sense that we want our AI systems to be fair — I think most people agree on this — but it's not actually clear what exactly we mean by fairness. At the end of the day we somehow need to change our stochastic gradient
descent algorithm, or something like our neural network architecture, to ensure fairness — and to do that we have to be mathematically precise about what we mean by fairness. There are two categories, two kinds of fairness, that one could consider. The first kind is what's known as group fairness — the kind of fairness I've been motivating with the examples on the previous slide. Here you have some algorithm that takes as input — this is our x — an individual's data, and produces some output, let's say a binary label: maybe a prediction of whether this person will recidivate or not. Group fairness imposes statistical requirements on the treatment of different groups — say different racial groups, but in general any kind of demographic group: race, or age, and so on. There are different possible definitions you could come up with for group fairness. A pretty reasonable one is that across different groups — across different races, or across different ages — you want the false positive rate to be equal; a false positive is when you make a positive prediction but the label is actually negative, so incorrectly predicting recidivism, for example. Another possible definition is that you want the false negative rate to be equal across your groups. Yet another possibility is equal positive predictive value (PPV) across groups — this is related to the false positive and false negative rates. PPV asks: among those who are predicted to recidivate, how many of them actually do? It's the number of true positives divided by the total number of positive predictions. The details are not super important — what I want to convince you of is that these three are all pretty reasonable definitions of fairness. The punch line is that there's this kind of neat theoretical
result, from relatively recently — about six years ago — that says there's no classifier, no algorithm, that can simultaneously ensure all three kinds of fairness, unless the classifier is absolutely perfect — which is not possible; we're learning from data, and we're not going to be 100% accurate on all individuals — or unless the base rates of, say, recidivism are exactly equal across all the groups, which is also not realistic. So what this result is saying is that, in practice, it's impossible to achieve these three kinds of fairness simultaneously. I think this is a pretty counterintuitive result — all of these definitions seem pretty reasonable, but there's no way to achieve them simultaneously in practice. What this means is that we need to choose — we need to commit to some notion of fairness; we cannot have everything. So that's one kind of fairness: group fairness, fairness across different demographic groups. There's a different kind of fairness, called individual fairness, and roughly what it means is that similar people are treated similarly. To explain this, there's a really funny and nice video which I'm just going to play. "...and many more, because after we did this about ten years ago it became very well known — we did it originally with capuchin monkeys, and I'm going to show you the first experiment that we did; it has now been done with dogs and with birds and with chimpanzees. What we did is we put two capuchin monkeys side by side. These animals live in a group, they know each other; we take them out of the group and put them in a test chamber, and there's a very simple task that they need to do. If you give both of them cucumber for the task — the two monkeys side by side — they're perfectly willing to do this twenty-five times in a row. So cucumber, even
though it's really only water in my opinion, cucumber is perfectly fine for them. Now if you give the partner grapes — the food preferences of my capuchin monkeys correspond exactly with the prices in the supermarket — so if you give them grapes, a far better food, then you create inequity between them. That's the experiment we did. Recently we videotaped it with new monkeys who'd never done the task, thinking that maybe they would have a stronger reaction, and that turned out to be right. The one on the left is the monkey who gets cucumber; the one on the right is the one who gets grapes. The one who gets cucumber notes that the first piece of cucumber is perfectly fine — the first piece she eats — and then she sees the other one getting a grape, and you will see what happens. She gives a rock to us — that's the task — and we give her a piece of cucumber and she eats it. The other one needs to give a rock to us, and that's what she does, and she gets a grape, and she eats it. The first one sees that; she gives a rock to us, and now gets cucumber again. She tests her rock now against the wall. She needs to give it to us, and she gets cucumber again. So this is basically the Wall Street protest that you see here." Anyways — so this is individual fairness: similar individuals — in this case similar monkeys — should be treated similarly. You can also mathematically formalize individual fairness, similar to what we tried to do with group fairness, and there are lots of different definitions — there's a tutorial from a Princeton professor who lays out twenty-one different definitions of fairness and their associated politics. So the main takeaways here are that there's a lot of work, especially over the last five or six years, on mathematically defining what we mean by fairness — which we have to do if we ultimately want to build these notions of fairness into our learning algorithms. There are a lot of interesting
theoretical results which I think help us, and force us, to think about what we mean by fairness — not just for machine learning algorithms, but what we mean by fairness even for ourselves, as humans or as societies. And formal definitions, like I said, enable algorithms for ensuring fairness of our machine learning systems. If you're interested in learning more, here are a couple of resources. There's a really nice talk by Cynthia Dwork, one of the pioneers in this area of fairness — "The Emerging Theory of Algorithmic Fairness" — which does a nice job of laying out the basic ideas and the basic technical results. There's a Netflix documentary called Coded Bias, which I highly recommend — a really nice overview of this issue. And there's also a book by Michael Kearns and Aaron Roth from UPenn called The Ethical Algorithm, if you want to dig deeper — there are more examples of these issues in there, and they talk about some of the technical approaches to tackling this challenge as well. All right — any questions on the fairness stuff, or any of the other things? All right, I'll see you next time.
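As a footnote to the group-fairness definitions from this lecture: the three per-group quantities (false positive rate, false negative rate, and positive predictive value) can be computed directly from a confusion matrix. Here is a minimal sketch — the group names, labels, and predictions below are entirely made up for illustration, not real recidivism data:

```python
# Per-group error rates for a binary classifier. Group-fairness definitions
# compare these quantities across groups (e.g. "equal false positive rates").

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate, positive predictive value)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fpr = fp / (fp + tn)   # positive prediction, but the true label is negative
    fnr = fn / (fn + tp)   # negative prediction, but the true label is positive
    ppv = tp / (tp + fp)   # among predicted positives, fraction that are correct
    return fpr, fnr, ppv

# Hypothetical (label, prediction) data for two demographic groups:
group_a = ([1, 1, 0, 0, 0, 1], [1, 0, 1, 0, 0, 1])
group_b = ([1, 0, 0, 1, 0, 0], [1, 0, 0, 0, 1, 0])

for name, (y, yhat) in [("A", group_a), ("B", group_b)]:
    fpr, fnr, ppv = error_rates(y, yhat)
    print(name, round(fpr, 2), round(fnr, 2), round(ppv, 2))
```

The impossibility result mentioned in the lecture says that, outside of degenerate cases, no classifier can equalize all three of these quantities across groups at once.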
Introduction_to_Robotics_Princeton
Lecture_15_Princeton_Introduction_to_Robotics_Mapping.txt
all right, yeah, let's go ahead and get started. The plan for this week — today's lecture and also Thursday's lecture — is to wrap up this module we've been focusing on, about localization, mapping, and state estimation. Just to remind you of what we've been doing in the last couple of lectures: the main technical hammer we've been using is what's known as the Bayes filter. We introduced this a few lectures ago, and we looked at some specific instantiations of it. One was the setting where you have linear dynamics plus Gaussian uncertainty, and also a linear sensing model; that led us to a really nice closed-form implementation of the Bayes filter known as the Kalman filter. We also looked at the particle filter, which is an approximate way to implement filtering in much more general settings — that's what you're implementing as part of the assignment. Then in the last lecture we looked at the problem of localization: the robot is trying to figure out its location — its state, more generally — given a map of the environment. We discussed many different kinds of maps; I'll remind you quickly what the main categories were: location-based maps and feature-based maps. And then we basically said that you can think of localization as a particular application of Bayes filtering, and you can use the techniques we've discussed — particle filtering, for instance — to perform localization. What we're going to do in today's lecture is the opposite of the previous lecture. In the last lecture we assumed we were given a map of the environment, and the goal was for the robot to figure out its location; today we'll do the opposite — mapping. The robot doesn't know what its environment looks like — it doesn't know the map — but we're going to assume that the robot has some ability, somehow, to perform
localization. Maybe it's a setting where the robot has GPS — the Global Positioning System — which gives it its location in some global coordinate system, some global reference frame. So it knows its location, but it doesn't know what the surrounding environment looks like — it doesn't know where the obstacles are, and so on — and that's what it needs to figure out. That's what I mean by the problem of mapping in this context. Really, it's a chicken-and-egg problem: in the previous lecture we assumed we were given a map and the robot was trying to localize itself; in this lecture we assume the robot can somehow localize itself, and what it's trying to do is mapping. So the two depend on each other — to localize well you need a map, and to perform mapping you need localization — which means you should really be doing these two things simultaneously. In the next lecture we'll put these pieces together and tackle the problem of simultaneous localization and mapping, usually abbreviated SLAM. But for today's lecture we're not going to do SLAM — we're not doing both problems simultaneously; we're just making the assumption that somehow the robot can localize itself, but it doesn't know its map, and that's what it's trying to figure out. And this is not a completely crazy assumption. Think back to your RRT lab with the Crazyflie: in that case you measured what the map looks like — you had a tape measure, and you looked at where the obstacles were, what their radii were, and so on. But the robot, we assumed, could localize itself — it could figure out exactly, or roughly, where it is in the environment using its own sensors. It has a downward-facing optical-flow camera that gives it its speed, and you can use that to estimate its location relative to some starting point.
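That last idea — integrating speed estimates over time to track position relative to a starting point — is simple dead reckoning. Here is a minimal sketch of it; the velocity readings and time step below are made-up values for illustration, not data from the Crazyflie:

```python
# Dead-reckoning sketch: integrate planar velocity estimates (e.g. from a
# downward-facing optical-flow camera) to track position relative to the start.

def dead_reckon(velocities, dt):
    """Integrate a sequence of (vx, vy) measurements, starting at the origin."""
    x, y = 0.0, 0.0
    for vx, vy in velocities:
        x += vx * dt
        y += vy * dt
    return x, y

# One second of motion sampled at 10 Hz: 0.5 m/s in x, 0.2 m/s in y.
vels = [(0.5, 0.2)] * 10
print(dead_reckon(vels, dt=0.1))  # roughly (0.5, 0.2), up to float rounding
```

In practice each velocity estimate is noisy, so the position error accumulates over time — one reason why, eventually, you want full SLAM rather than pure dead reckoning.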
So maybe just keep that in mind when you're thinking about "the robot knows where it is but doesn't know what the environment looks like." All right, so that's the setup. And really quickly, let's think about the different kinds of maps we talked about in the previous lecture, and the pros and cons of the different representations. In the last lecture we talked about two categories of maps: location-based maps and feature-based maps. The distinction was that a location-based map associates some property — occupancy, let's say — with every location in the map, so you can query the map and ask "is this location occupied or not?", whereas a feature-based map just gives you features: properties of certain landmarks in the environment — there's a door here, there's a window here — a finite set of landmarks and their properties: locations, colors, densities, things like that. So Google Maps is an example of a feature-based map — not Google Earth, but the vanilla Google Maps we use for navigating from place to place. It's a feature-based map: you can't ask Google Maps "hey, is this specific GPS coordinate occupied or not?" — you won't get that information. If you query a point on the road, for instance, Google Maps can't tell you whether at that instant that point is occupied — whether there's a car there, or some construction happening. A location-based map, by definition, gives you that kind of information for every location. Okay, so within these categories there are many representations you can choose, and I think it's important to know what the possibilities are. We won't go into too much detail, but I want to give you a sense for what the possibilities are. So
in the location-based map category, there's a subcategory known as geometric primitive maps, and I think it's easiest to understand this with an example. Say we're doing mapping in two dimensions — a planar setup — and our obstacles are represented as polygons. In general this could be any union of shape primitives — squares, or spheres — so every obstacle in the environment is represented as a union of polygons, or a union of spheres, or more broadly a union of some primitive objects. The benefit of this kind of representation is that it can be relatively memory efficient. For instance, suppose we're working with spheres, so each obstacle is a sphere, or more generally a union of spheres — you could have something like this. In that case, all you need to store are the centers of the spheres and their radii — a relatively small number of things compared to an occupancy grid map, which we'll come to in a second. Would this be a feature-based map? It could be a feature-based map, but it could also be a location-based map if you implicitly assume that anything not in the union of the shapes is unoccupied — if that can be implicitly assumed, then it becomes a location-based map. Good question. All right, so the downside is that it can be hard to represent really complicated obstacles — it just depends on the environment. If you think about this classroom, maybe spheres are not the best choice of shape primitive:
how would you capture a chair as a union of spheres? That's going to be super clunky. So it depends on the match between the shape primitives you're using — spheres, polygons — and the environment. Maybe with rectangles this could actually be a pretty good representation of this environment: each chair you could think of as a union of just two rectangles — two cuboids, I guess — and that could be a decent approximation. So depending on the environment and the primitives you're using, it might be relatively easy and memory efficient to represent the environment, or it could be complicated. Okay, so that's one possible choice. The one we're going to work with today is the occupancy grid map, which we discussed in the previous lecture. In some sense this is conceptually the simplest kind of representation: you discretize your environment in the form of a grid — again, just for the sake of pictures I'm making everything 2D, but you can imagine 3D versions — and for each cell you say it's either occupied or not. This was the representation we were using when we talked about graph search: BFS, DFS, A*, and Dijkstra. The advantage of this representation is that we can represent arbitrarily complicated obstacles by making the grid finer and finer; the downside is that it can be pretty memory intensive if you make the grid fine. And if you think about it, these categories are actually related: you can think of occupancy grid maps as a particular instantiation of geometric primitive maps, where the geometric primitives are squares (in 2D) or cubes (in 3D). But it's often useful to keep the terminology separate — usually with
geometric primitive maps, we don't mean an occupancy grid map but some other geometric primitive: spheres, cuboids, or polygons, typically. All right, any questions on that? Right — yeah, so it is just exactly a list of shapes and the parameters that define those obstacles, and depending on whether this is meant to be a location-based map or a feature-based map, we can say that anything not within these shapes is unoccupied — but the representation itself is just a list of the different obstacles. All right, so the other kind of map, really quickly, that we discussed in the last lecture is the feature-based map. I gave an example of this last time: it represents a bunch of landmarks — window, door, table, and so on — and their associated properties: shape, color, maybe other material properties. The benefit of this representation, which I didn't really emphasize in the last lecture but want to emphasize here, is that it can contain a lot of semantic information. By semantic information I mean information beyond just the geometry of these landmarks. The fact that this is a table — it's not just some blob of stuff; you can do something with a table, it's meaningful that it's a table. If you know this is a table, you might guess there's food or utensils nearby. This kind of semantic information — information beyond geometry — can be useful for high-level planning. For instance, once you've identified some of these landmarks, you can try to guess what this room is — with a few more landmarks you might guess it's a dining room, or a kitchen — and that kind of information might be useful if the robot is looking for something. Say it's looking
for coffee, or looking for a fridge: if it sees a table, or other things that might be a kitchen counter — if it's able to identify those — then it might be able to guess that the fridge is not so far away. So for that kind of high-level planning, where it's not purely geometric obstacle avoidance but trying to find something, these kinds of maps are useful. What about the occupancy grid — instead of just shading a box, could we associate, let's say, colors with cells? Yeah, good — so you can add extra information. In practice, what you would probably do is combine these kinds of maps: you might have a more refined geometric map, like an occupancy grid map, and then overlaid on top of it you could have additional information like colors, from which you can extract these semantic categories — this is a table, this is a door, and so on. In the end it depends on the task: if you're just interested in going to a particular GPS coordinate that's given to you, you probably don't need the semantic information, but a lot of tasks — the one I often think about is finding some particular object, like a fridge in an apartment — often benefit from having this extra information, which, like you said, you could extract from things like color. Good, other questions? Oh — the downside of using purely semantic information, just a feature-based map, is that it's hard to do obstacle avoidance; it doesn't directly support obstacle avoidance, so like I said, in practice you should probably fuse the two representations. Okay, so let's talk about how to actually do mapping. For the purpose of today's lecture I'm going to focus on just one particular kind of map — maybe I'll click
here. The general technique I'll describe is based on Bayes filtering, so you can extend it to other kinds of maps as well, but for the sake of concreteness I'm going to focus on occupancy grid mapping — the reference for this is chapters 9.1 and 9.2 in the Probabilistic Robotics book. I think it's useful to have a little visualization in mind as we go through the math, so I'm going to play a quick video to help you visualize. Any questions while we wait for the video? Yeah — so the reason a feature-based map doesn't necessarily support obstacle avoidance directly is that there's no guarantee that the things that are not represented are actually free. The way to think about it is that the map is a list of meaningful or important objects in the room, but it's not saying there's nothing else here — there could be something here that's just not represented in the feature-based map. Yep — we're going to get to that; I'll come to that in a second. Wait, did I mention that already, or no? We'll come to that in just a minute — I guess I mentioned Bayes' rule, so yeah, I'll describe that in a second. Okay, so we're going to do occupancy grid mapping, like I said. Let me turn the light off. So this is an example of the technique we're going to describe, working in practice. What's going on is there's a drone, currently being flown by a human pilot — you'll see the human in a second; there's a pilot-assist mode. The bottom-left video is from the perspective of a fixed camera in the room; the top-left video is from the perspective of the drone's camera — it has a depth camera on it — and it's building up a map. A priori — before it starts
operating in the environment, before it starts moving around — it doesn't know what the environment looks like, and then as it moves around — in this case, as the human pilot flies it around — it builds up this occupancy grid representation of the environment. You can see it's kind of pixelated — they've chosen a particular resolution, and each little voxel (the 3D analog of a pixel) gets updated: as the robot moves around, it figures out more and more which parts of the environment are occupied by obstacles. Then, let's see, at some point it switches — okay, so this is autonomous mode. This is something we're familiar with: once we've done mapping, once we have the occupancy grid representation, we can use the techniques we've described previously — graph search, potentially, or RRT — to find a path from some point A to some point B that avoids the occupied regions in the map we've created. Okay, so that's the picture I want you to have in mind as we discuss the math. Initially the drone is just sitting there; it can see some portion of the environment, and as it moves around it updates which parts of the environment it thinks are occupied. Any questions on the high-level picture? Oh — what do the colors represent? That's a good question — I'm not sure; they could represent confidence, potentially. Or actually, yeah, you might be right — they might represent height; that seems consistent, since things on the ground are warm colors. Okay, that seems likelier than what I said — good, thank you. Other questions? Okay. All right, that's the picture — we're going to go into the math and see how to actually do this. So let's get to the question I deferred previously: how are we going to represent maps? We're going to think of a map as a random variable, and we're
going to call that random variable m with a bar — the bar again just denotes that it's a vector. I'll talk about the exact representation in a bit, but the reason we're going to think of this as a random variable is that we're going to associate some probability — really a probability distribution — over maps. That's what we're going to update using the sensor measurements that the robot gets. Pictorially, if we're working with occupancy grid maps, imagine — just conceptually, this is not computationally efficient — associating some probability with this particular map. This is a specific map: I've associated occupancy or not with every cell, so this is one instance of an occupancy grid map. There could be many other instances; here's another instance where, let's say, this cell is occupied, this one is occupied, and this one is occupied. So how many possible maps are there, given that we've committed to some particular discretization into an n-by-n grid? Go ahead — yeah, 2 to the n squared, which is a lot of maps if n is anything even remotely reasonable. It's n squared because we're working in 2D, so we have n squared cells associated with the map; in 3D it's really n cubed. And the reason it's 2 to that power is that each cell in the occupancy grid map — there are n squared of them — can either be occupied or not, so it's two times two times two, up to the number of cells, which is n squared in 2D and n cubed in 3D. So we have that many possible maps. I'm just going to define capital N to be n squared, so there are 2 to the capital N possible maps, and we're going to represent a probability distribution over that set. That is absurdly large,
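As a quick sanity check on that count, here is a small sketch (Python, not from the lecture) of the number of possible occupancy grid maps for an n-per-side grid:

```python
# Number of distinct binary occupancy maps for an n-per-side grid:
# there are N = n**dims cells, each independently occupied (1) or free (0),
# giving 2**N possible maps, as derived in the lecture.

def num_possible_maps(n: int, dims: int = 2) -> int:
    """2**(n**dims): the count of possible occupancy grid maps."""
    num_cells = n ** dims  # n^2 in 2D, n^3 in 3D
    return 2 ** num_cells

print(num_possible_maps(2))          # 2x2 grid:   2**4 = 16 maps
print(num_possible_maps(3))          # 3x3 grid:   2**9 = 512 maps
print(num_possible_maps(2, dims=3))  # 2x2x2 grid: 2**8 = 256 maps
```

Even at n = 1000 in 2D this is 2 to the one million, which is why the distribution over maps is never going to be stored explicitly.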
but we're going to find some relatively memory-efficient, relatively computationally efficient way to define a probability distribution over that space, and that's why we're thinking of it as a random variable: because we're associating probabilities with each possibility for a map. Does that answer your question about the random variable? Yeah — the question is why we wouldn't just treat each cube, each cell in this case, as a random variable rather than the whole map. Good — that is actually what we're going to do, but it turns out to be an approximation, and I'll get to that in a second. Yes, each cell you can think of as a random variable, but conceptually, if you were computationally unlimited, you could think about representing a distribution in full generality over all possible maps, which is 2 to the capital N maps. Okay, but that's the main challenge: this is exponential in the square of the number of bins you have in each dimension, so exponential in the total number of cells. In practice — it depends on the size of the environment — you might want little n equal to, say, a thousand, and then this number is absolutely gigantic: 2 to the million is more than the number of atoms in the observable universe and whatnot. So this is not a computationally efficient way to think about things, but conceptually at least you can think about it. So the goal is going to be to represent a probability distribution — the probability associated with each map — given some information. I'll explain this notation in just a second: this x, one colon t, is just shorthand for x1 up to x at time t, and this is shorthand for z1, z2, up to z at time t. So this is the history of states — let's think about them as locations, the
history of locations that the robot has visited from time step one to the current time step, which is time t — and this is the history of sensor measurements. Just for the sake of concreteness, you can think of these as range measurements: the robot has a range sensor, a depth sensor, something that gives it a distance along many rays. That's one possibility, but it's something that tells the robot about its environment. As I said at the beginning, we're going to assume that the robot can localize itself, so this information — the history of locations the robot has visited — we're assuming the robot knows, and the sensor measurements are what it's directly sensing, so it knows those too. Given these two pieces of information, the history of past states and the history of sensor measurements, the robot needs to update its probability distribution over the set of maps. That's the conceptual framing of what we're going to try to do. Any questions on that? Okay — the question is about the history. So we're going to come up with a way of doing things that doesn't require you to think about the whole history every time. It's maybe not totally obvious right now, but we can revisit that point. We're not going to have to truncate things: in principle we'll be able to take into account all of the history, but pretty computationally efficiently. That's a good question. Okay, so let's think about the problem of exponential blow-up. It's not going to be possible to store 2 to the capital N numbers — the probabilities associated with the different maps — so we need some more computationally efficient representation. Just for the sake of concreteness, let's think of the map as a vector of size n squared, so capital N, by one. If there's a one in the i-th element of the vector,
that represents occupied — the cell is occupied — and if there's a zero, that represents that the cell is not occupied. I'm just making this representation concrete: let's say we pick some ordering — maybe one, two, three, four, five, six, seven, eight — some consistent ordering on the cells, and then we have this capital N by one vector that represents each map. Now we're going to make a big assumption, which is going to be the key to making things computationally efficient: for all i and j — these are indices ranging from one through capital N — we're going to assume that m_i and m_j are independent random variables. The way we're thinking about this is that each cell has associated with it a random variable which is just a zero or a one, so it's a Bernoulli random variable — you can think of it like a coin toss — and we're assuming that for two cells, whether each is occupied or not, those two random variables are independent of each other, for any pair. And just to be careful, let's say i is not equal to j: for any pair where we're not looking at the same random variable, for two different i and j, the random variables m_i and m_j are independent. Okay, so what do people think of this assumption? Does it seem reasonable, or what are the problems with it? Go ahead — in the real world things cluster — yeah, exactly. Real-world environments have a lot of structure, a lot of correlation: if a bunch of these cells are occupied, then it's likely that the next one is also occupied. So this is not actually true — it's not actually the case that these random variables are independent — but it's going to solve our problem of computational inefficiency, which I kind of hinted
at before. Let me pause and make sure that this point is clear — it's an important point, so I want to make sure the assumption is clear. If there are questions, I'm happy to try to answer them. Okay. So what's the benefit of this kind of representation, where we just assume that we have a bunch of independent random variables? The benefit comes in when we think about the distribution that we want to represent, which I wrote up there: the probability distribution over maps given a history of robot states or locations and a history of robot sensor measurements. With this assumption of independence, what does this distribution look like? Yes — good, perfect: it's the product, usually represented by this capital pi, over indices ranging from one through capital N, of P of m_i — that's the notation here — where m_i is the i-th element of m, so a particular cell. For each cell we have a distribution, and it's a distribution over a binary-valued variable, since m_i can either be zero or one. So we're representing each of these for each element, and then taking the product across all the cells; that's the pi, the product going from one through N. Question — is m_i a scalar? Yes — that's why it doesn't have the bar. Good question. Okay, maybe a quick clarification on the notation: the x one through t and z one through t are indexing time — these are all the locations from time step one through the current time t, and the same with the sensor measurements — whereas what we're indexing with i is the location of the cell: this might be the first cell,
this might be the second cell, third cell, fourth, and so on. That's the distinction in the notation. What we're saying here is that we're interested in figuring out our confidence about whether the i-th cell is occupied or not. Say this was 0.99: that means we're very confident that the i-th cell is occupied given these two pieces of information, which is all the information the robot has access to — all its locations from the first time step to the current time step, and all the sensor measurements from the first time step to the current time step. Does that make sense? The question is about how you compute this probability — yeah, good, so we're going to do things incrementally. Initially the robot just has some prior distribution over maps, and as time goes on, as it receives a location and a sensor measurement, it incrementally updates its confidences — its probabilities of cells being occupied or not. So at every time step the robot makes an update, and it hopefully gets more and more confident about whether or not cells are occupied. Good, okay. All right, so just to be explicit about the benefit: what we now need to store in the robot's memory is capital N probability distributions — capital N binary, zero-one random variables, with this assumption of independence — in contrast to the 2 to the capital N numbers we would have needed without the assumption. That's the benefit: we go from 2 to the capital N, which is some ridiculous number, down to just capital N, and that's a massive boost in memory efficiency, and computational efficiency as well. Okay, so how do we actually do this —
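Before moving on, the factorized belief just described can be written out as a minimal sketch (Python; the cell probabilities are invented for illustration): we store one Bernoulli parameter per cell, and the probability of any specific full map is the product of per-cell terms.

```python
# Factorized map belief: p_occ[i] = P(m_i = 1 | history). Storing N numbers
# instead of 2**N is exactly the memory saving discussed above. Under the
# independence assumption, P(m) = prod_i P(m_i).

def map_probability(map_bits, p_occ):
    """Probability of one specific map (a list of 0/1 occupancies)."""
    prob = 1.0
    for m_i, p_i in zip(map_bits, p_occ):
        prob *= p_i if m_i == 1 else (1.0 - p_i)
    return prob

# A tiny 3-cell "map" with example per-cell beliefs:
p_occ = [0.99, 0.2, 0.5]
print(map_probability([1, 0, 0], p_occ))  # 0.99 * 0.8 * 0.5 = 0.396
```

Note this is only cheap to evaluate for a single candidate map; the point of the representation is that we never enumerate all 2**N maps.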
this update of the robot's distribution over maps? Actually, there's going to be one more assumption: we're going to assume that we have some prior over maps, some prior distribution, so before the robot starts operating in a particular environment, before it receives any sensor measurements, the robot just has some prior distribution over maps. That prior is going to have the same representation — we're again assuming that the prior factors, that it multiplies out in this form. So what's a reasonable — or maybe not reasonable, but even just one possible — prior for mapping, a distribution over maps represented like this, before the robot sees any sensor measurements? Good, yeah: one possibility would be P of m_i equal to one — the probability that the i-th cell is occupied — equal to, say, 0.5 for all i. Intuitively, this says that before the robot has seen any sensor measurements, it's just a coin flip: any particular cell is as likely to be occupied as unoccupied. You can make this a bit more refined by changing the number: instead of 0.5, maybe you have some idea of how many cells in your environment could be occupied. If you think about this room, we might guess maybe 20 or 30 percent of the cells — 30 percent of the volume of this room — is occupied, maybe a bit less, so maybe this is closer to 0.2 or so. If you have that kind of rough information about how many cells are likely to be occupied, you can bake it into your prior. Okay, all right. So now we're actually going to talk about the mapping part. We've talked about a representation for
distributions over maps, so now the question is how we actually do the mapping. We're going to focus on just a particular m_i, just one random variable: imagine we pick one cell, maybe this cell over here — this is the i-th cell — and we're just going to focus on that. Because of the independence assumption, whatever I describe for some i-th cell, we'll do the same computation for all the cells, from one through capital N. We're going to denote by z_i the sensor measurement — the robot's sensor measurement corresponding to cell i. The specific sensor model depends on the specific sensor your robot has access to; for concreteness, imagine the robot has a range sensor, the second sensor we were talking about in the previous lecture, so there's a bunch of rays along which the robot gets the distance to the closest obstacle. Say the robot is over here and it has these rays: this gives it some information about which cells are occupied or not. The cells the ray passes through before it hits something are likely to be unoccupied, and the cell where it hits, that cell is likely to be occupied. We're going to keep things simple here and abstract away the specific sensor, and just imagine that for each cell the robot receives some sensor measurement telling it whether or not that particular cell is occupied. But that measurement could be wrong: there's some probability that what the sensor says is not actually the case — the sensor says the i-th cell is occupied when it's actually unoccupied. All right, so let's do this updating
of the robot's confidence. Again, we're focusing on this one particular cell m_i — the probability that that cell is occupied given some sensor measurements. What I'm going to do here is just the very first time step: before the robot receives any sensor measurements, at time zero, it has this prior belief over occupancies, the prior over maps. At the very first time step, the robot receives its location, which we're assuming is perfect — the robot has perfectly localized itself — and it receives some sensor measurement. The location we'll represent by x, and the sensor measurement by z_i; that's what I wrote up there, the sensor measurement corresponding to cell i. This is a slightly abstract kind of sensor model: again, for every cell we're saying our sensor tells us whether or not it's occupied, and there's some probability that the sensor is incorrect. Okay, so just for the first time step: the robot receives this information and needs to update its belief about whether or not the i-th cell is occupied. Any questions on what this means — what the sensor means, or any of this notation? Okay, so we're going to use Bayes' rule to do this update; as I said, this is going to be an implementation of the Bayes filter. So this is equal to — I'm actually skipping some algebra here; going from this to what I'm about to write down takes a few steps of manipulating probabilities. If you're interested, it might be a useful exercise to brush up on probability by filling them in, but you can take my word that this is true. It equals P of z_i given x and m_i equal to one, times the probability of m_i equal to one given x, divided by P of z_i given x. So this looks like Bayes' rule: we want the probability that the i-th cell is occupied
given this information, and we're kind of inverting that on the right-hand side. Like I said, there are a few steps I'm skipping for the sake of time, but you can try to get from here to here yourself. And there's one more step over here — going from here to here is an assumption. The only term that's different is this one: I've turned P of m_i equal to one given x into just P of m_i equal to one, and that is an assumption. Intuitively, can someone see what this assumption is saying, and whether it's reasonable? Go ahead — I think the assumption is saying that the probability of a grid cell being occupied is independent of what your state is. Yeah, exactly — and that may not be true. So it's not completely accurate, but it's not totally ridiculous either. The robot actually does get some information from its location: if the robot is in a certain location, that must mean that that cell is not occupied. But we're throwing that information away — we're just saying that the robot's location is independent of whether or not the i-th cell is occupied. If its location is somewhere far away from the i-th cell, this is pretty reasonable, but it's an assumption which makes the math simpler. Okay, so let me explain the different pieces on the right-hand side. The left side is what we're trying to calculate: our updated belief about whether the i-th cell is occupied given this new information the robot received at the very first time step. For the right-hand side, let's just look at these expressions — I'm going to call
this equation one. The right-hand side of equation one has a couple of terms. The first is P of z_i given the robot's state or location and m_i equal to one: this is our sensor model, versions of which we discussed in the previous lecture. It tells us the probability that the robot receives sensor measurement z_i given that it's in location x and that the cell is in fact occupied. Typically we assume this is given to us. The way this works in practice is that you do lots of experiments with your particular sensor: you take your sensor into a bunch of different environments, put your robot in different locations, and just see — if the i-th cell is in fact occupied and the sensor says it's occupied, what's the probability that it's correct? You would hope the probability that it's incorrect is small, so that the probability of it correctly reporting the occupancy of any cell is high. That's something you can characterize by doing experiments with your sensor, so we assume it's given to us. The second term over here is P of m_i equal to one — that's this term — and this is our prior: before the robot received any sensor information, it had some prior belief about whether the i-th cell is occupied, which is what I wrote up there. So that is also given. And the last term is this denominator, P of z_i given x. It turns out this is complicated to calculate, but there's going to be a trick to avoid the calculation. Okay, so let's look at that term for a bit: P of z_i, the sensor measurement, given the location. By
the rules of probability, which we discussed a few lectures ago, the product of the conditional distribution — P of z_i given x — with P of x equals the joint distribution; this is actually how we defined conditional probabilities when I gave the quick crash course on probability. I can then write this, by the rule of total probability, as a sum over m — summing over all possible maps — of the joint distribution. So, rearranging these terms, P of z_i given x is this sum divided by P of x. Okay, so in principle we could maybe calculate this — and actually this term you can rearrange as well — but I'll stop here. What's the problem? If you were actually going to implement this and evaluate this term, what would be the issue? Just think about the numerator: we're summing over m — m again is the map, there's a bar here — and you mentioned the number of maps: the number of possible maps is 2 to the capital N. So this summation is over 2 to the capital N possibilities, and that's not going to be computationally feasible. Calculating that denominator term directly this way is not feasible, but it turns out there's a nice trick which lets us bypass the computation. Similar to equation one, we can go through the exact same computation up there and get this expression, except that instead of m_i equal to one, we look at m_i equal to zero. By the same computation: P of m_i equal to zero given x and z_i equals P of z_i given x and m_i equal to zero, times P of m_i equal to zero, divided by the same denominator, P of z_i given x. I'll call this equation two. Okay, so the trick to not calculating
this denominator term is going to be to look at the ratio of equations one and two, so that term cancels out. We divide one by two: on the left-hand side we have P of m_i equal to one given x and z_i, over P of m_i equal to zero given x and z_i; on the right-hand side we have in the numerator P of z_i given x and m_i equal to one, divided by P of z_i given x — that's the right-hand side of one — with the right-hand side of two below it. So we get this nice cancellation: that was the term we didn't know how to calculate, but by taking the ratio it cancels out, and now everything on the right-hand side is something we can calculate. It just depends on the sensor model — again, assuming we've characterized our sensor and have a decent model — and on our prior. There's one more step before we wrap up the description. We know that P of m_i equal to one given x and z_i equals one minus P of m_i equal to zero given x and z_i, because these are binary-valued random variables — the occupancy of the i-th cell is either zero or one, so the probabilities have to sum to one. I'll call this equation three. So on the left-hand side of three, if we substitute the denominator with one minus the numerator, the only thing unknown is this one probability, and the right-hand side, like I said, is all known. From this we can calculate what the posterior is, and that's our update: for the i-th cell we received a sensor measurement and a robot location, we went through this
whole calculation, and we now have an updated probability that the i-th cell is occupied. Before receiving the sensor measurement we had some prior probability, P of m_i equal to one, which we just chose in a simple way; once the robot receives the sensor measurement, it can update its confidence that the i-th cell is occupied. Okay, so this was the first time step, and as described it was for one particular cell, the i-th cell. In practice you would do this computation for all of the cells, for i from one through capital N, and the nice thing is that it parallelizes: you can do the computations for all the different cells in parallel, since the computations don't depend on one another — that's by construction, because we assumed everything was independent, and that makes the computations parallelizable. The other note is that what I described here is for the very first time step: the robot has a prior map, does this update, and for every cell it now has an updated confidence that the cell is occupied or not. Someone asked about the history of sensor measurements — whether we'll have to take the whole history into account — so let's quickly think about how the next time step would go. To be explicit with a picture: at the zeroth time step, before receiving any sensor measurements, the confidences of cells being occupied are just, say, 0.2, 0.2, 0.2 for each cell. After the first update, the robot has new confidences: for some cells it's more confident that they're occupied, for others more confident that they're unoccupied. So how would the next time step go? How would you repeat the same kind of computation once the robot
receives a new sensor measurement? Yeah — it's actually the exact same computation as what we did over here; the only difference is that instead of this being the prior, we use our belief from the previous time step, and that plays the role of the prior. So at every time step the computation only involves quantities from the current sensor measurement, but we're incorporating all the information the robot has received by continually updating the map probabilities. Implicitly it is taking all past information into account, but the specific terms that appear only involve sensor measurements from the current time step. This is a general property of Bayes filtering: if you go back a couple of lectures and look at how we wrote down the Bayes filter — maintaining a belief over the state, the location, or here the map — at every time step that belief is a summary of all the information the robot has received, and the filter updates that summary at every step. Does that address your question? Okay, good. Another question — yeah: earlier we were talking about the state, so does the x represent just the initial state here? So the x, in what I wrote down over here, is the state at the first time step. The specific assumption we made was that P of m_i equal to one given x equals P of m_i equal to one; intuitively, this says the robot's location tells us nothing about whether the cell is occupied. We would continue to make that assumption even though it's not really true — the robot's
location is telling us that that specific location is not occupied — and even in the subsequent time steps we keep making this assumption, except that the right-hand side changes. The specific assumption becomes: P of m_i given the x at the current time step equals P of m_i, where the right-hand side is the most up-to-date belief about the occupancies. We still make this assumption just to make the math simplify. I didn't write it down explicitly, but instead of x, the location at the first time step, we'd have the location at the current time step, the t-th time step, and we make the same assumption. Does that make sense? And the question is what happens to the x's we were writing after that original equation — yeah, any x that appeared over here I was reading as the first time step, like x sub zero, x sub one, and that time index would change from one to t, the current time step. The computations are identical, except that it's the sensor measurement at the current time step, the state at the current time step, and this prior term is the belief before the current time step. Good — other questions? All right — I think the TAs sent out an announcement about the midterm review; it's going to be tomorrow at some point, and then we'll wrap up SLAM in the next lecture. Okay.
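To tie the pieces of this lecture together, here is a hedged sketch (Python; the hit and false-alarm rates are invented, not from the lecture) of the per-cell update: Bayes' rule with the odds-ratio trick from equations (1)–(3), applied recursively so that each posterior becomes the next step's prior.

```python
# Per-cell occupancy grid update. The sensor model P(z_i | m_i) is assumed to
# have been characterized experimentally; the rates below are illustrative.
P_HIT = 0.9     # P(z_i = 1 | m_i = 1): correctly reports "occupied"
P_FALSE = 0.15  # P(z_i = 1 | m_i = 0): false alarm on a free cell

def cell_update(belief: float, z: int) -> float:
    """One Bayes update of P(m_i = 1); P(z_i | x) cancels in the odds ratio."""
    like_occ = P_HIT if z == 1 else 1.0 - P_HIT       # P(z_i | m_i = 1)
    like_free = P_FALSE if z == 1 else 1.0 - P_FALSE  # P(z_i | m_i = 0)
    odds = (like_occ * belief) / (like_free * (1.0 - belief))  # eq (1) / eq (2)
    return odds / (1.0 + odds)  # recover the posterior via eq (3)

def filter_cell(measurements, prior: float = 0.2) -> float:
    """Recursive Bayes filter: each posterior becomes the next prior."""
    belief = prior
    for z in measurements:
        belief = cell_update(belief, z)
    return belief

print(round(cell_update(0.2, 1), 3))     # one "occupied" reading: 0.2 -> 0.6
print(round(filter_cell([1, 1, 1]), 3))  # repeated readings: confidence ~0.982
```

In practice you would run this same update for every cell i from 1 through N at each time step — in parallel, which is exactly what the independence assumption buys you.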
Introduction_to_Robotics_Princeton
Lecture_7_Princeton_Introduction_to_Robotics_Optimal_Discrete_Planning.txt
so just a reminder of where we left things in the previous lecture: we started our discussion of motion planning, and specifically motion planning in discrete spaces. We thought about how we can take a continuous motion planning problem — motion planning in a continuous space — and discretize it, which then allows us to use some pretty powerful techniques from graph search. Specifically, we discussed two algorithms in the previous lecture: breadth-first search, BFS, and depth-first search, DFS. I mentioned that there are basically two considerations when we're thinking about motion planning, and this is similar to the two considerations we had for feedback control. With feedback control we wanted our controller to stabilize the system and to be optimal in some sense; similarly, with motion planning we want feasibility — we want the planner to find some path from a starting configuration A to a configuration B that doesn't collide with any obstacles, and this was achieved by BFS and DFS. The second thing we want is some kind of optimality: we might have time, or length of the path, or some notion of energy that we want to minimize. We didn't really talk about that in the previous lecture; the goal for today is to discuss algorithms — still graph search algorithms, still motion planning on discretized spaces — that allow us to find optimal motion plans. This was the general structure of the algorithm we looked at in the previous lecture: both breadth-first search and depth-first search had this general structure, where the main data structure we kept track of was this queue, this priority queue, denoted by the letter Q. We basically take things out of the queue, explore their neighbors, add things to the queue, take more things out, and so on, incrementally searching through the graph until we get to the goal
location um so we're gonna still maintain this General kind of structure of the the algorithm uh but we're going to make two new modifications so the first one is that we're going to implement this Q dot get vertex a little bit differently uh and basically we're going to have some prioritization that's different from BFS and DFS so BFF and DFS use either first and first out or lasting for South uh today we're gonna implement this q.get vertex function a little bit more cleverly than we did in the previous lecture so that's going to be one uh the main really modification the second modification is this part of the the algorithm so this resolved duplicate X Prime so I'll say more about that uh as we uh discuss the the algorithm okay so I guess let's first Define exactly what we mean by optimality so what are we trying to be optimal with respect to and how do we Define um some notion of performance or metric um so the way we're gonna Define optimality is by thinking of a cost function which is also known as a loss function um and the way this is going to work is that for our district is problems we have these no these vertices and edges we have this graph structure we're going to assign a cost uh or a loss and those are terms that are used analogously uh we're gonna assign each uh Edge a scalar value which is we're going to think of as a cost or a loss and the total cost of a plan uh is basically the summation of all the costs for each Edge along the plan so if we look at one particular path from A to B so that's one particular plan we can sum up the the costs along all the edges and that's our cost for the buff and our goal is is basically to find some way of getting from the start configuration a to the gold configuration B uh without colliding with the obstacles that minimizes the total cost um so this is a fairly abstract formulation so the cost could correspond to time uh the cost could correspond to energy the simplest one is just length so if you just associate 
uh the number like one let's say just one unit of distance to each cost that's another perfectly reasonable cost function but in principle you could encode some complicated cost structures in this formulation so as an example if you think about uh maybe uh going up a hill so things that are on this side of the the graph have higher costs so edges on this side of Fire Sauce maybe edges on this side have a lower cost yeah you can make up I guess many different uh cost functions we're just going to work with this abstract formulation okay so the algorithms that we're going to discuss today um it's going to be useful to keep track of three different kinds of vertices so I'll give them some names so the first kind of vertex that's going to be useful to keep track of are vertices that we're going to call univitive so these are basically one of these that we have not yet explored so we haven't added them to our queue yet uh the second kind of what it says are what are known as alive vertices so these are vertices that are in the queue uh so there are things that we haven't yet taken out of the queue so we haven't explored uh the neighbors of the alive vertices and finally the the third category for the fuse are what we're going to call Deadwood Thief so intuitively these are vertices that don't have anything further to contribute to the search more formally uh or more precisely these are vertices whose neighbors we have explored so essentially these are one things that we've taken out of the queue and then explored their neighbors so yeah this is just some terminology that's gonna help us as we describe the these optimal planning algorithms okay uh so the first optimal planning aluminum that we're going to discuss is what's known as a dijkstra's algorithm uh the main idea behind the actual algorithm as I mentioned previously is to modify how the qdot get vertex a function is implemented so what do you prioritize for taking out of the queue and exploring and the actual 
algorithm is going to use an estimated cost to get from some Vortex which we're going to call X so sorry get to a Vertex from the starting vertex a and that estimated cost is going to be what we use to prioritize which Vortex to pull out of the queue to explore so I will get into the details of that and I guess this is a reference I mentioned the textbook planning algorithms by Steve Laval it's really excellent reference for all things motion planning so these Graph Search algorithms like certain particular chapter 2.2.2 in case you're interested in exploring more okay so I guess here here's the uh the idea of index are a bit more carefully written out so for each vertex X we're going to Define uh the scalar quantity uh C star of x that's going to correspond to the optimal cost uh to come from a to X so this is the the optimal so that's why we have a star there uh what we're going to maintain is an estimate of three star so an estimated optimal cost to come from a to X for each vertex X in our graph uh and the priority queue will be sorted according to this estimate which we're going to call cfx um of the the optimal customers so C star is the optimal cfx is what the algorithm actually maintains that's the the estimated cost to come uh 2 over at x a sorry 2 over X from the starting vertex a as we Explore More and More vertices uh we're gonna update uh these estimates cfx for every vertex X that we're exploring so that's the general idea we're basically modifying uh what we take out of the queue based on this estimated cost to come okay so we're gonna initialize uh C of x uh for all X other than the starting vertex we just infinite for the starting vertex we know what the optimal cost economies is a zero so to get from a to a there's no cost that you incur there's no Edge that you need to Traverse so we're just going to initialize uh C of a which is a famous C star of a to be zero for all other vertices we're going to say C of X is going to be infinite in our 
initialization and then each time a Vertex is considered so each time we pull our vertex out of the queue uh we're gonna estimate uh or we're going to update uh the estimated cost gun so C of X Prime uh is going to be updated to C of x plus the cost associated with the edge that gets from X to X Prime so if you remember the the algorithm for BFS and DFS from the last lecture we consider some vertex X that we pull out of the queue we then look at its neighbors um and so for that step when we look at the the neighbors of X so for each neighbor X Prime uh we're gonna do this update so we're gonna take the estimated cost to come for the vertex X and then we're going to add in the cost of traversing The Edge that takes you from X to X Prime okay uh and then the the line uh which I mentioned at the the beginning of the lecture was resolved duplicate X of uh of uh the algorithm uh is basically going to account for the fact that there might be multiple ways to get two some vertex X Prime so there might be one way we got to the vertex X-ray and previously in the algorithm uh maybe we while exploring the graph a bit more we found a different way to get to X Prime uh we're gonna update uh the estimated cost gun for X Prime uh if the newly found path is better than the one we found before so if the the new uh like C of X Plus L of B uh is lower than the the previous kind of best estimate of the cost to come for X Prime uh then we're going to update that estimated cost to come and then we're going to just reorder uh Q accordingly based on the the new estimated cost account okay so I guess one I'll go through an example so hopefully uh this will become super clear when we go through a concrete example uh here I'm just trying to convey the intuition and some of the data structures and the basic structure of the algorithm uh I guess one question you could ask is uh we're really maintaining an estimate fear facts of C star so the estimated cost to come but when does cfx become uh 
exactly equal to the optimal cost-to-come for a given vertex x? It turns out you can prove, via an induction-based argument, that once x is dead — once you've taken x out of the queue and explored its neighbors — the estimate C(x) becomes equal to the optimal cost-to-come C*(x) for that vertex. So when a vertex becomes dead, we know it cannot be reached with a lower cost. This is not completely obvious, but it's something you can prove by induction; if you want to see the proof laid out, look at chapters 2.2.2 and 2.3.3 of the Planning Algorithms book. Any questions on the general structure of the algorithm before we go through a concrete example?

All right, this is the example graph we're going to look at. Again, the goal is to get from the starting vertex a to the goal vertex b, and I've assigned costs to each of the edges: the numbers labeled along each edge are the cost of traversing that edge. To get from a to v3 (or v3 to a) you incur a cost of 2; to get from v2 to v1 (or v1 to v2) you incur a cost of 3. I'll mention again, as in the previous lecture, that right now we're looking at undirected graphs, so the cost of going from v1 to v2 is the same as going from v2 to v1. All of these algorithms can be modified in a not-too-complicated way to handle directed graphs as well, where the cost of going from v1 to v2 differs from the cost of going from v2 to v1; I think it's simpler, though, to work with the undirected version. That's the setup, so let's go through Dijkstra's algorithm and see it in action. Questions? — Yes: you can think of the costs as part of the problem definition. The cost depends on the specific application you have in mind, and we'll discuss a little later some of the challenges with defining costs; for now, imagine that a user has specified them. We don't get to modify them — our job right now is just to find the optimal plan from a to b. Other questions? Okay.

So we start by initializing the estimated cost-to-come for each vertex. For the starting vertex we know the cost to get there is 0; for all other vertices we initialize C(x) to infinity. We initialize the queue with a single vertex, the starting vertex a. So far it's basically the same as BFS or DFS, with this additional information we're keeping track of: the estimated cost-to-come for each vertex. We take the starting vertex out of the queue and start looking at its neighbors — there are three: v1, v2, and v3 — and as we look at each neighbor we update its cost-to-come. Consider v1: its new estimated cost-to-come is 0 + 1 — 0 because the cost-to-come of a was 0, and 1 because that's the cost of traversing the edge from a to v1. Since 1 is less than infinity, we've found a better way (indeed, a first way) to get to v1, so we lower its estimate from infinity to 1 and add v1 to the queue. Now we do the same for the two other neighbors of a. For v2, it's the same computation: the cost-to-come to v2 is the cost-to-come of a, which was 0, plus the cost of the edge connecting a to v2, which gives 7; 7 is less than infinity, so we update it and add v2 to the queue. Same thing for v3: 0 + 2 = 2, so we update v3's cost-to-come and add it to the queue. That's basically the first iteration of the algorithm: at the end we have updated costs-to-come for v1, v2, and v3, and the queue contains those three vertices. Any questions on that first iteration?

Let's keep going. The interesting step in the algorithm, as I mentioned before, is what to prioritize bringing out of the queue — what should we explore next? Dijkstra's algorithm chooses the vertex in the queue with the smallest cost-to-come estimate. In the queue we have three vertices with cost estimates 1, 7, and 2; the lowest is v1, so that's the one we prioritize — the one we take out of the queue to explore its neighbors. I'll mark with a check mark anything that is a dead vertex — anything whose neighbors we've finished updating cost estimates for. So let's look at v1 next, the vertex we prioritized. It has two neighbors: a, which is already dead, so we don't need to worry about it; and v2, which is not yet dead — we haven't taken it out of the queue. So we update the estimated cost-to-come for v2: the update is 1 + 3 — 1 because the estimate for v1 was 1, plus 3, the cost of the edge from v1 to v2 — and we see that 4 is smaller than our previous best estimate: 4 is smaller than 7.
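The iterations traced so far can be sketched in code. This is a minimal sketch in Python, not the lecture's own implementation: the dictionary encoding of the example graph is my own, and the stale-entry check plays the role of the "resolve duplicate x'" step, since Python's heapq has no decrease-key operation.

```python
import heapq

# Undirected example graph from the lecture; edge costs read off the figure.
edges = {
    ("a", "v1"): 1, ("a", "v2"): 7, ("a", "v3"): 2,
    ("v1", "v2"): 3, ("v2", "v3"): 5, ("v2", "b"): 1, ("v3", "b"): 7,
}

def neighbors(x):
    # Yield (neighbor, edge_cost) pairs; the graph is undirected.
    for (u, v), c in edges.items():
        if u == x:
            yield v, c
        elif v == x:
            yield u, c

def dijkstra(start, goal):
    C = {start: 0}            # estimated cost-to-come C(x)
    parent = {start: None}    # parent pointers for path reconstruction
    queue = [(0, start)]      # priority queue ordered by C(x)
    dead = set()
    while queue:
        cx, x = heapq.heappop(queue)
        if x in dead:
            continue          # stale entry: x was re-pushed with a lower cost
        dead.add(x)
        if x == goal:
            break
        for xp, edge_cost in neighbors(x):
            if xp in dead:
                continue
            new_cost = cx + edge_cost
            if new_cost < C.get(xp, float("inf")):   # resolve duplicates
                C[xp] = new_cost
                parent[xp] = x        # only update the parent on improvement
                heapq.heappush(queue, (new_cost, xp))
    # Walk parent pointers back from the goal to recover the optimal path.
    path, x = [], goal
    while x is not None:
        path.append(x)
        x = parent[x]
    return C[goal], path[::-1]

cost, path = dijkstra("a", "b")
print(cost, path)   # 5 ['a', 'v1', 'v2', 'b']
```

Running this reproduces the hand-traced result: optimal cost 5 along the path a → v1 → v2 → b.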
So we update: 7 was the estimate for the path that goes directly from a to v2; 4 is for the path a → v1 → v2. We mark v1 as fully explored — a dead vertex — with a check mark. At the end of this step we have two vertices in the queue, v2 and v3. Again we go through the queue and find the vertex with the smallest estimated cost-to-come: v2 has an estimate of 4, v3 has an estimate of 2, so the vertex we take out is v3. Let's go through the same process again. v3 has three neighbors: a, which is already fully explored, so we don't need to worry about it; v2, which is not yet dead; and b — both of those are alive. For v2, the estimated cost-to-come via v3 is 2 + 5: v3 had an estimated cost-to-come of 2, plus 5, the cost of going from v3 to v2. So — maybe just to check your understanding — what update do we need to make to the estimated cost-to-come for v2? Any thoughts? — Right, no update: this new way of getting to v2, going a → v3 → v2, is worse than the best previous way. The best previous way had a cost estimate of 4; this way has a cost estimate of 7, so we make no update to v2's estimate. The other neighbor of v3 is b, and we do update its cost-to-come estimate: 2 + 7 = 9, which is less than infinity, so we update it. At the end of this process we have two vertices in the queue, v2 and b. Again we take out the vertex with the smallest estimated cost-to-come, which is v2 with an estimate of 4. We mark it with a check mark and explore the neighbors of v2 that are still alive. The only such neighbor is the vertex b, and we update its cost estimate: 4 + 1 = 5, which is less than 9, our previous best estimate for b. We mark v2 with a check mark, so that vertex is also dead now. The last step is simple: we only have one vertex left in the queue, we take it out, and it's the goal vertex — that's the end of the algorithm. So we found — well, I haven't yet explicitly described how to extract a path from a to b, but we did find the optimal cost of getting from a to b: the optimal cost equals 5. Does anyone see how to get the actual optimal path out as well? — Exactly: it's the same process as we saw with BFS and DFS. Each time you update an estimated cost-to-come, you keep track of the parent vertex. The one thing you have to be slightly careful of is when there's no update — we saw one step where there was no update to the estimated cost-to-come, and there you shouldn't update the parent. The parent should always correspond to the best estimated cost-to-come. That's the only thing you need to be careful of when implementing Dijkstra that's slightly different from BFS and DFS.

All right, any questions on the algorithm? Hopefully the steps are clear, but I'm happy to go through anything that wasn't. [Question] Okay, so here we've marked a as a dead vertex, and we have these three vertices in the queue: v1, v2, and v3. We take out the vertex with the smallest estimated cost-to-come, which was 1, corresponding to v1, and explore its neighbors. It has only two neighbors, v2 and a; a is already fully explored — a dead vertex — so we don't need to do anything with it. The only alive neighbor is v2, so we update its estimated cost-to-come to 1 + 3 = 4, which is less than 7, and then we mark v1 as dead. What was the question about the next step? — Okay, on the next step we have two things in the queue, v2 and v3, and we take out the vertex with the smallest estimated cost-to-come. This is the main difference between Dijkstra's algorithm and BFS/DFS: those were just first-in first-out or last-in first-out, while here we prioritize exploring whatever has the lowest estimated cost-to-come. Between these two vertices, v2 has an estimated cost-to-come of 4 and v3 has an estimated cost-to-come of 2, so we take out v3 — that's what we did in the next iteration. Good. Other questions on the algorithm or the example?

Okay, so we went through a simple graph search problem; of course what we're really interested in is motion planning. We start off with some continuous planning domain with obstacles, discretize the space as we discussed in the previous lecture to get a graph, and then find an optimal way to get from the red star, our starting location, to the green star. This video shows how Dijkstra's algorithm explores the space. I think the cost here is distance — each edge has a cost of one, I believe, and you're also allowed to move diagonally — and I believe the shading of the colors corresponds to the cost-to-come: anything red has a low cost-to-come, anything green has a higher one. It's doing something reasonable, and at the end it finds what looks like an optimal path, but it's still a little slow — it's not super efficient, and we'll talk about that. The optimal path is found at the very end. Question? — Okay, so this exploration process happens, so to speak, in the head of the algorithm, ahead of the robot — it's not a physical exploration; the robot is not actually moving during this process. We're assuming someone has already done the exploration and given you a map of the environment, so this entire process happens before the robot moves: it plans out a path from the start to the goal location, and the actual motion is just the execution of the optimal plan it found. You can modify these algorithms to do exploration as well: if you don't know the map of the environment, you can physically explore using BFS or DFS, so the robot actually moves around and eventually reaches some goal. But that version of the problem is not the one we're thinking about here; we're saying someone gives you a map of the environment and the start and goal locations, and in the planning phase, before the robot moves, you run this algorithm. Does that make sense? Good. Other questions on this? Okay.

I really want to emphasize that these are not purely intellectual exercises — these algorithms really get used in practice. Graph search is a core component of lots of different things, not just in robotics but in domains beyond robotics. Think about Google Maps: that's a graph search problem. You want to get from location A to location B minimizing some cost, where the cost could correspond to various things like fuel, time, or distance. Google Maps mostly uses time as the cost, I think, though nowadays they also give you eco-friendly paths. Google Maps and other such mapping services use algorithms similar to Dijkstra's, with modifications — we'll discuss some of the modifications here today, and if you're interested, there's an article that talks about how these algorithms get used in practice.

All right, looking at this exploration again: as I mentioned, it's not super satisfying. It explores things in a relatively uniform way, and by the end of the exploration process it has basically explored everything it could have. There are some vertices around here that it avoids exploring, but everything else — over here, over here — the algorithm ends up exploring, which seems a bit wasteful. We know where the goal is — we'd like to just tell the algorithm to get there — but Dijkstra's algorithm doesn't do the exploration in a clever enough way; watching this animation is kind of frustrating. So one question we can ask is: can we somehow further reduce the number of vertices we explore during this graph search process? That's where the A* algorithm comes in — one of the most popular algorithms for discrete motion planning in robotics. The A* algorithm has essentially an identical structure to Dijkstra's algorithm; the only modification is how we prioritize things to take out of the queue. The Q.get_vertex function is implemented just slightly differently from what we did with Dijkstra's algorithm.
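As a concrete preview of that modified prioritization: A* orders the queue by f(x) = C(x) + h(x), the cost-to-come plus a heuristic underestimate of the cost-to-go, which is developed next. Here is a minimal sketch in Python on a small 4-connected grid with unit edge costs, using straight-line distance to the goal as the heuristic; the grid, the wall layout, and the function name astar_grid are illustrative assumptions of mine, not from the lecture.

```python
import heapq
import math

def astar_grid(start, goal, obstacles, width, height):
    """A* on a 4-connected grid with unit edge costs.

    h(x) is the Euclidean distance to the goal: it ignores obstacles,
    so it never overestimates the true cost-to-go (it is admissible).
    """
    def h(p):
        return math.dist(p, goal)

    C = {start: 0.0}                 # estimated cost-to-come C(x)
    parent = {start: None}
    queue = [(h(start), start)]      # priority queue ordered by f = C + h
    dead = set()
    while queue:
        _, x = heapq.heappop(queue)
        if x in dead:
            continue                 # stale entry with an outdated cost
        if x == goal:                # goal reached: walk parents back
            path, p = [], goal
            while p is not None:
                path.append(p)
                p = parent[p]
            return path[::-1]
        dead.add(x)
        cx, cy = x
        for xp in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            nx, ny = xp
            if not (0 <= nx < width and 0 <= ny < height):
                continue             # off the grid
            if xp in obstacles or xp in dead:
                continue
            new_cost = C[x] + 1      # unit edge cost
            if new_cost < C.get(xp, float("inf")):
                C[xp] = new_cost
                parent[xp] = x
                heapq.heappush(queue, (new_cost + h(xp), xp))
    return None                      # no collision-free path exists

# A 5x5 grid with a wall at x = 2; the path must detour around it at y = 4.
wall = {(2, 0), (2, 1), (2, 2), (2, 3)}
path = astar_grid((0, 0), (4, 0), wall, 5, 5)
print(len(path) - 1)   # 12 steps: up and around the wall, then back down
```

With this heuristic the search is biased toward the goal, but because h never overestimates, the returned path is still optimal: any route past the wall must pass through (2, 4), giving a minimum of 12 unit steps.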
The specific modification we're going to make: with Dijkstra's algorithm we used C(x), the estimated cost-to-come to x from a, to prioritize what to explore; with the A* algorithm we use a slightly different quantity, which we denote f, with f(x) = C(x) plus an additional term h(x) — h stands for heuristic. C(x) has the same meaning as before, the estimated cost-to-come from a to x; h(x) is the estimated cost-to-go from x to the goal vertex b. In a sense, f is an estimate of the total cost of getting from a to b while passing through x — that's the intuition to keep in mind. The first part, C(x), is the estimated cost of going from a to x; the h part is the estimated cost of going from x to the goal vertex b. One really important note: h(x), the estimated cost-to-go, needs to be an underestimate — a lower bound on the optimal cost-to-go from x to the goal location b. This is super important: without it, the A* algorithm is not guaranteed to work correctly, in the sense that it can output something that is not the optimal path from a to b — something suboptimal. You can construct examples (I won't go through them here) where, if h(x) is not an underestimate of the optimal cost-to-go from x to b, A* does the wrong thing and gives you a suboptimal path. So that's something important to keep in mind. Any thoughts on how, for a concrete motion planning problem in a robotics setting, we would go about choosing h? Say we look at the problem from the previous slide — a 2D motion planning problem with a start location, a goal location, and some obstacles. How would you define or compute h(x) for any given vertex x, given this condition that it needs to be a lower bound? Go ahead. — Right, zero is perfectly valid: zero is always a lower bound, but that reduces to Dijkstra's algorithm — if you set h(x) = 0 for all x, then f(x) is exactly equal to C(x) for all x, and that's the same as Dijkstra. This is one way to see that Dijkstra's algorithm is a special case of A*: just set h to zero. But you're not going to get anything beyond Dijkstra if you make that choice, even though it's perfectly valid. Any other thoughts? — Yes, good: you can basically ignore the obstacles and ask, if there were no obstacles, how would I get from x to b? Depending on exactly what the costs are — if the cost is distance — you can just use the Euclidean distance: ignoring all the obstacles, the cost you would incur going straight from x to b. That's one way to get an h(x) that is a valid lower bound on the optimal cost-to-go from x to b. Often we can get an underestimate of the optimal cost-to-go really easily, especially for motion planning problems, by ignoring all the obstacles — an almost trivial planning problem — and computing the cost-to-go from x to b for that relaxed problem. Questions on that? — If there happen to be no obstacles in the path, is it no longer an underestimate? — Sorry, by underestimate I really mean less than or equal to, not a strict inequality — equal to is also fine. Yeah, thank you. Other questions? Okay.

So we already discussed Dijkstra as a special case. Here's the A* algorithm spelled out in its entirety — part of the reason I did this is that you have to implement the A* algorithm in the assignment — but we've actually discussed all of it, so I'll go through it relatively quickly. The first block of code just initializes things: Q, our main data structure, is initialized with the starting vertex; we also initialize the cost-to-come and the cost-to-go, C and h, and the function f, which is the sum of C and h. Then there's the main loop that explores the vertices: if we take out something that is in the goal, we return. Here we're taking things out of the queue, and the main difference between A* and Dijkstra is that what we take out of the queue is not the vertex with the smallest estimated cost-to-come but rather the smallest f — the cost-to-come plus the cost-to-go. That's the main prioritization change: we take something out of the queue based on its f value, we look at its neighbors, and we do the updates to the estimated cost-to-come plus the estimated cost-to-go for the neighbors. If we find a path that's better than what we found before, we update the estimate, and we update the parent as well, which allows us to extract the actual path afterwards. So it's the exact same structure as Dijkstra; the only modification is what we take out of the queue. Questions on A*? As I mentioned already, keeping track of the parents allows you to get the actual path.

Let's look at A* in action. I mentioned that Dijkstra has this somewhat unsatisfying behavior of uniformly exploring things; A* has a much more intuitively appealing way of
exploring the space: it biases the search towards the goal, because vertices near the goal are prioritized for exploration next. You can see that a bunch of vertices over here — the edges of the space — are not explored by A*, so it's much more efficient in terms of what it needs to explore. [Question about the heuristic] The only condition for A* to return an optimal plan is that h needs to be an underestimate of h*, the optimal cost-to-go. If h doesn't have a monotonic relationship with the distance to the goal, the search might head in the wrong direction first, depending on how you define h, but the algorithm won't terminate incorrectly: it will still explore other directions before it terminates. A* is always guaranteed to return the optimal path. The optimal path may not be unique — there might be multiple paths with the same cost — and it will give you one of the paths that corresponds to the optimal way of getting from a to b. In the case you're describing it might be less efficient in terms of the search, but it will never terminate incorrectly; you could try working through your example and you'll see it still explores correctly. So, by taking into account an estimated cost-to-go, we make the search much more efficient.

It's interesting to think about some of the history of A*. The A* algorithm was invented by researchers working on a robot called Shakey, and the paper on Shakey, which I think was published around 1972, is actually a really amazing paper — if you're interested in robotics in general, I'd recommend taking a look at it. A lot of the ideas in that paper are still around in state-of-the-art approaches, and a lot of the questions they were thinking about are still open research questions. I'm going to play a video created by the inventors of Shakey — it's kind of amazingly modern in a way, even though the footage (complete with 1970s music) will look dated.

[Video] "We are experimenting with a mobile robot we call Shakey. Our goal is to give Shakey some abilities associated with intelligence — abilities like planning and learning. Even though the tasks we give it are simple, the programs needed to plan and coordinate its activities are complex. The main purpose of our research is to learn how to design these programs so that robots can be employed in a variety of tasks, ranging from space exploration to industrial automation. Shakey operates in this experimental environment of rooms, doorways, and simple objects. Shakey's television camera — he uses it to obtain visual information about his own location or about other objects. These walls are raised high enough that Shakey cannot see over them with the television camera. Shakey uses these feelers, or cat's whiskers, to tell him if he has bumped into anything; he can even push objects with this push bar. An executive program handles communication between the experimenter and the robot, and Shakey's movements are directly controlled by a set of low-level action programs. These programs convert orders, such as roll 2.1 feet, into appropriate commands to the vehicle. Pan and tilt are two other examples of low-level actions, and the low-level actions provide some of the building blocks for more complicated intermediate-level behaviors." I think this is showing the robot actually operating. It's not super visually impressive, but if you look at the paper you'll appreciate that there are so many ideas they came up with, including A*, and a lot of the problems they were thinking about back then are still relevant today. There's some computer vision — the robot actually had a camera, which it was using to figure out where obstacles were in the environment, and the obstacles were relatively uniformly colored to make the vision problem a bit easier. It was using planning algorithms like A* as it explored the space; it actually didn't assume it knew exactly where the obstacles were beforehand, so it was physically, incrementally exploring the space — that version of the problem we talked about earlier. If you're interested in this, I've posted a link to the video, and check out the paper as well.

Okay, so that was A* back in the 1970s — what's happened since then in planning? As with many other subdomains of robotics, and many other fields in general, machine learning has had a massive impact on the way we do things. One direct way you could think of incorporating learning into planning is by learning the heuristic. In the example we discussed before, we were manually specifying what the heuristic should be — some underestimate we come up with, like Euclidean distance to the goal vertex, or some other kind of valid under-approximation. One thing you could do instead is learn that heuristic — that cost-to-go estimate. The way you could do this is by solving lots and lots of motion planning problems — a bunch of different graphs, a bunch of different planning problems — and then fitting some function, like a neural network, that gives you an estimate of the cost-to-go from any vertex x to the goal vertex. That idea is actually what was used in the AlphaGo system that beat the world champion in Go, Lee Sedol, back in 2016. They were using a graph search algorithm — the specific algorithm they used is known as Monte Carlo tree search — but the main idea was to estimate the cost-to-go (the value function, as it's known in the learning domain) for different configurations of the Go board. If you have a particular configuration of the board at some given time, what you want to do is figure out the next best move, and you could do a search for the move that leads to the maximum probability of success in the future; that full search is intractable — there are lots and lots of possible sequences you would need to explore — so to trim how much you need to search, they were using a learned heuristic. The way it maps to our setting: instead of using a manually specified h(x), you learn h(x) from lots of previously solved planning problems. We're going to say much more about machine learning later in the course, but this is just giving you a preview of how it applies to motion planning. Questions on any of this? Okay.

All right, we have some time left, so for the last thing I want to discuss, I want to zoom out a little and talk about some broader implications of the technical material we've covered so far — specifically, the broader implications of this notion of optimality. Someone asked how the cost function gets defined — who specifies it? That's the question I want to consider at the end of this lecture. So far we've encountered this notion of optimality twice: when we looked at feedback control, we said we want stability and optimality, and LQR gave us a particular kind of notion of
optimality like the quadratic costs on State and control inputs and then we looked at optimal planning algorithms as well in today's lecture um so we consider relatively simple like cost functions and with lqr which is the quadratic like penalties on the deviation and state and deviation and control input um where uh like you're planning argument we're basically saying the option the cost could be like time or distance or energy or something like that uh but it turns out choosing a good cost function uh for a more you know challenging task is a problem like by itself um so the the main like I said that's how can we ensure uh that the cost function that we Define so this is some mathematical objects and automatic function that we're defining uh that's the really kind of precise like thing like somehow captures the less like precise thing that we have in our head right if we want our robot to do so happy we take a task and specify it in terms of a cost so this problem has a name so it's called the value alignment problem so the problem of specifying uh via a kind of precise function uh what we want our robot to do like how to make sure they're aligned with our like values right I like what we as the humans want our robots to do um so to make this more concrete there's a famous kind of a philosophical thought experiment so this was used by the philosopher mcbostrum around 20 years ago now so it's called a paper clip maximizer it's almost like any discussion you hear about the value line problem that makes the reference to this paperclip maximizer product experiment so the thought experiment here is really simple the really have a AI system or like a robot in our case whose only goal is to make as many paper clips as possible right so it's a really simple cost function so here you can think about this as a reward function so the robot gets a reward of one let's say for every paper clip that it manages to produce if you're thinking about another cost you can think of a 
cost of negative one for every paper clip that produces so we're minimizing the cost maximizing and privilege that's all it does all right so I guess what's the problem uh so here's what the Boston so the AI will realize quickly that it will be much better if there were no humans because humans might decide to switch it off because if humans do so there would be fewer paper clips of the human bodies contain a lot of atoms that could be made into Paperclip so the future of the AI would be trying to get it to work would be one of which there were a lot of paper clips but no right yeah I guess a lot of people think of this is this reasonable or like is this useful like a Start experiment of the paperclip maximizer any like objections or or or do you think like this is actually actually happened go ahead can you realize that like like you needed to go to like yeah scoring belt and paper like whatever material you're going to use yeah many possible outcomes to like something like this so there are many possible Avenues but I guess it's trying to find the optimal uh yeah the optimal like solution which is to maximize the amount of paper clips so I guess what you're saying is it could do something else first like when you go to the actual about mind selling materials makes paper clips but then it's going to realize oh there's like more more materials here on Earth which is like humans so I'm going to turn on the humans into paper clips as well the other kind of interesting like volunteer is the point about switching the robot off uh so the sentence where it says so the robot would quickly realize that it would be much better than humans because humans might decide to switch it off um so the technical term for that is an instrumental goal um so the robot was never instructed uh to try to not be switched off but if it's only goal is to maximize paper clips so it cannot do that if it's Switched Off so as an instrumental goal as kind of a side effect it's going to try to 
prevent itself from being switched off and yeah humans might switch it off and so it's gonna try to automate humans according to this I'm not sure what it is but there's something like somebody's laws there's like like yeah okay yeah sure yeah is this at all tied into that yeah it's kind of tied into that so asthma's laws are um well I guess they're kind of uh like fictional uh and yeah I mean they come off a fair bit when we think about like philosophy in robotics but the asthma abouts are like kind of hypothetical laws that you might want to encode into a robot so like not harming the humans um so here we're saying that we're not encoding like any kinds of laws like astronaut laws into our robot we're just saying maximize paper clips and these are some of the side effects that might then like until so one potential solution to this is to bake in like rules for our robot to prevent this kind of thing from happening but that actually turns out to be like fairly challenging by itself so I guess asmov's like books like the reason they're interesting is because the laws are not uh like kind of completely like water type so it has like stories basically involved like exceptions like the laws so the laws were like completely one of that I guess that would be yeah yeah definitely so that's that's uh what people like so we'd like to import values I guess so that's that's what I mean like valuable learning problem comes from so if you could somehow encode values into our robots maybe prevent something like that from happening but yeah it turns out to be relatively challenging to do that I guess other thoughts on this like does anyone think it's not realistic or or not we're thinking it was good I mean it doesn't really seem like and I think I think that's a kind of reasonable like counter argument like if you look at our robots today I showed like videos of many bloopers in lecture one we're like pretty far from like this level of like capability but I guess the proponents 
of this kind of thinking say okay like maybe you don't have to worry about it now but like maybe we'll see like some pretty rapid like progress as we have in the last like decade with the machine so we need to think about this to be prepared for like the time and this when like we are actually close to this kind of capability and it's interesting to think about it did you have a yeah I guess I mean or optimize uh value over like a center members yeah parameters that we decided they can be out of there is no yeah then not gonna consider that yeah right yeah yeah you could you could like bake him uh like some kind of additional like values into into the robot but then you have to think about like all the possibilities like here I think he's just laying up uh some potential like side effects of this maximizing paper clips but it might be like hard to enumerate like all the things that could go wrong so another kind of uh neat example of this is if you ask her about to if you give a robot like Rewards for cleaning things up in your leg room let's say so maybe this is the whole robot uh who gets a reward every time and they clean something up the robot could decide to like break things uh in order to clean things up right to like break things clean it up and like get a reward so that's another new example go ahead um how could this be using correctly so like I know when it's like it's like I want to hear assumptions is that everyone is an attorney is it wrong at some point so you kind of have to like build safeguards into it so that kind of is what this makes me think of yeah including something we're about to do like think like your logic might not line up exactly look like yep how you laid it out yeah yeah exactly yeah okay so yeah two possible uh okay let me sorry before that uh so yeah this will be concrete so this value line problem is like how can we make sure that our cost function like this mathematical object like actually aligns with our like human values uh 
and we need to somehow take into account side effects so a little about like breaking things to clean them up as well eliminating the humans or instrumental goals which is like trying to prevent itself from being switched off when we specify some uh so there might be yeah some some possible objections or like other arguments to this like paper clip maximization thought experiment uh one we're taking some of you like mentioned is that it sounds somewhat like far-fetched like we're pretty far away from uh like robots having uh like enough capabilities to really need to worry about this and the other one is like maybe somehow or can we not just like learn values from humans uh Super Robot is like observing humans kind of going about doing things can be something I learned from that the second uh Dependable Point here is going to tie into our like discussions on learning later on and maybe if we have time I'll say we can agree with that there's a value line problem but I just want to address the first one so like maybe we're not at the level of capability where it makes faster uh to really worry about this but I think we actually are so we don't see this yet in the context of Robotics but we do see it uh when we're thinking about uh like social media like Platforms in particular so yeah I guess I would argue lots of people argue that uh this value alignment problem is already a real like problem that we're encountering today uh when we think about uh like social media like albums but like prioritize what to show you uh on our social media platform uh so these algorithms have some objective so they're not like pure like engagement right I guess the the usual like steps that people beat like social media like purely engagement they're trying to maximize like other things but there's some kind of victim objective like some the cost function or the work function that the algorithm is using to prioritize like what to show you like popularized and like maximizing and they 
have like some correlation with engagement right so I guess if there's like no correlation with computation then clearly make a profit but whatever like the actual uh like objective is that they're maximizing uh can't have like pretty neat um like AdWords consequences I guess we've seen over the last few years so I think even though in robotics we're not at the level of capability where uh like the value language problem is necessarily like a really like wrestling challenge we see this in other domains so it can be interesting and important I think all right yeah I guess any last comments or thoughts okay I think that's all I had so I'll see you
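As a concrete footnote to the learned-heuristic idea from earlier in the lecture, here is a minimal sketch of A* where the heuristic is just a pluggable function. The toy graph and the hand-made h values below are invented purely for illustration; in the learned setting, h would instead be a regressor (for example a small neural network) fit to cost-to-go values collected from previously solved planning problems.

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search over a weighted graph given as {node: [(neighbor, edge_cost), ...]}.
    h(node) estimates the cost-to-go; it can be a hand-designed underestimate
    (e.g. Euclidean distance) or a model learned from solved planning problems."""
    frontier = [(h(start), 0.0, start, [start])]  # entries are (f = g + h, g, node, path)
    best_g = {start: 0.0}                         # cheapest known cost-to-reach per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g_new = g + cost
            if g_new < best_g.get(nbr, float("inf")):
                best_g[nbr] = g_new
                heapq.heappush(frontier, (g_new + h(nbr), g_new, nbr, path + [nbr]))
    return None, float("inf")

# Hypothetical 4-node graph; the heuristic here is a hand-made table,
# but it could equally be h = lambda n: model.predict(features(n)).
graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0), ("D", 5.0)], "C": [("D", 1.0)]}
h = {"A": 2.0, "B": 1.5, "C": 1.0, "D": 0.0}.get
path, cost = a_star(graph, "A", "D", h)
print(path, cost)  # ['A', 'B', 'C', 'D'] 3.0
```

Because h is just a callable, swapping the hand-designed underestimate for a learned model doesn't change the search code at all — only the quality of the node-expansion order, which is exactly where the learning pays off.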
Introduction to Robotics (Princeton)
Lecture 23: Reinforcement Learning
All right — maybe we can go ahead and get started. The plan for today is to cover reinforcement learning. This is going to be basically the last technical topic that we cover in this course. In the previous lecture we talked about imitation learning, which is the idea that you collect some demonstrations — some expert data from a human — and then you learn a neural network that tries to mimic what the human is doing. You can apply this in the context of driving, for instance — lane following — by learning a neural network mapping from the images that your robot car observes to control inputs in the form of steering, to mimic what you saw the human doing. And we saw this working in practice for an autonomous vehicle example — this is urban driving in the UK using imitation learning — and it works pretty well, right? The steering motion is a little jerky, but overall it mostly does the job.

One challenge with imitation learning is that we're somehow inherently limited, in terms of what we can achieve, by the demonstrations provided by the human. We're ultimately just trying to mimic what the human is doing; we're not really going beyond what the human has shown us how to do. So a natural follow-on question is: can we automatically discover strategies that are potentially better than just mimicking what the human is showing us? That's where reinforcement learning comes in.

To give you the basic idea behind reinforcement learning, here's a video of reinforcement learning working in practice in a simulator. The goal here is pretty simple: the car needs to park itself in the red rectangle that you see in the bottom left of the video. What's going on is that the car is given a penalty or a reward — if it gets close to the parking spot it gets a higher reward; if it crashes into something it gets a penalty, which you can think of as a negative reward. And this is basically trial-and-error learning, right? Try something, get a reward; try something else, get a reward. It tries all sorts of crazy things — as you can see, in the beginning it's very random, it crashes into all the other cars, crashes into everything. And this process keeps going — this is attempt number nine. I'm not going to play the whole video, it's very long, so let's skip ahead. It's still struggling at attempt 20,000 — it starts doing something kind of reasonable, at least it gets to the vicinity of the red parking spot, but then it overshoots. And we keep going — this is attempt 50,000, it's still learning. Attempt 200,000 — yes, it's still going. I'm going to skip right to the end here: 310,000 attempts, and it has learned how to park itself, and the final performance is relatively impressive. From different initial conditions it's able to park itself — let me just play that little clip: this one turns and backs up; this one parks itself really quickly; and this one backs up and then turns and parks. So the final performance, after this trial-and-error learning process, is actually fairly robust to initial conditions — here it does a kind of K-turn and parks itself.

All right, so that's the basic idea behind reinforcement learning. In reinforcement learning we have some robot — in the RL literature this is typically called the agent — that's interacting with some environment. The agent and the environment have some state. Here we're using the notation from our feedback control lectures: x_t is the state, u_t is a control input. At any given time there's some state of the robot and the environment; the robot picks a control input; there's a new state; and there's some cost or reward that the environment gives to the robot. That's the main signal the robot is trying to optimize — to minimize, if we're thinking about cost functions. The goal is to learn some controller — some mapping from states, or the robot's sensor observations, to actions — that will minimize the cost you're getting from the environment. So, as the video showed, RL is basically doing trial and error: we're going to parameterize some feedback controller — some mapping from states or observations to actions — using a neural network, for instance, and then we're going to tune the weights of the neural network, tune the controller, to try to minimize cost.

I showed you a slightly silly video just to show how RL works at a high level, but over the last few years there's been a lot of progress in RL. About eight or so years ago there was a lot of progress on using RL to solve video games — Atari games — and you can get kind of superhuman performance on many of those tasks. Another really famous example, from around six years ago, 2016, is the AlphaGo system from DeepMind — they famously beat the world champion in Go, Lee Sedol, back in 2016, using deep reinforcement learning techniques. And in the robotics community there has also been a lot of excitement around reinforcement learning. This is an example from Google — their arm farm, where they had a bunch of robots, I think about 14 or so robotic manipulators, that are learning to grasp using reinforcement learning. Again, the basic idea is the same: initially you start off just trying random things and you see how well you perform, and then over time you try to maximize the reward. The reward here corresponds to success — a reward of one if you pick something up and, let's say, a reward of zero if you don't — and after lots of trials you can get really good at grasping. I forget what the exact success rates were for this one, but nowadays, for these kinds of objects, you can get somewhere between 95 and 98 percent success rates. And a lot of these objects were not seen during training — at training time you train on a particular set of objects, and at test time you deploy on objects that you haven't seen before. You can see it doing some relatively clever things — maybe if I go back over here: here it separates the objects and picks up the yellow one. And it's able to pick up flexible, non-rigid objects as well, which has traditionally been challenging for more model-based grasping techniques. So basically it was able to learn to grasp a pretty diverse set of objects using deep reinforcement learning.

Even more recently — the last two or three years or so — there's been a resurgence of interest in using deep reinforcement learning for locomotion. This is from Marco Hutter's group at ETH Zurich, where they're using deep reinforcement learning to train quadruped robots — and also some bipedal robots — to walk over different kinds of terrain. The training happens in simulation — they simulate lots and lots of different robots that are all going through this trial-and-error process on different terrains — and then ultimately they deploy the learned controller on the real hardware system. This is the ANYmal quadruped robot that that group developed, and you can see that it's pretty good, right? It's able to walk over different terrains, navigate through different spaces, climb over different objects — all as the end product of the reinforcement learning process. We're actually going to spend a bunch of time looking at this paper in particular, figuring out what the ingredients were to make it work, once we've covered some of the basic math behind RL.

Some more general comments on RL. Reinforcement learning is generally much more challenging than supervised learning, for a number of reasons. In supervised learning, the data we're learning from is provided to us — we have some fixed, static dataset, like ImageNet, and it's just given to us. In reinforcement learning we are collecting the data: the robot is actively influencing the data that it sees. It tries something out, in a simulator or in the real world, and that influences the data it collects. Small changes to the robot's actions can have drastic impacts on the future data it sees, and that future data is what it's using to learn. So you need to reason about the dynamics — about how actions at the beginning of an episode can propagate and lead to drastically different outcomes in the future. This is related to a problem known as credit assignment in reinforcement learning: if something went wrong, figuring out exactly why — what action, maybe somewhere at the beginning of the trajectory, led to some kind of inevitable collision, say. You need to assign credit to actions potentially way back in the past for outcomes that happened much later, and this is a major technical challenge.

But the power of deep reinforcement learning comes from what I mentioned at the beginning of the lecture: the potential ability to automatically discover clever policies — feedback controllers — that are hard to specify explicitly, to write down by hand. It goes through this learning process and discovers things that, for instance, beat the world champion in Go. The other major appealing feature of deep reinforcement learning — the term "deep" here just means that deep neural networks are used somewhere in the reinforcement learning; exactly how depends on the specific method, and we'll get into that — is the ability to handle rich sensory inputs. If your robot has vision or depth, those are pretty high-dimensional sensory observations, and turning them into representations that you can then use to plan and select actions is one of the really powerful things about deep RL. In practice, to apply deep reinforcement learning you either need a good simulator — as we saw in the video, the trial-and-error learning process can take lots and lots of iterations, and while the robot is learning it can do really silly things — or the ability to perform lots and lots of hardware experiments. That's what they did with the Google arm farm: they set up many different arms and lots of different objects, and they automated the data collection process, and that's what made it possible.

Okay, so that's the high-level overview of deep reinforcement learning — any questions on that? Okay. So let's start trying to formalize this idea of reinforcement learning. The main technical concept that RL works with is what's known as a Markov decision process, an MDP. The notation we're going to use in today's lecture is consistent with the RL literature, which uses a slightly different set of conventions than people in feedback control do — the meanings are the same, but the letters are different. We're going to use s_t to denote the state of the robot and its environment at time step t, and a_t to denote the action. Previously, when we talked about feedback control — and through most of the course — we used x_t as our state and u_t as our control input. The meanings are identical; the symbols differ to be consistent with the RL literature. So: s_t is the state, a_t is the control input, or action.

A Markov decision process, an MDP, is specified by: some state space — something we're already familiar with from feedback control; some action space — our control input space, also familiar from feedback control; and some dynamics, potentially probabilistic dynamics — also familiar from when we talked about Bayesian filtering and particle filtering. This is exactly that: it assigns the probability of the next state, p(s_{t+1} | s_t, a_t), given the previous state s_t and the action a_t that the robot took.
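To make the probabilistic dynamics p(s_{t+1} | s_t, a_t) concrete, here's a minimal sketch with a made-up two-state, two-action MDP. The transition table below is invented purely for illustration — any real problem's dynamics would of course look different.

```python
import random

# A made-up two-state, two-action MDP, just to make p(s_{t+1} | s_t, a_t) concrete.
# P[(s, a)] maps each possible next state to its probability.
P = {
    (0, 0): {0: 0.9, 1: 0.1},   # in state 0, action 0 usually keeps you in state 0
    (0, 1): {0: 0.2, 1: 0.8},   # action 1 usually moves you to state 1
    (1, 0): {0: 0.8, 1: 0.2},
    (1, 1): {0: 0.1, 1: 0.9},
}

def sample_next_state(s, a, rng=random):
    """Draw s_{t+1} ~ p(. | s_t = s, a_t = a) from the transition table."""
    states = list(P[(s, a)].keys())
    probs = list(P[(s, a)].values())
    return rng.choices(states, weights=probs, k=1)[0]

# Rolling out a fixed action sequence from s_0 = 0:
s = 0
for a in [1, 1, 0]:
    s = sample_next_state(s, a)
print("final state:", s)  # stochastic: either 0 or 1
```

The point of the table form is just that, given (s_t, a_t), the next state is a random draw rather than a deterministic function — that's the extra ingredient compared with the deterministic dynamics we used in the feedback control lectures.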
uh and we have some distribution or initial States so B of s0 uh the so I guess everything Above This reward function uh is familiar to us from when we talked about basic filtering right Dynamics model distribution over initial States uh State space action space uh the one I guess new thing uh when we're defining a market of decision process is the reward function so this is analogous to the the cost function uh which we saw when we talked about lqr you can think of the reward as just the negation of the cost so I guess RL people are maybe more optimistic they think about rewards uh rather than the costs but it's the the same basic idea so uh your robot gets some reward so this is a scalar typically non-negative reward that's a function of the state at time t s t and the control input like the action 80. um so I guess with that set up any any questions on on the uh whoops sorry on the setup here okay so yeah with that set up our goal is to find some policy uh so again the the term policy uh is just a feedback controller this is another one of those things where there's a distinction uh in their terminology between feedback control and RL so I'm going to try to stay consistent with the the RL terminology so I'll say policy but yeah you can think of it as a feedback controller so our goal is to find a policy that maximizes the expected reward over some time given time Horizon uh we're gonna work I guess for some technical reasons with the stochastic policies um so this is a distribution over actions so at any particular State uh the policy is going to map that state to a distribution over controlling controls rather than like a deterministic choice of of action and yeah I guess this is for some technical reasons that we'll get into a little bit later um so formally uh the objective in reinforce learning uh is to find some policy so find some Pi that maximizes the cumulative expected reward so summation over zero to some capital T that's our time Horizon potentially T 
could actually be Infinity so an infinite time Horizon we're summing up the expected reward at the the different uh uh and the randomness here uh comes from two places uh so one is because we have probabilistic Dynamics so given a state in our action we have a distribution over the future States uh and we have a stochastic policy so given a particular State the choice of actions is Is Random um there's a slightly different version that's uh sometimes kind of convenient to look at which is a discounted version so everything is kind of identical here except that we have a gamma to the T that's inside the summation and Gamma is some scalar that's less than or equal to one and basically what this is doing is setting an effective time Horizon so typically you use discounts for cases by the time Horizon the capital city is infinite so what this is saying is that for short t for small T care more about the the reward so when Tia small uh gamma to the t is something that's close to one so when T is large gamma to the t is something that's small so this is a more heavily weighing rewards that are in the near future less heavily Way Rewards that are far away uh we're not going to do too much with the discounted version but yeah I guess it's good to know about since it's often used in practice any questions on the formal kind of problem set up and what we're trying to do in NRL all right and there's a different variant as well uh partially observable markup deficient process or upon VP so this is the same setup as a mdp with the exception that we don't get to observe the true state so with an MVP the the policy is a function of the state with the Palm DP we don't get to see the the true State what we get is some observation and again this is something that we're familiar with so when we talked about Bayesian filtering we had a observation model a sensor model so that's exactly what this is in the reinforcer learning literature this kind of distinction between mdps and palmdps 
It's sometimes ignored, or at least not paid that much attention to. People often take some sequence of observations, say a sequence of images that your robot is getting, as corresponding to the state, and then forget about the partial observability: whatever I'm actually observing, like the sequence of observations from my camera, I'm just going to treat as the state, and I'm going to pretend I have a Markov decision process. That's what we're going to do in this lecture: we're not going to think much about the partially observable case, we're just going to assume we have access to the state, and in principle the state could be something high-dimensional, like the images the robot is getting, or a sequence of images from previous time steps. All right. I think it's useful to think through what I described here with the MDP and connect it back to our feedback control lectures. When we discussed optimal control, specifically LQR, the setup was very similar: we had some linear dynamics, we had a cost function analogous to the reward, and we were trying to find some policy, some feedback controller, some mapping from states to actions that minimizes the cost. Here we're maximizing the reward, but it's the same basic idea. One distinction in terms of emphasis in reinforcement learning has to do with whether or not we assume direct knowledge of the dynamics of the system. There are many variants of RL, but often the focus is on coming up with algorithms that don't require explicit knowledge of the dynamics, so don't require an explicit form for p(s_{t+1} | s_t, a_t). That's what we're going to do in this lecture: we're going to develop a model-free reinforcement learning algorithm. There's still some actual
dynamics; we're just not going to assume knowledge of them. So I guess the main assumption then... sorry, go ahead. Why would you do that? Part of the motivation is to do reinforcement learning in hardware. I think the clearest justification is that if you're doing reinforcement learning on the actual hardware system, it's a convenient assumption that you don't have explicit knowledge of the dynamics, because then you don't have to come up with a dynamics model for your hardware system; you just try things out, see how things perform, and then adjust the policy to improve the reward. I think that's the main justification. Often, though, people don't do that: people do reinforcement learning in simulation, and as soon as you have a simulator you have a model, because someone wrote the simulator and it explicitly simulates the dynamics of the system, and there the justification is not as clear. And there are other techniques that are also justified beyond model-free techniques; model-based reinforcement learning techniques are justified in that setting. It's a slightly odd thing in the community: people often apply model-free reinforcement learning techniques in a simulator and sort of pretend they don't have a model, but it's in sim, and there must be a model because someone wrote the sim, so there's a bit of a disconnect there. Another question? Yeah, like I said, the clear justification is when the true dynamics are really hard to model, say fluid dynamics, or contact, which is often pretty hard to model, and you're doing learning on hardware; then it makes sense, since you're not explicitly coming up with a model. But often that's not what
people do; people do it in sim. It's still valid, right? You can ignore the fact that the simulator has the dynamics encoded as part of it and just apply a model-free technique, and that's often what people do. Partly it's just convenience; partly it's that the simulator can be pretty complicated, so writing down this probabilistic model explicitly, even with a simulator in hand, could be annoying or a little bit challenging. Good questions. Other questions? Yeah, so simulators are often based on first principles, or mostly based on first principles with some augmentation, maybe some components that are learned from data. The most popular simulators that people use in reinforcement learning are physics-based; the equations are not learned. The parameters could be fit from real data, but the equations of motion themselves are written down. All right, I'm going to switch to the blackboard. The specific approach that we're going to look at is what's known as policy gradient. I'm going to describe the most basic version of policy gradient, I'll point you to some extensions that are used in practice, and then, like I said, we'll think about all the different tricks you need to actually make this stuff work in practice for robotic systems. The high-level strategy behind policy gradient is pretty straightforward; it's basically what was illustrated in that car video. We're going to start by choosing some policy; a policy, again, is just a stochastic mapping from states
to control inputs. We're going to call it pi_theta, and you should think of this as a neural network parameterized by weights theta. The output of the neural network is a distribution over actions, a distribution over control inputs. If you have a finite number of actions, you can think of the output of the network as a discrete probability vector: a vector that sums to one, where each element is between zero and one. If you have continuous actions, you can think of the policy as outputting the parameters of a distribution, say a Gaussian distribution over control inputs, and the input is the state. Let me write it down again: pi_theta assigns some probability or likelihood to actions for any given state. So we select some initial policy, some neural network parameters theta, maybe randomly, and then we're going to estimate the gradient of the thing we're trying to maximize, which is the expected cumulative reward; this is the gradient with respect to our neural network parameters. Then we're going to do gradient ascent: we're trying to maximize the expected reward, so it's ascent rather than gradient descent, and we update our parameters theta in the direction of the estimated gradient, and we keep doing this. We try running the policy, we see what happens, based on what happens we somehow estimate this gradient, we take a step in the direction of the gradient, and we repeat until we hopefully converge to something good. Any questions on the high-level strategy here? Exactly, yeah: theta is what is parameterizing the policy. If I give you a particular setting for the weights theta, that gives us a particular policy, a
particular mapping from states to distributions over actions. Yeah, you can think of this as maybe a convolutional neural network that has a certain set of weights, and theta is just the concatenation of all those weights. Good. Okay, so just pictorially: a useful picture to have in your mind is that you have a particular policy and you run it, in this case let's say three different times. This is some initial state, and these are three different trajectories that you get from running your policy; maybe there are slightly different initial conditions, there's stochasticity coming from the policy, and potentially stochasticity in the dynamics of the system as well. Maybe you find that these two trajectories lead to high reward and this trajectory leads to low reward. Intuitively, what we should do is increase the probability of choosing the actions that were chosen in the high-reward trajectories and decrease the probability of the actions that were chosen in the low-reward trajectory, and the specific adjustment you make is going to be based on the estimated gradient of the expected cumulative reward. And one more thing to emphasize: what I'm going to describe is going to be model-free.
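As a concrete sketch of the kind of policy parameterization described above, here is a toy discrete policy where a small linear map stands in for the real neural network; all names, sizes, and values are illustrative assumptions, not anything from the lecture:

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

class DiscretePolicy:
    """pi_theta(a | s): a tiny linear 'network' standing in for a real one."""
    def __init__(self, state_dim, n_actions, rng):
        self.W = 0.1 * rng.standard_normal((n_actions, state_dim))  # theta

    def action_probs(self, s):
        # Discrete probability vector over actions: sums to one, entries in [0, 1].
        return softmax(self.W @ s)

    def sample(self, s, rng):
        p = self.action_probs(s)
        return rng.choice(len(p), p=p)   # stochastic action choice

rng = np.random.default_rng(0)
pi = DiscretePolicy(state_dim=4, n_actions=3, rng=rng)
p = pi.action_probs(np.ones(4))
print(p)   # a valid probability vector over the 3 actions
```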
So we won't assume explicit knowledge of the dynamics p(s_{t+1} | s_t, a_t). There is some dynamics, of course; we're just going to assume we don't know exactly what it is. We'll only assume the ability to try things out, in the real world or in the simulator, and based on our trials we're going to try to improve the policy. Okay, let's start by making an observation about the probability, which I'm going to subscript with theta: the joint distribution over states and actions s_0, a_0, s_1, a_1, and so on. If we expand this joint distribution out, just by the definition of joint and conditional distributions, we get the initial state distribution p(s_0), times a product over t of the dynamics distribution, the probability of s_{t+1} given s_t and a_t, times the policy pi_theta(a_t | s_t). And we're going to use a shorthand notation: I'll call this joint distribution p_theta(tau), where tau is a state-action trajectory, so it's just s_0, a_0, s_1, a_1, all the way up to s_T, a_T. With this notation, our goal is to find some optimal setting of the neural network parameters, some optimal policy, which is just the argmax over theta of the expected cumulative reward. Yes, this is just helping us compress notation a little: tau stands for trajectory, and what we're doing here is looking at the expected value, over the randomness in trajectories, which again comes from two sources, the randomness in the dynamics and the randomness in the policy, of the cumulative reward. Any questions on this notation? All right. So, since we're trying to maximize this, one thing we can do is estimate this expectation by
taking a bunch of samples. So we can approximately estimate this quantity, which I'm going to call J(theta): for any policy we have an expected reward, and that's the thing we're trying to maximize. This is similar to supervised learning, where we try to minimize the training loss; here we try to maximize the expected reward. We can approximately calculate J(theta) by taking a bunch of samples: we fix theta, run the policy capital N times, end up with capital N different state-action trajectories, and just look at the average summed reward, and that's an estimate of the thing we're trying to maximize. So one thing we could try is finite differences. We talked about this when we discussed empirical risk minimization in supervised learning: you perturb theta in each component, you see what reward you get for the unperturbed version and for the perturbed version, you look at the difference and divide by the size of the perturbation, and that gives you an estimate of that component of the gradient; then you can take a step in the direction of the gradient. Does that seem reasonable? What are some challenges with it? How long will it take, how many rollouts, to estimate the whole gradient vector? Right: the number of parameters is really large. Theta is all the weights of a neural network, maybe millions of parameters, and to get each component of the gradient you need to perturb that component, run your policy, and look at the difference, for every component of theta. That's going to take a really long time; it's not particularly efficient or feasible.
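The sample-average estimate of J(theta) described above might look like this in code; the toy rollout here is invented purely so the estimate has a known true value to compare against:

```python
import numpy as np

def estimate_J(rollout, n_trajectories, rng):
    """Monte Carlo estimate of J(theta): average cumulative reward over N rollouts."""
    returns = [sum(rollout(rng)) for _ in range(n_trajectories)]
    return float(np.mean(returns))

# Toy 'rollout' whose per-step rewards are Bernoulli(0.5) over 10 steps;
# the true expected return is 5, so the sample average should land near 5 for large N.
toy_rollout = lambda rng: rng.integers(0, 2, size=10)

rng = np.random.default_rng(0)
print(estimate_J(toy_rollout, 1000, rng))   # close to 5.0
```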
So yeah, finite differences is something you can always try, but it's rarely efficient; we're going to find a different way to estimate the gradient of J(theta), the expected cumulative reward, with respect to theta, and again we're going to do it in a model-free way: our estimation procedure will not assume knowledge of the dynamics of the system. Let me make things a little more compact. J(theta) is the expectation, over the randomness in trajectories, of the cumulative reward, and I'm going to abbreviate the cumulative reward as R(tau), the reward as a function of the trajectory: I'm defining it to be the summation of the rewards r(s_t, a_t) from t = 0 to capital T. Let me expand this out a bit. By definition, the expected value is the integral over tau of p_theta(tau) R(tau), where p_theta is the joint distribution over states and actions we defined up there. So the gradient of J(theta) with respect to theta is the integral of the gradient of p_theta(tau) times R(tau). This is a valid expression for the gradient, and it's the thing we're trying to estimate; once we have it estimated, we just update theta in the direction of the gradient. So this term, the gradient of p_theta(tau) with respect to theta: is that something we can calculate? If we can, then we're in business: we can approximate the integral with a bunch of samples and estimate the gradient that way. Can we compute it directly, given the setup we have? Directly, this term involves the probability of the next state given the previous state and the control input, the dynamics, which we're assuming we don't know. But yeah, I
guess you're right: there's a trick, basically, to get this to depend purely on gradients with respect to the policy, which we do know, rather than on the dynamics, which we're assuming we don't. It's a neat trick that shows up quite a bit in reinforcement learning and in other contexts as well, known as the log trick, or the policy gradient log trick; sometimes it's also called the REINFORCE trick, after REINFORCE, the method in the paper that first used it to do policy gradient. The trick is to rewrite this term, the gradient of p_theta(tau), by first just multiplying and dividing by p_theta(tau). We haven't done anything yet; we're just multiplying and dividing by the same term. And this part is the gradient of log p_theta(tau), which follows from the chain rule: if we look at the gradient of the log of p_theta(tau), by the chain rule it's one over p_theta(tau) multiplied by the gradient of p_theta(tau), and that's exactly what we have over here. This portion over here is exactly that: one over p_theta(tau), that's this part, multiplied by the gradient of p_theta(tau), and that's that part. Any questions on this bit of algebra? Okay, so we're going to rewrite the gradient using this trick: the gradient of J(theta) is the integral of p_theta(tau) times the gradient of log p_theta(tau) times R(tau). Now, we haven't obviously solved the problem: instead of the gradient of p_theta we have something that looks even more complicated, the gradient of log p_theta, so it's not clear we've made any progress. But if we then expand out p_theta, plugging the p_theta(tau) expression into that expression over there, we have the gradient of the log term inside.
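Written out compactly, the multiply-and-divide step just described is:

```latex
\nabla_\theta J(\theta)
  = \int \nabla_\theta p_\theta(\tau)\, R(\tau)\, d\tau
  = \int p_\theta(\tau)\, \frac{\nabla_\theta p_\theta(\tau)}{p_\theta(\tau)}\, R(\tau)\, d\tau
  = \int p_\theta(\tau)\, \nabla_\theta \log p_\theta(\tau)\, R(\tau)\, d\tau
  = \mathbb{E}_{\tau \sim p_\theta}\!\left[ \nabla_\theta \log p_\theta(\tau)\, R(\tau) \right],
```

using the chain-rule identity \(\nabla_\theta \log p_\theta(\tau) = \nabla_\theta p_\theta(\tau) / p_\theta(\tau)\).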
Yeah, I guess the complicated term is now log p_theta(tau), or rather its gradient. Let's actually look at log p_theta(tau): it's the log of a product, which becomes a summation, and that's exactly why we introduced the log, to turn the product into a summation. So we have the log of p(s_0), plus a summation of log pi_theta(a_t | s_t), plus a summation of the logs of the dynamics terms. Now, if we look at the gradient of the right-hand side with respect to theta, the only dependence on theta is from the policy terms; the initial-state term and the dynamics terms do not depend on theta, so when we take the gradient they contribute nothing, and we're just left with the summation of the gradient with respect to theta of log pi_theta(a_t | s_t). So we're almost there. Any questions on this calculation? All right, the last step is to just plug in. We had the gradient expression over there; call that equation one, and call this two. The gradient of J(theta) is the expected value of the gradient of log p_theta(tau) times R(tau); that's just from equation one, rewriting the integral as an expectation, since the integral of a probability times something is just the expectation of that something. And the gradient term we've calculated over here, so we can plug it back in: the gradient of log p_theta(tau) is the summation over t of the gradient with respect to theta of log pi_theta(a_t | s_t), all multiplied by the reward. So I guess, what have we done? The main thing we've done is turned the gradient
of p_theta(tau) over there into a gradient of something that just involves the policy. The policy is something we are explicitly writing down as a neural network, so we can actually calculate this gradient using automatic differentiation, for example, the kind of technique used in the previous assignment. Maybe I'll write this down slightly more explicitly: the reward term is just the summation of r(s_t, a_t), and the expectation we can approximate via sampling. We run our policy a bunch of times, say capital N times, and average: one over N, times the sum over i from 1 to N of this term over here. And the nice thing is that everything on this right-hand side is explicitly computable via sampling. Questions? Oh, sorry, yes, thank you, that belongs over here. Is this the distribution over actions given the current state? Yes, and because we're parameterizing the policy using a neural network, this term is explicitly computable; we can take the gradient of the log. And this term over here we just index with i, since we're sampling: if we run the policy capital N times, we end up with capital N different state-action trajectories, explicit values for the states and actions from time zero to capital T. So we can calculate what this is, calculate what that is, take the empirical average, and that's our estimate of the gradient. And again, I'll emphasize that this is model-free, in the sense that nothing over here assumes explicit knowledge of the dynamics of the system: we're just taking gradients with respect to the policy, and over here we're just looking at the rewards that the robot is getting from the environment.
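For a concrete instance of the estimator just described, here is a sketch for a linear-softmax policy, where the gradient of log pi with respect to the logits has the well-known closed form (one-hot of the chosen action minus the probability vector); the shapes and the trajectory format are assumptions of this sketch, not the lecture's notation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_log_pi(theta, s, a):
    """grad_theta log pi_theta(a|s) for a linear-softmax policy with logits = theta @ s.
    For softmax logits, d log pi(a) / d logits = onehot(a) - pi; chain rule brings in s."""
    p = softmax(theta @ s)
    onehot = np.zeros_like(p)
    onehot[a] = 1.0
    return np.outer(onehot - p, s)          # same shape as theta

def policy_gradient_estimate(theta, trajectories):
    """REINFORCE-style estimate: average over sampled trajectories of
    (sum_t grad log pi(a_t|s_t)) * R(tau)."""
    g = np.zeros_like(theta)
    for states, actions, rewards in trajectories:
        R = sum(rewards)                    # cumulative reward R(tau)
        g += sum(grad_log_pi(theta, s, a) for s, a in zip(states, actions)) * R
    return g / len(trajectories)
```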
And again, nothing over here requires knowledge of p(s_{t+1} | s_t, a_t). Okay, so that was the complicated part: finding a nice way to estimate the policy gradient. Once we've estimated it, we can just apply gradient ascent. So REINFORCE, which is the name of the algorithm from the paper that first used this idea for policy gradient, goes like this. The first step: sample a bunch of trajectories, capital N of them, using your current policy pi_theta. The second step: estimate the gradient using, where did it go, equation three up there. The third step: take a step, updating theta to theta plus some lambda, our learning rate, times the estimated gradient. And that's pretty much it. Lambda is a parameter we choose; it's the learning rate, analogous to the learning rate in supervised learning, and it basically sets how large a step you take in every update of theta. All right, any questions on the algorithm or the algebra? One important observation is that the learning rate can have a really large impact on whether or not this algorithm works well in practice. If you choose a learning rate that's very small, it's just going to take a while for your policy to converge to something good; if you set it too large, you might take a step in policy parameter space that leads to a policy that's very bad, and it might be impossible to recover from that. The reason is that in reinforcement learning, the data you see, which is what allows you to estimate the gradient, is influenced by the policy: the trajectories we use to estimate the gradient depend on the policy itself.
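To make the three steps concrete, here is the loop on a deliberately trivial problem, a two-armed bandit with a single time step, where arm 0 always pays reward 1 and arm 1 pays 0; everything here is a toy illustration under those stated assumptions, not the lecture's setup:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_bandit(n_iters=500, n_samples=10, lr=0.5, seed=0):
    """REINFORCE on a 2-armed bandit: theta holds one logit per arm,
    and grad_theta log pi(a) = onehot(a) - pi for softmax logits."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)
    for _ in range(n_iters):
        pi = softmax(theta)
        g = np.zeros(2)
        for _ in range(n_samples):              # step 1: sample "trajectories"
            a = rng.choice(2, p=pi)
            r = 1.0 if a == 0 else 0.0          # arm 0 is the rewarding arm
            onehot = np.zeros(2)
            onehot[a] = 1.0
            g += (onehot - pi) * r              # step 2: gradient estimate
        theta += lr * g / n_samples             # step 3: gradient ascent
    return softmax(theta)

probs = reinforce_bandit()
print(probs)   # the policy ends up strongly preferring the rewarding arm 0
```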
So if you have a bad policy, you might end up seeing trajectories that are useless for estimating the gradient, and you might not be able to recover from that: if you end up with one bad policy, you might be stuck with it essentially forever, and the learning algorithm won't converge to anything good. So in practice there are a number of variants of this REINFORCE policy gradient algorithm that people use, and the basic idea is to take steps that are large enough to make progress in terms of maximizing the reward, but not so large that you completely mess up the policy. I'm not going to go into the details of that; I'll just mention the variant that is maybe the most popular right now, called proximal policy optimization, or PPO. It was developed at OpenAI in 2017, and it's still probably the most popular variant of policy gradient that people use in practice. The basic idea is that it controls the step size and prevents the policy from being updated too much, while of course still updating it enough to actually learn a good policy. All right, any questions on the theory before we talk about some of the implementation aspects? Okay. So this variant I mentioned, PPO, has been used for a lot of different applications in robotics; I'm just going to highlight a couple of them. This is a group from Berkeley and collaborators, where they used PPO to train, in simulation, a walking policy for the Cassie bipedal robot, and then they ran it on the actual hardware; I think they took it outdoors at some point. I won't play the whole video. And the other paper I mentioned, this is from Marco Hutter's group at ETH Zurich; let me just play the whole video, and then
we're going to talk about different aspects of it. So we just got to the last part, which is the actual hardware deployment. And again, this was using PPO, the variant of policy gradient that I mentioned. There's a bunch of other approaches as well that we won't have time to go through. Model-based reinforcement learning is actually something we're already familiar with: feedback control and motion planning, things that assume knowledge of the dynamics of the system; that class of methods is known as model-based RL. There are also value-based methods, where instead of trying to find the policy directly you estimate the value function, the expected cumulative reward, and once you have that you can back out the policy from it. And there are techniques that combine value-based methods with policy-based methods, known as actor-critic methods. So instead of trying to bombard you in one lecture with all the math for all of these different techniques, what I'm going to do in the last fifteen minutes or so is talk about what you need to do to make these reinforcement learning algorithms actually work well in practice in robotics. We're going to use the paper I just showed the video from, on locomotion for the quadrupedal robot, as a case study. This is the paper, and if you're interested they have open-source code, so you can go to their GitHub repository, download their code, and play around with their reinforcement learning setup. As I mentioned, they use PPO, proximal policy optimization, which is a model-free, policy-gradient-based reinforcement learning technique; the policy gradient is the part I described, and the learning-rate adaptation part I didn't.
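The step-size control at the heart of PPO is its clipped surrogate objective; this is just a sketch of that single term (the full PPO loss also has value-function and entropy pieces, and the epsilon here is a common default, not a value from the lecture):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate: min(r * A, clip(r, 1-eps, 1+eps) * A).
    ratio is pi_new(a|s) / pi_old(a|s); the clip caps how far one update
    can move the policy away from the policy that collected the data."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)

# If the new policy already weighs a good (A > 0) action 1.5x more, the
# objective is capped at 1.2 * A, so the gradient stops pushing further.
print(ppo_clip_objective(1.5, 1.0))    # 1.2
print(ppo_clip_objective(0.5, -1.0))   # -0.8 (pessimistic min for A < 0)
```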
I didn't go into the adaptation details. In this case, maybe just going back to the previous question about model-free versus model-based: one argument for using a model-free approach here is that the simulation of the dynamics is pretty complicated. We're simulating contact, the legs of the robot making and breaking contact with the ground, and writing down a really explicit model for those dynamics and then taking gradients of it can be quite challenging, so in this case it's convenient to use a model-free approach. The policy here is a mapping from the robot's observations, the state of the robot and also the terrain that the robot observes, to the robot's desired joint positions, and that mapping is parameterized using a neural network. And there are basically four ingredients that this paper uses to make things work in practice, which are relevant not just for this paper and locomotion but also for other applications, so I'm going to spend a bit of time on each of them: fast simulation on GPUs, curriculum-based training, reward shaping, and domain randomization. For the fast simulation part, this paper uses NVIDIA's Isaac Gym, a relatively recent simulation environment that leverages GPUs, and that's not something that was prevalent before Isaac Gym; traditionally, robotics simulators did not fully utilize GPUs and were mostly CPU-based. With Isaac Gym, and for this paper in particular, you can simulate thousands of different robots in parallel, and that's really important for reinforcement learning, because as we saw it can take lots and lots of iterations, lots of sampled data, to converge to a good policy, so being able to do lots of simulations in parallel, especially on GPUs, is a major enabler. The other thing they use in this paper, which other papers often use, is
curriculum-based training. The idea here is that training a policy from scratch on really complex terrains can be pretty hard; you might just not improve at all from your initial policy. The idea behind a curriculum is similar to a course: we don't cover RL in lecture one, we cover it toward the end, and it's the same idea here. You increase the difficulty of the tasks in a video-game-inspired fashion: as the robots get better at simpler terrains, the terrains are made more complicated. They basically have different levels, similar to a video game; as the robot gets better at the simpler terrains they make them harder, and if the robot is not succeeding on some terrains they make them simpler. As the robot learns, the terrains get more and more complicated, and they report that after a thousand iterations of learning the robots have reached the most challenging level for all terrain types and are spread across the map. The other important ingredient here is the choice of the reward function. In the math I derived, we were free to assume that someone gives you a reward function; in practice, choosing this reward function well can be crucial to good performance. In principle you can apply policy gradient to sparse rewards, a reward function that assigns a reward of one if your robot succeeds at some task and a reward of zero if it falls down or doesn't succeed, but this can be pretty inefficient: the robot only gets a signal when it succeeds, so initially there's not a lot of learning signal, and it might take lots and lots of iterations to converge to something good. In practice, providing non-sparse rewards, choosing the reward function to provide more signal even when the robot fails, can lead to more efficiency. In this paper they use a reward
function that's a weighted combination of nine different terms. There's a collision term, second from the bottom, counting the number of collisions; the amount of time that the feet of the robot are in the air; and a bunch of other terms that have to do with the amount of joint torque being applied, trying to minimize the torque, how well the commanded velocity is tracked, and so on. Question? How are the weights chosen? Hand-tuning, yeah: someone, probably a graduate student or a few graduate students, spent a bunch of time coming up with good weightings for this. Some of it can be automated, you can try a bunch of weights and see what works well, but there's a fair bit of manual effort that goes into it as well. And then the final ingredient is a way to transfer from simulation to reality. We saw that all of the training was done in the simulator, this Isaac Gym GPU-based simulator, but ultimately of course we want the thing to work well on the hardware, so how can we ensure that the policy learned in sim actually transfers to the real world? One pretty popular approach is what's known as domain randomization. The idea is fairly straightforward: in simulation you randomize a bunch of different properties. You can randomize the terrain, friction properties, and other physical properties like masses, inertias, and so on, and then you train in sim with these randomized parameters. The hope is basically that if your policy in sim is robust to these different variations of the parameters, then reality, which is some particular instantiation of those surroundings, is something the policy will work in as well. There's not much theory behind this; it's an empirical thing. In this paper they randomized friction and the amount of sensor noise.
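A weighted multi-term reward of the kind just described can be sketched like this; the term names, values, and weights below are invented for illustration and are not the paper's actual terms or weights:

```python
def shaped_reward(terms, weights):
    """Weighted combination of reward terms, in the spirit of the paper's
    nine-term reward; each term gets a hand-tuned weight."""
    return sum(weights[k] * v for k, v in terms.items())

# Hypothetical per-step term values and weights (penalties get negative weights).
terms = {"velocity_tracking": 0.8, "collisions": 2.0, "joint_torque": 10.0}
weights = {"velocity_tracking": 1.0, "collisions": -0.5, "joint_torque": -0.01}
print(shaped_reward(terms, weights))  # 0.8 - 1.0 - 0.1 = -0.3
```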
And with these ingredients they're able to get the different behaviors that we saw over here. This one in particular I think is interesting, because I don't think they had movable objects in the simulator, but by randomizing other things you get robustness to this kind of situation as well.

Finally, I just want to emphasize that these ingredients (domain randomization, good simulation, reward shaping, and so on) are not specific to locomotion; I just chose that paper to make things concrete. Here's a different domain, robotic manipulation: this is OpenAI's Rubik's-Cube-solving system. This is just a two-minute video and then we'll end.

[Video:] We're trying to build robots that learn a little bit like humans do, by trial and error. What we've done is trained an algorithm to solve the Rubik's Cube one-handed with a robotic hand, which is actually pretty hard even for a human to do. We don't tell it how to turn the cube to get there. The particular friction on the fingers, how easy it is to turn the faces of the cube, what the gravity is, what the weight of the cube is: all of these things it needs to learn by itself. The interesting thing is that standard techniques in robotics haven't been able to scale to the complexity we see in a robotic hand. Humans have evolved to manipulate and operate our hands, so there's a huge amount of learning that happened through evolution to get us to this point as a species, and the robot has to learn all of this from scratch. Instead of trying to write very dedicated algorithms to operate such a hand, we took a different approach.
We create thousands of different simulated environments and learn to do the task in all of those, and hopefully the robotic hand will be able to do it in the real world as well. This means something like thousands of years of experience that your network has had in simulation. Every time the algorithm gets good at the task, we make the task harder. That's really crucial, because it needs exposure to really complicated environments in order to eventually be robust to the real world. You put a rubber glove on the hand and it can still carry out the task; this ability to generalize to new environments feels like a very core piece. It really changes the way we think about training general-purpose robots: moving away from thinking so much about the actual algorithms, and starting to think about how we create complex enough worlds where they can learn. At some point it would be more down to the imagination what robots could actually accomplish. The hope is to build robots that can do many different tasks, to increase the standard of living and give everybody a better life.

All right, just to end: if you're interested in learning more, I put some references here. The first one is the classic textbook in RL; that's probably the place to start. There are a few other resources as well: an introduction to deep reinforcement learning with a nice summary of techniques specific to deep RL; a course at Berkeley, which I also recommend, with video lectures, notes, assignments, and so on if you want to dig really deep; and OpenAI's Spinning Up, which has a bunch of tutorials and is another good way of getting started in a hands-on way. Any questions on any of this?
[Question about parallelizing the policy-gradient method.] So the thing that happens in parallel is the capital-N rollouts, getting the trajectories from a particular policy: you fix your policy, you run it N times, and that computation can happen in parallel. Then you collect the results, estimate the gradient, and do the update; the update is a centralized thing. Then you parallelize again, and update again. Yep. Good, other questions? All right, I'll see you on Thursday.
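As a toy illustration of that rollout-then-update structure (my own sketch, not from the lecture): take a one-parameter Gaussian policy a ~ N(theta, sigma^2) with reward r(a) = -(a - 3)^2, so the optimal mean is theta = 3. The N score-function (REINFORCE) samples per update are independent, which is exactly the part that would be farmed out to parallel workers; the gradient step itself is centralized:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, lr, n_rollouts = 0.0, 0.5, 0.05, 64

for _ in range(200):
    # N independent rollouts: this sampling is what parallelizes.
    a = theta + sigma * rng.standard_normal(n_rollouts)
    r = -(a - 3.0) ** 2
    # REINFORCE estimate: average of r * d/dtheta log pi(a | theta),
    # where d/dtheta log pi = (a - theta) / sigma^2 for a Gaussian.
    grad = np.mean(r * (a - theta) / sigma**2)
    theta += lr * grad  # centralized update

print(theta)
```

With these settings theta drifts toward the optimum at 3; subtracting a baseline (e.g. the mean reward) would reduce the variance of the gradient estimate.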
Introduction to Robotics (Princeton), Lecture 21: Overfitting and Regularization
All right, I think we'll go ahead and get started. Welcome back from the break; hopefully you had a nice, relaxing break. We're on the home stretch. In the previous lecture we started talking about how we can train single-layer neural networks. We covered a couple of variants of gradient descent, the main one being stochastic gradient descent, SGD, and as part of the assignment you're implementing stochastic gradient descent (or gradient descent) on some toy data sets. Just to remind you, the form of a single-layer neural network is something like this: we have an input, for example an image; you multiply that with a weight vector, potentially adding a bias term, and then pass it through a nonlinear function like the hyperbolic tangent. And the way we were training W and b, the way we were choosing W and b, was to use stochastic gradient descent to minimize the training loss: the average loss on the capital-N training examples we have access to. For each input x_i we make a prediction, the predicted label, compare it with the true label y_i, and then average the losses across the different examples. We looked at some examples of different losses, the simplest being a binary correct-or-not-correct loss; we saw some of the challenges with that, and then we looked at the binary cross-entropy loss, which is what you're using as part of this assignment. Then, at the very end of the lecture, we started looking at how we can go beyond single-layer networks to multi-layer networks; that's where we're going to pick up today. I'll say much more about multi-layer networks: what they look like, some theoretical results associated with them, and some practical challenges as well. Let me just open up the slides here.
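As a quick sketch of that recap (my own minimal example; the data, shapes, and values are made up): a single-layer network with the binary cross-entropy training loss averaged over N examples. I use a sigmoid output here so the prediction reads as a probability for labels in {0, 1}; the lecture also mentions tanh with labels in {-1, 1}:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # N = 100 training inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # labels in {0, 1}
w, b = rng.normal(size=2), 0.0              # weight vector and bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def training_loss(w, b):
    p = sigmoid(X @ w + b)                  # predicted label for every x_i
    # binary cross-entropy, averaged over the N training examples
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

print(training_loss(w, b))
```

Gradient descent on w and b would then repeatedly lower exactly this quantity.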
OK, so all the inputs here are going to be very low-dimensional, in particular two-dimensional, so you can visualize everything. Can everyone see? Maybe raise a hand at the back if you can't see things clearly. All right. This here is the domain in which all the inputs live; each point here is a particular x. And this visualization here is a visualization of the neural network. We're going to begin with zero hidden layers, which corresponds to just a single-layer neural network. I'm going to choose one of these data sets, let's say this one, and let me just reduce the noise here. OK, the orange and blue clusters are different classes. Here the class labels are negative one and one instead of zero and one, so orange is negative one and blue is positive one. The options here: the learning rate determines the step size at every iteration of stochastic gradient descent; there's a batch size of 10, which is how many samples you use in every stochastic gradient descent update; tanh is the nonlinearity we're using; ignore these other options for now, we'll come back to them in a bit. This data set has maybe 100 or so examples, and pretty much immediately the training loss, which is visualized over here (ignore the test loss for now, just focus on the training loss), goes to zero. Let me just play that again: this is running stochastic gradient descent in real time, in your browser. You can see the decision boundary, the boundary between orange and blue, separates the classes pretty much perfectly in just a few steps of gradient descent. That's the simple case, the case where we expect a single-layer neural network to work. Here are some more interesting cases. This is one where a linear decision boundary fundamentally cannot
separate the classes. You can try different linear boundaries and see that there isn't one that works. Stochastic gradient descent is going to find something, and you can see that the training loss is pretty high, close to 0.5: it's classifying about half the examples correctly and the other half incorrectly. It finds something, but you can see it's not correct; there's orange stuff on this side, but there's also orange stuff on this side. And some other examples: this is another one where there's no linear decision boundary that separates the two classes, and again it basically just finds something with a training loss of 0.5, classifying half the examples correctly and half incorrectly. So this motivates (we saw some of these examples in previous lectures) the need for a richer function class, something whose decision boundary is not just linear but more complex, which we can get by adding hidden layers. Let me start with one hidden layer on this example. You can see it starts to do something more interesting: the boundary between orange and blue is no longer linear, because the space of functions we're optimizing over is now nonlinear. It's still not able to perfectly separate the two classes, but the training loss is lower than before, lower than 0.5, about 0.18. We can make the dimension of the hidden layer larger; I'll make it six-dimensional, and now you see it's able to get the training loss to zero, or close to zero. Question? Yeah, so it's not quite a polynomial, because the nonlinearities don't keep the function space polynomial, but it's sort of like increasing the degree; if you chose the nonlinearity in a specific way, you could get out polynomials,
but essentially you're adding more and more nonlinearity as you add more hidden layers. Other questions on this? Go ahead: the squares next to each neuron? Good question. The small squares visualize what each neuron is doing. Each of these neurons takes the two-dimensional input, x1 and x2, and each neuron is itself like a single-layer neural network: it computes sigma(w-transpose x plus b), but where x is whatever the previous layer's output was. So going from here to here we compute sigma(w-transpose x plus b), and then to go from the hidden layer to here, we take these as the input and pass them through another single layer. That's what the squares visualize: each neuron has some decision boundary associated with it. And you can see that the training loss, while we were answering questions, went down to zero. We can do this for some of the other examples as well. Let's try this one; again you can see that the training loss goes to zero, so it's able to separate orange and blue pretty much perfectly. Maybe one more: the spiral. This one's a bit more challenging; the loss is going down, but pretty slowly. There we go. I think it'll converge to something that doesn't perfectly classify everything; it seems close to convergence here. It's doing something right: it's kind of getting the blue here, getting the orange here, but misclassifying some of these points over here. Still, it's doing a reasonable job, definitely better than 0.5. So we can increase the complexity of our function class, add another hidden layer, and see what happens. It seems
better: the training loss is 0.078, which is lower than it was. It's basically getting everything; there are some blue points here that it misclassifies as orange. Maybe we can try just one more, increase the depth; I guess eight is the maximum. There we go, pretty much there, just that last little bit. So it gets the spiral pretty much exactly. Go ahead: is it common to choose periodic activation functions? It's pretty uncommon. Actually, let me be careful: did you mean sine as an activation function? Yeah, it's not an option here, but you could use it; it's not that common, and I don't remember an example offhand. Maybe if there's some inherent periodicity baked into your data, that might be a good reason to use it.

Let me switch back to the blackboard. So the short answer is that we're going to minimize the overall training loss, but instead of using a single-layer network to make the prediction, we're going to use a multi-layer network. The picture to have in your head: we start with the input and process it through a sequence of single-layer computations until we get to the output y, which could itself be a vector or a scalar depending on the problem, and each stage corresponds to a hidden layer. The actual computation is as follows. You start with the input x, multiply it by some matrix W1, add a bias term b1, and pass that through a nonlinearity: h1 = sigma(W1 x + b1). Then you keep doing this: h2 = sigma(W2 h1 + b2), where you pass in h1 instead of x; h3 = sigma(W3 h2 + b3), passing in h2; and so on, up to h_D = sigma(W_D h_{D-1} + b_D), where D is the depth. And then finally you get to the output layer in the same way.
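The computation just described can be sketched directly in code. The layer sizes here are illustrative (two-dimensional input and six-dimensional hidden layers, like the playground examples), and I use a linear output layer for simplicity:

```python
import numpy as np

# Forward pass of a multi-layer perceptron: h_i = sigma(W_i h_{i-1} + b_i),
# with sigma = tanh applied element-wise, and a linear output layer here.
rng = np.random.default_rng(0)
sizes = [2, 6, 6, 1]  # n_x, n_1, n_2, n_y -- illustrative choices
Ws = [rng.normal(scale=0.5, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros((m, 1)) for m in sizes[1:]]

def mlp(x):
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.tanh(W @ h + b)        # hidden layers
    return Ws[-1] @ h + bs[-1]        # output layer (no nonlinearity here)

x = np.array([[0.3], [-1.2]])         # a 2-D input as a column vector
y = mlp(x)
print(y.shape)  # (1, 1)
```

Each matrix W_i maps an n_{i-1}-dimensional representation to an n_i-dimensional one, which matches the dimension bookkeeping discussed next.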
So that's the general computation corresponding to a multi-layer network. What I've written here is called a multi-layer perceptron: without imposing any further structure on the matrices W, this thing is a multi-layer perceptron. We'll talk about other, maybe better or different, choices of structure for W in the next lecture. A couple of notes. First, each h_i is a vector: each hidden layer outputs a vector, and the number of neurons in the TensorFlow Playground corresponded to the number of dimensions in each intermediate representation. So each h_i belongs to R^{n_i}, and the dimension n_i is determined by the matrix dimensions. Since h_i = sigma(W_i h_{i-1} + b_i): W_i has dimension n_i by n_{i-1}; h_{i-1} has dimension n_{i-1} by 1; and b_i has dimension n_i by 1 (you can think of b_i as a column vector). That gives n_i by 1 as the dimension of h_i, and you can have different dimensions for each hidden representation until you reach the output layer. The second note is that sigma is applied element-wise. Sigma, the nonlinearity, is always a scalar function, taking a scalar to a scalar, so you apply it to each element of the vector W_i h_{i-1} + b_i separately, and at the end you get a vector of dimension n_i by 1. Any questions on this general computation?

OK. So before talking about the training, one natural question is how many layers we need: what should D, the depth of the network, be, and what should the
dimensions of the intermediate layers be? There are some theoretical results that give useful information here. I don't want to oversell this; these results are good to know about, but they're not necessarily that useful in practice. These are the universal approximation theorems. I'll present one and mention some variants. Consider the function class defined by the following structure: each function f_nn takes an input x, a vector, and is defined as f_nn(x) = W2 sigma(W1 x). This is a pretty restricted version of the general architecture: take x, multiply by a matrix W1, apply the element-wise nonlinearity sigma, and multiply the result by W2. These are nonlinear functions; you can represent nonlinear functions with this architecture because sigma is nonlinear. Let me set up some notation. Let capital X be some bounded (compact) subset of the space of inputs R^{n_x}, where n_x is the dimension of the input space, and let f be a continuous function from the space of inputs to the space of labels, which has some dimension n_y. The universal approximation theorem, or one version of it, states that for any epsilon greater than zero, there exist W1 and W2 such that the norm of f(x) minus f_nn(x) (in some norm, say the 2-norm; it doesn't matter much) is less than
epsilon, for all x in the bounded subset we're calling capital X. Intuitively, in pictures: imagine x is a scalar, y is a scalar, we choose some bounded domain of x values, and we have some continuous function f defined on that domain. The result says that for any tolerance you pick, any epsilon greater than zero, you can find choices of W1 and W2 such that the resulting function f_nn approximates this arbitrary continuous function within that tolerance. Say the solid line is f and the dashed line is f_nn; then the largest difference between them is less than epsilon over the bounded domain. Any questions on this? Go ahead. Yes, good question: one interpretation is that we can overfit. Another interpretation is this: whatever the true underlying function is, say f is the function that actually underlies your data, the true mapping from inputs to labels that you may not have access to, then there exists some neural network of this restricted form that approximates f arbitrarily well on whatever bounded domain you choose. Next question: good, that was going to be my next comment. The hidden dimension is not bounded: the output of sigma(W1 x), the hidden representation, can be arbitrarily large. The theorem provides no bounds on
the dimensions of W1 and W2; it just says that some exist, and they could be absurdly large. Exactly: for any fixed function and fixed domain there will be some finite-dimensional W1 and W2, but as you make f more and more complicated, more wiggly, with large curvature in many places, the complexity of f_nn, as measured by the dimensions of W1 and W2, can grow without bound. Other questions on the result? OK. As I said, this is useful to know because it gives some theoretical justification for looking at functions of this form; if this property weren't satisfied, you might be concerned that this is not a good class of functions to study. But I don't want to oversell the importance of the result, for a couple of reasons. Before that, one more note: in the version I presented, the depth is fixed; there's basically one hidden layer. There's a similar result for bounded width: if you constrain the dimensions of the intermediate representations but allow the depth D, the number of single-layer operations, to grow, you get a similar universal approximation result for bounded width but arbitrary depth. And, to emphasize what I was saying before: there are other function classes, for example polynomials or trigonometric polynomials, that also satisfy this universal approximation property.
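To make the one-hidden-layer statement concrete: with a ReLU nonlinearity, a single hidden layer can realize any piecewise-linear function (each hidden unit contributes one kink), so a wide enough layer pins an arbitrary continuous function down to any tolerance. A small numerical sketch of my own; I add an affine term for convenience, so this matches the flavor, not the exact W2 sigma(W1 x) form, of the theorem:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Approximate f(x) = sin(x) on [0, 2*pi] with one hidden ReLU layer: the
# network realizes the piecewise-linear interpolant of f at K knots.
K = 100
knots = np.linspace(0.0, 2 * np.pi, K)
vals = np.sin(knots)
slopes = np.diff(vals) / np.diff(knots)      # slope on each segment

def f_nn(x):
    out = vals[0] + slopes[0] * (x - knots[0])        # first segment (affine)
    for k in range(1, K - 1):
        # each hidden unit bends the function at one knot
        out = out + (slopes[k] - slopes[k - 1]) * relu(x - knots[k])
    return out

xs = np.linspace(0.0, 2 * np.pi, 2000)
err = np.max(np.abs(f_nn(xs) - np.sin(xs)))
print(err)  # small; shrinks roughly like 1/K^2 as the hidden layer widens
```

Driving the error below a smaller epsilon just requires more hidden units, mirroring the theorem's unbounded hidden dimension.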
For any continuous function on a bounded domain, you can approximate it with polynomials of potentially high degree. So there's nothing really special, in terms of this theoretical result, about multi-layer perceptrons; other function classes satisfy it too. In practice, the reason we see multi-layer perceptrons used for deep learning rather than, say, polynomials (I don't think anyone is trying to use polynomials for computer vision) is really an empirical one: these function classes seem to capture the structure of real-world data and functions. Like I said, the universal approximation theorem is good to know about and gives some theoretical justification for using multi-layer perceptrons, but the main reason is that for the kinds of functions we encounter in practice, in computer vision or in other applications like natural language processing, these architectures do a pretty good job of compactly representing the functions that actually arise. That's the real reason we use them, not so much the universal approximation theorem. Historically, I think these results were proved in the 1970s or 80s, and they provided some historical motivation for looking at these function classes, but now no one pays that much attention to them; it's mostly that these things work well in practice. Any questions on the result?

OK, so let's get back to the question about training: how do we train multi-layer networks? That is, how do we choose the weight matrices W1 through W_D and the bias vectors? We're going to do the same thing as we did for
single-layer networks: minimize the training loss using stochastic gradient descent or one of its variants. The training loss, which we'll again denote by L, is now a function of all the weight matrices and bias vectors; how many of these you have depends on the depth you choose. We define the training loss to be, again, the average loss over our capital-N training examples, where the prediction y-hat_i now depends on our choice of weights and biases: y-hat_i takes x_i as input and passes it through each of the single-layer computations until the label comes out. So we look at the end-to-end loss on our training data set, and choose W1 through W_D and the biases to minimize this average loss. The question is how to do this minimization, how to apply stochastic gradient descent. If we're doing SGD, we take each weight or bias, each W or b, and update it by moving in the direction of the negative gradient, where gamma again is the step size: take the gradient of the training loss with respect to W1, multiply by the learning rate gamma, and step in the direction opposite that gradient; then do the same for the other weight matrices and for the biases. That is the set of computations at each step of gradient descent; for stochastic gradient descent, we use the loss on some mini-batch of the data instead of the whole training set.
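That update is a one-liner per parameter. A minimal sketch (generic, not tied to the lecture's notation beyond gamma as the step size):

```python
import numpy as np

def sgd_step(params, grads, gamma):
    """One SGD update: p <- p - gamma * dL/dp for every weight and bias."""
    for p, g in zip(params, grads):
        p -= gamma * g  # in-place update of each numpy array

# Tiny usage example on L(w) = 0.5 * ||w||^2, whose gradient is just w:
w = np.array([2.0, -4.0])
for _ in range(100):
    sgd_step([w], [w.copy()], gamma=0.1)
print(w)  # shrinks toward [0, 0]
```

In a real network, `params` would hold every W_i and b_i, and `grads` the corresponding mini-batch gradients from backpropagation.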
We look at the gradient with respect to each of the parameters and biases, and for each one take a step in the direction of minus the gradient. Here I'm choosing a single learning rate for all the weights and biases; in principle you could choose different learning rates as well.

Question: do the neurons have to be distinct? They don't necessarily have to be; we're just looking for some setting of them that minimizes the training loss, and you could end up with neurons that are similar or the same. In practice you typically wouldn't, because we initialize each of the W's and b's randomly; there are various clever initialization schemes, but the simple one is to initialize them randomly to values close to zero. Does layer one correspond to this? Yes, exactly. How does each neuron map onto the representation? We haven't been super careful about defining what a neuron is; I think it's easier to think about hidden layers and hidden representations. You can think of a neuron as one component, one dimension, of a hidden representation, of the h_i's. That's typically how I think of them, but it's easier just to think about the processing we wrote out: start with the input, apply a single-layer computation to get a hidden representation h1, apply a similar computation again, and so on up to h_D, and then finally the label. Each of these stages is a hidden layer, each h_i is a hidden representation, and each element of one you can think of as a neuron. Good, other questions? OK, so the thing we need to do to actually implement this
is to calculate the gradients, so let me briefly mention how that's done. The technique is what's known as backpropagation, which is really just the chain rule; it's a domain-specific name for the chain rule. Let's see how this works on a restricted class of functions, the same class we looked at for the universal approximation result: we ignore the biases and have just one hidden layer. In this case the training loss is a function of W1 and W2 only, and the prediction for each input depends on the choice of W1 and W2. What we need are the gradients of the training loss with respect to W1 and W2, and we'll use the chain rule. Write y-hat = W2 sigma(W1 x), and call the hidden representation h = sigma(W1 x), so y-hat = W2 h. First, we calculate the gradient of the per-example loss with respect to the prediction y-hat. That's a relatively simple calculation, because the loss is just something like the binary cross-entropy, so we can compute that gradient directly. Then we calculate the gradient of y-hat with respect to W2 and with respect to W1; that's where the chain rule comes in. In terms of partial derivatives: the partial of y-hat with respect to W2 is just h, transposed by convention. That's the first
part. For the gradient with respect to W1, the chain rule gives: the partial of y-hat with respect to W1 equals the partial of y-hat with respect to h, times the partial of h with respect to W1, which is W2 times the partial of h with respect to W1. Since this architecture has one hidden representation (we go from the input x to h and then to y-hat), the reason it's called backpropagation is that we work backwards: first the gradient of the output with respect to the hidden representation h, then the gradient of h with respect to the weight W1. What we're asking is: if I perturb h a little bit, how does the output change? And if I perturb the weights (or x) a little bit, how does the hidden representation change? If you have multiple layers, you work backwards from the label all the way to the input; that's where the name backpropagation comes from. But it's basically just the chain rule: we're calculating the gradients of the training loss with respect to each of our parameters, each of the weights and biases, using the chain rule. It's worth working through this to make sure you understand why it's just the chain rule and exactly how it works. In practice there are software libraries, which I'll mention in the next lecture, that do this gradient computation very efficiently and completely automatically for standard neural-network architectures, losses, and so on. People realized you could use backpropagation to train multi-layer networks back in the 1980s or so; a couple of different researchers discovered it, and this, combined with stochastic gradient descent, is the main algorithmic workhorse for training neural networks today.
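Those chain-rule steps can be written out and sanity-checked numerically. A sketch for y-hat = W2 tanh(W1 x) with a squared-error loss (my choice of loss, to keep the gradient simple; sizes are illustrative), comparing the backpropagation gradient for W1 against a finite-difference estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nh, ny = 3, 4, 2
W1 = rng.normal(size=(nh, nx))
W2 = rng.normal(size=(ny, nh))
x = rng.normal(size=nx)
t = rng.normal(size=ny)                    # target label

def loss(W1, W2):
    return 0.5 * np.sum((W2 @ np.tanh(W1 @ x) - t) ** 2)

# Backpropagation: chain rule from the output back toward the input.
h = np.tanh(W1 @ x)                        # hidden representation
y = W2 @ h                                 # prediction
dy = y - t                                 # dL/dy for squared error
dW2 = np.outer(dy, h)                      # dL/dW2 = dy h^T
dh = W2.T @ dy                             # dL/dh  = W2^T dy
dz = dh * (1.0 - h ** 2)                   # tanh'(z) = 1 - tanh(z)^2
dW1 = np.outer(dz, x)                      # dL/dW1 = dz x^T

# Finite-difference check of dL/dW1, entry by entry.
eps, fd = 1e-6, np.zeros_like(W1)
for i in range(nh):
    for j in range(nx):
        Wp, Wm = W1.copy(), W1.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        fd[i, j] = (loss(Wp, W2) - loss(Wm, W2)) / (2 * eps)
print(np.max(np.abs(fd - dW1)))  # tiny: the two agree
```

This kind of finite-difference check is also how autodiff libraries are commonly tested.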
Any questions on the training part? Did I answer your question? Other questions? Okay, so that's the training part. So far we've focused on finding weights and biases that minimize our training loss, but as we've mentioned before, we don't really care about the training loss for its own sake: for our training data set we already know what the labels are. What we really care about is the test loss: on examples that are not part of our training data set, how well can we predict the labels? That's where overfitting comes in, which we mentioned previously and will spend more time discussing now. So let's go back to the TensorFlow Playground. I realize I left gradient descent running for the earlier example; the training loss is pretty much exactly zero, and it's captured the spiral nicely. Okay, let's look at this data set over here. This is the one where a single-layer network can perfectly separate the two classes in training. The one difference I'm going to introduce is adding a bit of noise on the examples. What this does is smear the data a little, so that there is no single linear classifier, no one linear decision boundary, that perfectly separates the orange from the blue; whatever boundary you pick, there's going to be some small fraction of the training data that you misclassify. I'll use one hidden layer with six neurons, a six-dimensional hidden representation, and keep all the other parameters the same. What I want you to focus on is the difference between the training loss and the test loss. The training loss is what we're familiar with, the average loss on our training data set; the test loss is calculated by giving the trained neural network
examples that are not part of the training data set and evaluating the loss on those unseen examples. What we see is that the training loss goes down nicely, to about 0.094. And it's not finding a linear decision boundary: with one hidden layer we can represent nonlinear functions, and it's using that nonlinearity, bending the decision boundary (which is where the function is zero) to try to distinguish the examples that sit on the border between orange and blue. So the function gets more and more complicated as the training runs. Let's make it even more interesting: I'll add one more hidden layer, again with six dimensions, and again keep an eye on the difference between the training loss and the test loss. The training loss is going down, since that's what we're minimizing with stochastic gradient descent, but what's interesting is that the test loss is going up after a while, to about 0.14, 0.15, and so on. So even though we're minimizing the training loss, and doing a good job separating the training examples, on unseen examples we're not doing so well. In fact, as we decrease the training loss more and more, we do worse and worse on the thing we actually care about, which is the test loss. And just to show that this is not specific to this data set, let's go to this other data set, again with two hidden layers of six neurons each; let me delete that text. Keep an eye on the training loss and the test loss: again the training loss goes down, as we expect, since that's what we're minimizing, but the test loss actually goes up, and by quite a bit. If you look at the curves here, initially you see
that as the training loss goes down, the test loss also goes down. The training loss is the gray curve, the test loss is the black curve: initially they both go down, but after a while the training loss keeps going down while the test loss starts to increase. This is what's known as overfitting: we're overfitting to the finite number of training examples that we have access to. You can see the decision boundary getting more and more complicated as the number of stochastic gradient descent steps increases. Any questions on this phenomenon? Okay, so the question is: how do we prevent overfitting? Let me write it over here; I'm going to use the TensorFlow Playground again in a moment. There are basically two approaches, or two classes of approaches, for preventing overfitting. Intuitively, the way to think about preventing overfitting is Occam's razor, which is really just a heuristic: if we have multiple functions that all fit our data equally well, say they all have equal training loss, we want to select the function that is the simplest in some sense; we still have to define what "simplest" means. Let me write a short version of this: among functions that fit our training data equally well, equally well as measured by the training loss, choose the one that is the simplest. And you can see that the solution we get just by minimizing the training loss is not particularly simple: look at the complexity of the decision boundary. It's trying really hard to perfectly classify every example in our training data set, and in doing that well, we're giving up the test loss, which is what we really care about. So with this intuition of
Occam's razor, there are two classes of approaches you can use to prevent overfitting. The first is what's known as regularization: adding a term to the loss function that promotes simplicity, with simplicity defined formally in just a second. The other class is to choose the neural network architecture more cleverly, based on knowledge we have about our data-generating process. We'll spend more time on that one in the next lecture, but the basic idea is to choose architectures that reduce the number of decision variables, the number of parameters we're fitting. Here we're letting W1, W2, and the biases be pretty much arbitrary, but we can add more structure: we can require that some elements of the weights and biases be zero, with only certain elements allowed to be nonzero. That reduces the complexity of the function class we're using, so by choosing a more judicious architecture we can potentially get around the problem of overfitting. Let's talk about the regularization approach first, and then we'll see an example of it in the TensorFlow Playground. With regularization, what we do is minimize not just the training loss but the training loss plus an additional term known as the regularizer. Say the training loss is still denoted L, which we think of as a function of the weights and biases. We add another term: a scalar parameter λ, which controls how much regularization to apply, multiplied by (this is one particular choice) the sum of the squared norms of all our decision variables. So we look at each of our weight matrices and each of our bias vectors, and we
take each element, square it, and add them all up. This whole thing is a scalar, a function of the weights and biases, and we multiply it by some λ. If λ is zero, we're just minimizing the training loss; as we increase λ, what we're saying is: choose weights and biases that have a small norm. We're biasing our choice of W and b toward values closer to zero. There's nothing fundamental about zero; you can think of this as making the function class simpler implicitly. We're not explicitly reducing the number of parameters, we're just saying: try to find something where the weights and biases are close to zero. Any questions on the process, on what we're doing here? Okay, let's look at an example in the Playground. This is without any regularization, just minimizing the training loss, so let's let it run, and I'll keep track of the training and test losses over here, without regularization and then with regularization. It depends on how long we let it run, but say we stop it over here: the training loss without regularization is about 0.213, and the test loss is about 0.427. As we keep running it, the training loss goes down and the test loss goes up, so we're seeing overfitting, as expected. Now if I add L2 regularization (L2 because we're looking at the two-norm of the weight matrices and vectors) with a regularization rate of 0.01, that's the λ, in the exact same setup with the same data set, what's interesting is that the training loss still goes down, to about 0.288, but the test loss is no longer doing what we saw before, which was going up.
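The regularized objective just described, training loss plus λ times the sum of squared norms of all the weights and biases, can be sketched as follows; the helper names and numbers here are made up for illustration:

```python
import numpy as np

def l2_penalty(params, lam):
    """Regularizer: lam times the sum of squared entries of every
    weight matrix and bias vector (the sum of squared norms)."""
    return lam * sum(float(np.sum(p ** 2)) for p in params)

def regularized_loss(train_loss, params, lam):
    # minimize: training loss + lambda * sum_i ||param_i||^2
    return train_loss + l2_penalty(params, lam)

W1 = np.ones((2, 2)); b1 = np.ones(2)
print(regularized_loss(0.5, [W1, b1], lam=0.01))  # 0.5 + 0.01 * (4 + 2) = 0.56
```

Note that the gradient of this penalty with respect to each weight is just 2λ times the weight, which is why this choice is also known as weight decay: every gradient step shrinks the weights toward zero a little.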
The test loss is basically constant at about 0.355. So the training loss is 0.288 and the test loss is about 0.355. What's interesting is that we're now doing a worse job of fitting our training data: without regularization the training loss was 0.213, and with regularization it's higher, 0.288. That makes sense, because we're adding an extra term: we're not just minimizing the training loss, we're minimizing the training loss plus something that penalizes the complexity of the function. A question from the audience: could a strategy be to track the difference between the test loss and the training loss? Good question. In practice the test loss is hidden from you; you don't have access to it explicitly, it represents future examples. Here we're pretending we have access to it, but some variant of what you said is exactly what's done: you take your finite data set and split it, say 90 percent into a training data set and the remaining 10 percent into what we call a validation data set. You can then stop the training once the validation loss starts going up; the validation loss is a proxy for the thing we really care about, which is the test loss. And the upshot here is that the test loss is a fair bit lower: it was 0.427 without regularization and 0.355 with it, and we no longer see the phenomenon of the test loss climbing. We can try this with other data sets as well, so let's do this one with the regularization switched off again. You can see it's doing some funny stuff: it's roughly linear here, but it's adding a little island of stuff to capture some of these orange points over here.
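The train/validation split and the stop-when-validation-loss-rises idea mentioned in that answer might look something like this sketch; the 90/10 split and the patience value are illustrative choices, not prescribed by the lecture:

```python
import numpy as np

def train_val_split(X, y, val_frac=0.1, seed=0):
    """Hold out a fraction of the finite data set as a validation set,
    a proxy for the test loss we actually care about."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = int(len(X) * val_frac)
    return X[idx[n_val:]], y[idx[n_val:]], X[idx[:n_val]], y[idx[:n_val]]

def should_stop(val_losses, patience=3):
    """Early stopping: halt once the validation loss has not improved
    for `patience` consecutive evaluations."""
    best = int(np.argmin(val_losses))
    return len(val_losses) - 1 - best >= patience

X = np.arange(20).reshape(10, 2); y = np.arange(10)
X_tr, y_tr, X_val, y_val = train_val_split(X, y)   # 9 train, 1 validation
print(should_stop([1.0, 0.8, 0.7, 0.75, 0.9, 0.95, 1.1]))  # True: best was step 2
```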
The same thing is happening: the training loss is going down but the test loss is going up, about 0.175, 0.176, and it's going to keep climbing. Now if I add in a regularizer of 0.01, even just qualitatively you can see it's not making the function as complicated: there's no little island, it's basically linear with some nonlinearity over here, and the test loss is a fair bit lower, around 0.10 with the regularization versus about 0.17 without. If we crank the regularization up to 0.1, the test loss is maybe a little higher, still about 0.11 or so, but you can see the function is simpler: the decision boundary is closer to linear. That's exactly what we were saying: try to find something simpler, and in this case "simpler" means the decision boundary stays closer to linear. All right, so that's strategy one, regularization. As for the other strategies, typically you use both in conjunction: you regularize, and you also try to find the best neural network architecture you can, one that reduces the number of parameters you're optimizing over. We'll say more about that in the next lecture. Any questions on this? Okay. So the last assignment is due tonight, and we're going to release the final project probably early tomorrow. The structure of the project is similar to the RRT lab, the motion planning lab, in the sense that the obstacle setups will look very similar: we'll have the red PVC pipes set up in our two spaces. The main difference is that the obstacle locations will be unknown to you a priori; they're going to show up on the demo day, and I'll say a bit more about that, and
then the obstacles will be placed in some configuration. We always get questions about this: it's not going to be adversarial. We're not going to choose some really bizarre configuration specifically to mess you up, just random-ish locations for the obstacles. But the point is that you don't know exactly where the obstacles will be beforehand, so you need to use the camera and some kind of computer vision to figure out where the obstacles are and then avoid them. We're using the same obstacles as the RRT lab: cylindrical obstacles, the PVC pipes, with a known radius, whichever pipes are currently present in the spaces. The obstacles are fairly uniformly colored, which makes some of the computer vision a bit easier. The goal is basically to get to the other side of the obstacle field and then land near a target object. The target object is a book that you can bring. It can't be some ridiculously large book, so we have some rules on the book, but it's basically just a normal book; you can make some modifications, like changing the cover and so on. The book will be placed on either a table, something similar to this, or a pedestal: it's a bit easier to see, as the drone is flying, if the book is at a similar height to the drone. You don't have to land on the table, you can just land on the ground, and there will be points for how close you get to the book. I'll explain more about that, and the evaluation will be on our demo day, which is Dean's Date, December 16th. We'll send out a sign-up sheet within the next couple of days, and each team will sign up for, I think,
a 20-minute time slot, which you can choose based on your schedule, whatever works. Questions? Sorry, the target location? Okay, so the exact location of the target will not be known. We'll place the target in the end zone of the obstacle course. X is forward according to our coordinate system for the Crazyflie, so in the forward direction it's roughly known: it'll be at the end of the course, and we'll place the target object laterally in some location. So you roughly know the x location but not the lateral one. A few ideas: we'll provide some starter code to get you started, and there are a bunch of different options to choose from; we're not going to specify a particular approach, so you're free to use whatever approach you want. One principled approach would be to figure out the locations of the obstacles using the camera. For instance, because the optical geometry is known, you could look at the number of pixels each obstacle spans and back out the distance from that. Once you have the distances, you could use RRTs or PRMs or A* or any of the planning algorithms we discussed in this course, and the planning probably makes sense to do in a receding-horizon fashion: plan a little, execute some portion of that plan, re-locate the obstacles, replan, and so on. We'll also be playing around with some pre-trained neural networks for monocular depth estimation. There's been a massive amount of progress on this over the last year or two: from a single image, estimating the depths to the different locations in the scene. These networks give you relative rather than absolute depth, a kind of scaled depth.
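The back-out-the-distance idea, using the known pipe radius and the number of pixels an obstacle spans, is essentially the pinhole-camera relation Z ≈ f · W / w. A hedged sketch; the focal length and widths below are made-up numbers, not the Crazyflie camera's actual calibration:

```python
def distance_from_pixel_span(pixel_span, real_width_m, focal_px):
    """Pinhole-camera estimate: an object of known physical width W (m)
    spanning w pixels in the image is roughly Z = f * W / w meters away.
    focal_px is the focal length expressed in pixels, obtained from
    camera calibration; all the numbers here are illustrative."""
    return focal_px * real_width_m / pixel_span

# a 0.10 m diameter pipe spanning 50 px, with a 500 px focal length
print(distance_from_pixel_span(50, 0.10, 500.0))  # 1.0 (meter)
```

Note this assumes the pipe is roughly centered and perpendicular to the optical axis; off-axis obstacles would need a small correction, and the estimate degrades as the pixel span shrinks.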
Since the depth is relative rather than absolute, there are some tricks to getting absolute depth out of it, but we'll provide some starter code for that as well if you're interested in playing with these fancier neural networks. They're actually based on Transformers, which we'll say a little about in the next lecture. There are other approaches too: you can compute a library of trajectories, motion primitives like move straight, turn left, turn right, and apply them sequentially as you see where the obstacles are. Hand-tuned strategies, like finding the largest gap and navigating toward it, are perfectly legal as well; those kinds of strategies have worked pretty well in the past. They're relatively simple, with some hand tuning and hand specification involved, but that's perfectly fine. You can also use the pre-trained neural networks from lab eight for localizing the book, and as I mentioned, you can make modifications to the book, like adding a cover or changing the color, if you want. Grading rubric: this will be written out more carefully in the instructions, but the grading is basically in two parts. One is the collision avoidance part, worth 80 points, which depends on the fraction of the course you traverse before colliding; then there are 20 points for landing near the target object, based on the horizontal distance, with some thresholds for the points there. The final score is the average of your two best trials, so you can afford one run that's pretty bad. We often get questions like: what happens if the drone just does something crazy? In that
case it's fine: if it clearly does something wrong, say the battery gets drained or whatever, you can just rerun; we're not going to be super strict about those kinds of things. As I said, this will all be written out more carefully in the instructions. A few notes: we're not restricting you to a particular approach, and you're allowed to use OpenCV, which I think will be super useful, and other software libraries you're familiar with. You can be creative with the approach, as long as it's not something completely silly like "go straight and hope for the best." Question: can we move the cylindrical obstacles around in the lab? Oh yeah, definitely, you can move them around; just don't take them home. And the main piece of advice, as always, is to start early. No one's going to listen to this, but you should, because it's going to be a scramble at the end if you don't. Other questions? On availability: the spaces are always around; you can test any time, except I think from 1 a.m. to 6 a.m. you're not allowed to go in. The obstacles should already be there; right now you're doing the other lab so they may not be set up, but there are hooks and you can place them. Another question? Okay, so what I would recommend, and this is written in the instructions as well, is to pick one space and stick to it, because the lighting conditions can differ between spaces, and that can mess up your computer vision algorithm. All three netted zones will be available: two in one space and one in G105. On the camera: yeah, it's just the forward-facing camera; unfortunately we couldn't figure out how to get
access to the optical-flow camera. That would have been nice, but unfortunately not; if you figure it out, let us know and we'll use it next year. Other questions? All right, sounds good, so I'll see you on Thursday.
Introduction_to_Robotics_Princeton
Lecture_20_Princeton_Introduction_to_Robotics_Stochastic_gradient_descent.txt
All right, let's go ahead and get started. As a reminder, in the previous lecture we started discussing learning-based approaches, mostly for computer vision, but as we'll see, many of the techniques we discuss are also really relevant for other kinds of learning beyond computer vision. It seems the most popular choice on the vote, at least as of right now, is reinforcement learning, so the material we cover today, unless things change, will be relevant to reinforcement learning; I think that will be a nice connection. In the previous lecture we assumed access to a data set of labeled examples; this is the context of supervised learning, and we're going to continue with supervised learning for a while, until we get to reinforcement learning or some other topic. As a running example we're using pedestrian detection, an example problem that might be useful for autonomous vehicles. In that context we have a set of images with corresponding labels. Each image we think of as a vector in some high-dimensional space, the space of all images, and we assume access to, say, capital N different examples with corresponding labels. We think of the labels as binary, zero or one: one if there is a pedestrian in the image, zero if there is not. The goal we set for ourselves is to learn the mapping, the function from image to label: given some new image that was not part of our training data set, we want to say whether there is a pedestrian in it or not, and we're going to learn this with the help of the data set we're given. We're looking, at least for now, at a particularly simple class of functions: single-layer neural networks, which I introduced in the
previous lecture. It's basically just a linear, or rather affine, function of the input passed through a nonlinear function σ, and we discussed some specific choices for σ, for example the sigmoid activation, which looks like an S-curve: if the input z to the sigmoid is very large and positive, then σ(z) is close to one; if z is very negative, then σ(z) is close to zero. That's just one particular choice. Inside the nonlinearity we take the dot product of some weight vector w, which we're going to learn, with the image vector x, add a bias term b, and pass the result through the nonlinearity. So that's the structure of the functions we're going to look at; we'll expand beyond this class, but it's a good place to start. What we said in the previous lecture is that the way we choose the weights and the bias, the w and b, which are the parameters of our function, is by minimizing the training loss. The name for this is ERM, empirical risk minimization. Specifically, we defined the training loss as the average loss; the notation we used was L, which you can think of as a function of w and b, since each setting of w and b gives a particular training loss. We define it as the average over the capital N examples of ℓ(ŷᵢ, yᵢ), where the prediction is ŷᵢ = σ(wᵀxᵢ + b). So for every image xᵢ in our data set, we pass it through the function for a particular choice of w and b, treat the result as a prediction, and compare that prediction with the true label, which in this case is just zero or one.
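The single-layer prediction just described, ŷ = σ(wᵀx + b), in a small NumPy sketch; the weight and bias values are arbitrary illustrative choices:

```python
import numpy as np

def sigmoid(z):
    # large positive z -> output near 1; large negative z -> output near 0
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    """Single-layer network: sigmoid of an affine function of the input."""
    return sigmoid(np.dot(w, x) + b)

w = np.array([1.0, -2.0]); b = 0.5        # illustrative parameter values
print(round(predict(np.array([3.0, 1.0]), w, b), 3))  # 0.818 = sigmoid(1.5)
print(sigmoid(10.0) > 0.99, sigmoid(-10.0) < 0.01)    # True True
```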
Depending on how far the prediction is from the true label, we incur some loss, and then we look at the average loss. We also looked at some specific example losses. Conceptually, the simplest is the 0-1 loss: you incur a loss of zero if the prediction matches the label and one otherwise. What we mentioned in the previous lecture is that ideally you want this optimization problem to be smooth in some sense, and we'll discuss exactly why today; for that reason a better choice of ℓ is the BCE loss, the binary cross-entropy loss. With a specific loss function this is all well defined: you can think of the training loss as a function of w and b, and we're going to find the w and b that minimize it. At least that's the story up to now; we'll make some modifications later. So the plan for today is two things. First, think more about how we do this optimization: how do we find w and b to minimize the training loss? Second, move beyond single-layer networks to multi-layer networks. We discussed some limitations of single-layer networks in the previous lecture: they're basically doing template matching. We have some template w, we compare it with the image, and see how close the image is to the template. The other way to think about it is that it's linear classification: we have a hyperplane in the space of all images, we separate the space with it, and depending on which side an example falls, we classify it accordingly. And we saw some simple examples where linear classification is not sufficient for separating different classes. All right, that's roughly where we left off. Let's start with the first point: how do we find w and b to
minimize the training loss? Unfortunately the magic cloth disappeared, someone dropped it somewhere I can't reach, so we'll have to use the eraser again; I should just buy my own. As always, ask me if things are not clear, and this mark on the board is not a factorial. All right, so how do we choose w and b? The short answer is: using gradient descent. I'll describe gradient descent and some of the variants people use in practice. Maybe you've seen gradient descent before, but I think it's useful to go through the basic idea and then discuss the modifications. As we said, you can think of the training loss as a function of w and b: each w and b you pick, given a fixed training data set, leads to a particular training loss. Just for simplicity, consider the case where w, the weight vector, is a scalar, and b is fixed; we'll ignore b. This doesn't make a ton of sense in practice; it basically says the example x is just a scalar value, so we take the space of x, which is a line, divide it somewhere, and say that one side is labeled one and the other side zero. But it's a good place to start, and then we'll make it more complicated. In that case the training loss maybe looks something like this: L(w) on the y-axis, w on the x-axis. Depending on the choice of w, the training loss is lower or higher, and if you use the binary cross-entropy loss and the sigmoid activation function, then this landscape is going to be smooth; it's not going to have discontinuities.
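The training loss as a function of the parameters, the average binary cross-entropy over the N examples, can be written out as a sketch like this; the tiny two-example data set and weight values are invented for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(y_pred, y_true, eps=1e-12):
    """Binary cross-entropy: a smooth surrogate for the 0-1 loss."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def training_loss(w, b, X, y):
    """Empirical risk: the average BCE over the N training examples,
    with predictions sigmoid(w . x_i + b)."""
    return float(np.mean(bce(sigmoid(X @ w + b), y)))

X = np.array([[1.0, 0.0], [0.0, 1.0]])  # two tiny "images"
y = np.array([1.0, 0.0])                # their labels
good = training_loss(np.array([2.0, -2.0]), 0.0, X, y)   # w points the right way
bad = training_loss(np.array([-2.0, 2.0]), 0.0, X, y)    # w points the wrong way
print(round(good, 3), good < bad)  # 0.127 True
```

Because `sigmoid` and `bce` are both smooth, `training_loss` is a smooth function of w and b, which is exactly what the gradient-based optimization below relies on.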
In that smooth case we can use gradient descent to try to minimize the training loss. But before we do that, let's think about a straw-man algorithm: random search. The idea is very simple: choose, say, k random w's, random according to some distribution that you pick, evaluate the training loss for each one, and pick the best, the w that gives the minimal training loss among this randomly chosen set. This does something: you choose a bunch of values randomly and keep the one that happens to give the smallest training loss. But it won't scale beyond a scalar w to a truly high-dimensional w. If you're working with images and w has the same dimension as the images, the chances of getting a good training loss just by sampling random w's are pretty slim. The reason I mention it is that we'll actually combine something like this with gradient descent in a bit. Any questions? A student asks: could you place them deterministically instead, uniformly dispersed across the available range of values? Yes, you could do that as well, especially if w is a scalar: you can discretize and pick the value that gives the smallest loss. In a higher-dimensional space it's harder to discretize, whereas randomly sampling points still works in higher dimensions, as opposed to having to discretize each dimension. Another comment: exactly, that's the point I was going to make. The other technique for trying to choose w is gradient descent; in the one-dimensional case the gradient is just the slope, the derivative.
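The random-search straw man described above, sample k random w's and keep the best, might look like this; the sampling range and toy loss are my own choices, not the lecture's:

```python
import numpy as np

def random_search(loss_fn, dim, k=2000, lo=-5.0, hi=5.0, seed=0):
    """Straw-man optimizer: sample k weight vectors uniformly from
    [lo, hi]^dim and keep the one with the smallest loss.
    Fine for a scalar w; hopeless in high dimensions."""
    rng = np.random.default_rng(seed)
    best_w, best_loss = None, np.inf
    for _ in range(k):
        w = rng.uniform(lo, hi, size=dim)
        l = float(loss_fn(w))
        if l < best_loss:
            best_w, best_loss = w, l
    return best_w, best_loss

# toy "training loss" with its minimum at w = 3
w, l = random_search(lambda w: (w[0] - 3.0) ** 2, dim=1)
print(l < 0.01)  # 2000 samples on a line land very close to the minimum
```

With dim=1 and 2000 samples this works well, which is the point of the straw man: the same budget sampled in an image-sized dimension would almost surely find nothing good.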
derivative in this one-dimensional example. So again, W is on the x-axis and the loss is on the y-axis. What you do here is start off with some initialization for W, so pick a particular value of W, look at the slope, the derivative of L(W) at that point, and then take a step in the direction that's the negative of the slope. In this case the slope is positive, so you move in the negative direction, and you end up over here: this is our new W, this is the W we started with, and we took a step to the left. To write this down formally: you start with some, let's say, random W, and then execute this for loop. For k equal to 1 through capital K, for some number of iterations... actually, I used the same letter over there; let me just change that so it's not the same K, say that one is M or something. So for k equal to 1 through K, we update our W: W becomes W minus some scalar gamma multiplied by the slope, dL/dW. All right, so there are a couple of things to think about. What this is doing, again, is trying to move in a direction that locally decreases the loss: you look at the slope and move in the direction opposite to it. One important thing to think about here is this scalar parameter, the constant gamma. Gamma is known as the learning rate, or sometimes the step size, and it has to be chosen reasonably carefully. So let's think about it: if you choose a small gamma, what might be the concern? Go ahead. [It's really slow.] Yeah, it's really slow, right: if you make it really small you'll keep moving in the right direction, but it might take you a long time to converge. If you choose it to be very big, what might be the concern there? Go ahead. Yeah, exactly: you might go all the way from here to somewhere over here, and maybe just
keep bouncing around, taking very, very large steps. The gradient, the slope, is just telling you the direction you should move in locally to make the loss go down, but if you take a very large step you might actually make the loss go up. As an example, with a large gamma it could look something like this... yeah, this is a reasonable example: if you start off over here and make a small step, which would be to the right if you follow the negative of the slope, then you make progress in terms of decreasing the training loss. But if you take a really large step, you might end up somewhere over here, and if you keep doing this you might converge to this point eventually, or maybe you just keep bouncing around. So yeah, you have to be slightly careful about choosing the value of gamma. There is some nice theory: if you know things about your training loss, so if you know bounds on its gradient and its second derivative, basically bounds on how quickly it changes, then you can use that to pick gamma. In practice that's often not known, so it's hard to use that theory, but at least theoretically there's some good choice of gamma, good in the sense that you make progress but don't jump around. That assumes you have some knowledge about the gradients and the curvature of the loss landscape. Question? [What if you combine this with random search and start from a bunch of different initial points?] Yeah, good, and you kind of mentioned this already: the challenge with gradient descent is that you can get stuck in local optima. You only make progress locally. So here's an example: maybe you start off over here, you take some gradient steps, that keeps moving you to the right, and eventually you converge to this point. But this point
doesn't have as low a training loss as this point over here; ideally you would have wanted to end up here. So there are a couple of strategies. One strategy is to combine gradient descent with random search: you run that loop many times with different random initializations, and maybe you get lucky; at the very least you can just pick the setting that gives you the lowest loss. Question? [If your step size is big enough, can't you jump out of a minimum entirely?] Yeah, if your step size is big you can definitely jump out of it entirely. What I was saying here is that you pick a reasonable, small step size and then run that loop with different initializations: maybe one of the runs gets initialized here and you end up over here, and maybe another gets initialized here and you end up here. But yeah, that's assuming you've chosen a step size that's not so large that you're always jumping around the curve. Okay, there are other strategies as well, like some other clever initialization: if you happen to know some reasonably good setting for W, you can initialize close to where you think it might be good and then let gradient descent do the rest. That assumes some prior knowledge about parameter settings that might be good. It turns out, and I guess this is one of the real mysteries or amazing things about deep learning, that for multi-layer networks, really large networks, not the case we're looking at right now with a single layer, but really large neural networks, this is not that much of a challenge in practice. People have some theory around why it's not really a challenge. One theory has to do with the convexity, or rather non-convexity, of this loss landscape. So this is a
non-convex landscape: it's got multiple local minima, and the local minima are pretty different from each other, right? This minimum is pretty different from this one. There's some theory that tries to explain the empirical phenomenon that if you run gradient descent, or some variant like stochastic gradient descent, which we'll see in a bit, on deep neural networks, really really large, over-parameterized neural networks, then it tends to find local minima that are actually either close to global minima or are actually global. So that was initially an empirical observation, that training very large neural networks seems to make the optimization problem easier, and there's some recent theory trying to explain why that might be the case. Question? [Do you need constraints on W for the random search?] So with random search you don't necessarily have to have constraints: you can choose, say, a Gaussian on the weights W and then sample from that Gaussian. And I guess what we're doing here is sampling the initial values and then running gradient descent from each of those randomly sampled initial values, so there are no constraints necessarily. All right, other questions? Okay. If you're interested, and this is way beyond the scope of this course, but if you're interested in one theory of modern deep learning optimization, probably the place to start is neural tangent kernels. This has been one of the big developments in theoretical machine learning over the last five years or so. It doesn't fully explain the empirical phenomenon I was talking about, that optimization for really large neural networks seems not to suffer from this problem of getting stuck in local minima, but it makes some good progress. If you're interested, take a look, but we're not going to have time to cover that. All right, so
yeah, let's think about the more general case, when W is a vector, some d-dimensional vector, same dimension as the input x. In this case what we can do is replace the derivative with a gradient and keep the general structure of the algorithm exactly the same. So we still want to find some W and b that minimize our training loss. Again, just for the sake of the pictures, I'll draw things as if they were scalar, but think of the x-axis now as containing both W and b, and the y-axis as the loss. Really it's not just one x-axis; the domain is very high-dimensional if you're working with images. As a piece of notation, I'll define W tilde to be the concatenation of the vector W with the scalar bias term b, and then we can essentially apply the same process as over there. So gradient descent is going to have the same general structure. For k equal to 1 through some number of iterations (I'll come back to the number of iterations in a bit, but let's say we're doing this for some finite number of iterations), we go through this series of steps. Actually, sorry, before this: we choose some random W tilde, that is, a random initialization for W and b. Then inside the loop we update W tilde, and at the end we return W tilde. I guess I'm abusing notation a little bit when I write L(W tilde): what I mean is the training loss of W and b; it's just more convenient to think about one vector that holds all the decision parameters. But it's the same general structure, right: we choose some initialization for W tilde, for W and b, and then we do a bunch of updates. At each iteration we look at the gradient of the loss function with respect to W tilde, evaluated at the current W tilde, and then
we take a step: we update W tilde in the direction that's the negative of the gradient, so W tilde becomes W tilde minus gamma times the gradient of L evaluated at the current W tilde. Gamma is still a scalar, the learning rate again, and the things we said about the learning rate carry over here: if you choose it to be really small, each step makes only a small update, so it might take a while to make progress on the training loss; if you choose it too large, you might jump around and potentially not even converge. Okay, questions on this? Yeah. [Is it the same learning rate for every component?] Good, so the way I've written it here, it's the same learning rate, the same gamma, applied to each dimension of W tilde: each dimension takes a step proportional to the corresponding component of the gradient, and the proportionality constant is gamma for all components. In practice you can have learning rates that are different for each component. The other thing you could do is somehow normalize W tilde: you could scale everything so that each component has roughly the same scale, and in that case choosing the same gamma for all components makes sense. The problem you might have otherwise is that if, say, the first component of W tilde has a huge influence on the loss and the second component has a tiny influence, then you might want to take a larger step for the second component compared to the first. But if you scale the parameters so they're all roughly comparable, using the same gamma for all components makes sense. There are various tricks for doing this; I'll mention some variants of gradient descent that people actually use in practice, and we'll come back to this point of choosing the learning rate. Good, other questions? Maybe let's think for a minute about this for loop. You could replace it with a while loop, so instead of just running
this for some finite number of iterations, you can run it until some termination criterion is reached. So we could do a while loop with the same process, where W tilde is updated as before. I guess, what would be a good criterion to stop the iterations and return some answer? [Check the loss?] Great, yeah, so some condition for the while loop would be a reasonable choice. Say it again? [A threshold on the training loss.] Yeah, you could definitely do that: you could check whether the training loss is below some threshold that you set. That could work; the challenge is that maybe there's no way to actually achieve the training-loss threshold you set, and so this might never terminate. [What about the change in the loss?] Exactly, so you could look at the change in the training loss: at every iteration you keep track of the previous training loss, you see how much the training loss is being updated, and you put a threshold on that. Another, roughly equivalent way is to look at the norm of the gradient, the size of the gradient: if that size gets very small, you terminate as well. In the 1-D picture, if you approach this point over here and see that the slope is below some threshold you picked, then you terminate. That's kind of similar to saying the next update is going to be small, and then you terminate. All right, other questions on this? Okay. So I think this is pretty reasonable: what we're doing is, at every iteration, moving in the direction of steepest descent, which is the minus of the gradient, and we keep doing that until we converge or we run out of patience. So let's think about what we really need to implement gradient descent. This makes sense as a general structure, but we need to actually calculate the gradient to
implement this scheme. So how do we compute gradients? In this supervised-learning setting that we're looking at, there are some pretty standard answers. When we discuss reinforcement learning we'll have a similar algorithm, but calculating the gradients gets much more complicated; for now let's stick to supervised learning. One option is to calculate, or let's say estimate, gradients numerically, basically using a finite-difference approximation. So how would this work, how would you implement it? Actually, let me motivate this a bit more. Suppose you don't have an explicit representation of your loss function. In the example I was describing before, we could write down the loss function exactly, the exact function that maps W tilde to a loss, but just for the sake of argument, let's say you only have oracle access to the loss function: you can query it, so you can give it a particular value of W tilde and ask what the loss is at that W tilde, but you don't have an explicit representation of it. Go ahead. [Perturb the first element of W?] Yeah, good, so for each component... let me write it down. You compute the difference L(W tilde plus epsilon) minus L(W tilde), that is, the loss evaluated at W tilde plus some small perturbation minus the loss at the original point, divided by epsilon, where the perturbation vector is a bunch of zeros, then epsilon in one component, then a bunch of zeros. So you perturb one component of the weight vector, compute the loss at the perturbed point, compare that with the loss at the original point W tilde, and divide by the perturbation amount, which is epsilon. I guess if you think back to your multivariate calculus course, this is how gradients were defined, except that you take the limit as epsilon goes to zero. Here we're just saying: choose a finite,
non-zero epsilon, and instead of taking the limit, just calculate this quantity for each component; the concatenation of all of those gives you the gradient. All right, so this is one way to do things, and it can be useful in the setting I mentioned, where you don't have an explicit expression for the loss but you do have oracle access: you can ask what the loss is at a particular W and you get back some number. One example is if you're doing some kind of optimization with a human in the loop: you could ask the human, hey, how do you like this setting of things, and the human says good or bad, or gives you some number; then you make some perturbation, ask them again, and optimize things that way. There are examples, well, not exactly what I said, but variants of it, where you don't have explicit access to the loss function, and this kind of scheme is useful there. I guess in the setting we're looking at, we don't really have this problem, since we actually can write down what the loss function is. Actually, maybe another example that's a bit more practical: say you're running some kind of optimization process for a legged robot. I actually did this as an undergrad; I spent many, many hours watching a legged robot, a hexapod robot, while tuning its gait, where gait means how it walks. There are a bunch of parameters that define how the robot walks, so you set some parameters, which would be analogous to W tilde here, and the robot does something, right, it moves around in some way, and then you can calculate some metric. There are a bunch of options: cost of transport is one, which has to do with energy efficiency and speed; it
combines energy efficiency with speed, since you want high speed but also low energy use. Or you could just look at raw speed, how quickly the robot runs. In that case there's no explicit function we can write down, so you can't analytically compute the gradient, and something like finite differences actually makes sense there: you choose some values for the parameters that define your gait, see what the speed or the cost of transport was, update the parameters using this kind of finite-difference approximation, and so on. So that's another example. But in our setting, this supervised-learning setting that we're looking at, we can just calculate gradients analytically. The training loss we defined as the average loss across our training examples, so the gradient, which is what we need, of L(W tilde), is the gradient of this quantity: one over n times the gradient with respect to W tilde of the sum of the losses. By linearity of the gradient, I can bring the gradient inside the summation, and then we have an explicit expression for sigma and for the loss: if you're choosing the sigmoid activation, we can write down exactly what that function is, and if we're choosing the binary cross-entropy loss, we can write down exactly what the function L is. So we can actually calculate this via the chain rule: you look at d sigma / dz, and then you look at the gradient of this expression with respect to W and b. Of course you could do this by hand, but there are some nice automatic differentiation tools; all the modern machine-learning packages have ways to calculate these gradients automatically. If you just tell the software package, I'm using the binary cross-entropy loss and the sigmoid activation, then it basically handles the gradient
computation for you; under the hood it's calculating these gradients. All right, any questions on that? Okay, so what are the challenges with what I described over here? One major challenge is computational efficiency. The gradient-descent procedure I described requires computing the gradient for each term in the summation, and each term corresponds to a particular example in your training dataset: for each example, you need to calculate the gradient of the loss with respect to your parameters and evaluate that gradient at the current setting of the parameters. This can be pretty computationally expensive for large datasets, because we basically have to loop over, or at least evaluate, the gradients for all the training examples, and when we have lots and lots of training examples, this can be quite computationally challenging. So one variant of gradient descent is what's known as stochastic gradient descent, abbreviated SGD, which gets around this. It has two benefits: one is the direct benefit, which has to do with computational efficiency; there's a side benefit, much more subtle, that I'll get to in a bit. But let's describe what stochastic gradient descent does. Another name for it, not used as commonly, is mini-batch gradient descent, but usually people just call it stochastic gradient descent. With SGD, instead of evaluating the gradient for all of the examples in your training dataset, we're just going to randomly sample some fraction of the examples and evaluate the gradients for that randomly sampled batch, that randomly sampled subset, and then keep randomly selecting some proportion of the data at every iteration. So let's
write down the steps of the algorithm. Again, as before, we choose some W tilde to initialize everything, maybe just randomly, and then we execute this for loop; again you can replace the for loop with a while loop where you check some of the conditions we discussed before. Inside the loop, we randomly sample capital B examples from our training dataset; I guess here I mean uniformly at random. I'll say a bit about the choice of B, but think of B as some number less than capital N, where capital N is the total number of examples in your training dataset. Then we define L hat of W tilde, which is the average training loss (I won't write down the whole expression) on the B examples we randomly chose in step three, and then we apply the update as before. Sorry, questions? [Is this also about jumping out of local minima?] Sorry, one more time? Yeah, good, so I mentioned there are two benefits to stochastic gradient descent; what I'm describing now is just the computational benefit, which is the most direct one. What you're saying is also true and I'll get to it in just a minute; let me just write down the steps first. That point is much more subtle and much less theoretically well understood, but I'll get to it in a second. All right, so we define the loss L hat across the randomly selected batch, the subset of examples, we compute the gradient of L hat with respect to W tilde, we update W tilde, and then we return W tilde. Actually, sorry, I keep forgetting the evaluation: the gradient is evaluated at the current setting of W tilde. So it's almost exactly the same structure: we're still taking steps in the direction opposite the gradient, we
still have this learning rate gamma. The only difference is that at every iteration of the for loop, instead of evaluating the gradient for all of the examples and summing them up, we randomly select some proportion, B over N, of the examples from the dataset, and that random batch keeps changing: every time, we randomly select some examples and then run the gradient-descent update using just those examples. Typically, I guess, you might choose maybe 20 or so examples; partly this depends on how large your training dataset is and how much computation you have access to, so if you have a certain budget for computation, that can guide the choice of B, but something like 20 or so can work in practice. What this does most directly is improve computational efficiency: we've reduced the number of gradient calculations we're doing at each step, since we're just using a batch of B examples to do the updates. Any questions on that? All right. The second point is more subtle. Empirically it seems to be true; theoretically there are some papers, and I'm happy to point you to some theory, though I don't find it 100% satisfactory. So there are basically two benefits; actually, let me say there are really three. The most direct one, about which there's no question, is computational efficiency: each iteration requires significantly less work than in plain gradient descent. It could still be that the number of steps you need to converge to something good is larger, potentially, but at least each step takes less computation than the full version of gradient descent. The second, potential benefit, which seems to be empirically valid for deep learning, is that you can jump out of
local minima, because there's this kind of inherent randomness in the steps you're making. All right, let me draw the one-dimensional version of the picture. With standard gradient descent, you're just going down the slope; you're going to converge to this point, and there's no randomness in the process except for the random initialization. But with stochastic gradient descent you're not going down exactly the slope: you're not taking a step that's exactly aligned with minus the gradient, because you're calculating the gradient stochastically, just computing an approximation of it using some finite number of examples. So it could be that you basically jump out: instead of converging to this solution, you jump out, end up over here, and then converge to this one. And this can happen. I guess, do people have intuitions for why this might be a good thing, or do you have a question? Go ahead. [Couldn't you just as easily jump into worse areas?] You could also jump into worse areas, yeah, the opposite could also be true. So, and I guess this is why I think it's not totally clear exactly why this is a benefit: it seems to help, it seems to be the case that the kinds of solutions you get with stochastic gradient descent are better than with plain gradient descent. Let me add the third point, because points two and three are kind of coupled: you can get better generalization. We haven't really talked about generalization yet; we're going to spend time talking about it much more deeply in the next lecture, and I briefly mentioned it in the previous lecture. Optimizing the training loss is just a surrogate for what we actually want: we already have the training dataset, right, we don't need to predict the labels for the training examples, we already have this data.
What we need is labels for future examples that are not part of our training dataset. That's what I mean by generalization: doing well on examples that were not part of the training dataset. And empirically, at least, it seems like using stochastic gradient descent helps with better generalization. Since we haven't really talked much about generalization yet, I won't go into the details, but the rough intuition is that you can jump out of minima that are very sharp. If you're using gradient descent, so not the stochastic version, and you start over here, you might end up over here; but with stochastic gradient descent, because of the randomness, and because this is a kind of sharp valley, you might jump out and end up over here, in a wider local minimum, wider in the sense that it's not as sharp as this one, so the curvature here is smaller than the curvature there. And I guess there's some intuition for why, even if these two points have the exact same training loss, the wider one tends to generalize better, to do well on examples that are not part of your training dataset. Like I said, that story is much more subtle; we'll spend more time thinking about generalization in the next lecture. I guess this maybe partially addresses your question. [So flatter minima give solutions that are still pretty good nearby?] Yeah, that's reasonable intuition, and there's some theory, but like I said, not super satisfactory. [Why are they more stable out of distribution?] Yes, and not even necessarily out of distribution, even just on other examples that are not in the training dataset. What I'm saying is that there are papers that try to formalize this, if you're interested. This is way beyond the scope of this course, but I
guess if you're interested, there's a person at TTI, the Toyota Technological Institute, Nati Srebro, that's S-R-E-B-R-O, who has done a bunch of work on trying to understand this phenomenon, this picture over here: why flatter minima tend to generalize better. There are a bunch of other people working on this as well, but he gave a talk here a couple of years ago, so I remember his work; if you're interested, take a look at that. [So this point is not in the training dataset?] Yeah. I don't want to harp too much on the generalization point just yet, because we're going to spend much more time in the next lecture talking about generalization, so maybe we can revisit this question then; for now, maybe take it on faith, or look at some of the work I mentioned. But yeah, the computational benefit is definitely true, and empirically, at least as of right now, 2022, these other things also seem to be true, and there's some theory around them. All right, other questions? Go ahead. [Could you choose which examples go into the batch so you end up at a better minimum?] Oh, I see. That I don't know how to do, how to choose the examples such that you end up at a better minimum. That's an interesting idea. Typically what people do is randomly select some fraction of the training examples and use that, but maybe there's some clever algorithm that better selects the examples you use. That's a good question; I don't know of people who've looked at that. Cool. All right, let me see where we are on time; about 10 minutes. So stochastic gradient descent is the workhorse, in terms of optimization, for deep learning, and there are some other bells and whistles. In practice, the variant of stochastic gradient descent that's most widely used is called Adam. I guess this was
proposed maybe seven or so years ago, and if you look at the Adam paper it has some absurd number of citations, I have no idea what it is anymore, tens of thousands of citations. It's the most popular algorithm for doing optimization in deep learning. Basically, what it does is also adaptively choose the learning rate. In what I described so far, we just choose some fixed learning rate; Adam has a way of adaptively changing the learning rate based on the statistics of the gradients being calculated. It's still basically stochastic gradient descent, but with this addition of choosing the learning rate more carefully. The other thing I'll mention is GPUs, graphics processing units. When we run stochastic gradient descent, a lot of the computations are parallelizable: each term in the summation can be calculated independently of every other term, and then you sum them together, and that's the kind of computation GPUs are particularly good at, relatively simple calculations that you can do in parallel. That tends to significantly reduce the computation time in practice. All right, any other questions on the optimization part? Okay, so the last thing I want to talk about, and this is just a preview, we'll talk more about it later, is going beyond single-layer neural networks to multi-layer networks. The basic idea is fairly straightforward: we're just going to stack a bunch of single-layer networks together until we get some output. The input is again some image, let's say some vector x; the output we can call y; and each intermediate thing is a hidden layer, h1, h2, and so on, up until you get to the
output layer y. So that's the general picture; let me write down the exact computation. The steps for calculating the output are as follows. h1, the output of the first hidden layer, looks very similar to a single-layer neural network calculation, but with a couple of modifications: each of these h_i is going to be a vector, so instead of a weight vector we have a weight matrix, and instead of a scalar bias term we have a bias vector. Say x has dimension n by 1. W1 has dimension n by d1, so W1 transpose has dimension d1 by n, and the bias b1 has dimension d1 by 1. So what we end up with once we do the calculation h1 = sigma(W1^T x + b1) is a d1-by-1 vector; that's h1. That's the first computation: we start with the input, which is a vector, and we do this computation. Sigma here is some elementwise non-linearity: the thing inside sigma is a d1-by-1 vector, and we apply a non-linear function, like sigmoid or ReLU or any of the other choices, to each component of that vector individually. Then we repeat this computation: h2 = sigma(W2^T h1 + b2), and so on, so h_k = sigma(W_k^T h_{k-1} + b_k). And finally, the output y is the same computation again: y = W_{K+1}^T h_K + b_{K+1}.
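The layer-by-layer computation just described, h_k = sigma(W_k^T h_{k-1} + b_k) with a linear output layer, can be sketched in a few lines of NumPy. This is only an illustrative sketch, not course code: the dimensions (n = 4, d1 = d2 = 8, output 3) and the choice of ReLU for sigma are made up for the example.

```python
import numpy as np

def relu(z):
    # elementwise non-linearity: applied to each component of the vector
    return np.maximum(0.0, z)

def mlp_forward(x, weights, biases):
    """Multi-layer forward pass: h_k = sigma(W_k^T h_{k-1} + b_k) for the
    hidden layers, with the final (output) layer left linear."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W.T @ h + b)              # hidden layer, shape (d_k,)
    return weights[-1].T @ h + biases[-1]  # output y

# toy dimensions: input n=4, hidden d1=8 and d2=8, output dimension 3
rng = np.random.default_rng(0)
n, d1, d2, d_out = 4, 8, 8, 3
weights = [rng.normal(size=(n, d1)),
           rng.normal(size=(d1, d2)),
           rng.normal(size=(d2, d_out))]
biases = [np.zeros(d1), np.zeros(d2), np.zeros(d_out)]
y = mlp_forward(rng.normal(size=n), weights, biases)  # y has shape (3,)
```

Note that each W_k is stored with shape (d_{k-1}, d_k), matching the W^T convention on the board, so W.T @ h maps a d_{k-1}-vector to a d_k-vector.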
And the way I've written this, y need not be a scalar: it could be that y, the label, is itself some vector. It could also be a scalar, in which case you just have to make sure the weight matrices give you a scalar at the end, so you might have a weight vector there instead of a weight matrix; it just depends on exactly what the label is. All right, so that's the rough structure: we're repeatedly applying a single-layer computation, so we end up with these intermediate representations, and finally get some predicted label y. I'll say much more about why this might make sense; the rough intuition is that this process can represent functions that are non-linear in x, so we're no longer limited to the linear transformation, or template matching, that we were doing with a single-layer network. What I've described here, this general computation, is a multi-layer perceptron, an MLP. It's just a generic architecture that I've written down. If you add more structure to the weights, you get different architectures, like convolutional networks, and we'll spend some time talking about those as well. All right, that's a preview of what we're going to talk about. Any questions on this computation? [Student question:] Do the dimensions, like the original input and h, have to be the same? Good question: they don't have to be the same. It could be that we start with some input dimension, then make the representation have a larger dimension, and then go back down again to whatever the label dimension needs to be. So the dimension can be increasing or decreasing, and any of those are valid for this architecture. Good. Other questions?
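Circling back to the optimizer for a moment: the Adam update mentioned earlier, adapting the step size from running statistics of the gradients, can be sketched as below. This is a minimal illustrative sketch of the standard update rule, with hypothetical names and a toy quadratic objective, not code from the lecture.

```python
import numpy as np

def adam_step(theta, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and its elementwise square (v); the step is scaled by 1/sqrt(v), so
    each parameter effectively gets its own learning rate."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    # bias correction: m and v start at zero, so early averages
    # underestimate the true statistics
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)

# toy use: minimize f(theta) = theta^2, whose gradient is 2*theta
theta = np.array([5.0])
state = {"t": 0, "m": np.zeros(1), "v": np.zeros(1)}
for _ in range(3000):
    theta = adam_step(theta, 2.0 * theta, state, lr=0.1)
```

The ratio m_hat / sqrt(v_hat) is what makes the effective learning rate per-parameter: coordinates with consistently large gradients get scaled-down steps, and vice versa.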
All right, so what we're going to do in the next lecture is say a bit more about multi-layer networks, talk a bit about how to optimize them, and spend time thinking about the question of generalization as well. I think everyone knows this, but some people asked me at the beginning of the lecture: there's no class on Tuesday, since Tuesday is following a Friday schedule. So enjoy your Thanksgiving and I'll see you after.
Introduction_to_Robotics_Princeton
Lecture_1_Princeton_Introduction_to_Robotics.txt
All right, I think we're going to go ahead and get started. So here's the plan for the day. Welcome to lecture one of Intro to Robotics, MAE 345/549. Here's the agenda: we're going to start by talking a little bit about logistics for the course, assignments, projects, and so on; I'll say a little bit about the motivation for the course and give you an introduction to the content we're going to cover; and then I'll talk about the syllabus and give a broad overview of the specific technical topics we're going to cover in the course. Maybe just a quick sound check: this is the back, can you hear me okay? Okay, good, perfect. My name is Ani Majumdar; I'm faculty in the Mechanical and Aerospace Engineering department and also associated faculty in the Computer Science department. I also spend a little bit of time doing research at the Google AI lab at Princeton, roughly a day a week, doing research in robotics and control theory. My full first name is kind of hard to pronounce, so I usually just go by Ani, and you're welcome to call me that; if you're more comfortable saying Professor Majumdar, that's obviously fine too. If you're interested in learning more, feel free to look at our lab website and also talk to some of the PhD students in our group. Speaking of PhD students, we have a really solid team of AIs for the course, five in total, so I'll give a quick introduction; if you're here, you can just wave. Sasha Badrova is a third-year PhD student in my group; Eric Lebowski is a fourth-year PhD student in Professor Alex Cladron's group; Anya Olsen is a master's student in MAE; Allen Wren is a fourth-year PhD student in my group; and Nate Simon is also a fourth-year
student, a third-year student rather, not yet in my group. All of the AIs have taken the course before, and a couple of them have TA'd the course before, so I think you're in really good hands. We also have some support from the MAE department in the form of John Primo; if you've taken lab courses in MAE, you've probably interacted with him. He'll be helping us out a fair bit with the hardware component, the lab component of the course, which I'll say more about later in this lecture. Here are the prerequisites for the course. Written down like this, they look a little bit intimidating; they're not meant to be, they're just meant to be exhaustive so I cover my bases. I'll go through them quickly and roughly say what I expect you to know. I don't expect everyone to be absolutely comfortable with everything on this list, and it's perfectly fine if you're rusty on some of these topics. The first one on the list is multivariable calculus: if you're familiar with partial derivatives, gradients, things like that, you should be in decent shape; that's MAT 201 or 203 at Princeton, or something equivalent if you're a graduate student. Linear algebra: just the basics, vectors, matrices, matrix multiplication, that kind of thing. Basic probability: if you've taken ORF 309, that's more than sufficient. This is one where, especially if you're coming from an MAE background, you probably haven't seen much probability recently, or if you have, it was a while ago and maybe you don't quite remember it; so I'm actually going to give you a quick crash course in the basic probability we'll need for this course when we get to the part where we need it. If you've seen Bayes' rule, maybe just a quick show of hands: how many people could write down Bayes' rule? I'm not going to ask
anyone to write it down, but if I asked you to, could you? Okay, maybe a third or half of the class. So if you're familiar with Bayes' rule, you should be in good shape; if not, you will be familiar with it by the time we're done with this course. Some basic differential equations: I don't expect you to be able to solve differential equations analytically; in fact, basically none of the differential equations we'll encounter in this course will be analytically solvable, except for a few special cases. What I do expect is that you understand what a differential equation means, and the basic intuition that a differential equation allows you to predict the motion of some system given some initial conditions. And finally, some programming experience, COS 126 or something equivalent; hopefully everyone has taken that. We're going to be using Python for this course; a lot of the modern robotics and machine learning toolboxes and software nowadays use Python. If you haven't seen Python, we're going to do a quick intro, a very quick crash course on the basic syntax and some packages like NumPy and SciPy. Sasha, one of the AIs, will be leading an intro-to-Python session. This is not mandatory; we'll schedule it based on the poll we sent out for office hours. If you think you'd benefit from it, you're welcome to join; if not, there's no pressure at all. We'll have it recorded as well, and the slides will be posted, so you can take a look offline if that's what you prefer. More logistics, on grading: the bulk of the grade, 45 percent, will come from problem sets. The problem sets will be a mixture of theory, programming, and hardware; I'll say a bit more about the hardware lab component later in this lecture. They'll be assigned on Wednesdays and will be due by midnight, also on Wednesdays. The first
problem set will go out not tomorrow but the week after, eight days from today, and it will be due a week after that. There will also be a take-home midterm exam that you'll work on individually; early November is the time frame I've set for it. I put some tentative dates in the syllabus, so if you want to plan ahead, take a look at the syllabus, which should be available on Canvas. Hopefully you have access to it; if you don't, let me know and I'll fix it. And finally, there's going to be a final project at the end of the course, which, again, I'll say more about; it's going to be on drones, and I think it will be a lot of fun. One quick comment on the schedule for the semester: maybe you've noticed, maybe not, but things are slightly weirdly scheduled this academic semester. In particular, this class will not meet the week of Thanksgiving, and on the Tuesday before Thanksgiving, Princeton is doing a Friday schedule. Because of things like that, I've had to make some tweaks to the due dates for assignments and exams, but I think the calendar we have is pretty good and should not put an undue burden on you. We sent out, Eric sent out, a poll for office hours; many of you have filled it out, and if you haven't, please try to do that today. We're going to finalize the office hours for the course, probably at some point tomorrow, based on the responses to the poll. The goal of the poll is to make sure we have good coverage during hours when you're available. We'll have six hours total of office hours: each of the AIs will do one hour, and I'll do one hour as well. I think we'll have pretty good coverage throughout the week, and hopefully across different times when you're available. Some more logistics and policies for the
course. These are written up in the syllabus in a bit more detail; I'll just provide an overview so we're on the same page. Collaboration on assignments, on the theory, programming, and hardware, is definitely permitted; more than permitted, you're encouraged to collaborate with your fellow students on various aspects of the problem sets. For the theory and coding portions, you should submit your own write-up that reflects your own understanding of the material. Like I said, there's going to be a hardware lab portion as well, with drones, and that will be in teams; again, I'll come to the logistics for that in just a little bit. And of course, you should make the AIs' job as graders as easy as you can: make your write-ups legible so they're not spending hours trying to figure out what you wrote, and if you're submitting code, make sure things are reasonably commented so you can get partial credit even if things are not perfectly right. We're also going to implement a late policy; this is something that comes up a whole bunch during the semester. You are welcome to submit things late, but there will be a 10 percent deduction for every day of lateness, and if you submit beyond a week, we won't accept it. We'll enforce this pretty strictly; the only exceptions will be things like personal emergencies and illnesses. Obviously, if those kinds of things come up, just send me an email and we'll make an exception; but if you tell me you were busy with some other midterm, we won't. So we'll enforce the late policy pretty strictly in that sense. Like I said, there will be a take-home midterm exam; it will be individual, open notes, open book, but no internet and no collaboration; you'll work on it individually. And finally, regular attendance is strongly
encouraged. I'm not taking attendance, and it's definitely not mandatory, but hopefully you'll enjoy the lectures, get something out of them, and keep coming. We have some references for this course; there are no required textbooks, nothing you need to buy, but we'll use three textbooks loosely as references. The first is Introduction to Autonomous Mobile Robots by Illah Nourbakhsh and Roland Siegwart. The second, one of my favorite textbooks, is Planning Algorithms by Steve LaValle. And the third reference is Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard, and Dieter Fox. There are PDF versions of some of these books available online, and I've linked to where you can find them, so you're not required to buy anything. As I said, we'll use these loosely: I'll supplement all the lectures with lecture notes and slides that I'll post on Canvas, and those might end up being your main reference. But for supplemental reading, if you're interested in digging into some of the details, I think these are really solid references that you might end up using if you continue with robotics beyond this course. Some other logistics: we'll use Gradescope, accessible through Canvas, for the course; hopefully at this point everyone has used Gradescope and is comfortable with it. If not, it's a pretty easy thing to use, and it makes the job of the AIs as graders easier as well. We're also using a discussion forum for the course; I sent out an email at some point last week, so you can get started on it. This will be for questions that come up, primarily about assignments, mostly about labs probably, hardware issues, things like that, potentially also about some of the theory questions in the assignments, and maybe about logistics: if
you have questions about logistics, also feel free to post on the discussion forum, particularly things you think others might benefit from. If there's some question that other people in the class might benefit from having the answer to, post it there, and we'll try to respond as frequently as we can, within reason. All right, the final piece of logistics is MAE 549. The course is divided between MAE 345, which is cross-listed across ECE and Computer Science as well, and MAE 549, the graduate-level track. MAE 345, the undergraduate version, has 72 students enrolled; the grad track, 549, has 12 students enrolled. The lectures are going to be shared for both tracks, and many of the problems are going to be the same for 345 and 549; the 549 students will just have slightly more challenging problems and some additional challenges in the assignments, but the rough content of the course is the same. All right, any questions on these logistics before we move on to discussing some of the course content? All right, I guess everything is clear, good. So let me talk about some of the motivation for the course: what are we going to cover? The course is titled Introduction to Robotics, and so of course the first natural question is: what exactly is a robot? As a side note, if you ever find yourself at a robotics conference and you really want to start a fight, this is the question you should ask. It turns out that this basic question, what is a robot, is something that elicits really strong reactions and strong opinions from roboticists; it's not something people universally agree on. But for the purposes of this course, we're just going to give a working definition: we're going to say that a robot is an
embodied agent that can be programmed to perform physical tasks. Let me write down the key phrases I've highlighted here: the first is embodied agent, the second is programmed, and finally, physical task. Like I said, this is a working definition that's good enough for this course; it's not something everyone agrees on, and there are other reasonable definitions you might encounter if you continue with robotics. If you just open up Wikipedia, or just a dictionary, this is the definition you might encounter: a robot is a machine, especially one programmable by a computer, capable of carrying out a complex series of actions automatically. It's essentially the definition we gave: this embodied agent, a machine, programmable, a complex series of actions. I've highlighted or emphasized one word, automatically, which doesn't directly appear in our definition. Here's another one, from the Robotic Industries Association: a robot is a reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks. Again, it's pretty similar to the definition we gave, a bit more verbose, with one interesting aspect: a variety of tasks, which didn't directly appear in our definition; we said physical tasks, but not necessarily a variety of physical tasks. And finally, this one is from the IEEE: a robot is an autonomous machine capable of sensing its environment, carrying out computations to make decisions, and performing actions in the real world. I think this is a really solid definition as well, and it maps pretty well onto the definition we gave, with one exception: the word autonomous again appears in this definition, and it didn't appear in ours. So
ultimately i think all proposed definitions have some or the other problem they're not like completely satisfactory and i think to understand why it's helpful to just go through some some examples and think about what is or or is not a robot so this is boston dynamics is a atlas humanoid robot so i think this is completely uncontroversial so if this is not a robot then nothing is a robot so this definitely satisfies our definition it's an embodied agent it has some physical embodiment uh it's obviously programmable so you've probably seen many youtube videos of the boston dynamics engineers uh having programmed mr about to do too many things many physical tasks locomotion manipulation and and so on so i think this one uncontroversially is a robot um the second one is a crazy fly drone so i have a um i brought one of the the drones here so this is the the robot that we're going to be using for for this course it's a small drone that we're going to program to do a variety of tasks and i think again this one is is uncontroversially a robot drones in general are pretty uncontroversially robots uh again they satisfy our definition uh so embody the agent so clearly i brought it here this has been body program program programmable that's what we're going to do in this course and physical tasks i think is is maybe the the most uh ambiguous one so i guess maybe does someone want to challenge that one like does a drone really perform physical tasks or or not any thoughts on this just yeah yeah yeah yeah exactly so so they're not in if you have a very strict definition of what it means to perform a physical task like manipulating some external object uh directly that's not something they're doing right like typically they're just manipulating themselves but i think our definition of physical tasks we can keep to be like broad enough to also encompass that so yeah we're going to say this is a robot i guess if we said this was not a robot then this would be somewhat 
disappointing, of course, since we're spending all our time in this course working with it. Okay, here are some other more interesting, maybe ambiguous, examples. This is a dishwasher. Does someone want to argue for or against this being a robot? [Student responses for and against.] Okay, I think this is definitely on the border. It kind of satisfies the definition: it's embodied, clearly, we put dishes in it; the physical task is also relatively clear, it's washing and cleaning your dishes; programmable is the part that's not completely clear. You can program it to some degree, you can select rapid wash or whatever cycle, but it's not like you're writing Python code to program your dishwasher. You have another point? [Student response.] Yeah, and even a dishwasher maybe has some basic sensing, like sensing whether water is overflowing, so it has some basic sensing capability. All right, the goal of this exercise is not to come to any sort of consensus, but you could say this is kind of a robot; then again, if this whole course were about programming dishwashers, you would be very disappointed in our intro to robotics course. So it's not a completely satisfactory robot, even if it satisfies most of our definition to some degree. Another example: this is an automatic door. This is maybe another kind of counterexample. There is some sensing here, a little camera or some kind of sensor that senses whether a person is about to enter and opens the door; so there's some sensing, some actuation, some computation, and again it mostly satisfies our definition. There's obviously an embodied agent, there's an actual door there; the physical
task, opening and closing, is relatively simple; programmable, again, is questionable. Someone obviously programmed it to do something, but nothing super interesting. Does anyone have strong opinions one way or the other on this one? This one is also kind of ambiguous. All right, here's another one: a voice assistant, like Amazon's Alexa or something like that. What about this one? Thoughts on whether this is a robot, for or against, according to the working definition we proposed? [Student response.] Okay, so the fact that it can produce sound, you're saying, means it's an embodied agent. The physical task part is a little bit ambiguous: if the physical task is just projecting sound, then yes. The programmable part is completely unambiguous; some of the most advanced AI goes into these voice assistant systems. But yeah, the other parts are less clear. Any counterarguments? Would someone argue that this is not a robot? [Student response.] Yeah, I think that's fair. Which part would you say it doesn't quite satisfy? Physical task is the most ambiguous one, probably. You could argue, just for the sake of argument, that it is performing some physical task: you ask it a question, it gives you an answer, and that affects something you do. You say, Alexa, what's the weather today, it says bright and sunny, you go for a walk; so indirectly it's performing the physical task of making you go for a walk. You could argue that, but it's not completely satisfactory. Good. [Student response:] You could say it's moving air. Moving air, yeah, that's a good one: in order to produce sound, it's moving air, air is physical, so it's
performing a physical task. Okay, good. And yeah, it's obviously taking up some electricity, raising your electricity bill a little bit, so it's performing some physical tasks. Anyway, again, the goal of these exercises is not to come to some consensus, you could argue about this for hours, probably, but just to think about the basic fact that it's not so simple to define exactly what a robot is. [Student comment.] That's a great point as well: it's sort of like a friend, almost, and in that sense it's a robot, a friendly, well, hopefully friendly, robot. All right, here's the last of these little thought experiments: this is a da Vinci surgical robot. Well, I guess I gave away my perspective on whether or not this is a robot. This is a surgical system that is tele-operated: there's a surgeon who is physically controlling these joysticks, which then eventually leads to some manipulation of tissue, or whatever it is the surgeon happens to be doing. What about this one: do people think this is a robot or not? [Student response:] I'd definitely say it is; it's in the most sketchy area, but it definitely is assisting you. Yeah, I would agree with that; I think it satisfies the definition relatively cleanly. It's obviously embodied, it's performing a physical task; like you said, the programmable part is the most ambiguous. There is actually some relatively sophisticated feedback control going on to make sure that even if the surgeon's hands are vibrating a little bit, that doesn't transfer over to the end effector, the thing that's actually manipulating tissue inside the body. The reason I specifically brought this up is that if we go back to some of the alternate definitions I provided, at least a couple of them have this word, automatically, in there, which I didn't include in the definition I gave: automatically, autonomous, and so on. This robot, the da Vinci surgical robot, and
any robot that's tele-operated, is not fully autonomous: there's a human who is controlling, at least at a high level, what the system is doing. I definitely think this is a robot; I don't think we should rule out human-robot interaction altogether by insisting that robots be completely autonomous. I think this is a great example of an actually commercially successful robotic system. Okay, so like I said, ultimately all proposed definitions have some issue or other. If you're interested in thinking more about some of the philosophical aspects of this question of what is or isn't a robot, I've linked to a couple of articles in the slides. The slides should be posted on Canvas, by the way; I think everyone should have access. I'll try to post slides before each lecture, maybe not that much before, but at least before the lecture begins. Another interesting question is the difference between robotics and AI. There's a lot of hype around the term AI recently, but I think this distinction is something most roboticists do agree on: if you ask a random roboticist at a major international conference what the difference between robotics and AI is, I think you'll get a relatively straight answer, which is that a robot needs to be embodied, it needs to have some physical embodiment, while an AI system need not be embodied. Of course you can split hairs and ask what exactly it means to be embodied; we looked at some examples before, with the voice assistants, for example. Does moving sound count as being embodied? Does just being in a box, having some circuitry, count as being embodied? But I think, intuitively at least, it's hopefully clear what it means for something to be embodied or not. These are two examples of I guess some of the most
popularly known AI systems. This is IBM's Watson, the Jeopardy-playing system that beat the then Jeopardy champion. And this is DeepMind's AlphaGo system that beat Lee Sedol, the Go champion, back in 2016; this was a major milestone for AI. If you look at this image, you'll notice there are two humans: there's Lee Sedol, the human player, and there's this other person who is a representative, I think also an accomplished Go player, on the DeepMind team, and he's the one who's moving the physical pieces. So he's the embodiment of the AI system, DeepMind's AlphaGo. If you ignore the human, this is not a physically embodied system; we would say it's an AI system and not a robot. One thing to maybe think about, as food for thought, is that maybe there's no binary definition of whether something is a robot or not. If we think about the definition we provided, a robot is an embodied agent that can be programmed to perform physical tasks, maybe there's some kind of degree of roboticity, a degree of being a robot. As a high-level point, I think the lack of a generally acceptable definition hints at some deep philosophical questions, and could also be an indicator of the youth of our field. Robotics is a relatively young field, just a few decades old in its modern form; I'll talk about some of the history and prehistory of robotics in a little bit. Anyway, I think the fact that we can't fully agree on what a robot is indicates there's still a lot more to do in terms of research in robotics. Like I said, maybe there's not a binary notion of robot or not-robot; maybe there's some degree of roboticity that has to do with
a degree of embodiment or a degree of autonomy or a degree of complexity or a degree of programmability this was kind of implicit in some of the alternate definitions that i gave so if you think about the phrase a variety of physical tasks that was one of the definitions that hints at maybe the idea that you need some continuous measure of whether or not something is a robot but we don't have formal mathematical definitions for any of these terms so yeah we use these terms a lot in robotics but we can't mathematically say what does embodiment mean what does autonomy mean what is complexity maybe programmability is one we can sort of define yeah these are not mathematically well-defined terms and again i think that reflects the fact that we're a relatively young field and maybe 100 years from now we'll agree on some of these terms all right i guess any questions on this okay so according to the definition that we provided an embodied agent programmable able to perform physical tasks let's now discuss what a typical robotic system looks like so what's the anatomy of a typical robotic system and again i'll use the crazyflie drone or drones in general as a running example of a robotic system throughout this course so a robotic system has some actuators so these are things that basically allow it to perform physical tasks something that moves so the four motors of a quadrotor those are the actuators the things that are being actuated it has some sensors so this drone in particular has an optical flow sensor so that's a downward facing camera that basically allows the robot to measure its velocity relative to the ground so i'll say more about optical flow and all of those details later on in the course but yeah you can think of it abstractly as being some sensor that's a downward facing camera
that's allowing the robot to estimate its velocity relative to the ground it has other sensors so there's a height sensor to maintain some height later on in the course the kind of last third of the course we're going to put a camera on the drone and use that for autonomous navigation in the final project it also has an inertial measurement unit so i guess that's one of the most important sensors it has an accelerometer and a gyroscope which tell you how quickly it's accelerating at any given point in time and also its orientation as well and then there's some computation so the computation on this drone is relatively lightweight we'll use some of the onboard computation for some of the labs but for some of the labs we're going to do off-board computation so we're going to send the sensor measurements to a laptop or a desktop computer so there's some communication and there's a kind of feedback loop right so the robot senses its own state and the state of its environment it performs some computation based on those sensor measurements and finally it uses the results of those computations to tell its actuators how to move and this loops around until it accomplishes whatever task it's supposed to be accomplishing so this general cycle is known as the sense think act cycle so you sense something you think you perform some computations and then you act and then you do this in a kind of feedback loop and most robots are architected in this way there's some sensing there's some computation and there's some actuators and the general software for the system operates in this sense think act loop all right so just a brief pre-history of robotics this is obviously not something i want to quiz you on it's just to give you some food for thought and i personally find it interesting so i'm going to discuss it this is a pre-history of robotics not quite a
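as an aside the sense think act cycle described above can be sketched as a simple software loop a minimal sketch in python where the sensor controller and actuator functions are all hypothetical placeholders and the proportional gain and heights are made-up numbers not the actual crazyflie software stack

```python
# Minimal sketch of the sense-think-act cycle (hypothetical placeholders,
# not the actual crazyflie software stack).

def sense(true_height):
    """Return a height measurement (noise-free here, for simplicity)."""
    return true_height

def think(measurement, target_height):
    """Compute a command from the measurement (a simple P controller)."""
    gain = 0.5  # assumed gain for this toy example
    return gain * (target_height - measurement)

def act(height, command):
    """Apply the command to the (toy) physical state."""
    return height + command

height = 0.0          # drone starts on the ground
target = 1.0          # desired hover height in meters (made up)
for _ in range(20):   # the feedback loop: sense -> think -> act
    z = sense(height)
    u = think(z, target)
    height = act(height, u)

print(round(height, 3))  # converges toward the 1.0 m target
```

the point of the loop structure is just that sensing computation and actuation repeat in a closed feedback cycle which is how most robot software is organized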
recent history and i guess the main point i wanted to make with this slide is that our fascination with intelligence and with robots dates back many millennia if we think back to greek mythology there's this thing that we could think of as a robot called talos this was around 1000 bc in greek mythology this is a kind of giant mechanical system that was apparently designed by zeus the king of the greek gods to protect the island of crete so this thing would basically walk around the island in its kind of mythological form and protect the island of crete what's interesting is that this was not conceived as a living being it was conceived as a machine so as a robot as we would think of it today i guess skipping a little bit ahead in time some of the actual things that we might argue today are robots were early automata so these are machines that perform some relatively interesting tasks things like a water clock for instance that operates basically autonomously maybe you pour some water you let it go and then it keeps track of time for you more sophisticated automata were envisioned by leonardo da vinci in the 1500s if you look at some of his sketches they look sort of surprisingly modern so this one looks like some kind of humanoid robot that's maybe playing a musical instrument or something he also built or not built i guess mostly designed in his head and in his notebook things that look like helicopters or quadrotors like drones in general yeah like i said he didn't really build many of these systems he built kind of prototypes but he has tons and tons of sketches many of which are still available today in philosophy there's a kind of interesting development that happens in the 1600s where descartes in particular and his contemporaries started thinking about living beings so animals humans as being automata so it's kind of going the
other way instead of replicating humans and creating robots thinking of humans and other animals as physical machines and yeah i think this may seem somewhat obvious today but there's something deep about this right like going from thinking about living beings as kind of their own category to thinking about them as being quote unquote just machines more complex automata appeared in the 1700s i think my personal favorite is the mechanical turk so this was advertised as an autonomous chess playing machine so the inventor of this mechanical turk went around many countries in europe and basically had this machine play chess autonomously against some of the royal families all around europe it turns out that this was a hoax so this picture shows you what's going on there's basically a human who's hiding the human of course was covered because this was advertised as an autonomous machine that's playing chess automatically but really there was a human who could see the state of the chessboard and could take some actions to move the chess pieces around even then i guess without the ai part it's still a pretty impressive piece of machinery just moving chess pieces around with someone sitting there probably very uncomfortably for hours yeah still a really impressive piece of mechanical engineering the ai side of robotics i think you could argue begins in the 1800s with charles babbage so he envisioned these mechanical systems these physical systems that perform computations so arithmetic for example or solving algebraic equations so he developed the difference engine and the analytical engine i don't think he got around to building the entire computational machinery that he had in mind he built portions of it as prototypes but i think he ran out of money this was pretty expensive to build in the
1800s but yeah it kind of foreshadowed the computational age with turing and his definitions of universal computation the name robot itself i guess this is a little piece of trivia that you can take away from this the term robot comes from this play in the 1920s by karel čapek who's a czech playwright it's from the play rossum's universal robots and the word robot roughly translates to work or labor in czech does anyone speak czech by the way go ahead does this seem accurate i visited the uh okay awesome yeah cool maybe not a requirement for this course but if you find yourself there feel free to check it out yeah that sounds interesting all right and finally okay i guess getting to actual robotics so the earliest kind of modern examples of robots appear roughly around the 1950s this one the unimate robotic manipulator is one that i think we would recognize today as a relatively modern robotic manipulator that was able to perform some basic manipulation like pick and place kinds of tasks of course since the 1950s and 1960s to today there's been a massive amount of progress and interest and funding in robotics so robotics has grown into its own academic discipline we have courses in robotics we have robotics conferences journals all of that and there's been a huge amount of progress in theory algorithms and also hardware in robotics i think the most commercially successful applications of robotics have been in factories so for manufacturing so if you visit any kind of modern manufacturing plant like auto manufacturers for instance like car manufacturers you'll see a lot of robots robotic manipulators in particular that are helping out with assembly and manufacturing for these factories we also have a number of other systems that are kind of coming onto the market things like drones maybe not quite as much humanoid robots to some degree and autonomous vehicles that seem to be always on
the horizon right so except in a couple of cities you can't really go and ride one so in arizona for instance or some parts of san francisco you can ride some autonomous vehicles or if you have a tesla i guess you can turn on that full self-driving mode but these other systems the non-factory applications are not as widespread as one might think right you would think that maybe robotics is in a mature state and drones and humanoid robots and autonomous vehicles are here already but they're not quite and i think there's still a really long way to go until we see a completely mainstream deployment of these more advanced robotic systems okay so i guess the next natural question is why right so why can't i call some autonomous vehicle uber right now to take me to the airport or whatever it is why do we still have a long way to go what are the technical challenges in robotics so i'll give you just some case studies so these are just kind of funny videos just to make you think about what some of the technical challenges in robotics are so this one is from the darpa robotics challenge so these were humanoid robots that were programmed by different mostly academic teams across different universities around the world to perform tasks that were motivated by search and rescue missions so the humanoid was supposed to go into some disaster stricken zone and maybe clear up some rubble turn a valve those kinds of things the direct motivation was the fukushima nuclear disaster in japan that kind of sparked some funding agencies this one was darpa that's the defense advanced research projects agency to fund some research into robots that could perform some useful tasks for these kinds of disaster response situations so anyways i'll play the video i guess i should say this video doesn't do justice at all to all the hard work that these teams put in this is a blooper reel
it's funny it's easy to show and so i'm showing it but yeah i think the people who worked on these projects spent basically years working on this and it gets really annoying when this is the only thing you highlight so i just want to emphasize there's a massive amount of actual hard work many years of hard work that went into this and you should also see the videos of not bloopers but anyways i guess the fact that bloopers exist means that there are some important technical challenges yet to be solved here's another interesting one this is i guess a real world example i'll just give a quick explanation of what's happening in the video and then i'll play it so it kind of makes sense so this video is taken from the perspective of a bus so this is just a regular human driven bus the interesting thing is this vehicle over here so this is around 2016 this is one of google's autonomous vehicles this is out in california and basically this vehicle is parked kind of close to the curb and it's going to try to merge into the bus's lane so the bus is just going straight this vehicle is going to try to merge into the bus's lane because the video is taken from the perspective of the bus it's not the highest quality video but hopefully it still makes sense what happens there you go what actually happens is as the car tries to merge in there's a collision between the car and the bus from which the video is being taken and this is actually the first i guess completely documented collision caused by an autonomous vehicle operating on the roads here's another one this is just for fun yeah i guess this was meant to be autonomous but it didn't quite work all right so what's going on in these videos right more broadly why are we struggling so much to deploy these systems out in the real world in a mainstream deployment so i think in one word the main technical challenge in a lot of these
different application domains has to do with uncertainty so uncertainty is going to be a recurring theme that we visit and revisit throughout this course and i guess this is one thing i really want to emphasize in this lecture that essentially all the major technical challenges in robotics at least from my perspective come down to the challenge of dealing with some form of uncertainty so for instance this could be uncertainty about the dynamics of the robot so how does the robot move around so with the drone there's some potentially complicated aerodynamics maybe there's a wind gust or something that's making you uncertain about exactly the motion that the drone is going to execute then there's uncertainty about the dynamics of the world so if the robot is operating close to humans for instance with an autonomous vehicle the robot has some uncertainty about what the other agents in the world are going to do what actions the humans in the world are going to take exactly so uncertainty in the dynamics of the world you can think of as an explanation for the collision between the autonomous vehicle and the bus so what was happening roughly google actually has a report that they published online analyzing exactly the reasons for the crash but roughly what happened is that the autonomous vehicle was trying to merge into the lane of the bus the vehicle was expecting the bus to yield which turned out to be a mistake the bus did not yield and that led to a collision and there's other kinds of uncertainty that we'll encounter in different parts of the course uncertainty about the geometry of the world so things like occlusions for instance there's a whole bunch of chairs over here i don't know exactly what's behind some of them more broadly i don't know exactly what's outside this room so what is the geometry of the world going to look like i don't know it until i start sensing it and uncertainty in sensor measurements
that's something you'll definitely encounter when you're working with our drone the drone has a reasonable estimate of where it is what its velocity is what its height is and so on but those measurements are not completely perfect then there's uncertainty about the user's intent so if you have a robot that's trying to interact in some way with a human or taking instructions from a human the robot might not know exactly what the human wants it to do or doesn't want it to do and so on yeah there's a whole long list of different kinds of uncertainty that either the robot encounters or the robot's designer encounters and i would argue that essentially all the technical challenges that researchers in robotics think about and all the technical challenges that we encounter when deploying real-world robotic systems boil down to trying to handle different kinds of uncertainty all right any questions i guess so far on any of the things we've covered good yes maybe i'll just repeat the question so are we going to talk about robots that learn about the environment the answer is yes so i'll say a bit more about the technical topics that we'll cover in the course but the last module will be about machine learning and that's where we'll talk a little bit about that good all right so i guess before doing that i just want to maybe answer the question why should we care so i guess you're all taking the robotics course so you care for some reason or the other but maybe let me give you my arguments for why as a society or as an individual you might want to care about robotics so i think the most direct easy argument is that robotics is a topic with massive impact massive interest from industry and government entities so many companies many governments are spending billions of dollars trying to develop different robotic systems from autonomous vehicles autonomous drones and many other robots for different applications like
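as a small aside on imperfect sensor measurements here's a toy simulation with made-up numbers not the crazyflie's actual sensor model a height sensor returns the true height plus zero-mean gaussian noise and averaging repeated readings shrinks the error roughly like one over the square root of the number of readings

```python
# Toy illustration of sensor measurement uncertainty (made-up numbers,
# not the crazyflie's actual sensor model).
import random

random.seed(0)
TRUE_HEIGHT = 1.0   # meters (hypothetical)
NOISE_STD = 0.05    # sensor noise standard deviation (hypothetical)

def read_height():
    """One noisy height reading: truth plus zero-mean gaussian noise."""
    return TRUE_HEIGHT + random.gauss(0.0, NOISE_STD)

single = read_height()
many = [read_height() for _ in range(1000)]
averaged = sum(many) / len(many)

# The averaged estimate is much closer to the truth than a single reading
# typically is (its standard deviation is about 0.05 / sqrt(1000)).
print(abs(single - TRUE_HEIGHT), abs(averaged - TRUE_HEIGHT))
```

this is the simplest possible way to handle measurement uncertainty the filtering machinery covered later in the course does something similar but also accounts for the robot's motion between readings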
agriculture infrastructure inspection repair and so on another factor is the important economic consequences that robotics is having and will i think continue to have in a more accelerated way over the next few decades this is a study that was conducted by the office for national statistics these are basically predictions like probabilities according to the people who conducted the study for jobs that are at some risk from automation so some risk of being transformed in a meaningful way by automation and there's a whole range so going from waiters and shelf fillers these are tasks that require kind of physical labor all the way down to medical practitioners higher education teachers maybe i'm in that category but anyways yeah this is the whole spectrum the point i want to make here is just that these numbers are relatively high right so these at the top are really high in the 70s or so but even 20 percent is not super comfortable right like 20 percent of higher education teachers' jobs being changed in some fundamental way by automation you can of course argue with these numbers and many people argue with these numbers but the point here is just that there's some relatively high probability that many jobs are going to be impacted in some non-trivial way by robotics and automation more broadly other maybe more intellectual reasons for caring about robotics there's some really fascinating technical challenges many of which we're going to discuss in this course and many beautiful connections with many other fields beyond robotics i mentioned machine learning ai feedback control theory computer vision optimization information theory theoretical computer science applied math and this list is very very long like almost any technical topic that you might be interested in you can connect and someone has probably connected some way or the other to robotics so yeah it's kind of like a nice
amalgamation of many different technical topics there's something in this space for everyone in some sense for me personally one thing that i find really satisfying and interesting about robotics is that it provides a lens on the really big philosophical questions so things like what is intelligence what is consciousness what is free will thinking about these questions from the perspective of a roboticist i think gives these questions more technical clarity than you often encounter in completely philosophical discussions so thinking about free will for instance you could pose it as the following question which is what would it mean for a robotic system to have free will right so that provides a slightly different angle on the question which is not completely philosophical if i were to design a robot how would i convince someone else for instance that it has free will so there's a more operational aspect to that question when you think about it from the perspective of robotics and finally of course i think most roboticists do robotics because it's really cool right like that's really what it boils down to i think if you ask them why do you do robotics they'll give you all these other answers and then finally it'll boil down to i do it because i enjoy it and yeah i guess my goal in some sense for the course is to convince you that it is really cool to get you excited about it and of course to cover some of the basic technical topics as well as we're going through the course okay maybe just one final high-level comment about the structure of the course or the structure of robotics this is something i thought about when i was developing the curriculum for the course which is how should we think about robotics as a whole like how should we organize the topics one relatively reasonable way this is often how people especially in the kind of popular press
think about robotics is by dividing it into different application domains right so you can think about aerial robotics or drones medical robotics humanoid robotics underwater robotics so different applications or different kinds of robots another option which is what we're going to do in this course is to think about robotics broken up by concepts or techniques that cut across various application domains so for instance motion planning localization mapping these are maybe one level of abstraction above concrete applications i'll talk about the specific concepts and techniques that we will cover in this course i personally prefer this way of thinking about robotics maybe in some sense this is a more academic perspective but i would argue that thinking about robotics in this way dividing it up by concepts and techniques allows us to get at some of the core technical meat the fundamentals of robotics the fundamental ideas and allows us to appreciate that many different applications have many of the same underlying challenges and the same kind of theoretical or algorithmic tools might allow us to tackle many of these challenges in different application domains all right so i guess here's the actual plan for the course these are the topics that we're going to cover so roughly lectures two to five this will be the first module that we talk about in this course will be on dynamics and feedback control so we'll talk about equations of motion how do we describe the dynamics the motion of robotic systems we'll talk about feedback control and specifically we'll talk about the linear quadratic regulator the next module will be on motion planning so this is the question of how can a robot figure out how to get from point a to point b without for instance colliding with obstacles in its environment the third module will be on state estimation localization and mapping so how can the robot figure out its state so
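to give a flavor of the motion planning module described above getting from point a to point b without hitting obstacles can be sketched as graph search on a toy occupancy grid the grid layout start and goal below are made up and this is not the course's actual planner just a minimal breadth-first search

```python
# Toy motion planning sketch: breadth-first search on a small occupancy
# grid (1 = obstacle). Made-up map; not the course's actual planner.
from collections import deque

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def plan(start, goal):
    """Return a shortest collision-free path of cells, or None."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk back through parents
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

path = plan((0, 0), (3, 3))
print(path)  # a 7-cell path threading the single gap in row 1
```

real planners in the course operate in continuous space and with robot dynamics but the core idea of searching for a collision-free route is the same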
where it is for instance and also the state of the environment that it's operating in and the final main module will be on computer vision and learning so we'll talk about some basic aspects of cameras the physics of cameras optical flow some relatively classical computer vision techniques and then more modern techniques for computer vision and machine learning which are based on neural networks and finally just at the very end we'll say a little bit about some broader topics in robotics so we'll have some discussions about robotics and the law ethics policy and so on so broader societal implications beyond these technical topics one really important aspect of this course has to do with hardware implementation so i think implementing the theory and algorithms that we introduce in this course on actual robots these crazyflie drones in particular is a really good way to appreciate some of the technical challenges that we face so i think when you implement something in simulation or just on pen and paper everything just works everything is kind of neat and elegant but if you really put it on an actual robotic system like a drone you'll find that there's a gap between the theory and the practice and i think seeing things work mostly not work for a while until they do work in practice is a really valuable experience to go through for any kind of robotics course so yeah like i said we're going to be using these crazyflie drones the crazyflie 2.1 to introduce the technical topics that we'll cover in this course i think this platform is really ideal for teaching purposes so it's open source so you can go in and read and modify all the software it's obviously really small it's lightweight which is important for safety purposes like i said a lot of the time it's not going to be working so it's important that when it doesn't work it doesn't hurt anyone or crash into things
and so on and it's ideal for testing so it's going to crash a whole bunch of times and it's not going to break that often or if it does break it will break in a way that's easy to fix so the overall overarching goal for the course is going to be to develop the techniques and concepts to make the drone navigate autonomously so at the end of the course we're going to put a camera on the drone and that is going to allow the drone to move around autonomously in some environment and we'll build up gradually to this grand overall goal using the different modules that i talked about so the first module dynamics and feedback control will have an associated lab component which is going to be to make the drone hover so hovering just in place is arguably the most basic physical task that you might ask a drone to perform so we're going to use some of the techniques that we introduce for feedback control to make the drone hover the second module motion planning will also have an associated lab so we're going to be introducing techniques that allow a robot to plan how to get from point a to point b without hitting obstacles and you'll implement that with pvc pipe obstacles in our lab spaces yeah i think that's going to be a fun experience the third module won't have a direct associated lab so state estimation localization and mapping the drone is kind of capable of doing all of that and it's doing some of that already and you'll see that in some of the other labs you'll get a kind of indirect sense for what that looks like but we won't implement it directly on the hardware and finally computer vision and learning will have another associated lab the specific algorithm we'll implement is called optical flow which like i said allows the drone to estimate its velocity relative to the ground and also estimate the velocities of other objects in its environment relative to itself and we'll talk
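a quick back-of-the-envelope note on how optical flow gives a velocity estimate for an idealized downward-facing pinhole camera image-plane flow in pixels per second is roughly the ground speed times focal length over height so the speed can be recovered by inverting that relation the focal length and height below are made-up numbers and this idealization ignores rotation and lens distortion

```python
# Idealized pinhole-camera relation behind optical-flow velocity
# estimation (made-up numbers; ignores rotation and lens distortion).

FOCAL_PX = 400.0     # focal length in pixels (assumed)
HEIGHT_M = 1.0       # height above the ground in meters (assumed)

def velocity_from_flow(flow_px_per_s):
    """Convert image-plane flow (pixels/s) to ground speed (m/s)."""
    return flow_px_per_s * HEIGHT_M / FOCAL_PX

print(velocity_from_flow(200.0))  # 200 px/s of flow -> 0.5 m/s
```

note how the height sensor matters here the same pixel flow corresponds to a faster ground speed when the drone flies higher which is one reason the crazyflie pairs the flow camera with a height sensor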
about obstacle detection as well and finally the final project is going to allow you to put some of these pieces together to make the drone navigate autonomously so we're going to again hang some obstacles from the ceiling we won't tell you where the obstacles are going to be exactly until you show up on that day and the drone is supposed to navigate based on just the camera feed that it's getting and navigate autonomously to some target object so maybe just to give you a sense for what's in store this is a video that one of the student groups created this was the first iteration of the course things have gotten a little bit smoother i would say on the logistics side but i think this gives you a sense for some of the challenges and i guess fun that you're going to encounter while working with the crazyflie drones i'll just play a couple of minutes and then you can watch the rest of the video later [video plays] all right so yeah i guess that's the end result so this was part of the final project where the drone is navigating completely autonomously just based on the camera measurements that it's getting so here are some logistics for the hardware implementation so the lab part of the course we'll be doing the hardware assignments in teams so everything else is going to be individual the hardware labs will be in teams teams of four in particular so i guess the first thing to do like i said there's no assignment that goes out tomorrow the first thing to do this week is to form teams by next wednesday which is september 14th once you've formed your team just email the course staff so by course staff i mean myself and the five ais as well do copy the ais their email addresses are listed in the syllabus so
we'll keep track of exactly who's in what team i guess hopefully you know some other people in the course and you can form teams if not we also have a discussion board to facilitate team formation you don't have to do the team formation now you can do it offline but yeah we have a discussion channel to facilitate team formation i guess one general comment is that it might be a good idea to have students from different majors in a given team so i think students from computer science electrical engineering mechanical and aerospace engineering will have different kinds of backgrounds and you benefit from having different majors represented in each team some more logistics for the hardware implementation so we have two lab spaces so g105 and ac alleger 012 that we use for implementing the hardware assignments there's a general lab safety training that you'll need to complete if you haven't done so already so i think most people have completed this if you're an mae phd student or undergraduate student or ece student then you've already done this i think if you're in computer science or rp maybe you haven't done this maybe just raise your hand if you know what i'm talking about and you've completed that requirement okay i guess fewer people than i had thought all right i'll send an email about the safety training and you'll get a chance to complete it this week and you'll need to do this to get access to the lab spaces so please do this before the first lab goes out all right so just some final comments before we end so robotics borrows a lot of different ideas from many different fields i mentioned control theory computer vision machine learning optimization and so on what i've tried to do in this course is focus on topics that i think are quintessentially robotics and don't directly belong to any other field at the same time it's impossible i
think to avoid discussion of some neighboring topics like feedback control like computer vision at some point you might think that this is a survey course which it is not intended to be i guess it's intended to be a kind of intro course not exhaustively covering all kinds of different topics in robotics and finally different parts of the course may feel more or less challenging depending on your major so i think the first module in particular which is feedback control is the most mae friendly so if you've taken some feedback control then you might be familiar with some of the concepts the second module which is on motion planning might be the most computer science friendly and so on the general point is just to accept that you're going to have a stronger background in some topic or another and maybe less of a background in other topics okay and then just finally there's a number of topics that we won't do justice to so things like inverse kinematics grasping and manipulation design so hardware design of robotic systems algorithms for walking and running these are all topics that are definitely within robotics that we won't have the time to cover in these 24 lectures and other topics that have the kind of flavor of robotics and x where x equals ethics or robotics and the law robotics and the economy and so on we'll briefly talk about some of these topics towards the end of the course but in no way will we do justice to these important topics and finally so i guess this is kind of a caveat so things are not going to be perfect anytime you're working with actual robotic systems yeah things are far from perfect so we need some help from you so if you have feedback or suggestions or things we can improve we're always trying to improve the course i think we've improved it year on year but there are always little things when you're working with actual robots so yeah
I'm just asking for your questions and feedback and suggestions. All right, so I think that's pretty much the end. Any questions on logistics or any of the topics we'll cover? [Student question] Yeah, so the midterm will cover feedback control, motion planning, and some estimation, so essentially the first three modules of the course. Other questions? All right, so we'll start with the dynamics and feedback control module on Thursday.
Introduction_to_Robotics_Princeton
Lecture_16_Princeton_Introduction_to_Robotics_SLAM.txt
All right, let's go ahead and get started. The plan for the day is to finish up the module that we've been looking at for the last few lectures on localization and mapping. Just to remind you of where we are so far: in the last two lectures we've been looking at applications of Bayes filtering to the problem of localization. Localization is the problem of figuring out, or estimating, the robot's location given a map of the environment; we did this two lectures ago, I believe in lecture 14. And in the previous lecture we considered the problem of mapping, so estimating a map of the environment. If you look back to the previous lecture, for the mapping part we assumed that the robot had some way to localize itself in some kind of global reference frame, and all it's doing is figuring out the map of the environment. And along the way we discussed different kinds of maps: location-based maps, feature-based maps. For the most part we've been working with occupancy grid maps; those are relatively conceptually simple, and a lot of the algorithms we've discussed also apply to other kinds of maps. So if you think about this, neither setting is completely satisfactory, right? For localization we were given a map of the environment; for mapping we assumed that the robot can somehow localize itself. In practice, what you'd like to do is just deploy a robot in some new environment and do both of these things simultaneously: the robot should be able to figure out its location, maybe in some reference frame, and also, as it's moving around, it should be able to build a map of its environment. That's what we're going to do in today's lecture. This goes by the name of simultaneous localization and mapping, usually abbreviated as SLAM. And what we're going to do is put these two components together in a
way that lets us jointly estimate the robot's location and a map of the environment. A quick caveat: there's a massive amount of work, multiple decades, on the problem of SLAM, and we're not going to do any kind of justice to this large literature; even nowadays people still write lots of papers on SLAM. We're going to look at one particular approach called grid-based FastSLAM. The "grid-based" part comes from the fact that we're going to use occupancy grids; that's going to be our map representation. The "fast" part: I guess it was fast when it was first introduced, which was some time ago now, maybe 20 years or so. But I think this is a decent starting point for SLAM. Without any further knowledge, if you just want to implement something from scratch, this is a decent place to start, and there are a whole bunch of bells and whistles that people have developed over the years, like I said. Before going into the algorithm description, let me say a little bit more about SLAM. This is chapter 10 in the Probabilistic Robotics book, and it's useful to consider two versions of the problem. The first version is what's known as online SLAM. The problem in online SLAM is to estimate a probability distribution over the instantaneous location and map. Basically, at every point in time as the robot operates and moves around some environment, the only thing it cares about maintaining is where it is at that point in time: a belief over its location and a belief over the map at that instant. This is in contrast to the other version of the problem, which goes by the name of full SLAM, where the goal is to
estimate a probability distribution over the entire trajectory, so all previously visited locations, and also a map. Pictorially, let's say this is the online version: we just have some map of the environment, some obstacles, and an estimate of the location; really you should think of distributions over the map and distributions over the location. In the full SLAM version of the problem, what you're doing is maintaining a distribution over the entire trajectory: maybe you started off here and then you're here at time t, so you have a distribution over the trajectory that you followed. The full SLAM version is harder than the online SLAM version in the sense that you're trying to do more, right? You're trying to estimate not just where you are at the current time step but also everywhere you've been at previous time steps. In some sense, if you were able to do online SLAM, you could use that to get an estimate for full SLAM just by maintaining a belief at every time step and concatenating those beliefs together, which gives you a belief over the entire trajectory. But with full SLAM you can do a bit more: if you get new information at time t, you can refine your estimate of where you were previously. So which version of the problem you look at depends on the context. [Student question] Well, I was going to ask that as a question: can you think of practical applications of knowing where you've been? When might you use that in a robotics application? [Student answers] Yeah, that's reasonable. Other thoughts? Yeah, you could, that's reasonable as well. The one I had in mind was some kind of search-and-rescue mission.
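To make the two problems precise in the book's notation, with $x_t$ the pose at time $t$, $m$ the map, $z_{1:t}$ the measurements, and $u_{1:t}$ the controls, the two posteriors are:

```latex
% Online SLAM: belief over the current pose and the map only
p(x_t, \, m \mid z_{1:t}, \, u_{1:t})

% Full SLAM: belief over the entire trajectory and the map
p(x_{1:t}, \, m \mid z_{1:t}, \, u_{1:t})
```

The only difference is whether the pose variable covers the single current time step or the whole history $x_{1:t}$.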
If you're deploying a robot in some disaster-stricken area and you're trying to find someone, then something you want to do is coverage, right? You want to make sure that you've covered the environment, because if someone is stuck in some part of the environment, you want to have visited it. And to figure out whether you've covered the environment, it's useful to know where you've been; if you know where you've been, you can decide whether or not you've achieved coverage. [Student question: is the full SLAM estimate better than just concatenating the online estimates?] Yeah, it can be more accurate to do the full SLAM version; we'll get to that later in the lecture. I'll show you some specific instances where doing full SLAM can get you more accurate estimates than just doing online SLAM. [Student question] Would you ever use full SLAM to go back, now that you have higher certainty that you were in fact at a certain location, and have that inform where you currently are? I'm not sure I followed; to go back? So you use the new information to revise where you've been, and then, with a better idea of what your trajectory was, get a better current estimate? Oh, I see, yes, you could do this kind of iteratively. Full SLAM doesn't say that you need to do it that way, but that could be one way of doing it: you refine your estimate of where you have been and then you go forward in time again, so you sweep forward and backward. That's one way; there might be other ways you could try to tackle the full SLAM problem as well. Okay, so what I'll focus on today is the online version of the problem. I won't say too much about the full SLAM problem; if you're interested, basically a
similar technique to the one I'll describe today can be applied to the full SLAM problem; that version is described in the book if you're curious. But for simplicity I'll focus on the online SLAM problem. So like I said, the specific technique we're going to discuss is called grid-based FastSLAM. This is chapter 13.10 in the Probabilistic Robotics book, and the basic idea is actually fairly straightforward: we've done all the hard work in the past two lectures, where we thought about localization given a map and mapping given localization. The basic idea is to combine the particle filtering approach to localization, from lecture 14 two lectures ago, with occupancy grid mapping, which was the previous lecture, lecture 15. That's the high-level approach. I mentioned we're going to be doing particle filtering. Similar to when we looked at particle filtering for localization, the picture there was: we have some map of the environment, so we know what the obstacles look like, and the particle filter maintains an approximate belief, an approximate distribution over the robot's possible locations, in the form of a set of particles. The robot is very confident of where it is when the particles are close together; maybe there are some stray particles elsewhere. So that's the representation for localization. For mapping, in the previous lecture, which is slightly harder to draw, we were maintaining a belief over maps, and the way we did that was to keep a probability associated with each cell being occupied or not, and then update those probabilities as the robot moved around and collected more measurements. So with SLAM, what we're
going to do with this specific approach, grid-based FastSLAM, is maintain a set of particles where each particle has two things associated with it: first, a candidate location, and second, a map. You can think of a particular location and a particular map as corresponding to one particle; that's one particular hypothesis about the robot's location and about the map. And in this particle filtering approach to SLAM we're going to maintain a whole bunch of these pairs. I think it's easier to visualize with a picture. Let me draw a few of these: this is particle 1, this is particle 2, and so on up to particle capital M. Capital M, again, think of as some large number: a thousand, ten thousand, a million, whatever fits in your computer's memory. And each particle, as I was saying, corresponds to two things: a location and a map. Here's one possibility: this is what the map looks like, these cells are occupied, and let's say this is a candidate location, the cross over here. The way to think about this is that the X marks the robot's location, and the occupancy map here represents one particular map, so this is one specific hypothesis, one point estimate, of the robot's location and the map. With the particle filter we're going to maintain a bunch of these. This is another particle corresponding to a particular location and a particular map, and so on; we're going to have capital M of these pairs of location and map. [Student question] Yeah, so I think intuitively the way to think about this is that we're trying to represent a joint
distribution over the location and the map. That joint distribution is something extremely complicated; obviously it's very high-dimensional, right? It has dimension equal to the robot's location dimension plus however many cells you have in your grid. So it's a very high-dimensional space that we're trying to represent a distribution over, but we're doing this approximately via particles, and each particle is one particular instantiation of a location and a map. Okay, good. I think this is really the main idea I want to convey; if you get this, the rest follows relatively straightforwardly. If you have questions, I'm happy to try to answer. All right, let me switch to the slides. Here's another picture of the same concept I was trying to represent before. These are three different particles; each particle corresponds to a particular hypothesis, one point estimate of the location, that's the green circle, and one particular hypothesis about the map, represented by these three pictures. What you would hope is that if you're very confident about the location and the map, then all the particles look kind of the same: roughly the same location and roughly the same-looking map. If you're uncertain, then maybe the locations and the maps look very different from each other. But the point is that we're going to maintain some large number, capital M, of these, and we're going to update them as the robot moves around and as it collects more sensor measurements. Okay, so here's the algorithm for grid-based FastSLAM; it has the structure of a particle filter.
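As a concrete sketch (the names and the log-odds grid representation are my choice, not fixed by the lecture), each particle bundles a pose with its own full occupancy grid:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Particle:
    """One SLAM hypothesis: a robot pose plus an entire occupancy-grid map."""
    pose: np.ndarray   # (x, y, theta)
    grid: np.ndarray   # per-cell log-odds of occupancy (0.0 means p = 0.5)
    weight: float      # importance weight w_t

# A filter with M particles carries M independent (pose, map) pairs.
M = 1000
particles = [
    Particle(pose=np.zeros(3), grid=np.zeros((50, 50)), weight=1.0 / M)
    for _ in range(M)
]
```

The key point this makes explicit is the memory cost: every particle stores its own copy of the whole map, which is why the dimensionality argument above matters in practice.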
At each time step, we're going to do a bunch of steps for each particle. In the first for loop, k is indexing the particles, so we're looping over the particles from 1 to capital M. I'll go through the functions more carefully, but roughly they do some kind of dynamics update, using knowledge of the dynamics of the robot, and a map update, which relies on the technique we discussed in the previous lecture. And the w_t is an importance weight, which showed up also when we were looking at particle filtering for localization; roughly it corresponds to how likely the sensor measurement is given a location and a map. I'll describe all of these in more detail. So that's the first loop; you can think of it as doing a dynamics update and calculating these importance weights w. The second for loop does the resampling step of the particle filter: for k equal to 1 through M, we draw an index i with probability proportional to its weight w_t. So if a weight is high, say 0.9, then we sample that index with correspondingly high probability. We keep sampling capital M times, and each time we add the location x_t and the map m_t, that particle consisting of the pair of location and map, to S_t. This last part is identical to particle filtering as we saw it before, when we applied it to localization, or state estimation more broadly. The only distinction is that the particles contain more information: previously a particle was an estimate of the location, here a particle is a location and a map, but the last for loop is identical to what we have seen before. Questions on the general structure before I describe the details? [Student question about how the sensor updates the map] Yeah, roughly it's the technique from the previous lecture; we'll reuse it. So let's go through the different functions.
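Putting the two loops together, here is a minimal runnable sketch of one FastSLAM step. The three helpers are toy stand-ins for the real motion model, map update, and measurement model described next; particles are plain (pose, grid) tuples for brevity, and all names and noise values are illustrative assumptions:

```python
import random
import numpy as np

# Toy stand-ins for the three functions in the lecture's pseudocode.
def sample_motion_model(u, x_prev, grid):
    # x_t ~ p(x_t | x_{t-1}, u_t, m_{t-1}): here just noisy additive odometry
    return x_prev + u + np.random.normal(scale=0.05, size=3)

def updated_occupancy_grid_map(z, x, grid):
    return grid  # placeholder: the real version applies the log-odds update

def measurement_likelihood(z, x, grid):
    # p(z_t | x_t, m_t): toy Gaussian comparing the measurement to the pose
    return float(np.exp(-np.sum((z - x[:2]) ** 2)))

def fastslam_step(particles, u_t, z_t):
    """One step of grid-based FastSLAM over (pose, grid) particle pairs."""
    propagated, weights = [], []
    for pose, grid in particles:          # loop 1: propagate and weight
        x_t = sample_motion_model(u_t, pose, grid)
        m_t = updated_occupancy_grid_map(z_t, x_t, grid)
        w_t = measurement_likelihood(z_t, x_t, m_t)
        propagated.append((x_t, m_t))
        weights.append(w_t)
    # loop 2: resample M particles with probability proportional to weight
    return random.choices(propagated, weights=weights, k=len(particles))

particles = [(np.zeros(3), np.zeros((20, 20))) for _ in range(100)]
particles = fastslam_step(particles,
                          u_t=np.array([1.0, 0.0, 0.0]),
                          z_t=np.array([1.0, 0.0]))
```

Note that the resampled set implicitly resets all weights to be equal, exactly as in the localization-only particle filter.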
Maybe I can just write the functions here instead of putting the projector up again. Let's go through each of the three. The first one is sample_motion_model; can everyone see when I write here? Raise a hand if you can't. What this is doing is the dynamics update, which we discussed versions of, and ways to implement, two lectures ago when we were thinking about localization given a map. It samples x_t^k, where k again indexes the particle: we're sampling a state at time t from the conditional probability distribution p(x_t | x_{t-1}, u_t, m_{t-1}). Given the state at the previous time step, the control input that the robot took, which it knows, and that particle's previous estimate of the map, we sample some possible next state for the robot. So if you understand the dynamics of the system, this is sampling from that dynamics model, which could be probabilistic. We discussed different ways of doing this a couple of lectures ago: you can first ignore the map and ask what would have happened if there were no obstacles, and then take into account the fact that some portions of the environment are occupied. All right, any questions on this function?
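As an illustration only (the lecture doesn't fix a particular model), here is one way to implement this for a unicycle-style robot: perturb the commanded controls with Gaussian noise, then reject samples that land inside an obstacle of the particle's map. The `occupied` callback and the noise scales are assumptions:

```python
import numpy as np

def sample_motion_model(u, x_prev, occupied, max_tries=20):
    """Sample x_t ~ p(x_t | x_{t-1}, u_t, m_{t-1}) for a unicycle robot.
    u = (v, omega): commanded forward speed and turn rate for one step.
    occupied(x): assumed helper; True if pose x lies in an occupied cell
    of the particle's map m_{t-1}. Such samples are rejected."""
    v, omega = u
    x, y, th = x_prev
    for _ in range(max_tries):
        # perturb the commanded controls with zero-mean Gaussian noise
        v_hat = v + np.random.normal(scale=0.05 * abs(v) + 0.01)
        w_hat = omega + np.random.normal(scale=0.05 * abs(omega) + 0.01)
        candidate = np.array([x + v_hat * np.cos(th),
                              y + v_hat * np.sin(th),
                              th + w_hat])
        if not occupied(candidate):   # keep only map-consistent samples
            return candidate
    return x_prev  # fall back if every sample hit an obstacle
```

Sampling first as if there were no obstacles and then discarding map-inconsistent poses mirrors the two-stage idea mentioned above.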
OK, let's discuss the second function: updating the occupancy grid map, m_t^k = updated_occupancy_grid_map(z_t, x_t^k, m_{t-1}^k). This takes as input the current sensor measurement; typically for these kinds of applications you have a range sensor, something that gives you distances to different points in the environment. It also takes the updated location estimate, after you sample from the motion model, and the previous estimate of the map. There are many ways this could be implemented, but the default way in this algorithm is to use the occupancy grid mapping from the previous lecture. One way to do that is to think of the map estimate from the previous time step, m_{t-1}^k, as a prior, and then use the occupancy grid mapping belief update from the previous lecture: you take the prior over maps, incorporate the sensor measurement you just received at time t, which could be noisy, and incorporate the location x_t. So what we're doing here is applying the occupancy grid mapping technique while assuming the location is correct. We're kind of alternating: we first assume the map is correct and update the location, then assume the location is correct and update the map, or a belief over the map. This is a general strategy that shows up not just in SLAM but in many other areas as well. Often problems have the structure where you can figure out A given B, and you can figure out B given A, but what you want is to jointly estimate A and B. The trick is to alternate between the two algorithms: first assume B is correct, using some initialization, and estimate A; then assume A is correct and get B; and you hope that this converges. This is not
globally optimal in any sense, but it often seems to work reasonably well. [Student question] Last question: do you have to know the size of the map before we start this? Yeah, good question, so we're assuming we've committed to some size of the map. Size could mean two different things: the spatial extent of the environment, or the number of cells you're considering. In this algorithm we're assuming both: you've committed to some extent and some resolution for the discretization of the map. [Follow-up] One more follow-up: when we're updating the map, are we adding new values to some map variable, or are things set to null or zero or something? OK, so I didn't say how to initialize this. In the previous lecture, we assumed we had a prior over maps, and the prior was pretty simple: the way you draw a sample from the prior over maps is to look at each cell and do a biased coin flip, where the probability of a cell being occupied is something like 0.2 or 0.3. So you can do that to initialize this: you have a bunch of sample maps, each drawn from that prior, and you also assume some prior over locations, and that's how you initialize the set of particles before the robot gets any measurements.
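A minimal sketch of this per-particle map update, assuming the log-odds representation from the previous lecture (the `inverse_sensor_model` helper is an assumption; a real one would ray-trace each beam and return a per-cell log-odds increment, zero for unobserved cells):

```python
import numpy as np

def updated_occupancy_grid_map(z_t, x_t, grid_prev, inverse_sensor_model):
    """Per-particle occupancy update: the particle's previous map acts as
    the prior, and each cell's log-odds is shifted by the evidence from
    the new measurement z_t taken from the (assumed-correct) pose x_t."""
    return grid_prev + inverse_sensor_model(z_t, x_t, grid_prev.shape)

def occupancy_prob(log_odds):
    """Convert a log-odds grid back to per-cell p(occupied)."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

# Example: a prior of p(occupied) = 0.2 per cell, as in the lecture,
# corresponds to initial log-odds log(0.2 / 0.8) in every cell.
prior_grid = np.full((30, 30), np.log(0.2 / 0.8))
```

Because the update is a simple addition in log-odds space, treating the previous map as the prior is exactly one line per time step.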
[Student question] What if the pieces conflict at some time step, for example the robot being localized where it also thinks there's an obstacle? Yeah, good question; that's where the third function comes in. The third function estimates a likelihood: for each particle we evaluate the measurement model, w_t^k = p(z_t | x_t^k, m_t^k). It takes three arguments: z_t, the current sensor measurement; x_t^k, our current best estimate of the location; and m_t^k, our current best estimate of the map. What this step computes is the likelihood of receiving the sensor measurement the robot received right now, given the location and the map. This is the measurement model, which we assume we know; we discussed versions of it for a range sensor a couple of lectures ago. Ideally, this step is the one that filters out things that are not likely, not feasible, not consistent. If you're at some location x_t and your map is inconsistent with it, in the sense that your location is inside some obstacle, well, that's not actually possible, so the likelihood of receiving the measurement the robot received should be very low, or zero. Then, when you do the resampling step, the probability that this pair of location and map gets sampled should be very low; ideally, if it's truly inconsistent, it should be zero, so you just never sample it during resampling. The weight should be very small or zero, and so you filter that hypothesis away. [Student question] What about a moving, dynamic map? Yeah, good question. The general structure I described should still be valid; the extra thing you'd have to do is have a dynamics model on the map. It could be that the map isn't just static obstacles; maybe there are humans in the environment that you're treating as part of the map. If you had some dynamics model for the map itself, you could modify each of these functions to take that into account as well.
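Returning to the weight computation, here is a toy beam-based version of p(z_t | x_t, m) for a range sensor. The `raycast` helper and the Gaussian noise scale are assumptions; real implementations also mix in terms for max-range readings and random clutter:

```python
import numpy as np

def measurement_likelihood(z_t, x_t, grid, raycast, sigma=0.2):
    """Toy w_t = p(z_t | x_t, m_t): compare each measured beam range to the
    range predicted by ray-casting in this particle's map. An inconsistent
    (pose, map) pair predicts very different ranges, so its weight comes
    out near zero and it tends to be dropped at the resampling step."""
    w = 1.0
    for i, z in enumerate(z_t):
        z_pred = raycast(x_t, grid, i)   # predicted range of beam i
        w *= float(np.exp(-0.5 * ((z - z_pred) / sigma) ** 2))
    return w
```

The product over beams is what makes physically impossible hypotheses, like a pose inside an obstacle, get weights that are effectively zero.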
Handling a dynamic map is a bit more complicated, but practically very useful, because often things do move around and you need to account for that. Going through it would take a bit of time, but the rough idea is: if you had some way of sampling how the map can change, then you can consider variations of these functions. Good, other questions? All right, so that's the basic algorithm. Maybe just one more question: in practice, what do we use maps for? One application of mapping, or SLAM, could be that you literally just want to build a map of some environment: you go in somewhere and figure out what it looks like. But often what you want to do is planning: you want to use the robot's updated information about what the environment looks like to plan a path to get to some location, or to cover the environment, and so on. So can people think of ways of using this particle-filtering-based approach to SLAM for doing planning as well? Maybe start with simple ways. All right, here's a relatively simple way. At the end of this process, at time step t, we have capital M particles, and each has some likelihood associated with it; that's the w_t. So we can just do maximum likelihood estimation: pick the location and the map that have the maximum likelihood and treat them as the ground truth. You say, okay, I'm going to commit to this particular hypothesis, and I'll plan a path using the different techniques we discussed, breadth-first search, A*, and so on, or other algorithms, given that specific hypothesis for the location and the map. That's reasonable, I guess. But can you think of problems with that, or variations of that,
to make it more sophisticated? We actually have more information than just a point estimate, right? We have some notion of uncertainty. [Student suggestion] Good: if you wanted to, you could actually make multiple plans, one per particle, and then look at the distribution of actions and take the most common one. Yeah, so for each of the particles you make a plan, and then you see which action, which control input, the plans agree on. Like if we have one particle that we're very confident about, and based on it we'd go left, but we have many other particles, each maybe a little less probable, but there are tons of them and they say we should go right, you can look at the consistency across the plans. Yeah, I think that makes sense. There's a variant of that where you find a single plan and then evaluate, say, the expected value of collisions given these particles. For a particular plan, to estimate whether that plan leads to a collision, you evaluate the plan against every particle, from 1 through M, and take a weighted sum, weighted by the w_t; that gives you an estimate of whether the plan leads to a collision. Then you find a plan that minimizes that expected value of collisions, while maybe also maximizing some probability of reaching the goal. That's another way as well. These approaches are more computationally intensive, because you have to find a plan that works in expectation over all the particles, and you have to evaluate the plan against each of the maps; if you have a lot of maps, that could be computationally expensive.
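The two options just discussed, committing to the maximum-likelihood particle versus scoring a plan in expectation over all particles, might be sketched like this (the `collides` callback is an assumed plan-against-one-map collision check):

```python
def max_likelihood_particle(particles):
    """Option 1: commit to the single highest-weight (pose, grid, weight)
    hypothesis and plan against it as if it were ground truth."""
    return max(particles, key=lambda p: p[2])

def expected_collision_cost(plan, particles, collides):
    """Option 2: weighted fraction of particles under which `plan` collides.
    collides(plan, pose, grid) is an assumed per-map collision check; each
    check is independent, so they can run in parallel. Minimizing this over
    candidate plans gives a plan that is safe in expectation over the
    filter's uncertainty, not just for the most likely map."""
    total = sum(w for _, _, w in particles)
    hit = sum(w for pose, grid, w in particles if collides(plan, pose, grid))
    return hit / total

# Tiny example with two map hypotheses, weighted 0.7 / 0.3:
particles = [((0.0, 0.0), "map_a", 0.7), ((0.0, 0.0), "map_b", 0.3)]
collides = lambda plan, pose, grid: grid == "map_b"  # plan fails only on map_b
```

Because every (plan, particle) evaluation is independent, option 2 is exactly the kind of workload that parallelizes well.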
But the good thing is that a lot of these things are parallelizable: you can evaluate a plan across different maps completely in parallel, and if you have multiple cores on your CPU, or a GPU, that can be efficient. Anyway, that's just to give you a flavor of the kinds of things you can do with this. The filter gives you not just a particular map and a particular location, it gives you some estimate of uncertainty as well, and planning algorithms can often benefit from that kind of uncertainty estimate. All right, let me switch back to the slides. Okay, so this is grid-based FastSLAM working in practice. I'll show you more sophisticated videos later, but this one has a particular thing I want to point out. In this video, the robot is going around; it has a range sensor, and the plot on the left is basically the best estimate of the location and the map. Again, what we really have is more than that, right? It's not just one particular location and one particular map; we have a bunch of particles. What's being visualized is just the most likely location and map. The robot goes around this corridor-like environment; what's on the right is the instantaneous measurement the robot is getting: its range sensor is giving it some information about occupancy just around its current location. The plot on the left is the SLAM part. [Student question] Is this the most likely, based on the weight? Yeah, just based on the weight; that's the particle with the largest weight, exactly. What's interesting, and we'll come to it in just a few seconds, is the error that it's making: no building looks like that, right? The corridors are not perpendicular to each other. And the next portion is the interesting thing. Okay, right there. Let me just play that again; just keep an eye
on the plot on the left. Does someone see what happened there, and maybe have a hypothesis? [Student] We have a lot of particles, each with a particular guess of what the corridor looks like, and some of them are correct, but maybe they're not the most likely ones; the best guess we have is at an awkward angle, but we still have some correct particles in the background. Yeah, that's exactly right. The specific thing that's happening is that the robot is coming back to a location that it's visited before, and that makes the most likely map snap the corridor into a shape that's actually correct. This process is called loop closure in the SLAM literature. As the map is being constructed, errors may accumulate; this is an example where obviously the corridor isn't actually at this weird angle relative to the portion of the building on the left. But as the robot moves around, it goes far down the corridor, loops around, and comes back close to some location it's been at before. If the robot comes back to a previously visited location, it can correct these large errors in the map automatically, through the particle filtering process itself. The process is called loop closure because you're closing the loop: you come back to some location you visited before, and that helps you update your estimate of the map, and of your location as well. It works because some particles represent the true map, and those become more likely when you receive new sensor information upon returning to a previously visited location.
So this is showing before loop closure and after loop closure. Before closure, the corridor is at some weird angle; once the robot gets back to a location it has visited before, the corridor snaps into place and becomes perpendicular, as you would expect in a real building. FastSLAM does this loop closure automatically: the intuition is that the particle corresponding to the correct map will have a higher importance weight once you get new information from the sensor at a previously visited location. For this to happen you need a sufficiently large number of particles, because you need some particles that represent the truth. If no particle covers the truth, then you've completely lost touch with reality and you're not going to come back to the correct estimate of what the map looks like. As a reminder, the particle filter throws away particles with low importance weight during the resampling step, so if you don't maintain enough particles, you can lose the particles that correspond to the actual ground truth. Question? Yes, it's fairly common that before the loop closure step you'll see these weird kinds of maps that don't make sense, but after you come back to the same location those errors get corrected. And, as I mentioned, maybe this addresses your question as well: as the time between loop closures gets larger, errors can accumulate, and you can end up with more and more wacky-looking maps. One
interesting option that's explored in this paper is to do a kind of active loop closure: you try to actively promote loop closure by revisiting locations, so you go somewhere, backtrack a bit, go somewhere else, backtrack again, and you don't let errors accumulate over long periods of time. Isn't that interesting? Question: are there instances where the robot thinks it's revisiting somewhere but it's actually in a new location? Yes, that's also possible. If you have locations in the environment that look very similar, that can potentially mess you up quite a bit: you think you're revisiting a previous location, but really it's a different place that looks very similar, and then you get pretty bad-looking maps. I've only very briefly touched on this loop closure phenomenon; there's a massive amount of work in the SLAM literature specifically addressing loop closure, beyond the kind that happens automatically with this grid-based particle filter. In particular there are a lot of techniques for doing loop closure using feature-based maps. We're going to talk about computer vision starting from the next lecture, but basically, if you can recognize that you're visiting a previously visited location, because the visual features in the environment look pretty similar even if the exact viewpoint is not the same, you can do feature correspondence: you can match portions of the current image to portions of an image from a previous visit, and use that kind of information to help loop closure as well. I won't go into the details on this; the point
is just to tell you that this is an interesting phenomenon, and there are many different ways of doing loop closure beyond what comes out automatically of the particle filtering approach. All right, questions on that? Go ahead. Yes, that's a good question: this does require a large number of particles, because the space over which you're representing the probability distribution, the belief, is not just the location but also the map, and you want to make sure you have good coverage of that distribution. Partly it depends on the quality of the sensor. If your sensor is very good, you can often get away without a ridiculous number of particles; if your sensor is pretty bad, you're relying on your dynamics model and the noisy sensor measurements, so fundamentally you might have a belief that's pretty diffuse. If you have a good sensor, all your particles should be roughly concentrated around the correct map, and you don't lose that as you move forward in time. So good sensors can help quite a bit, but in general, if you don't have one, you need quite a large number of particles. Another question? Yes, in the past we've been given those probabilities. Right, this depends on the sensor. Maybe two lectures ago we discussed one particular sensor, a range sensor, and a probabilistic measurement model for it that takes into account three or four different factors: one based on the actual distance to an obstacle, another the probability that you miss an obstacle, a third the probability that you see an obstacle that wasn't there, and the fourth was just some random noise.
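That four-component range-sensor model can be sketched as a mixture density. The mixing weights and noise parameters below are made-up illustration values; in practice you'd fit them to your particular sensor, as discussed next.

```python
import math

def beam_model(z, z_true, z_max=10.0,
               w_hit=0.7, w_short=0.1, w_max=0.1, w_rand=0.1,
               sigma=0.2, lam=0.5):
    """p(z | true distance z_true) as a mixture of four components:
    a Gaussian around the true range (a correct hit), an exponential
    for unexpected closer obstacles, a spike at max range for missed
    obstacles, and uniform random noise over the sensor's range."""
    p_hit = math.exp(-0.5 * ((z - z_true) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    p_short = lam * math.exp(-lam * z) if 0.0 <= z < z_true else 0.0
    p_max = 1.0 if abs(z - z_max) < 1e-9 else 0.0
    p_rand = 1.0 / z_max if 0.0 <= z <= z_max else 0.0
    return w_hit * p_hit + w_short * p_short + w_max * p_max + w_rand * p_rand
```

Readings near the true range score highest, while a max-range reading still gets nonzero probability from the missed-obstacle spike, which keeps the filter from zeroing out a good particle on one bad reading.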
We then combined those probabilities together, and that's one particular sensor model. If your robot has a range sensor, you can characterize those error probabilities, like the probability that your range sensor sees an obstacle that's not there, and so on, and get a measurement model that way. In the algorithm I described we assume this model is given to us; in practice, of course, it's not, so you'd have to go do a bunch of experiments on your particular sensor to come up with a probabilistic measurement model, and then plug that into the particle filtering algorithm. Good, other questions? Okay, so let's look at some more sophisticated implementations. This one is from the GRASP Lab at UPenn, from a few years ago. They're doing SLAM indoors: on the right is the third-person view, top right is the first-person view from a camera on the drone, and the video on the left shows the map the robot builds as it goes along exploring this environment. I believe it's actually operating autonomously here: it's doing some path planning to explore previously unexplored regions of the space, and as it does this exploration it's building up a map of the environment and localizing itself within that map, which is what's shown on the left. One thing to keep in mind when you see drone videos indoors: these are GPS-denied environments, where you don't have good GPS localization, so the localization challenge is harder than outside. Out here, for instance, you can get a decent GPS signal and localize yourself pretty well, but as soon as you get indoors (you can try this with drones that have GPS) the GPS estimate just becomes essentially useless
and you should basically just throw it away. This next one is a commercial implementation from Skydio, the company I've mentioned a couple of times; they make aerial photography drones. Let me just play the video; the audio explains a little about their SLAM. "Our initial focus has been to deliver on the promise of the autonomous flying camera and make it real: basically, to create a film crew that fits in your backpack. It's a camera that understands the scene it's looking at and has the ability to move itself, and those two things together are enormously powerful. This is a very ambitious product, and to make it possible we had to develop custom hardware. We built a device with 13 cameras that see in every direction, and at its core it uses an NVIDIA TX1, the same thing found in self-driving cars, and that's what runs the Skydio autonomy engine. The first step in the vision processing pipeline is understanding where the vehicle is and how it's moving. To do this we look for regions of high texture in the environment and track those as landmarks in the real world that we can triangulate and track our motion against. That motion estimate is the foundation for everything else we do. The next step in the processing pipeline is building up a 3D understanding of the world. To do that we compute stereo depth maps from each pair of cameras, and these get fused over time into a dense 3D understanding of the environment." All right, in the last part you can see the representation they're using: it looks like an occupancy grid map, a bunch of voxels representing the map. They're doing a bit more with computer vision; some of the things they mentioned, like tracking landmarks, we will get to in a couple of lectures. The sensors they're using here are stereo cameras that give you depth, so it's
a kind of range sensor. It's not a lidar, but depth estimation with multiple cameras. They actually have 13 cameras; I believe the 13th is the high-definition camera that's capturing the footage of, say, the biker, and the other 12 are for mapping, obstacle detection, and so on. Okay, questions on that? All right, so the last thing I want to talk about: as I said, this is the last lecture on SLAM, and I just want to zoom out a bit. We often think a lot about the really technical aspects of these things, but I think it's important to zoom out from time to time and think about some of the broader societal implications of these technologies. We did that a bit when we thought about optimal planning and optimal control, with the value alignment problem: how do we make sure that the objective and cost functions we program onto a robot actually reflect our human values? The interesting question here has to do with privacy. SLAM, mapping and localization, is obviously a really powerful technology with lots and lots of applications: self-driving cars, drones, and so on. But I think there's an important question about privacy. What happens if you have millions of robots deployed across many, many different homes? You might think this is far-fetched: are we really going to have millions of robots in people's homes doing SLAM? It turns out we already do, and we have had them for a long time. This is an article from 2015, so about seven years ago, about the iRobot Roomba, the robot vacuum cleaner: they use VSLAM, visual SLAM based on vision, for mapping out the indoor environment you want to vacuum. So yeah, I think there's probably
room for technical solutions here. Maybe there's some way for the robot to still do what it needs to do, like navigate an environment, while not building a map, or somehow building just enough of a map to do its job, so not a full, detailed map of the environment, and not tracking the humans in the environment either. But it's probably not a purely technical question; there are societal questions as well. Thoughts or questions on that? Go ahead. So, to repeat the question: if the robot doesn't actually share the information, just builds the map locally, what's the problem? I don't know, do people have thoughts? Is that still a problem, or is that fine? In some sense it seems fine, right? We have these robots and no one is complaining that much. Good point about memory: does the robot keep the map around? I don't know for sure; I think the answer is yes, I don't think it deletes the map, and if it did delete the map, it's just going to build up another map next time. But how do we know? Does someone here who owns a Roomba actually know whether that information is being shared? Okay, go ahead in the back. Right, that would be a reason to connect it with something like Alexa, which already has all the audio, and now also the layout of your space and everything you'd like to buy. And with some of this technology there's a push at a lot of companies to move computation to the edge, which could help: if the computation is truly localized, and we had some way to verify that, then I think that's one possible technical answer, like if
the company just says, look, here's a proof that the robot is not sending much information: all the computation happens on the edge, on the device, and maybe little bits and pieces of information are shared, but not all of it. On the other hand, in some sense there is an incentive to share data with some centralized processing system. Maybe there's some home with a weird geometry where the Roomba has trouble, and that might be useful information for fixing the algorithm so it doesn't make mistakes in weird environments. So I feel like there is an incentive to share in a centralized way. (Some further audience discussion.) Yeah, it's kind of tricky. I guess in general the legislation and policies around this are fairly loose, and there's been more and more of a push towards thinking about these kinds of questions, privacy and fairness as well, which we'll get to. The point here is not to come to any conclusions, but just to give you a sense of these societal questions, and maybe some of you will work on this somewhere down the line and help us grapple with some of these questions. Any other final thoughts or questions? All right, I'll see you next week.
Introduction to Robotics (Princeton)
Lecture 11: The Nondeterministic Filter
But before we do that, let's just take stock of what we've discussed so far: what we're able to achieve with the concepts we've introduced, and what we still need to do. So far in this course we've covered two main topics. The first was feedback control, where we set ourselves the goal of making the drone hover autonomously. The second module was motion planning. Together, motion planning with feedback control allows us to find trajectories for our robot that are collision-free, that don't collide with obstacles, and to correct for deviations from our plan: we can use motion planning techniques to come up with a nominal motion plan that avoids obstacles if everything goes perfectly according to plan, and if there are disturbances, like wind gusts or other sources of uncertainty, we can use feedback control to correct back to the motion plan we were trying to track. But we made two really big assumptions so far, which we're going to try to relax in this module, starting today. The first assumption is that the robot is equipped with sensors that allow for perfect state estimation. What I mean by this is that at any point in time the robot knows exactly what its state is, all the components of the state, because its sensors are giving it this information perfectly. Think back to the form of our feedback controllers: u(x) = u0 + K (x - x0), some nominal control input plus a gain matrix times the state error. To implement this feedback control law, we assumed we know exactly what the state is, and this is not a perfectly reasonable assumption in practice: the drone has some
error in its state estimation process, so it doesn't know exactly what the state is; it has some noise in its estimate of the state. That's one big assumption. The second big assumption, which we made when we started discussing motion planning in particular, is that the robot knows beforehand, before it starts moving around (I'll say knows a priori), exactly where the obstacles are, or at least has some approximation of where they are: sets that it knows the obstacles lie in, which it can then avoid. These are the main assumptions we've made, and just to remind you of the overall structure of the course: we're building towards getting a drone to autonomously navigate through obstacle courses where it doesn't know exactly where the obstacles are beforehand, so it's not given some kind of map of the environment. So in this module, for the next six or so lectures including this one, we're going to talk about how we can relax both of these assumptions. We're going to develop techniques for state estimation from imperfect measurements from the robot's sensors, and then techniques for mapping where obstacles are in a given environment: the robot has, let's say, a camera or some kind of depth measurements that allow it to build a map of its environment as it operates in that environment. As we'll see towards the end of this module of six lectures, we're going to be able to do both of these things simultaneously. Conceptually, it makes sense to try to get the robot to both estimate its state, maybe its location and velocity, and figure out where obstacles are in its environment, within the same algorithmic and conceptual framework. This leads us to a pretty exciting area
of robotics known as simultaneous localization and mapping, usually abbreviated SLAM, and we're going to discuss some techniques for doing SLAM for robotic systems. A good reference for this material, and I'll point to specific chapters as we go along, is the textbook Probabilistic Robotics. This was one of the three recommended textbooks I mentioned in the very first lecture, and I'll point to specific chapters from it. All right, any questions on this overall plan? Okay. (There are still some remnants of chalk on the board; I'll leave the same chalk for everyone else who uses this blackboard as well.) If you have trouble seeing this, let me know and I'll try to clarify. Okay, so we're going to start with a simple example, which will help us build some intuition for today's lecture and maybe the next lecture or two. We're just going to tackle problem number one: we assumed the robot has sensors that allow it to perfectly estimate the state, so we're going to relax that assumption and develop techniques for doing state estimation given imperfect sensor measurements. We're not going to worry about the obstacle part yet; we'll get to that a couple of lectures from now. Okay, so here's the simple example. Let's say we have a robot with state, which as usual I'll call x-bar, and the state is just going to be two-dimensional: its x-y location. We're going to ignore dynamics, so there are no velocities as part of the state; we're just going to say the state of this robot is its location in some x-y plane. And let's say the robot has a sensor which reports the robot's distance to the origin. The robot doesn't know its state: at any given time the robot has some actual
state, some x and y location, but it doesn't know exactly what that location is. The only information it has access to comes in the form of a sensor measurement, which is just a scalar, one number at any given point in time, and that scalar corresponds to the robot's distance from the origin. Call that distance r. So at some given point in time the robot receives a sensor measurement, just some number r. What can the robot infer? Again, we're looking at just one point in time. The only inference it can make is that its location, the true state, is on a circle of radius r: the robot's distance to the origin is r, so the only information it has is that it's on the circle of radius r. To formalize this, the robot knows that sqrt(x^2 + y^2) = r, in other words that the (x, y) location is on the circle of radius r. Okay, let's make it slightly more interesting. Suppose the robot has another sensor, and this sensor provides the robot's distance from a different point: the point (1, 1). So given these two sensor measurements, the first the distance from the origin and the second the distance from the point (1, 1), can the robot figure out exactly what its state is, or not? I see some head shaking; go ahead. No? Okay, yeah, so in general this is still not sufficient to figure out exactly where the robot is. In general, the robot just knows that it's on the intersection of the two circles. Pictorially, there's one circle corresponding to the first sensor measurement, so it knows it's on that circle, and given the
second sensor measurement, the distance from the point (1, 1), the robot knows it's also on that circle. So in general it knows it's on the intersection of the two circles; the way we've drawn it, it has narrowed its location down to these two points. It's possible that the circles intersect at just one point, in which case the robot does know its exact location, but in general you get two possibilities for the location of the robot. So we can do this one more time: let's just add one more sensor measurement, another sensor giving the distance from yet another point. Now the robot knows it's in the intersection of three different circles: the first one, the one around the point (1, 1), and a third one giving the distance from this new point. In this case the robot is going to be able to uniquely figure out its location. This is called triangulation: with three different sensor measurements, the circles intersect at a particular point, and this is sufficient information for the robot to figure out its location exactly. All right, any questions on this example? I think it's useful to have this in the back of our minds as we generalize it. It should be fairly clear geometrically, I think, but I'm happy to take a question. Okay, so let's try to turn this into some kind of general scheme based on the intuition from this simple example. In general, let's say the robot receives some sensor measurement, which we're going to call z, and this is going to be some vector in general. This is also known as a sensor observation, or just an observation.
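The triangulation step above can be written out explicitly. Subtracting one circle equation from another cancels the quadratic terms, so three range measurements leave a 2x2 linear system. This sketch uses the origin and (1, 1) from the example plus a third beacon chosen here purely for illustration; it assumes the three beacons are not collinear.

```python
import math

def trilaterate(beacons, ranges):
    """Recover (x, y) from distances to three known points by subtracting
    circle equations pairwise, which leaves a linear 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero exactly when the beacons are collinear
    return ((b1 * a22 - a12 * b2) / det,
            (a11 * b2 - b1 * a21) / det)

beacons = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]  # origin, (1,1), and a third point
true_state = (0.5, 1.5)
ranges = [math.dist(true_state, b) for b in beacons]
x, y = trilaterate(beacons, ranges)  # recovers (0.5, 1.5)
```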
Which term you use depends on which community you're working in: the reinforcement learning community typically says observations, and the controls community says sensor measurements; I'll use the terms interchangeably. In the previous example, the sensor measurement z belonged to R^3: we had three different scalars, each corresponding to the distance from some particular point. In general, z is going to be some vector, potentially high-dimensional, maybe low-dimensional, depending on the setting. We're also going to assume that we have a model of the sensor, and specifically, for now, this model is going to be deterministic: we have some mapping h which tells us that if the robot is at any given state x, then it's going to receive the sensor measurement z = h(x). This function h is known as the sensor mapping or sensor function. As an exercise: what does h correspond to in the previous example? Good. Well, it depends on which version of the example, but in the most general version there were three sensor measurements, so the first component would be exactly what you said, sqrt(x^2 + y^2); the second component would be sqrt((x - 1)^2 + (y - 1)^2), the distance from (1, 1); and the third is the distance from the third point. That's just to make it concrete, but the main point is that we're going to assume we know what this function is: at any given state, we know exactly what sensor measurement the robot would receive in that state x. So notice that we're not assuming the inverse:
we're not even assuming that h is invertible. Given a sensor measurement, we're not saying we can recover the state; we're saying that given a state, we know exactly what the sensor measurement is. Okay, so we're going to define a notion called the pre-image, which is going to be really helpful in generalizing the example we discussed previously. The pre-image of the sensor mapping is defined as follows. We denote it by h with a superscript -1; it takes an argument z, the sensor measurement, and we define h^{-1}(z) = { x in X : h(x) = z }, where capital X is the state space; in the previous example the state space was just R^2, all possible locations. Let me pause and make sure this definition is clear. We're saying the pre-image associated with the sensor measurement z is the set of states such that the sensor mapping evaluated at those states equals the sensor measurement you received. The intuitive way to think about this is that the pre-image corresponding to a particular sensor measurement is a subset of the state space containing all the states that are consistent with the sensor measurement z: all the states from which you could have received the measurement z. Does this definition make sense? Any questions? Just a word on notation: this notion of pre-image is not specific to sensor mappings; it's a general mathematical definition, and this is the standard notation. It's reminiscent of the inverse, so it sort of looks like h^{-1}, but we are not assuming that h is invertible. In fact, in our setting, the cases that are interesting and non-trivial are exactly the cases where h is not invertible. If h were invertible, then
the sensor would essentially be giving you the state: you could just take the sensor measurement, invert the mapping h, and exactly figure out the state. So in the cases we're interested in, h is not invertible in general. Here's a useful exercise to make sure we understand this notion of pre-image: suppose h is invertible; what's the size of the pre-image corresponding to some measurement z? Go ahead. One. Right, it's a singleton, a set with a single element, and that element is exactly h^{-1}(z): the one element of the set is the inverse of z under h, the only state consistent with having received the measurement z. In general, the pre-image is a set of whatever size, not necessarily a single point. Okay, so we can use this to generalize the intuition from the previous example. In the example with the three circles, what we were doing was looking at the pre-image corresponding to the sensor measurements z, which were the three distances from three points in the x-y space. For a single sensor measurement, a single distance from the origin, the pre-image is a circle: the set of points at distance r from the origin. With two sensor measurements, the pre-image in general contains two points, and with three sensor measurements the pre-image is a single point, which allows us to uniquely figure out the robot's location. All right, any questions on this so far? Okay. So far we've not accounted for the dynamics of the robot: we're looking at one particular point in time, the robot receives some sensor measurement z, and all it does is look at the pre-image.
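Since we usually can't invert h in closed form, a brute-force way to visualize a pre-image is to sample the state space on a grid and keep the states whose predicted measurement matches z. This sketch uses the distance-from-origin sensor from the example; the grid bounds, resolution, and tolerance are arbitrary choices.

```python
import math

def h(state):
    """Sensor mapping from the example: distance from the origin."""
    x, y = state
    return math.hypot(x, y)

def preimage(z, tol=0.05, n=200, lo=-2.0, hi=2.0):
    """Grid approximation of h^{-1}(z): all sampled states whose
    predicted measurement is within tol of the observed z."""
    pts = []
    for i in range(n):
        for j in range(n):
            s = (lo + (hi - lo) * i / (n - 1),
                 lo + (hi - lo) * j / (n - 1))
            if abs(h(s) - z) < tol:
                pts.append(s)
    return pts

circle = preimage(1.0)  # approximately the unit circle
```

Plotting `circle` would show a thin ring of radius 1, which is exactly the pre-image set from the one-sensor version of the example.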
That is, it looks at the set of states consistent with receiving that sensor measurement, and it just says: I know I'm somewhere in this pre-image of my sensor measurement. Things get more interesting when we take into account the dynamics of the robot, and for this module on state estimation and localization and mapping, it's going to be useful to assume discrete-time dynamics. So far we've pretty much entirely assumed continuous-time dynamics, written as a differential equation; here it's mathematically more convenient to work with discrete time. What I mean is that instead of x-dot = f(x, u), we're going to say that the state at time step t+1 is some function, which we'll still call f, of the previous state and the previous control input: x_{t+1} = f(x_t, u_t). For the purpose of keeping things simple, we can ignore the control input: if you've baked in a particular feedback controller, we can say x_{t+1} is just a function of the previous state, x_{t+1} = f(x_t). For the next fifteen or twenty minutes I'm going to work with this form of the dynamics; everything I discuss will be easy to generalize to the case where you also have control inputs. There are multiple ways to arrive at dynamics of this form: you can start with differential equations derived from F = ma and discretize in time, for instance with Euler integration, which is one way to get discrete-time dynamics, or you can directly learn dynamics that are already in this discrete-time form. Okay.
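For example, a single explicit Euler step turns continuous-time dynamics x-dot = f(x) into a discrete-time map x_{t+1} = F(x_t). The toy dynamics below (a position plus a damped velocity) are just an illustration, not anything from the lecture's drone model.

```python
def euler_discretize(f, dt):
    """Discretize x_dot = f(x) with one explicit Euler step of length dt,
    returning the discrete-time map F with x_{t+1} = F(x_t)."""
    def F(x):
        return [xi + dt * fi for xi, fi in zip(x, f(x))]
    return F

# toy continuous dynamics: state = [position, velocity], with damped velocity
f = lambda x: [x[1], -0.5 * x[1]]
F = euler_discretize(f, dt=0.1)
x_next = F([0.0, 1.0])  # one discrete step from position 0, velocity 1
```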
Okay — let's go through the same process of developing an algorithm that lets the robot estimate its state, now using this additional knowledge of the dynamics. To start, assume the robot knows some set X̂₀, a subset of the state space X, which contains the state at time t = 0. So as the world is initialized, the robot knows it's somewhere in this set. In principle this could just be the entire state space — that makes no assumptions at all, the robot just knows the state is something — but in a more realistic setting the robot might know it's in some smaller subset rather than saying "I have no idea where I am." Then, as the robot operates and receives more and more sensor measurements over time, we recursively update this estimate of the state, using our knowledge of the sensor mapping and of the dynamics. Let's do this one time step at a time, and then I'll describe the overall algorithm. At the first time step, we take the initial estimate X̂₀ of where the state could lie and propagate the set forward in time through the dynamics of the system. The way to think about this: look at every point in the set and see where that point ends up under the dynamics — if you know F, just evaluate F(x) for each specific x. One point here ends up over there; a point over here ends up over here; and so on. The whole set gets transformed through the dynamics into some other set. Intuitively, this is the set of states you could end up in, knowing that you were somewhere in X̂₀ at the previous time step. Mathematically, we call this set F(X̂₀): the set passed through the dynamics F — again ignoring control inputs, or assuming some fixed feedback controller, so the dynamics depend only on the state. The definition: F(X̂₀) is the set of states x in the state space such that there exists some x₀ in X̂₀ — our initial estimate — with F(x₀) = x. Let's parse that: it's the set of all x that are consistent with having been somewhere in X̂₀ at time step zero. Questions? Yes — good question: for now we're assuming the dynamics are deterministic, so F really is a function — given a state x_t, you know exactly where the robot will end up at the next time step.
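The dynamics update described above can be sketched over a finite set of candidate states (representing the set by finitely many points is itself an approximation; the toy dynamics here are made up):

```python
def dynamics_update(X_hat, F):
    """Propagate a finite set of candidate states through deterministic
    dynamics: the image F(X_hat) = { F(x) : x in X_hat }."""
    return {F(x) for x in X_hat}

# Toy 1-D examples.  A simple shift just moves the set...
shift = lambda x: x + 1
X1_pred = dynamics_update({0, 1, 2}, shift)

# ...while "unstable" dynamics spread the candidates apart (here, doubling).
double = lambda x: 2 * x
X1_spread = dynamics_update({0, 1, 2}, double)
```

The second example previews a point made shortly afterwards: propagating through unstable dynamics can make the candidate set larger, not smaller.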
That is a fairly strong assumption — you might not know exactly where the robot will end up — and I'll come back to that in ten or fifteen minutes. Good, other questions on this? Yes — the definition again? The intuition is that you take the set of states you know you're in at time zero and propagate it forward in time through the dynamics. The mathematical definition is the set of states x such that there exists some point in X̂₀ — the set you knew you were in at the previous time step — from which you end up at x. So it's the set of all x satisfying that property: for any point in the new set, there's some starting point in X̂₀ that could have taken you there. Does that make sense? Okay. Yes — and no, not yet: we're completely ignoring the sensor observation so far and only exploiting our knowledge of the dynamics. That's the next step: taking the robot's sensor measurement into account as well. Say at time t = 1 the robot receives a sensor measurement, which we'll call Z₁ — the subscript corresponds to the time, so in general Z_t is the sensor measurement at time t. Here's the question: how should the robot update its set of possible states given this new measurement Z₁? We have two pieces of knowledge: we know the dynamics, and we know the sensor mapping H. So how do we update our knowledge of where the state could be at time one? Go ahead — yep, that's exactly it. At time zero we know the set of possibilities, X̂₀; at time step one, from our knowledge of the dynamics, we know we're somewhere in F(X̂₀); and now we receive this new piece of information, the sensor measurement at time one, which tells us we must be in the pre-image of Z₁ — the robot's state has to be consistent with the measurement it just received. If we know we're in one set and we know we're in the other, then we know we're in the intersection. We call that intersection X̂₁: X̂₁ = F(X̂₀) ∩ H⁻¹(Z₁). Now we've exploited our knowledge both of the dynamics and of the sensor mapping. Any questions on that? Okay, let's do one more time step and then write down the general expression. At time t = 1 the robot knows it's in the set X̂₁. We do the same thing again: propagate this set through the dynamics — look at every state in X̂₁ and see where it ends up — which defines a new set F(X̂₁). Then at time step two we receive another sensor measurement Z₂, look at its pre-image, and again take the intersection, which we call X̂₂ = F(X̂₁) ∩ H⁻¹(Z₂). Questions on this before we write down the general expression? Okay.
One thing to think about is the size of these sets. What's your intuition for the set F(X̂₁) as compared to X̂₁ — is it going to be smaller in volume, larger, or not necessarily either? Yeah — for now we're assuming deterministic dynamics, but the second point is exactly right: you could have stable dynamics or unstable dynamics, and that determines whether the set grows or shrinks. So it's not necessarily the case that our estimates — say X̂₂ as compared to X̂₁ — get smaller in volume over time. We could be getting more uncertain, because the dynamics are unstable: even if you knew pretty precisely where the robot was at time zero, at time one it could be in a much larger set of states. Okay, let's write down what happens if we do this recursively at every time step. This is called the non-deterministic filter — I'll say where the name comes from in a second — and at time step t it has three steps. First, compute F(X̂_{t-1}); we call this the dynamics update. Second, compute the pre-image H⁻¹(z_t) corresponding to the sensor measurement the robot receives at time t. Third, compute the intersection and call it X̂_t = F(X̂_{t-1}) ∩ H⁻¹(z_t). The second and third steps together are called the measurement update. In the first step we exploit only our knowledge of the dynamics; in the second, our knowledge of the sensor mapping; and in the third we combine the two. Any questions on the procedure? Okay, so the name. The "non-deterministic" part differentiates this from probabilistic — I'll say more about that later in this lecture; notice we haven't talked about probabilities at all. It's not deterministic, because we don't determine exactly what the state is at any given point in time — we just maintain a set of possibilities for where the robot could be. The name given to things that are neither deterministic nor probabilistic is non-deterministic; that's where it comes from. "Filter" is pretty old terminology, probably from the 1930s or so: we're filtering out uncertainty, or noise. We have some uncertainty about the robot's state, and as we receive more sensor measurements we try to filter the uncertainty out of our state estimate. As an exercise — I won't go through it here, but it's useful for checking your understanding of the non-deterministic filter — go through the same process but also take control inputs into account. We were ignoring the control input; it should be relatively straightforward: you basically just modify the dynamics-update step to account for the control input the robot took, which it knows, and use that knowledge to propagate X̂_{t-1} forward. So that's a straightforward extension. All right: what are some of the challenges with this non-deterministic filter — say, if you tried to actually implement it for the drone?
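Before the challenges, the three steps above can be sketched end-to-end over a toy finite state space (the ring world, dynamics, and mod-3 sensor are all made up; real implementations need far cleverer set representations):

```python
def nondeterministic_filter_step(X_hat, F, h, z, X_space):
    """One step of the non-deterministic filter over a finite state space:
       1. dynamics update:       predicted = { F(x) : x in X_hat }
       2. measurement pre-image: H_inv    = { x : h(x) == z }
       3. measurement update:    new estimate = predicted & H_inv"""
    predicted = {F(x) for x in X_hat}
    preimage = {x for x in X_space if h(x) == z}
    return predicted & preimage

# Toy world: 10 cells in a ring; the robot moves one cell right each step,
# and its (non-invertible) sensor only reports the cell index mod 3.
X_space = set(range(10))
F = lambda x: (x + 1) % 10
h = lambda x: x % 3

X_hat = set(X_space)           # initially: no idea where we are
true_x = 4                     # hidden true state (used only to simulate z)
for _ in range(4):
    true_x = F(true_x)
    z = h(true_x)              # measurement the robot actually receives
    X_hat = nondeterministic_filter_step(X_hat, F, h, z, X_space)
```

After a few steps the estimate has shrunk from ten candidates to two — it always contains the true state, but (as the feedback-control discussion below notes) it need not become a singleton.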
Yep — there's a computational challenge. In the description of the algorithm I just wrote "compute this, compute that," and those things might be really hard to compute: the set X̂_t at any given time could be really complicated. It might not even be connected — it could be disjoint. At time zero you might say "I know I'm either in this set or in that set," the union of two disjoint sets, and somehow you need to propagate that forward in time; so X̂_t may not even be a connected set. Actually doing these computations, especially in real time as the robot operates, is pretty challenging. Other thoughts or challenges? Yes — good: all the non-deterministic filter does is narrow down the set of possibilities to some set, and that set is not a singleton. So how would you implement a feedback controller? The general form of our feedback controller was some gain matrix K multiplied by (x − x₀) for some nominal x₀ — but what x would you use? We don't know exactly what state we're in; we only know the state belongs to some set, so you'd have to somehow arbitrarily pick a representative element from X̂_t to actually implement a feedback controller. The other challenge, which someone raised earlier in the lecture, is that we're not accounting for any uncertainty in our knowledge of the dynamics or the sensor mapping. We assumed F and H are functions we know exactly: F takes a particular state x_t and tells you exactly what the next state will be, and the same with H — given the exact state, there's only one possibility for the sensor measurement, namely H(x). It turns out we can fold some notion of uncertainty into the non-deterministic filter. Start with uncertainty in the dynamics — I'll just sketch this rather than writing it out formally. Suppose the uncertainty takes the following form: given some state x_t, I can give you a set of states that is guaranteed to contain the state at the next time step, x_{t+1}. If for every state I can hand you such a set, how would you modify the non-deterministic filter to take this uncertainty into account? Yes — perfect. It's basically the same structure. At any time t we have an estimate X̂_t of where the state could be; for every point in it, we look at the set of possibilities for where the robot could end up at the next time step having started there, and we take the union of all those sets. That union is our dynamics update: accounting for the dynamics with this extra uncertainty, we know the robot must be in that union. Again, actually implementing this computationally is really challenging, but at least conceptually, hopefully it makes sense.
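That union-over-candidates update can be sketched directly (the set-valued dynamics here — "the robot advances 1 or 2 cells" — are a made-up example of the guaranteed next-state sets just described):

```python
def uncertain_dynamics_update(X_hat, F_set):
    """Dynamics update under set-valued uncertainty: F_set(x) is a set
    guaranteed to contain the next state starting from x, and the update
    is the union of those sets over every candidate x in X_hat."""
    reachable = set()
    for x in X_hat:
        reachable |= F_set(x)
    return reachable

# Hypothetical 1-D example: we only know the robot advances 1 or 2 cells.
F_set = lambda x: {x + 1, x + 2}
X_pred = uncertain_dynamics_update({0, 5}, F_set)
```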
The second step is still the same: you take the intersection of this dynamics update with the pre-image of your sensor measurement, and that's your new estimate of where the state could be. Question — where do these sets come from? One source is uncertainty in the physical parameters of the system. Say you have a good estimate of the moments of inertia, but not an exact one: you know each moment of inertia only up to some confidence interval — some set of possibilities. If I assume the moment of inertia is exactly some value I, the robot starts here and ends up at a particular point; if I assume a slightly different value, the robot still starts here but ends up at a slightly different point; and so on. A set of possibilities for the physical parameters, together with any given state, defines a set of possibilities for where the robot could end up. That's one way to get these sets — though again, actually implementing this is computationally challenging. So that's dynamics uncertainty. We can also take into account some form of sensor uncertainty — uncertainty in the sensor mapping. For example, going back to the first example of this lecture: instead of receiving its exact distance from the origin, suppose the robot receives the distance r plus or minus some ε — the robot knows that whatever sensor measurement it got is within ε of the true distance from the origin. Ignoring the dynamics part for now: given a particular sensor measurement with this known uncertainty, what does the set of possible states look like? Yes — you add the margin of error. Exactly: geometrically it corresponds to a ring. You take the circle of radius r, go plus or minus ε — that's the margin for error — and that defines a ring; the robot must be somewhere in this ring given the sensor measurement. So again you can modify the non-deterministic filter to take this form of sensor-mapping uncertainty into account, and the general structure of the algorithm is exactly the same: take your previous estimate of where the robot could be, propagate it through the possibly-uncertain dynamics, and intersect with this new pre-image — the set of states consistent with having received the sensor measurement. It's a useful exercise to make this formal: mathematically write out these sets and formalize these notions of uncertainty in the dynamics and the sensor mapping. Questions on this? All right, let me make a couple of closing comments on the non-deterministic filter. As we mentioned, it's challenging to implement computationally, especially when the dimension of the state space is large — for a drone we have a twelve-dimensional state space, which is already large — and representing these sets, computing intersections, and so on can be really expensive, especially if the robot needs to do it in real time, which it does.
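The ring pre-image from the noisy-range example above can be sketched over a discretized set of candidate states (the grid, radius, and ε are made-up numbers):

```python
import math

def ring_preimage(r, eps, grid):
    """States consistent with a range reading known only up to +/- eps:
    geometrically, a ring (annulus) with inner radius r - eps and
    outer radius r + eps around the origin."""
    return {(x, y) for (x, y) in grid
            if r - eps <= math.hypot(x, y) <= r + eps}

# Hypothetical grid of candidate states around the origin.
grid = {(x, y) for x in range(-3, 4) for y in range(-3, 4)}
ring = ring_preimage(r=2.0, eps=0.5, grid=grid)
```

Points such as (2, 0) and (1, 2) fall inside the ring, while (1, 1) — at distance √2 ≈ 1.41 — falls just inside the inner radius and is excluded.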
It needs the estimate in real time to use it for feedback control, for instance. The second challenge is what we also discussed previously: we don't get an exact estimate — we only have a set of possibilities, some X̂_t, for the robot's state. If we're doing feedback control, this is problematic: to calculate the control input with our standard feedback controller, u = u₀ + K(x − x₀), we need to somehow pick an element from this set. And sometimes no particular element is better or worse than any other — it's not that we're somehow more confident about one element than another; the only thing we know is that the state belongs to the set, and we have no further information. That's a pretty major challenge as well, and it motivates probabilistic methods for state estimation, which we'll start discussing in the next lecture. The basic idea: instead of maintaining a set of possibilities for the state at every point in time, we maintain a probability distribution over the state space. That gives us more information — we can take, say, the mean of that distribution as our best estimate of the robot's state at time t, and plug that estimate in when doing feedback control. So in practice this non-deterministic filter doesn't really get used — but we didn't waste our time spending most of a lecture on it; I wouldn't have spent the whole lecture if it weren't useful.
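As a toy illustration of the representative-element workaround above — picking the centroid is purely a heuristic choice, and the gains and nominal point here are made up:

```python
def feedback_from_set(X_hat, K, x_nom, u_nom):
    """Feedback control when the estimator only gives a SET of states:
    pick a representative element (here the centroid of the finite set,
    an arbitrary heuristic) and use u = u_nom + K (x_rep - x_nom)."""
    m = len(x_nom)
    x_rep = [sum(x[i] for x in X_hat) / len(X_hat) for i in range(m)]
    return [u_nom[j] + sum(K[j][i] * (x_rep[i] - x_nom[i]) for i in range(m))
            for j in range(len(u_nom))]

# Hypothetical 2-D state, scalar control.
X_hat = {(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)}
u = feedback_from_set(X_hat, K=[[-1.0, -0.5]], x_nom=(1.0, 1.0), u_nom=[0.0])
# The centroid of X_hat happens to equal x_nom here, so the correction vanishes.
```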
The probabilistic methods we'll start discussing next lecture can be thought of as probabilistic analogs of the non-deterministic filter: they have a very similar structure, except that instead of maintaining a set, we maintain a probability distribution. Another question here — yes, exactly: it gives us a more refined quantification of the uncertainty. We're already quantifying uncertainty here, but the quantification is only in terms of a set — we're uncertain about the state, but we know it's in this set. A probability distribution gives a more refined quantification of the uncertainty we have over the state. Good. Other questions on this? Okay, in the last ten minutes or so I want to talk a little about the lab. The next assignment goes out tomorrow, Wednesday. I mentioned in the first lecture that the schedule this semester is very different from past years; one concrete difference is that we don't have a lecture during Thanksgiving week — if you look at the academic calendar, Princeton follows a Friday schedule on the Tuesday before Thanksgiving, so we won't have a lecture then. There are a couple of other differences as well: in past years I always managed to have nothing new — in terms of assignments — over fall break or spring break, and at the beginning of this semester I spent two or three hours trying to figure out the assignment scheduling, but this time there was no way to avoid having something out over the break.
Having said that, the assignment going out tomorrow is due after fall break — two weeks from tomorrow — and it's relatively short; specifically, I don't expect you to need to work on it over fall break. The only component of this assignment is the lab: there's no theory component and no individual coding component. In the past, teams have completed it in a couple of hours — maybe two to three hours max — and you shouldn't need more time than that. So if you're worried about things being due over fall break, you should be able to simply not work on it over the break if you don't want to. Okay, so what is this lab? We're going to use the motion-planning techniques — RRT in particular — to plan a trajectory for a drone, the Crazyflie, that avoids obstacles. Here's an example of one of the setups we have in the G105 lab space — not the exact setup; this picture was taken previously, but it looks roughly like this. There are a bunch of PVC pipes hanging from the ceiling, a starting square — the one marked in blue, which will be clearly marked when you show up to the lab — and an ending square marked in yellow over there. The goal is to start in the starting square and plan a motion for the drone that avoids the obstacles and lands in the ending zone. As with all the motion-planning techniques we discussed previously, you can assume you have a map of the environment: you know exactly where the obstacles are. The way you get that is just by measurement — we'll have tape measures there, so you can measure the locations of the five or six obstacles relative to the starting location, and measure their radii. You should take the radius of the drone into account as well, to have some buffer. We'll walk through all of that in the assignment instructions, but the general idea is to use RRT to do the motion planning and avoid collisions; the drone has its own feedback controller that it uses to track the trajectory you give it. In principle we could have made things more challenging, or more interesting, by making you use the LQR controller you wrote previously — but if we start doing that, errors start propagating, and by the end of the course nothing would really work. So we try to make each new assignment modular: we assume we know how to do feedback control and just use the drone's built-in feedback controller for trajectory tracking; the only thing you do here is the actual motion planning using RRT. There's a second setup in the other lab space — really two setups, separated by the netting. Hopefully you're familiar with the logistics from the first lab assignment: teams show up to these spaces and coordinate organically — one team tests something, then debugs while a different team works — which is a pretty easy way of operating. And here's a video of what it should look like. These are the unpainted PVC pipes — we have them looking slightly better now. The drone takes off, follows its motion plan, and ends up somewhere in the ending zone. That's what it will look like if everything goes smoothly; at some point maybe it'll collide, but hopefully not too many times. All right — questions on the lab, or logistics, or anything? Sounds good; I'll see you on Thursday.
Introduction to Robotics (Princeton) — Lecture 17: Intro to Vision
All right, let's go ahead and get started — it's super quiet, so I'll just begin. We're starting a new topic today: computer vision. This is the last major module for the course, combined with machine learning. Here's the motivation. In the last module, on localization, mapping, and state estimation, we discussed sensors pretty abstractly: we had an abstract sensor model — some probability distribution on the sensor measurement z_t given the state of the robot — and we discussed some concrete instantiations of these sensor models; for example, we spent a little time talking about laser rangefinders, or rangefinders in general. In the next few lectures we're going to focus on one particularly powerful sensor modality: vision. To give you a sense of why vision is so powerful and what kinds of things one can do with it, I'll show a few different example applications. The first is drone racing — not autonomous, but human drone racing. [video plays] I won't play the whole video, but what's impressive here is that this is purely vision-based: these are FPV — first-person-view — goggles that the competitors are wearing, and the only thing they're relying on to navigate through these obstacle courses is vision. They of course don't have direct access to an IMU or anything; it's purely vision-based navigation. Here's the autonomous version — relatively recent, from the last year or so, from a group at the University of Zurich led by Davide Scaramuzza. This is drone racing outdoors; this first clip is still human-piloted. [video plays] And then the autonomous version, demonstrated in many environments, like forests.
challenging for existing algorithms our drone can fly at speeds up to 40 kilometers per hour PID agility observations to collision-free trajectories see it's not quite uh human level yet I think but but still like pretty impressive like 40 kilometers per hour is really fast especially through these clever uh kinds of environments and this is again Vision based there are other sensors as well uh like the standard sensors that are known as but the primary you can take modality of or something obstacles in the environment here is the vision and of course it's not just drones that the news Vision autonomous vehicles uh use version quite a bit uh Tesla in particular is like really committed to this vision of only Vision uh like doing navigation purely based Envision not using lidar or anything like that and I guess part of the argument is humans can do it like humans rely only on Vision to drive and so that's kind of an existence proof in principle we should be able to get uh autonomous navigation purely based on Vision as well and not using other sensors like lidar at least that's kind of what what they're trying to do and progression you can extract lots of information so this is one of the reasons version is particularly powerful uh one particularly important piece of information has to do with geometry so depth estimation so using Vision to figure out where obstacles in the environment are in particular distances to different points in the image uh this is one example uh of doing depth estimation uh in a kind of vehicle uh setup so there's a car driving around there's a camera on the car and they're estimating uh depth to every pixel so the color here corresponds to the the estimated depth for for the different pixels uh here's another example so this is a tracking objects with a drone uh this is actually from the same group that I showed the video from before uh here they're using a camera to track a Target on the ground which you'll see in just a second to the 
ground robot, a wheeled robot moving along with an X drawn on it; the drone tracks that marker and then uses the tracking to land on the robot. Okay, another key piece of information one can extract from vision, from videos in this case, has to do with velocities, or optical flow. We're actually going to spend a whole lecture next time talking exclusively about optical flow, but just to give you a sense: this is a video taken from the perspective of a car that's moving around, and the arrows correspond to the estimated motion in the image. So if the arrows are pointing some way, it means objects in that portion of the image appear to be moving in that direction, and the length of the vector corresponds to the magnitude of the estimated velocity. And this is what drones use for velocity estimation. You might have noticed that our quadrotors, and other drones as well, have a downward-facing camera; we've mentioned this a couple of times before, and we'll talk more about it in the next lecture, but this is what it looks like: you have a camera facing downward that can see the ground moving, and based on the apparent motion of the ground the drone can figure out its own velocity. You also need to know the height the drone is at to get the magnitude correct, but if you have a height sensor you can combine it with the information from the downward-facing camera to estimate the velocity of the drone, and then you can integrate that to get the position as well. Here's another kind of information you can extract from vision: not just geometric information like depth or velocities, but also semantic information, for instance object detection and segmentation. Here again there's
a video of different scenes, and each pixel has an associated label: road, sidewalk, car, building, and so on. So each pixel in the image is getting automatically labeled with some semantic category that's estimated from the video. Activity recognition is another interesting application: what's going on in the video? This is not a state-of-the-art model, I think it's from a few years ago, but it has 95 percent confidence that someone in the video is playing the piano, along with some other labels at low probability. Another important application, especially these days, is captioning images, automatic image captioning: given these images, can you automatically write a caption for each one? And it's relatively good. For the first image, "man in black shirt is playing the guitar" is a pretty accurate caption of what's going on. "Construction worker in orange safety vest is working on road" is also a pretty good caption, and it's surprisingly detailed, right, it's getting the color, construction worker, safety vest, and so on. The third one is not quite as accurate: "two young girls are playing with the Lego tower". It's probably not two young girls, I think it's a young girl and someone older, maybe an adult, but it's still reasonably good, and these things are getting better and better. Question: is this hand specified, or is it more that there's a labeled data set? Yeah, definitely the latter: the semantic information is being automatically learned, and we'll say more about that in later lectures, but there's a gigantic data set, basically the entire
internet, all the images on the internet with associated captions, and the model learns how to take a new image and come up with an associated caption. There are more nefarious applications as well that people have been talking about quite a bit; let me just play a few seconds of this. [video about deepfakes plays] I won't play the whole interview; I put the URL to the video in the slides if you want to take a look. Okay, so these are many different applications, all involving vision in one way or another. If you abstract away the details, one way to think about problems in computer vision is as coming up with some mapping, some function, from either a single image or a sequence of images, a video, to some label. So image or object recognition: given an image, figure out what's in it, is there a cat, is there a dog; that's one example, where you go from an image to a category of object. Activity recognition: you go from a sequence of images, a video, to some estimated activity. Optical flow: you go from a sequence of images to an optical flow estimate. And if you abstract things away enough, maybe this is not a particularly useful abstraction, but you can think of robotics as some mapping from pixels to actions, some sequence of images to control actions for your robot. Nowadays there's also been a massive amount of progress on the inverse problem: not going from image to label, but rather from label to image. So for instance you give as input some piece of text, a sentence, and the desired output is an image that's automatically generated corresponding to that text. There are many different
machine learning models that have come out over the last two years; one of the most recent is called Stable Diffusion. This is open source, and I put the link so you can go play with this model. Don't do it now because it's super distracting, but maybe after the lecture try out some of these; I was trying a couple last night. Basically the way it works is you type in some sentence in English, that's the sentence at the top, and it automatically generates a bunch of images corresponding to whatever you typed in. So I typed in "a robot contemplating the meaning of life", and these are actually surprisingly good, right? You really get the sense, especially from the first three, that these robots are really contemplating the meaning of life, and they're also pretty photorealistic-looking images. Here's another one, a bit more wacky, just to make the point that these are not images that exist on the internet; they're automatically generated on the fly: "a quadrotor that looks like a parrot". I think maybe three of them are pretty reasonable; I'm not sure what's going on in the top right, that's a parrot but I don't think it's a quadrotor, but the others, especially the bottom left one, are pretty good. You can type in all sorts of random things and it will do its best to come up with images matching those descriptions, and you can even specify styles: you can say a quadrotor that looks like a parrot in the style of Picasso or something, and it will come up with images in that style. Okay, so again, abstracting things away, you can think of computer vision as mapping from images to labels, or labels to images, and computer vision is all about coming up with these
functions, these mappings, somehow: either manually specifying these functions, which is the old-school way of doing computer vision that we'll spend a little bit of time on in one lecture, or learning these functions from gigantic data sets, which we'll also spend a few lectures on. All right, so why is vision powerful? It seems more powerful than other sensing modalities, like lidar for instance, and I think the main reason is that vision provides an extremely rich source of information: it's not purely geometric information you're getting from vision, you get something about semantics, about meaning, and that's hard to get purely from a range sensor like lidar. Another important practical factor is that cameras are usually passive, not all cameras, but many; they don't rely on emitting signals of their own. With lidar there's a laser beam going from the lidar into the environment and being reflected back; that's not the case with a standard camera. This makes vision pretty energy efficient compared to lidar and can make the cameras much lighter as well, which is important for robotics applications in particular, where the payload of your robot might be pretty constrained, the payload of a drone for example. Even for an autonomous car, if you have a fleet of cars using heavier-duty sensors, that can add up to a larger energy expense, so that's another practical argument for using pure vision. Okay, so why is vision hard? I think we know the answer now, that vision is super hard, but back in the 1960s, when people first started thinking about automated computer vision, it wasn't obvious at all that vision was
going to be hard. In fact the expectation was that vision was going to be super easy, so easy that you could solve it in a summer. This is the Summer Vision Project at MIT, led by Seymour Papert, who was a professor at MIT, and the description for the project reads: "The summer vision project is an attempt to use our summer workers effectively in the construction of a significant part of a visual system. The particular task was chosen partly because it can be segmented into sub-problems which will allow individuals to work independently and yet participate in the construction of a system complex enough to be a real landmark in the development of pattern recognition." Needless to say, there was no real landmark in the development of pattern recognition in 1966 based on a summer's worth of work, and I think our intuition here is just completely off: vision seems easy because it's so easy for us as humans. We go about solving all sorts of really hard vision problems in a seemingly effortless manner. To get some intuition, if you haven't thought about this kind of thing before, or even if you have, it's useful to sit down and think about how you would do something like object recognition: given an image, decide whether or not there's a cat in the image. Think about what makes a cat a cat, and how you would describe that as some kind of mathematical function. You could say cats have ears or whiskers, but describing those things geometrically, mathematically precisely, in a way that covers the space of all cats, is a really challenging problem. Any questions so far? I'll pause for a second. Okay, so to get some more intuition for why vision is hard, let's start with the real
basics of vision. So what is an image? We said vision is about going from images to labels, or labels to images, so what exactly is an image? Loosely, an image is some digital representation of the visual information you get from a camera; you can think of it as an array of pixels, and there are different kinds of images depending on the application. The simplest kind is a binary image: each pixel, each location in the image, is just a zero or a one, and what this corresponds to is whether the intensity of light measured at that pixel is above or below some threshold; above the threshold it's a one, below the threshold it's a zero. A slightly richer representation is a grayscale image, where the pixel value represents the level of intensity of the light measured at that pixel. And then there are color images, where a pixel value is a three-by-one vector representing the intensities of specific colors; RGB is maybe the most common representation, so the intensity of red, the intensity of green, and the intensity of blue. Here's an example of a grayscale image, heavily pixelated just so you can see what it looks like: the grayscale image is on the left, and on the right is what the computer is getting. Somehow, from this array of numbers corresponding to different light intensities, you have to decide whether or not there's a face in the image. Maybe this gives you a sense for why this is so complicated: there's a massive amount of information embedded in that array of pixels, and from that high-dimensional vector you need to make some decision about whether or not there's a face in the image. Okay, so how are images actually formed?
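Before turning to image formation, the binary, grayscale, and color representations just described can be made concrete with a small sketch. This uses NumPy, and all the pixel values are made up for illustration:

```python
import numpy as np

# A tiny 3x4 grayscale image: each pixel is a light-intensity value in [0, 255].
gray = np.array([[12, 40, 200, 220],
                 [ 8, 35, 210, 230],
                 [10, 30, 190, 225]], dtype=np.uint8)

# Binary image: 1 where the measured intensity is above a threshold, 0 below.
threshold = 128
binary = (gray > threshold).astype(np.uint8)
print(binary)  # the bright right half of the image thresholds to 1

# Color image: each pixel is a 3-by-1 vector of (red, green, blue) intensities.
color = np.zeros((3, 4, 3), dtype=np.uint8)
color[..., 0] = gray  # put the grayscale values into the red channel
print(color.shape)  # (3, 4, 3): rows, columns, color channels
```

The threshold of 128 here is an arbitrary choice; in practice it would be tuned to the scene.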
The simplest model of image formation is a pinhole camera, and this is actually a surprisingly useful abstraction; there's a lot more going on, which I'll mention in a bit, but it's a reasonable place to start thinking about the geometry of image formation. The setup is a camera that just has a small hole, a pinhole, that light can pass through: light reflected from objects in the environment passes through the pinhole and hits the screen at the back of the box, the back of the camera, and you measure the intensity of light at every location on that screen. Here's a two-dimensional model of the pinhole camera. The light is coming from the right and the screen is on the left, and you have some point at distance z away from the front plane of the camera, and at height y away from the line through the middle of the camera. The camera has some focal length f: the distance between the plane that contains the pinhole and the plane that contains the screen, the image plane. If you denote the world coordinates of the point out there in the world as y and z, then the image coordinate, just one coordinate because we're looking at 2D image formation, is y'. Using some basic trigonometry, y/z, which is the tangent of the angle, equals y'/f, where f is a parameter of the camera, something you can measure, and so y' = f y / z. One thing to note here is that the image is flipped: positive y corresponds to the up direction in the world, while positive y' corresponds to the down direction in the image frame. You can also generalize this to 3D: now x, y, z are the coordinates of some point in the world, and again you have some distance f, that's your focal length,
from the plane containing the pinhole to the image plane, and now you have two image coordinates, x' and y'. Again, using basic trigonometry you can derive how a point located at (x, y, z) gets mapped to an image coordinate: x' = f x / z and y' = f y / z, and this depends on the parameter of your camera, the focal length. Question? Yes, exactly: if the focal length is larger, the object will appear larger in the image as well. Good, other questions? Okay. So like I said, the pinhole camera is a useful start for understanding how images are formed, but real image formation is much more complicated, and you need to take into account a whole bunch of factors, including the specific lens your camera has, distortions that come from the lens, pixel discretization, the fact that you have a finite number of pixels on your camera, say a 10-megapixel camera, reflections, and many other factors. There's a whole field of computer graphics that looks at how to do realistic image generation, and some of this is being automated, as I showed with machine learning models like Stable Diffusion. So that's one part of what makes vision hard: the image formation process itself is complicated. The other big reason that vision is hard, in robotics in particular, is that the world itself is pretty complicated, and there are a number of maybe even counterintuitive factors you need to take into account when thinking about vision. I'll go through some of these challenges. One challenge has to do with projections. Actually, let me play the video: it's an image painted on the ground, and it looks like a little girl
picking up a ball or a balloon or something like that, and if you look at it from the right distance it actually looks like it's 3D: a 2D painting on the ground is being projected onto your retina, and your mind perceives it as 3D. I think the purpose of this was to make people slow down; this was in some school area where people were driving too fast. I don't know if it's actually safer or not, but at least it gets people to slow down. Another important challenge, which you'll encounter when we do the final project and the vision assignments, where I think you'll get a real appreciation for some of these difficulties, has to do with illumination, or lighting. These are three images corresponding to the same underlying scene, the objects in the world are identical, and the only thing changing is the lighting. In the first one there's a light source right in the middle of this ring of penguins or whatever they are, and the other two have lighting at different locations, and the resulting images look very, very different. If you focus on any particular pixel and compare the intensity of light at that pixel across the three images, they look very different, and that poses a major challenge: if you want to do something like object recognition, you need to somehow discount the effect that changing lighting conditions have. You still want to say this is a cat or a dog no matter what the image looks like due to the lighting conditions it was taken under. Here's a really interesting one: shadows. This is an optical illusion that goes back to Adelson. I
guess, how many people have seen this illusion before? Okay, some of you. So what's going on here is that cell A and cell B actually have the same intensity in the image, which is not what your mind is perceiving: your mind perceives A to be darker than B, probably for most of you. But if you look at the actual image, and here's a helpful way to look at it, the exact same image with just a little gray strip added connecting A and B, now you see that A and B have the exact same pixel intensity, the same grayscale value. Somehow your mind is changing that and perceiving A to be darker than B, because it has some prior knowledge about shadow formation. Question: would we want our algorithms to behave the same way? Yeah, that's a good question; it depends on what exactly we're trying to do. If we're trying to understand the 3D scene, maybe it makes sense to have the same kinds of biases that humans have, but if you're simply querying whether the pixel value at A equals the pixel value at B, that's an objective thing, and you'd want any algorithm to say they're equal. So it depends on the specific application: for some applications it makes sense to bake in some of the prior knowledge humans have about 3D structure and shadow formation, and for other applications it may not. Another important challenge has to do with scale. This is particularly important when you have a single camera, a
monocular camera looking out at the world, and it's going to be a challenge in the final project, because our drone has a monocular camera, not a stereo camera. It's fundamentally impossible, without further assumptions, to resolve scale: just looking at the size of some object in the image doesn't tell you the size of the object out there in the 3D world, unless you make some assumptions, maybe about the size of known objects in the scene. Viewpoint is another source of variation, usually a nuisance you want to get rid of, same as with lighting: these are images of the same object taken from different viewpoints, and the images look drastically different, but if you want to do something like object detection, you want to discount the changes that appear because of the change in viewpoint. Occlusions, this is a super important one in robotics, since often things are occluded. Here there are two humans whose lower bodies are occluded by these cars, but if you want to do pedestrian detection you need to somehow be robust to these kinds of occlusions. In this case the humans are facing left and probably going to move to the left, but it could be that they start going to the right, and so if this image was taken from the perspective of an autonomous vehicle, you first want to detect that there are humans behind the car, then detect their pose, which way they're facing, and then potentially use that to make a prediction about where they're going to go; if they're going to cross, the autonomous vehicle should stop and not move. And background as well:
depending on what you're trying to see in the image and what the background looks like, that's another factor you want to discount: you want to do object detection no matter what the background is. Here's another interesting one: deformations. Often, especially in robotics, there's a common assumption that objects in the world are rigid, which of course they're not; you can have articulated or deformable objects, and that's another nuisance factor. You want to be able to detect that this is a person even though the exact geometry of the person is changing, deforming, as is the case over here. Here they're doing 3D reconstruction: taking this video and coming up with a deforming model that captures the 3D shape of the person. Okay, questions on any of these factors? All right, so let's look at a concrete vision problem to build a bit more intuition and to introduce some of the basic techniques that we'll describe in more detail in later lectures. We're going to think about the problem of lane detection as a kind of toy vision problem, a running example. This is motivated by an autonomous vehicle application where you want to do lane keeping: you want to keep the vehicle centered in the lane, and for that you need to detect the boundaries of the lane; we'll think about a toy version of this. The way one could start to go about this is to do some kind of edge detection: figure out where the edges in the scene are, and then once you can do edge detection, figure out which edges correspond to the lane
that the vehicle is in. Even this relatively simple-sounding problem of edge detection is surprisingly complicated. So what exactly is an edge? An apparent edge in an image can be caused by many different factors. For example, a surface normal discontinuity: if I have an object like this and I look at the normal vector to the surface, there's a sharp change going from somewhere over here to somewhere over here, and that appears as an edge in the image. A depth discontinuity: if you're looking at this object from over there, the depth has a certain value over here, and if you move sideways the depth suddenly changes to something else, and that appears as an edge as well. A color discontinuity is another case where you see edges. And an illumination discontinuity: the boundary between shadow and not-shadow also appears as an edge in the image. So let's think about a really simple way to do edge detection. Actually, let's pose this as a question: say the image is just a grayscale image corresponding to a one-dimensional array; how would you go about detecting edges in the image? Okay, the difference between adjacent pixels, good. You can look at each pair of adjacent pixels and compute the difference: going from 4 to 152, that's a big jump; 41 to 113 is a smaller jump, so it's a slightly less clear edge, but you can do some kind of thresholding, taking the differences between each adjacent pair of pixels and looking at where that difference is high in magnitude. What I'm doing here is looking at the difference between a pixel and its left neighbor; there's some kind
of padding that needs to happen here, so you have to be careful about what goes on at the edges of the array. What I'm doing implicitly here is extending the image a bit to the left, actually just to the left, not the right: I'm assuming there's a fake, artificial pixel to the left of the image with the same intensity value as the leftmost pixel. Then for adjacent pixels we just calculate the difference: seven minus five is two, six minus seven is negative one, and so on, and you see that there's a peak here, at 72. You can set some threshold and say that if the intensity difference is larger than the threshold, that corresponds to an edge. So here's the specific operation we were doing, and a useful way to organize this computation is with what's known as a convolution. I'll describe it in this edge detection scenario, but the general idea is going to be super useful later on as well. What we're doing here is taking a one-by-three vector, (-1, 1, 0), sliding this vector along the image, and calculating the dot product between this one-by-three vector and the three corresponding pixels in the image. So over here, for example: it's negative one times five, plus one times seven, plus zero times six, which equals two. And then we do this at every location: we take the one-by-three vector and calculate the dot product with the corresponding portion of the image, so here it would be negative one times seven, plus one times six, plus zero times forty-one, and so on. That's the exact same computation we did before, just described as a kind of sliding dot product. Any questions on that? Okay.
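The sliding dot product just described is easy to write out directly. Here's a hedged sketch in Python/NumPy: the first five image values are reconstructed from the numbers mentioned above (differences 2, -1, and the peak of 72 at the 41-to-113 jump), the trailing values and the threshold of 30 are made up for illustration.

```python
import numpy as np

def convolve_1d(image, kernel):
    """Slide `kernel` along `image`, taking a dot product at each position.
    The border is handled by replicating the edge pixels, so the output
    has the same length as the input."""
    k = len(kernel)
    pad = k // 2
    padded = np.concatenate([[image[0]] * pad, image, [image[-1]] * pad])
    return np.array([np.dot(kernel, padded[i:i + k]) for i in range(len(image))])

# 1D grayscale image with one sharp jump (the last two values are filler).
image = np.array([5, 7, 6, 41, 113, 110, 112], dtype=float)

# Kernel (-1, 1, 0): each output is (current pixel) minus (left neighbor).
diff = convolve_1d(image, np.array([-1.0, 1.0, 0.0]))
print(diff)  # differences: 0, 2, -1, 35, 72, -3, 2

# Threshold the magnitude of the difference to declare edges.
edges = np.abs(diff) > 30
print(edges.nonzero()[0])  # positions 3 and 4: the jumps to 41 and to 113
```

Note that because the kernel's rightmost weight is zero, only the left padding actually matters here, matching the padding scheme described in the lecture.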
So we chose a particular value for the vector we're computing the dot product with, (-1, 1, 0), but you could pick other choices as well: for example (0, -1, 1), or (-1, 0, 1), where you look at the difference not between adjacent pixels but between pixels that are one removed. And think about how this extends to 2D images; it's the same basic idea: instead of a one-by-three vector you would have, say, a three-by-three array, and you take a dot product and slide that three-by-three array over different portions of the 2D image instead of the 1D image. All right, can anyone see what the challenges with this approach to edge detection are, just looking at the differences? How might it not work well in practice? Maybe there's a softer edge rather than a sharp one? Yep, that's certainly an important consideration: the edge might not be as sharp, and the threshold value determines whether or not you pick up edges; if there's a shallow rise in image intensity, this might not pick it up. Other challenges? Yes, the extension to 2D is slightly more complicated: you'd want to detect edges at different orientations, and as described this only handles one direction, but you can modify it, setting up multiple arrays that correspond to differences along different directions, diagonal edges as well; that might actually be a useful exercise. Yep, good, and that's related to the previous point about just doing a very
local computation: we're just looking at a three-pixel patch and doing a computation based on that. So we will see other challenges with this. Actually, the challenge I wanted to point out is noise: if you have an image that's changing very sharply, with a bit of noise, and again we're thinking of this as a 1D image, then the image intensities are basically the same for a bit, rise up, and flatten again, but locally there's a bunch of noise. If you look at the derivative, just these differences, it looks extremely noisy, and setting a threshold here is not going to work particularly well. Questions? Yeah, so one option is to smooth things out. Before doing that, here's a 2D version of the same issue: the original image is on the left, and just a little bit of random noise added to each pixel causes this noisiness in the image. All right, so like you said, one option is to smooth out the image before we do the edge detection computation. There are many ways to do this; maybe the simplest is called mean smoothing. Mean smoothing you can implement with the array (1/3, 1/3, 1/3), and it's the same operation as before: you take this array and compute the dot product with the corresponding portions of the image, and what this does is replace each pixel value with the average of itself and its two neighboring pixel values. Another way to implement smoothing is what's known as Gaussian smoothing: you take the probability density function of a Gaussian distribution and discretize it, and this is a very coarse discretization, one quarter, one half,
one quarter um so what that is doing is giving more weight uh to the pixel that we're looking at also taking in some contribution from the neighboring pixels so it's taking a weighted sum with a weight for a particular pixel is one half and it's neighboring pixels R1 quarter that's another way to to do smoothing uh the nice kind of feature about this way of doing smoothing is you can um like change how much smoothing you're doing so if you look at the the variance of this gaussian distribution if you make it wider then that's doing more smoothing if you make it like very uh kind of picky if you make the very small then that's not doing much smoothing I guess that's the extreme case if you just add zero one zero that's doing no smoothing at all right this is taking each pixel and replacing it with the the original value and you can think of that as the limit as the the variance of the gaussian goes to zero um yeah if you want to do more smoothing I guess a different way is to choose a larger window size so there's nothing fundamental about like a one by three Vector you could take a one by five vector or even larger and you can do mean smoothing or just like gaussian smoothing uh with this larger Vector so the vector is called a kernel uh the vector that you kind of slide along the image and take dot products with and you can extend these to 2D as well so here's mean smoothing with uh a 2d convolution so now instead of a one by three array we're looking at a three by three array uh each element of which is is one line and you take this little kind of image patch like this three by three patch and then you slide it along different portions of the image and you calculate the the dot product so you look at a three by three batch in the image take the element wise product between that three by three patch and this 3x3 Matrix of of all one lines and then some of those element wise products question are the rest of [Music] them yeah so at the corners or at the edges you 
have to be a bit careful; there are different ways of handling the edges. One way is just to take the boundary of the image and extend it, so it's basically padding the image, and then you can do this convolution operation. Question: so is this changing the center of that square as it's moving over? Yes, exactly: for each center, with the kernel anchored at that center, you calculate this dot product, and then you replace the pixel value at that center with the value you get from the dot product. Question: so this would only help where there are too many edges, in the sense that it's finding too many variations, but not the case where it's not finding any? Yes, so this is specifically targeting the issue of noise. It's kind of what you're saying: there are too many edges in the sense that the derivative is jumping quite a bit, and by smoothing it out you're only focusing on the large jump and not the small local jumps. All right, so that's what this looks like: we have a noisy image, you do this smoothing, and you get a slightly blurrier image with this mean smoothing filter. And this is the Gaussian smoothing version instead of the mean smoothing: you take a two-dimensional Gaussian probability density function, you discretize it, and that's the patch that you get. This is a five-by-five patch; you can again work with other sizes, but it's the same basic idea: you take this image patch, slide it along different portions of the image, calculate the dot product, and replace each pixel value with the computed dot product. And like I was saying before, one nice feature about Gaussian smoothing is that you can change the amount of smoothing by changing the variance of the Gaussian. On the left is the original image, and then you sequentially take larger and larger variances for the Gaussian, so you're taking averages across a larger neighborhood around each pixel, and you can see that it gets blurrier and blurrier as you increase the variance of the underlying Gaussian. All right, questions on this? Right, that's a good question, so not necessarily; you are losing some information. I guess the question is: could you in principle reconstruct the original image from the blurred version? No, I think not, because you're replacing each pixel value with the average of its neighbors, you're only getting the average and not the exact pixel values of the neighbors, so you're losing some information. You can reconstruct it to some degree, but you're not going to get it exactly. Good, other questions? Okay, so what we just described is called linear filtering, linear because we're taking a linear combination, a dot product of some image patch, a weighted sum of nearby pixel intensity values. And just a piece of terminology, like I mentioned before: these arrays or matrices that you convolve with, that you take the dot product with, are known as kernels. Okay, so just as a check, to make sure we understand what this convolution operation is doing: say I give you this kernel. What is this doing? What is the transformed image going to look like once we do this convolution operation? Go ahead. Yep, exactly: it moves the image one pixel to the left. The reason is that if you focus on one particular pixel, that pixel value is being replaced by the pixel value of its neighbor to the right, so the aggregate effect, if you do this on every pixel, is that the whole image is shifted to
the left; you're just copying everything over one pixel to the left. Okay, so we can apply these filters and do these operations sequentially. For instance, you can first apply a smoothing operation to smooth out the image, and then apply the derivative operation to do edge detection. You can think of this as multiple stages of processing, and you can make it arbitrarily complicated with more and more filters applied sequentially. All right, a question? Yeah, so you can do a lot with different kernels. Like I said, you can rotate the image, maybe 90 degrees or 45 degrees, and it's useful, I think, to think about what these geometric operations look like and what the corresponding kernels are. Even with these simple operations, especially if you chain them together, you can do really complicated image transformations. Good, other questions? Good, yeah. So the former: I think the way to think about it is that you do all of these operations in parallel; you don't do one and then the next. Actually, it might be easier to just draw this out. Let's look at a 1D array, say [2, 4, 7, 13], and let's say we're doing mean smoothing; actually, let's make it slightly bigger. The first thing we do is pad the image somehow, to extend it, so we can add in an extra 2, let's say, over here. Then we take the average of these three pixel locations, 2, 2, 4, so that's 8/3. But it's not the case that when we do the next calculation we use 8/3 as the pixel value. What we do instead is look at the average of these three, 2, 4, 7, so that's 13/3 for this one, and so on. So I think maybe the cleanest conceptual way to think about this is that you do all of these dot products in parallel, and then create the transformed image from that. Does that answer the question? Yeah, good. Other questions? Okay, so what I wanted to say is that until around 2012 or so, computer vision was largely dominated by hand-engineered approaches. Basically, what people would do is come up with cleverer and cleverer kernels for implementing different geometric operations, hand-designed kernels corresponding to different features in the image, things like edges, corners, and more and more complicated features, and then chain these computations, this kind of sequential processing, together to do something like object detection, for instance. Nowadays, vision is almost entirely dominated by deep learning. The basic computations are actually similar or even identical to what I described, specifically convolutions, but the idea is that instead of specifying these kernels by hand, you learn them, given lots and lots of data. The image-processing pipeline still has the same basic structure, where you do some operation implemented using this convolution operation, then you take the result and process it further, and so on, but that processing is learned automatically given a gigantic data set. All right, a question? Yeah, so the input is some image representation. This depends on the data set; usually it's RGB, so the image is represented as an array of RGB values, and then you have some associated label, like what object is represented in the image, for instance. And it's the same basic pipeline, doing these calculations sequentially, with some non-linear operations as well, which we'll go into more detail on, but you basically learn what these kernels are instead of specifying them by hand; that's the major difference. All right. Actually, one more comment: we'll get to deep learning in two lectures, but I think it's useful to spend a little bit of time trying to do things by hand, just to appreciate the challenges and to build some more intuition about vision. So we're going to do that in the next lecture. In the next lecture we're going to talk about optical flow, which is the problem of figuring out where things are moving in the image, or where things seem to be moving in the image, and we're going to do this in kind of the old-school way: we're going to think hard about the problem and come up with an algorithm, and going through this process is going to help us understand some of the challenges and some of the features of computer vision. After that, we'll think about how to use neural networks to learn these kinds of computations automatically, given some data set. All right, I think that's all I had. Questions? Good. Say that one more time? Yeah, okay, good: typically a given data set will just have images of the same size, and if it doesn't, then you do some kind of subsampling or supersampling to make the images the same size. All right, so we left some time to hand out camera modules for the drone. You'll install them, which is relatively simple to do, for the next assignment, and then we'll use them in later assignments as well. So if one member from each team can just come up here, Eric will hand out the camera modules. I'll see you in class.
Introduction_to_Robotics_Princeton
Lecture_19_Princeton_Introduction_to_Robotics_Intro_to_deep_learning_for_vision.txt
All right, maybe we can go ahead and get started. The plan for today is to continue our discussion of computer vision. Just to remind you: in the previous lecture we discussed optical flow, which is the apparent motion of objects as they appear in a sequence of images that a camera is capturing. We talked a little bit about the biological inspiration for thinking about optical flow, and in the assignment you are thinking about how you can use optical flow to compute the velocity of the drone and also time to collision, to do obstacle avoidance, for example. We spent last lecture using kind of old-school methods, the Lucas-Kanade algorithm in particular, which was developed around 30 or so years ago, and we saw some of the challenges of doing things in an almost model-based way, where we think about the physics of the problem, make some assumptions, and then come up with an algorithm to compute the quantities we want. The main challenge has to do with the fact that we made some relatively strong assumptions when we implemented the Lucas-Kanade algorithm for optical flow. In particular, there was the spatial coherence assumption, that the optical flow of one pixel is similar to, or the same as, the optical flow of the pixels around it, and we also assumed that things were moving relatively slowly in the scene. The last assumption was color constancy: objects, when they move, basically appear the same after they've moved. None of these assumptions is truly satisfied in the real world, and when the assumptions break, the approach can work pretty poorly. That's something you should try as part of this assignment: maybe get a video where some of these assumptions are not completely valid and see exactly how the approach breaks. So what we're going to start doing today is talk about learning-based approaches to computer vision, specifically deep-learning-based approaches. Maybe before we start, I'm curious to get a quick poll. I think year to year the number of people who have seen some deep learning keeps increasing, so I'm curious what the fraction is this year. How many of you have already seen, let's say, convolutional neural networks? Okay, all right, so not everyone. I think this is a bit of a challenge, because some of you probably haven't seen much deep learning, while some of you have taken courses in computer vision or machine learning where you've seen some of this material. I'll try to make things a bit more robotics-focused, but if you have seen some deep learning before, there might be some overlap for sure. All right, so here's a quick intro to the recent history of deep learning. As maybe some of you know, there was a big shift around 10 years ago, in 2012, particularly in the context of image recognition. These are error rates for doing image classification on the ImageNet data set. ImageNet is a large data set; at least at the time it was kind of gigantic, though by today's standards maybe it's not. It's a relatively large data set of images corresponding to different objects, with class labels corresponding to those objects. Back in 2010 and 2011 the error rates were relatively high, somewhere between 20 and 30 percent, and then suddenly in 2012 people started using deep-learning-based methods to get state-of-the-art performance on image recognition, and you can see the errors go down pretty dramatically from 2011 to 2012: from around 22 or 23 percent error, which is pretty high, to 16 or so percent, which is significantly lower. And the error rates have continued to go down; by 2017, which is already five years ago, they were down to about two percent or so. Human-level performance on this image recognition task, if you get a bunch of humans to spend weeks understanding the different kinds of dogs and cats and all the different kinds of objects that appear in this data set, is about five percent, so we surpassed human-level performance back in 2015 or so, more than seven years ago, at least on this data set. Okay, so what's led to this revolution in machine learning? I think there are roughly three things, or three categories of things. The first is data: lots and lots and lots of data. I already mentioned the ImageNet data set, which had a massive impact on the field, and there's a whole bunch of other data sets for pretty much any computer vision or machine learning task you can imagine. These are some examples from ImageNet, so you can see
that they're relatively obscure categories of objects, and they're hierarchically categorized, going from mammal to placental to carnivore to canine to dog, and then specific kinds of dogs, specific kinds of vehicles, specific kinds of cats, and so on. More relevant to robotics is the Cityscapes data set. These are images from different cities, and associated with each pixel is a semantic label: for each pixel there's a label that says road, or pedestrian, or bicycle, or tree, and so on, so these are densely labeled images, labeled with the corresponding semantic categories. Like I said, 50 different cities, with pixel-level annotation of the different semantic categories. We talked about optical flow in the last lecture, and I think someone asked about data sets for optical flow; there's one called FlyingThings3D. These are synthetically generated images, or videos, of a bunch of random flying objects in simulation. You can see in some of the images there's a car and a whole bunch of random things like sofas that are flying around in 3D, synthetically, in simulation, and then you generate what an image or a video would look like if you had a camera looking at these randomly moving objects. The nice thing is that because everything is happening in simulation, you can calculate the ground-truth optical flow; you know exactly where each point in the scene is moving, and so that's your label, which you can then use machine learning techniques on, going from a sequence of images to labels corresponding to optical flow. Optical flow is harder to do with real-world images. You could do it if you had a motion-capture system, for example: you could place markers on some object, move that object around, and track it using the motion-capture system, so you know exactly the ground-truth motion of the object; then put a camera off to the side and see what the different motions correspond to in terms of video. That's one way to do it, but it's pretty data-intensive, whereas in simulation you can generate lots and lots of videos of random objects doing random things, and this seems to work reasonably well for optical flow. More recently, some autonomous car companies have also started making some of their data openly available. There's the Waymo Open Dataset, which I think was released maybe a year and a half or so ago. This has a whole bunch of things; it's not just for computer vision but also for pedestrian trajectory prediction. They have real-world data of pedestrians and bicycles and other cars moving around next to an ego vehicle that's collecting all the data, and this is important if you're trying to do autonomous navigation in cities. If you want to use planning algorithms, one thing you might want, and probably do want, your autonomous vehicle to do is predict what other agents in the scene are going to do: is the pedestrian going to stop, is the person going to move, and so on. These kinds of data sets allow you to learn to make those kinds of predictions. And this is just a quick snapshot of the different data sets that are available; as I mentioned, there are tons and tons of data sets for all kinds of tasks you could possibly imagine. More specifically, it's not just massive amounts of generic data; a lot of the success behind deep learning, in computer vision in particular and in other areas as well, comes from the fact that we have labeled data. With image recognition, the ImageNet data set and other data sets have, corresponding to each image, some label, like the category of object represented in that image. With optical flow, as I mentioned, there's the ground-truth optical flow that you can get if you're doing things in simulation. With the semantic data set, humans have annotated the pixels with the different semantic categories, like pedestrian, road, and so on. Nowadays there's a whole economy, a whole ecosystem, around data labeling. There are companies that do this as their main service: you give them some raw data without labels, you describe what you want as labels, and these companies have people doing this kind of annotation, partially by hand but also partially automated. With the semantic labeling example, one thing you might do is segment the image, break it up into different portions, and then the human just needs to say what the label is for each segment, instead of going pixel by pixel and labeling each pixel individually. So there are tricks to make this faster, but it's still pretty human-intensive to generate high-quality data sets for this kind of machine learning. So what does labeled data allow you to do? It allows you to use supervised learning, to learn in a supervised manner. To make this concrete, again in the context of image recognition: let's say you have a training data set, a bunch of different images and their corresponding labels. What you can do is learn that mapping, the function that takes as input some image and outputs a category of object. Then at test time, when you actually deploy this learned system, you can give it a new image that was not part of your training data set, and ask what's the corresponding
label, and you basically apply your learned function to get a prediction of what the label is. All right, so that's one major factor that's made modern machine learning really successful. The second is computation: lots and lots and lots of computation. More specifically, GPUs, graphics processing units, have really been the computational workhorse for machine learning. Originally these were developed for video games, to make graphics run faster, and it turned out that a lot of the computations needed for video-game applications were also useful for machine learning applications. They're particularly good at computations that are highly parallelizable: if you're multiplying big matrices with big vectors, a lot of the computations there are simple and parallelizable, and GPUs are good at exactly that. This is just my desktop computer; it has four GPUs, which is something, I mean you can do something with it, but if you're trying to do really large-scale things, nowadays people have servers with hundreds of GPUs, particularly if you want to do anything in the context of large language models, or even for computer vision and other machine learning applications. Okay, so that's the second major factor. The third major factor is neural networks: algorithms and architectures for learning. So here's a very quick intro to neural networks. Historically, they were originally inspired by biological neurons. This is a very, very, very loose inspiration, and I would suggest not taking it super seriously; it's just that, historically, people were thinking about biological neurons and had some very rough models of what a biological neuron does. It basically takes in a bunch of inputs from its dendrites and it has an output, and those outputs are then the inputs to other neurons, and so on. But the actual neurons in our brain are significantly more complicated than what we call neurons in artificial neural networks. Here's an example: this is the MNIST data set, a data set of labeled images of handwritten digits. Each image is just a black-and-white, or grayscale, image of a handwritten digit, and we have a label corresponding to each image. The computation that happens is: you take as input the image, one of these handwritten digits, you pass it through a neural network, and that gives you a prediction, an output, for what we think the label is. Early neural networks go back to the 1950s; that's when, I guess, the first wave of neural networks came about, and specifically the perceptron machine designed by the person in the picture, Frank Rosenblatt. This machine was actually connected to a camera that gave you a 20-by-20 image. This is super, super low resolution; if your phone had a 20-by-20 camera you'd be very disappointed, but that's what they had to work with at the time, and he, or the team, was trying to do recognition of alphabet characters, I guess similar to the MNIST data set. And this is the actual machine, the actual neural network: a whole bunch of wires instantiating the computation that the perceptron machine is doing. Nowadays, of course, we don't have this; we just run a neural network on a computer. But at the time they had to physically build a machine, with a whole bunch of wires, that instantiated this input-to-output relationship. It takes in as input some representation of a 20-by-20 image, those electrical signals pass through this physical machine, and that gives you some output label. Okay, all right, so a single artificial neuron has this general structure. You take in a bunch of inputs, represented here by x; you can think of x as just some vector in a finite-dimensional vector space. You take each of the dimensions of x and multiply them by some weight, so you're essentially taking a dot product of a weight vector w with the input vector x; then you add in some constant term, which is called the bias; and then you pass that through some non-linear function f. That's basically the main computation each neuron in a neural network does: take the input x, take the dot product of x with a weight vector w, add in a constant bias term, which gives you a scalar, and then pass that scalar through a non-linear function f. We'll say more about the specific choices of f in a little bit. So, to make this concrete, let's think about a particular example. Let's say we're trying to do pedestrian detection, which could be useful for autonomous vehicles or other robotic systems. Suppose someone has curated a data set of different images, each labeled with a zero or a one: a one if there is a pedestrian in the image and a zero if there's no pedestrian in the image. Okay, let's say we have images of a particular resolution. Here we're going to look at RGB images, so each pixel corresponds to a three-dimensional vector, the R, G, and B values, and just for the sake of concreteness let's say we have relatively low-resolution images, 32-by-32 images. Each image, thought of as an input, corresponds to a vector that lives in R^n, where n is 3072 (32 x 32 x 3). So each image is a point in this high-dimensional
Vector space 3072 dimensional Vector space um so what we're gonna do uh so I guess what I'm going to describe is just the computation that a single layer of a neural network does and then we'll see how we can make this computation more rich but just just to focus on on single layer neural networks um this is the computation that I showed you before so we're taking each uh component of of X and multiplying that by some weight wi and then summing over all the pixels well I guess not just all the pixels but also like the the three channels in their pixel so we're going something from one to uh 3072. uh that gives us a scalar when we take this dot product between the the weight Vector W uh and the input Vector X we add in a scalar um B and then we pass that through a non-linear function that I'm calling Sigma here as I guess any questions on this this computation so we're mapping an input which is which has Dimension 3072 to just a scalar value questions on the computation right uh yes yeah so the uh yeah the Milestone is like outside the the summation good all right other questions okay um and yeah I guess just a piece of uh terminology so that non-linear non-linear function Sigma uh is sometimes called the activation or just the non-linearity we just got some specific uh choices for that in a bit um so if we Define uh our width uh Vector W so we can think of this as a column Vector so column Vector that has Dimension three zero seven two uh the input if we think of that again as a column Vector of Dimension three zero seven two the biases a scalar we can rewrite this computation as about products so we're taking the dot product of the weight Vector with the input Vector that gives us a scalar we add in the slice term that's another scalar and then we pass that through the the activation function okay and the activation function are just mapping from scalar to scale um so you have some particular choices um so what was our question go ahead uh the weights don't 
necessarily have to have to come to one uh so there's some normalization that can happen uh using the uh the activation function so I guess here's one possibility for the the activation function this is just a sine function sigm so if the dot product plus the biasm so the input the sigma is positive then Sigma maps that to positive one if the input to Sigma is negative then you map that to negative one so this is one possibility for an activation function um I guess here are a couple of other possibilities so you can choose a sigmoid function that's kind of a smooth version of the sine function um which specific version you choose kind of depends on whether your labels are zero or one or minus one or one in The Pedestrian detection example that I gave before that will yeah I guess continue to use as a running example of saying that the labels are zero or one so one if there is a pedestrian zero one composition so then you might want to do something similar to the the left if your labels are negative one or one then you might want to do something similar to the the right uh another uh I guess really popular activation function or non-linearity function uh is value so this is the rectified linear unit uh if the input to Sigma is positive uh then the output is just whatever the input is so if your input X let's say or not X maybe we can follow something else like Z is positive then you return Z if Z is negative then you just returned a zero so this is another kind of popular one uh yeah there's a whole bunch of different like options for uh for non-linearities uh we make like some difference and it's important to understand what the different choices are but yeah typically it ends up not mattering that much like these non-linearity that I mentioned tend to work pretty well question yeah yeah good so so especially for uh if you use a sigmoid non-linearity you can interpret the output as a conference um so the output is yeah some number between zero and one so if the 
output is very close to one, you can think of that as expressing confidence that the label is one, and if the output is close to zero, you can think of that as expressing confidence that the label is zero. You have to be super careful, though, about interpreting what that means, because it can mean something, but usually it doesn't have the kind of probabilistic interpretation that you would want. What you would want is some kind of interpretation where if you have, let's say, 95% confidence, so if your sigmoid output is larger than 0.95, then for 95% of such images you get the correct label if you use that as your threshold. But that tends not to be the case in practice. So you can very loosely interpret it as a confidence, but, at least without further work, it doesn't have the kinds of probabilistic interpretations that you would want. All right, so I described the computation that one layer of a neural network does. There are a couple of intuitive interpretations for what's going on with that computation. One interpretation is that we're basically doing template matching. So if you think about the weights w as an image in its own right (w, if you remember, has dimension 3072, so you can rearrange that into an image that has dimension 32 by 32 with RGB values), you can think of that as an input image. What we're doing when we do this dot product calculation is just seeing how similar the input x is to the weight image w. And so if you learn these weights w for an image classification problem using some dataset like ImageNet, or in this case CIFAR, these are the weights, if you kind of
visualize them as images, that you end up getting. So if you squint hard enough they kind of look like the label, right? I don't know which one is the clearest; they're not super clear, but maybe the car one seems sort of like a car. So you can think of that as a kind of canonical template of a car; that's the vector w. So if you were given a new image x, we take the dot product of x with this template image, and the dot product is a measure of similarity. If the dot product is high, that means there probably is a car in the image; if the dot product is low, then that means there's probably not a car in the image, at least according to this template. So it's a kind of pattern matching, right, template matching: we have some template described by the vector w, we see how close the input image is to this template, and then we have a threshold that says yes there is a car or no there isn't a car, based on that similarity metric. Does that interpretation make sense? Any questions on that? Yeah. So in this case you don't really need the bias term; it depends on exactly what the nonlinearity function is. It becomes useful actually when you look at multi-layer neural networks, which we'll do in a bit, but you can ignore the bias term, just set it to zero, for this example. Maybe another answer comes from this next interpretation. So another way to think about what a single-layer neural network is doing is that it's doing linear classification. We have this function w transpose x plus b, and you can think of that as defining some hyperplane. So anything that's on one side of the hyperplane has w transpose x plus b
being positive; anything that's on the other side of the hyperplane has w transpose x plus b being negative. So we're basically dividing up the space of all images into two portions with a hyperplane, and the hyperplane is defined by the weight vector w and the bias term b. If you think about it this way, then the bias term basically allows you to not have the hyperplane passing through the origin, which could be useful depending on where the data lie in your space of images. So this is a visualization of what I said: the red hyperplane in this case (we're just drawing a line) corresponds to a car classifier. Anything that's on one side of the hyperplane, one side of the line, is a car; anything that's on the other side is not a car. And then different lines, different hyperplanes, correspond to different classifiers for different objects. Any questions on this interpretation? Okay. So we described just single-layer neural networks, which are pretty limited. So what can a single-layer neural network not do? There are lots and lots of things that it cannot do; these are some examples. If you think about two classes, let's say pedestrian or not, or whatever the two classes happen to represent, that are divided up in any of these ways, then you can maybe convince yourself that there's no single-layer neural network that will classify each of these perfectly. There's no line that separates the red from the blue, right? So if you look at the first image, for any line that you pick, on one side there's going to be some red and some blue, and on the other side there's going to be some red and some blue. So there's no way to have a line where one side is only red and the other side is only blue. Same thing with the disc: any line that you pick is going to
incorrectly classify some of the examples, and then the same thing with this third example as well: any line that you pick is going to have on one side some red and some blue. It's not going to be able to perfectly classify, perfectly separate red from blue. So, these limitations: back in the 1960s or so people realized this; there was a book, actually, that described some of these limitations of single-layer neural networks, and that had a massive impact on the field. It was a book by Marvin Minsky and Seymour Papert from MIT. The specific thing that they pointed out that you couldn't do with a single-layer neural network is XOR, the exclusive-or function. The first example actually corresponds to XOR. So they basically realized that you couldn't implement an exclusive-or function with a single-layer neural network, and from that they convinced people to essentially stop working on neural nets for a while. So that was the first winter of this kind, back in the late 1960s, I think early 1970s. All right, so I guess the answer to that, which I think even at the time people realized but somehow it didn't permeate the field, is that you can move beyond single-layer neural networks, and we'll describe how to do that in a bit. The other question I haven't addressed yet is how we actually choose the weight terms and the bias term. So I just described the computation that you do with a single-layer neural network, but of course ultimately we want to learn what the weight vector and the bias term should be from labeled data. All right, so I'll switch to describing that, unless there are any questions on this. Go ahead. Yeah, so it doesn't directly; the template matching is specific to a single-layer
neural network. You can think of multi-layer neural networks, which we'll see in a bit more detail, as doing the template matching over and over again in a kind of abstract feature space: you start off with the input, you do some template matching, and that gives you some output, and then you do template matching on that, and so on. So you can think of it in that way, but the kind of vanilla template-matching interpretation that I gave is just for a single layer. Other questions? Okay, let me switch to the board and then describe how we can go about doing the learning for the weights and bias. Let's see, so XOR: it takes in two Boolean variables as input, and it's one if, and only if, x is one and y is zero, or y is one and x is zero, and it's zero otherwise. So yeah, in the picture, if you think of not Boolean variables but x and y as positive and negative, that's the correspondence. Other questions? Okay. All right, so let's think about how to learn w and b given data, and just to keep things concrete, let's think about this pedestrian classification example. We have some dataset that we're given, corresponding to images; each image we're going to think of as a vector, and the superscript is going to index the different images in our dataset. And then we have corresponding labels: these are y1, y2, up through y capital N, and these labels are just zero or one, for the sake of concreteness. So basically what we're going to do is try to find some w and b, some weight vector and some bias, that will allow us to match the data that we're given as part of our dataset. This is called empirical risk minimization, or ERM. So: find w and b such that the resulting function fits the
data well, and we'll make precise exactly what that means in a second. So we need to say what we mean by fitting the data well. I'm going to pick a particular activation function: we're going to choose sigma, which again is a mapping from a scalar to a scalar, as this specific function: it's going to return positive one if the input is positive, and it's going to return zero if the input is less than or equal to zero. We can then define a loss function; this is going to be a function that tells us how well any particular w and b does on any particular image and label. The notation we'll use is a letter l, so that's going to be our loss function. It takes in two arguments: one is the predicted label (let's give this a name, we'll call it y_i predicted) and the other is the actual label y_i, and the function outputs zero if y_i is equal to y_i predicted, and one otherwise. So basically this function is looking at the prediction given a particular w and b, comparing that with some true label, like whether it actually is a pedestrian or not, and then if the prediction matches the reality we assign a loss of zero; if the prediction doesn't match, then we assign a loss of one. We can then calculate the training loss, and we're going to call this capital L, which we can think of as a function of w and b, and I'm just going to define this to be the average, where n is the number of images that we have in our dataset. So it's the average loss on our training dataset. Sorry, that should be an i here, for the i-th image. Right, so we're looping over, summing over, all the images in our dataset; for a fixed w and b, if we think of this as the predicted label, like whether there's a pedestrian or not, we can compare that to the actual label; that gives us a loss that we average over the dataset. Question? Right, and then, so
here we're not using that version of the loss; the loss function we're using is what I wrote over here. So we're just looking at whether the prediction, the predicted label, which is the output of the single-layer neural network, matches the true label. Oh, I see, yeah, so it would be equivalent: in this case, since the activation gives you zero or one, you can also write it like this, y_i minus this thing, squared; it would be the exact same thing. All right, other questions? Okay. So what we're going to do then is choose w and b, the weights and bias, to minimize our training loss, to fit our data as well as we can, so to minimize this loss function. All right, so this is one specific choice: we chose a particular activation, which is over there, we chose a particular loss function, and then we can try to minimize this training loss. The challenge with this, well, there are a couple of challenges, but one challenge is that this is a hard optimization problem, and it's hard because things are not smooth, right? So if you try to visualize this (of course this is all in a high-dimensional space, but if you think about, let's say, w and b on the x-axis and the training loss on the y-axis), then it kind of looks like this: it's flat in some portions, and it's discontinuous. If I change w and b a little bit, the prediction on one of the images could go from correct to incorrect, and that can discontinuously change the loss. Question? Oh, sorry, this is on the blackboard. Yes, I think it's paint or something up there. Yeah, so just the
average across the different examples. Other questions? Okay. All right, so how do we get around this challenge of non-smoothness? We can try to smooth things out: instead of choosing discontinuous functions for the activation and for the loss, we can choose something that's a bit more smooth. So one choice is what's known as the sigmoid activation. In this case we'll choose something that goes from zero to one, asymptotically approaching one. So if the input is z, then sigma of z: as the input gets more and more negative, sigma outputs something closer and closer to zero; as the input gets more and more positive, the output gets closer and closer to one. The specific form of this is one over one plus e to the minus z, and that gives you this kind of sigmoid curve. And as we were saying before, you can loosely interpret this as a confidence for whether or not the label is one: if the output sigma of w transpose x plus b is close to one, you can think of that as corresponding to a higher confidence that the label is one; if it's closer to zero, then you can think of that as higher confidence that the label is zero. But again, like I said, you have to be really careful about taking that interpretation too seriously. All right, so that's one option for the activation function. We can also choose a different form for the loss function, and one popular choice is called the binary cross-entropy loss. Again this loss takes two things as input, the predicted label and the actual label, and it outputs a scalar. Here's the specific form; I should have had a visualization of this, but if you plug in numbers, if you plot this as a function of the two arguments, what you'll see is that it assigns lower loss if the prediction is close to the actual and a higher loss if
the prediction is far away from the actual label y. The good thing about this, as compared to the previous loss function we had chosen, is that it's continuous: I can change the predicted y a little bit and that's only going to change my loss by a little bit. And the benefit of that, the benefit of using this specific activation, the sigmoid activation, in combination with the binary cross-entropy loss, is that it smooths out the loss landscape. So instead of the loss looking like that, it looks more continuous, and then we can use gradient descent or its variants to optimize the training loss. All right, so we'll spend more time doing that in the next lecture; I'll describe the basics of gradient descent and some of the variants that are used in the next lecture, but any questions on this? All right, so we'll do two things in the next lecture: one is talk about multi-layer networks, and then talk about optimization of the weights and biases. And the last thing we'll talk about is actually super important, which has to do with generalization. So the scheme I described so far: we're choosing some weights and biases to minimize the training loss, to fit our training data as well as possible. But of course in practice we don't really care about our training data, right? We already have our training data; if we're given some input in the training data, we know what the label is. What we care about in practice is, given some new input, some new image, figuring out whether there's a pedestrian or not, or whatever object is represented in that image. So that's what I mean by generalization: generalizing to new inputs, new images, that were not given to you as part of your training dataset. And
yeah, getting neural networks to generalize can be tricky. It seems like a lot of the time they have some pretty good generalization out of the box, but there are some tricks to make networks generalize well. Any questions on that? All right, I think that's all the technical material I had for today. I have one quick poll that I wanted to take. This year, because of the academic calendar, we basically have an extra lecture: in previous iterations of the course I used to do the midterm review in lecture, but because of the way things were structured that wasn't possible this year, so we have basically an extra lecture. So I wanted to get a sense for what would be most interesting for the class to discuss. This will be not next week but the week after that. I have some possibilities, and maybe I'll send out a more formal poll later as well, but this is a quick informal poll here. So in the next two or three lectures we'll talk about multi-layer networks and different architectures, like convolutional neural networks, for doing computer vision. One possibility, which we didn't do in previous iterations, is to talk about some reinforcement learning: not just supervised learning problems, but getting a robot to learn to do something via trial and error. That could be one possibility for what to talk about. The other possibility would be to talk about robotics and language: specifically, in the last five years or so there's been a massive amount of progress on large language models (I mentioned this a bit two lectures ago), and people in robotics are trying to take advantage of this progress in language to do many different things. That would be another possibility. The third possibility would be to talk about some broader topics, let's say in
robotics and in machine learning. More specifically, we'll do a bit of this in the very last lecture, where we talk about the impact that robotics might have on the economy, the intersection between robotics and the law, and one or two other similar things; but I was thinking of talking more about fairness, for example, and maybe a couple of other related topics. That could be another possibility. What do people think? Maybe I can make this quick: if you want to spend time on the first one, maybe raise your hand, and I'll count the number of people. Right, maybe the second one? Okay, all right, actually I wanted to count; sorry, let's go back to the first one, I'll count quickly. Yeah, about 14 or so people. All right, the second one, robotics and language? And then broader topics? It's going to be 14 again, right; I'll get more formal numbers. Are there other topics that people have suggestions for, that you'd be interested in? Go ahead. Like Transformers, for example? Yeah, that could be another one. All right, robotics? Okay, good to know. Other thoughts? Good, yeah. Okay, good. Yeah, so I guess I was probably going to do that as part of the reinforcement learning one. But yeah, any others? Okay, sounds good. All right, it should be fun. I'll see you next time.
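The pieces covered in this lecture (the single-layer computation, the 0-1 training loss, and the smoothed sigmoid plus binary cross-entropy combination) can be sketched in a few lines of Python. This is an illustrative reconstruction with a made-up toy dataset, not code from the course.

```python
import numpy as np

# Single-layer computation: sigma(w . x + b), mapping a vector to a scalar.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))           # smooth activation in (0, 1)

def layer(x, w, b, sigma):
    # dot product of weights and input, plus a scalar bias, through sigma
    return sigma(np.dot(w, x) + b)

# 0-1 loss: zero if the prediction matches the label, one otherwise.
def zero_one_loss(y_pred, y):
    return 0.0 if y_pred == y else 1.0

# Binary cross-entropy: small when y_pred is near y, large otherwise.
def bce_loss(y_pred, y, eps=1e-12):
    y_pred = np.clip(y_pred, eps, 1 - eps)    # avoid log(0)
    return -(y * np.log(y_pred) + (1 - y) * np.log(1 - y_pred))

# Training loss: average loss over the dataset (empirical risk).
def training_loss(data, w, b, sigma, loss):
    return np.mean([loss(layer(x, w, b, sigma), y) for x, y in data])

# Toy, linearly separable dataset in 2D with labels in {0, 1}.
step = lambda z: 1 if z > 0 else 0            # hard threshold activation
data = [(np.array([0.0, 0.0]), 0), (np.array([1.0, 0.0]), 1),
        (np.array([0.0, 1.0]), 1), (np.array([1.0, 1.0]), 1)]
w, b = np.array([1.0, 1.0]), -0.5             # a hand-picked separator

print(training_loss(data, w, b, step, zero_one_loss))      # 0.0: fits the data
print(training_loss(data, w, b, sigmoid, bce_loss) > 0.0)  # True: BCE stays
# nonzero for correct but non-extreme predictions, which is what makes the
# smoothed loss landscape amenable to gradient descent
```

Note that the hand-picked w and b stand in for what gradient descent would find; the point of the smoothed loss is exactly that it can be optimized that way.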
Introduction_to_Robotics_Princeton
Lecture_12_Princeton_Introduction_to_Robotics_Bayes_Filtering.txt
all right, now I think we'll go ahead and get started. So, just to remind you of where we left off in the previous lecture: we started discussing a new topic, which is the topic of state estimation, so basically how a robot can use its potentially imperfect sensors to get a decent estimate of its state, which it can then use to do things like feedback control. The main technique we discussed in the previous lecture is the thing called the non-deterministic filter, which is sort of a conceptual algorithm; it's not something that one implements in practice, but it's going to be the conceptual basis for techniques that do get used in practice, which we're going to start discussing today. And just to remind you of the structure of the algorithm, which again we're going to revisit today: you start off with some initial estimate for where the state of the robot could be, and this estimate takes the form of a set that we called x0 hat. We then propagate the set forward through the dynamics of the system, which initially at least we were assuming were deterministic, and we get some other set which we call f of x0 hat. And just to remind you, we're using discrete-time dynamics here, so x at t plus one is f of x at t, and potentially you can take in the control input as well. And then we also had a sensor model, which was given by this function h: at any given state x_t, we know what sensor measurement the robot is going to get. So the robot gets the sensor measurement, which helps it narrow down which states it could be in, and we introduced this concept of the pre-image: the pre-image is the set of all states that are consistent with the robot receiving the sensor measurement that was received, which we're calling z_t. And then we take the intersection of these sets: we say that x1 hat, which is our new estimate for where the state can be, is simply the intersection; sorry, let's say this is the estimate at
the first time step: it's the intersection of the initial set propagated through the dynamics and the pre-image of the sensor measurement that the robot receives at time step one. And in general, the steps for the non-deterministic filter are the following; at every time step, and we do this recursively, we first take our previous estimate of where the state can be and propagate that through the dynamics, so compute f of x hat at t minus one. In the second step we compute the pre-image of the sensor measurement that we receive at the current time step, so the pre-image of z_t. And then finally we take the intersection of these two sets: x_t hat is f of x hat at t minus one intersected with the pre-image. So we're using our knowledge of both the dynamics and the sensor model. The first step is typically called the dynamics update, where we're updating our estimate of where the robot's state could be based on our knowledge of the dynamics; and then these two steps, well, that kind of depends on the context, but either this step or usually these two steps combined is called the measurement update, where we incorporate the new sensor measurement that the robot receives, combine that with our knowledge of the dynamics, and get the updated estimate of where the state can be. Sorry, yes? Yeah, x1 hat, our new estimate of where the state can be, is the previous estimate propagated through the dynamics, intersected with the pre-image of the new sensor measurement. Sorry, it's not the best picture; this is pointing at the intersection of the two sets. Okay, all right. So, as we discussed in the previous lecture, actually calculating these sets, especially in high dimensions, and here high means even something like a drone's state space, can be pretty hard: computing these sets, calculating the intersections, and representing these sets in a
computationally efficient way. All of that is challenging, and so we don't typically implement this. The other major conceptual problem with the non-deterministic filter is that at every time step the thing that we're maintaining is a set, a set of possibilities for where the state can be, and if you're trying to do something like feedback control you really need a bit more of a refined estimate of the state; maybe you pick out a representative element from that set, or do something else, to apply a feedback controller. So that's what we're going to try to fix: the second problem, the problem of not just maintaining a set of possibilities but maintaining a more refined estimate of the state. So the topic for today is going to be probabilistic methods for state estimation. These methods are also going to later allow us to do things like mapping the robot's environment and localizing within it, but today we're just going to focus on the problem of estimating the state. Question? Okay, so we're not necessarily assuming that the environment is discrete in the sense that we have a finite or discrete number of states: the state space can still be continuous, and the non-deterministic filter is still well defined in that setting. The discreteness here is in terms of time, so we're assuming that the world is evolving time step to time step, and it's not a continuous-time model. And we'll keep that same structure today: the algorithms I'll describe will work, at least conceptually if not necessarily computationally efficiently, for continuous state, control input, and sensor measurement spaces, but still discrete time. All right, so as I mentioned, some probabilistic methods for doing state estimation. The main idea, and the key difference between what we're going to discuss today and what we discussed with
the non-deterministic filter in the previous lecture, is that we're going to maintain what's known as a belief; in this literature on probabilistic state estimation, we maintain a belief, which is really just a probability distribution over the state. So instead of just having a set of possibilities, this is a more refined quantification of our uncertainty about where the state of the robot can be. And so I'm going to start off with just a quick recap, hopefully it is a recap, of some basic probability concepts. I realize that people will have many different backgrounds, and I've found that especially students from an MAE background often haven't seen probability in a long time, so I'll do a quick recap of some basic probability concepts, and I'll also point you to a reference in the textbook. A good reference for all of this material on probabilistic state estimation and so on is the book Probabilistic Robotics; I linked to this book in the slides in the very first lecture, and I think it's a reference in the syllabus as well, so you should be able to find it. Chapter 2.2 in this book has a pretty nice recap of these probability basics. So the first thing is just defining some notation. The main notation looks something like this: we have a random variable, which is typically denoted by capital X, which can take on values denoted by little x. You can think of this as coin flips, for instance: which side the coin comes out on when you flip it is a random variable; the specific heads or tails is the value that the random variable takes. So this is the probability that the variable X takes on value little x; for instance, the probability that X is heads could equal the probability that X is tails, which is 0.5 for an unbiased coin. And usually we won't write
everything out fully like this; we'll just abbreviate, so we'll write just P of little x, and from context it should be clear that what we really mean is the probability that some random variable takes on value little x. So we'll typically use this abbreviated notation: we'll write, say, the probability of heads or the probability of tails, without explicitly writing the random variable. Okay, so that's the most basic piece of notation. The next basic fact about probabilities, maybe the most basic fact, is that probabilities sum or integrate to one, and whether we use summations or integrals depends on whether the random variable takes on a discrete set of values or potentially a continuous set of values. In the discrete setting this corresponds to a summation over all the possibilities little x of P of x equals one, and in the continuous setting, if you're looking at things like distances, like the range measurements that a robot's range sensor gives it, then we look at integrals: the integral over x of P of x dx equals one. The other important concept is that of a joint distribution. If we have multiple random variables, let's say two random variables, then we denote the joint distribution by P of x comma y, and again this is really shorthand for saying that this is the probability that the random variable capital X takes on value little x and the random variable capital Y takes on value little y. So if you have two coins that you're tossing, the outcome of the first one could be denoted by X and the second one by Y, for instance. And then if you have two independent random variables, the joint distribution over X and Y is just the product, P of x multiplied by P of y. The next important concept, which we're going to use quite a bit in today's lecture, is that of a conditional distribution.
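The definitions so far (probabilities summing to one, the joint distribution, and independence) are easy to sanity-check numerically. Here is a small, made-up two-coin example, an illustrative sketch rather than anything from the lecture:

```python
from itertools import product

# Two independent coin flips X and Y (the second coin is biased, just to
# make the numbers less symmetric).
P_X = {"H": 0.5, "T": 0.5}
P_Y = {"H": 0.6, "T": 0.4}

# For independent random variables, the joint is the product of marginals:
# P(x, y) = P(x) * P(y).
joint = {(x, y): P_X[x] * P_Y[y] for x, y in product(P_X, P_Y)}

# Probabilities sum to one over all possibilities.
print(abs(sum(joint.values()) - 1.0) < 1e-12)   # True

# Summing the joint over the other variable recovers the marginal.
P_x_marginal = {x: sum(joint[(x, y)] for y in P_Y) for x in P_X}
print(P_x_marginal == P_X)                      # True
```

The same dictionaries can be reused to check the conditional-distribution definition and Bayes' rule that come next in the recap, by dividing entries of the joint by the appropriate marginal.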
So we're going to denote this by P of x with a bar, that's P of x given y, and again this is really shorthand for saying that this is the probability that random variable capital X takes on value little x conditioned on the random variable capital Y taking on value little y. And by definition, so we're just going to define this notion of conditional distribution from the previous notion of joint distribution, we define P of x given y as P of x comma y, which is the joint distribution, divided by P of y, and this assumes that P of y is not equal to zero, otherwise we're dividing by zero and this is not defined. So as long as P of y is strictly larger than zero, we can define the conditional probability distribution as such. And this has a sanity check: we can see what happens if X and Y are independent, using the definition over here. In that case the conditional distribution P of x given y, you would expect intuitively, is just equal to P of x, right? Y is not giving you any information about X; Y doesn't inform your probability on X. So by definition this is P of x comma y over P of y, and then if X and Y are independent, then just by the definition of independence, the joint distribution equals the product, divided by P of y, and again we're assuming P of y is not equal to zero. And so this is then P of x, which checks out. All right, any questions on this? Okay, let's state a couple more facts about probabilities. I have a question: what does it mean conceptually that X doesn't depend on Y? Yeah, so it's independent of Y, right? If you have two coins that you're flipping, just, I guess it's hard to avoid the word independently, but you just flip two coins one after the other; then that's an example of
Independence and so the probability that your second going comes up hands uh given that the first one came up builds like the fact that the first one came up there was like it doesn't matter and so the the probability that the second one comes up heads is just the probability that the second one comes up heads which is 0.5 for a unbiased point a good other question Okay so yeah another I guess important fact is the what's known as the theorem of total probability which actually follows so this might be a useful exercise to prove this from the definitions we gave above so this states that P of x is equal to the summation over y of the joint distribution which is the the same as uh so the joint distribution if you look at our definition of the conditional distribution up there is p of x given y multiplied by P of Y so this is in the discrete setting so when you have a discrete set of possibilities for what X and Y can can take on in the continuous setting we simply replace the summation with the integral yeah and as I mentioned uh this fact was like theorem of total probability followers from uh the facts that we stated before and when we useful exercises to go through that and prove that and this quantity so the P of x uh is then known as the marginal distribution all right and then the final uh [Music] result which is I guess the most important one in our context to base rule [Music] using cleanly even with the new jacket [Music] yeah I'm trying to figure this out over the break yeah okay okay so yeah the most important result is the base rule um so this let me save it in the uh yeah let me just tell them then I'll approve it using the results we discussed before uh so P of x given y is equal to P of Y given X multiplied by P of x divided by P of Y so this is base rule we can write it in a slightly different form by expanding out the denominator um to be a y given X divided by P of x we can expand out the probability of Y using this law of total probability so we 
can write the denominator as a summation over X of B of Y given X multiplied by P of x okay so yeah I guess this equality just follows from the law of total probability which we don't over here it's kind of inverted I wrote it in terms of summation over y here we're summing over X but it's the same thing this one this equality and this is like base rule so we can prove this using some of the the previously introduced Concepts so the proof is if we look at P of X comma y so that's the joint distribution I can write this in two different ways so I can write this as P of uh x given y multiplied by P of Y and this follows from the definition of the conditional distribution that we wrote over there so just yeah multiplying both sides by P of Y up there or we can also write this joint distribution B of X comma y connect in the other way so B of y given X multiplied by P of x again using the definition of the conditional probability distribution and so now we can just rearrange the terms here so dividing both sides by B of Y assuming it's not zero so P of x given y is the y given x v of x divided by bfy okay yeah I guess any questions on this so the the main reason based role is important so one way to kind of interpret it is that it's allowing you to invert conditional probabilities right so here what we're trying to calculate is p of x given y uh what we have on the right hand side over here is p of Y given X is kind of the the inverse conditional and this kind of inversion is often exactly what we want to do for State estimation for instance so I'm kind of previewing what we're going to do but uh yeah just just to give you the high level idea we're going to assume that we have some probabilistic sensor model so some probability distribution that tells us probabilities over the sensor measurements which in this case let's say are represented by y given that the robot is in state X uh what we actually care about is not that we care about the the inverse right so we care 
about what's the probability that the state is something given that the robot receives some sensor measurement y, and that kind of inversion is exactly what Bayes' rule allows you to do. Okay, we'll see that in much more detail as we go along. And as I mentioned, if you're a bit rusty on probability basics, chapter 2.2 of the Probabilistic Robotics book gives you a bit more background. Okay, so what I'm going to do is look at a specific example and then we'll generalize things to a broader setting. The example is adapted from section 2.4.2 of the Probabilistic Robotics book; I made some small modifications to it just so I can fit it in one lecture, but it's basically the same example if you want to take a look. In this example we have a robot (this is my stick-figure drawing of a robot) that's looking at, let's say, a building, and there's a door in the building. So this is the world the robot is operating in, and the entire state of the world just has to do with whether the door is open or not. For this example, I guess there's a question about whether we have continuous or discrete variables; here the state is discrete, so there are actually just two possibilities, either the door is open or closed. As I mentioned, we'll go through this example and then generalize to a broader setting. Let me write it out: the state here, which we're going to denote by capital X_t (you can think of this as a random variable), corresponds to door open or not. All right, and the robot is equipped with some imperfect sensor which gives it some information about the state of the world, about whether the door is open or not, but it's not 100 percent accurate. More specifically, I'm going to write down a specific sensor model. This imperfect sensor gives you sensor measurements, which we're going to call Z_t, the sensor measurement at time t; this is a random variable, and I'm going to define the probabilities for this random variable. The probability that the sensor reports that the door is open (sense_open, with an underscore) given that the door is actually open is some value; we're going to say it's 0.6. Then the probability that the sensor reports that the door is closed given that in fact the door is open is one minus 0.6, since the probabilities have to sum to one, so it's 0.4. And the other way: the probability that the robot senses that the door is open given that the door is in fact closed is 0.2, and finally the probability that the door is sensed to be closed given that the door is in fact closed is just one minus 0.2, which is 0.8. So this is a probabilistic model of the sensor, and we're going to assume it's given to us; as the robot designers we've somehow acquired it. One thing to note here is that we're saying the sensor is a bit more accurate when the door is closed: when the door is closed, the probability that the robot senses the door is closed is relatively high, 0.8, and the probability that it makes a mistake is 0.2, whereas when the door is open the sensor is slightly less accurate, so the probability that the robot senses the door is open given that the door is in fact open is 0.6 rather than 0.8. So one question to consider here is how you would get a model like this. We're going to take it as an input, but in practice we'd have to actually acquire a probabilistic model like this. How do you go about getting a model like this in practice? Go ahead. [Student suggests running experiments.] Yeah, that's exactly right: this is basically acquired with tedious experiments. You just keep your sensor in front of an open or closed door, do this experiment many times, and then look at the empirical probabilities, and you can take that as your probabilistic model. So we're going to assume that experiment has been done and we've characterized the errors the sensor makes, and then, given this probabilistic model of the sensor, we're going to try to do state estimation. That's the plan. Okay, so one more piece of information that we're given: a prior belief about whether or not the door is open. This is the probability the robot assigns to the door being open or closed before it starts operating in the world, before it receives any sensor measurements, before it does anything; the robot just has some prior belief about whether or not the door is open. Here we're going to make a simple choice: we'll say the prior probability that the door is open equals the prior probability that the door is closed equals 0.5. So for the random variable, the state of the door at time step one, open or closed, we're just going to say it's 0.5, equal probability that the door is open or closed. In practice you could do another experiment here: you go around all of campus, maybe even more broadly, count the number of doors you see open or not, and that can inform this prior belief about whether any new door is going to be open or closed. We'll just say it's 0.5 for now. Okay, so here's where it starts to get interesting. At time step t equal to one, let's say, the robot receives its first sensor measurement, and specifically, to be concrete, let's say the robot senses that the door is open. Now, given this sensor measurement, given what it knows about the error characteristics of the sensor, and given its prior belief about whether the door was going to be open or not, it needs to update its belief about whether the door is open. We're going to denote this updated belief by Bel, short for belief; this is the notation the book uses as well. So this is the updated probability that the door is open at the first time step, updated given this new sensor measurement at time step one. Really, what this is is the probability that the door is open given that we sensed that the door is open, since that's what we're assuming the robot senses. And this is where Bayes' rule comes in: we have this conditional probability that the door is open given the sensor measurement, and we can apply Bayes' rule here. P of x given z is P of z given x multiplied by P of x, divided by P of z. Any questions on that? All right, we can plug in some numbers to make this more concrete. Let's look at the numerator first. The numerator is the probability that the door is sensed to be open given that the door is in fact open, which comes from our probabilistic model of the sensor: that's 0.6 by assumption, multiplied by the prior probability that the door is open, which by assumption is 0.5. That's the numerator. The denominator we can expand using the law of total probability: the probability that we sense the door to be open is the probability that we sense the door to be open given that the door is open, multiplied by the probability that the door is open, summed over all the possible values for the door being open or closed, so plus the probability that we sense the door to be open given that the door is closed, multiplied by the probability that the door is closed. Again we can make this concrete using our sensor model: the first term is exactly the numerator, 0.6 multiplied by 0.5, and the second term we get from the sensor model, the probability that we sense the door to be open given that the door is closed is 0.2, multiplied by the prior probability that the door was closed, which was again 0.5. Okay, so the updated belief that the door is open is the ratio of these: 0.6 times 0.5, divided by 0.6 times 0.5 plus 0.2 times 0.5, which is 0.75. This intuitively makes sense: the robot is now more confident that the door is open, it assigns a higher probability to the door being open, which makes sense given that it sensed the door to be open. It's not absolutely certain, which again makes sense because the sensor is not perfect, but it's more confident than it was previously: previously it assigned a 50-50 chance to the door being open, and now it's 0.75 confident. And we can also write the other possibility: the probability that the door is closed is just one minus this, so the belief that the door is closed is one minus 0.75, which is 0.25. All right, any questions on these calculations? I think it's worth making the point about Bayes' rule again; it's interesting to see what role Bayes' rule is playing over here. We applied Bayes' rule right there: what we want to calculate is the probability that the door is open given that we sensed the door to be open, and what we actually have is kind of the inverse, the probability that we sensed the door to be open given that the door is open, which comes from our sensor model. We also have this prior belief about whether or not the door is open, and the denominator we can calculate just from those same conditional probabilities and
the prior probabilities. And so Bayes' rule is allowing us to do the inversion, to go from the sensor model to an updated belief about the state of the world. All right, let's make it a bit more interesting. So far we haven't really accounted for any dynamics in the world; the door is not changing its state just yet, and we only looked at the very first time step. Let's make it a bit more interesting and think about dynamics as well. Let's say the robot has some action, some control input that it can apply, which corresponds to just pushing the door. There's only one action we're going to consider: it can push the door, or I guess do nothing. Again we need some model of how the state of the world will change if the robot takes this action, and again we're going to use probability distributions to represent this model of the dynamics of the system. I'll just write it down and then we'll think intuitively about what it means. The probability that at time step t the door is open, given that at the previous time step t minus one the door was open and the control input the robot applied at the previous time step was push (pushing the door is the only action we're considering right now), is equal to one. What this is saying is that if the door was already open at the previous time step and the robot pushes the door, then it's just going to remain open with probability one. And so the probability that the door is closed, given that the door was open and the robot took this action of pushing the door, is one minus that, which is zero. Then the other version: the probability that the door becomes open at the next time step, given that at the previous time step the door was closed and the control input applied was pushing the door, let's say is 0.8. So if the door was closed and the robot pushes it, it has an 80 percent success probability of opening the door. And finally, the probability that the door is closed at the next time step, given that it was previously closed and the robot pushed the door, is just one minus 0.8, which is 0.2. All right, and again we can think about how one might acquire such a probabilistic model. You can imagine doing a bunch of experiments: you take the robot, put it in front of many doors, and look at the probability that the door becomes open if it was already open, and the probability that the door becomes open if it was closed; you empirically measure these probabilities, and then we call that our probabilistic dynamics model. All right, so on to the next time step; I'll keep these numbers on the board. So, we've done one time step: at time step one the robot sensed that the door was open, and we updated our belief about whether the door was open given that information. Now suppose that at time step one the robot pushes the door. The question is how it can update its belief about whether the door is going to be open or closed once it's taken this action. Just to recap the numbers: before it does this, the robot assigns a probability of 0.75 to the door being open and 0.25 to the door being closed. Now it pushes the door, and it needs to update its belief. And then, just to preview a little bit, what's going to happen is that it will receive another sensor measurement, and that will help refine this estimate of whether the door is open or not at the next time step. So at t equal to two the robot receives another sensor measurement, and let's say again that at time step two it senses that the door is open. What we're going to do is something pretty analogous to the non-deterministic filter: we're going to break up our update into two steps. The first step we're going to call the dynamics update, and the intuition behind it is that we propagate our previous belief, which was 0.75 and 0.25, through the dynamics of the system, given that we know the robot has pushed the door at time step t equal to one. That updated belief we're going to denote with a bar: belief-bar is what happens when we update our belief given just the knowledge of the dynamics, not yet taking into account the new sensor measurement the robot receives at time step two. So, belief-bar that the state of the door at the second time step is open: I guess, how should we calculate this? Any thoughts? We know the action the robot took, pushing the door. How can we calculate its updated belief knowing that it pushed the door? [Student answers.] Good, yeah, so it's actually just the law of total probability that we use here. This is the probability that the door is open at the second time step given that we pushed the door, which is the probability that the door is open given that previously it was open and we took the push action, multiplied by the previous probability that the door was open, plus (we're just applying the law of total probability) the probability that the door is open given that previously the door was closed, again with the push action, multiplied by the probability that the door was closed. We're just enumerating the possibilities for the previous state, either open or closed. All right, so the next question is: what are these probabilities, the prior probability that the door was open versus closed, these two terms over here? What are they going to be? [Student answers.] Right, so we already have a belief: it was 0.75 for the door being open and 0.25 for the door being closed. This is our prior belief before we incorporate the knowledge that we pushed the door. So we can just calculate this; we already wrote down the required terms. These two terms are given by our probabilistic dynamics model, which we wrote over there: the probability of X2 being open given that the door was already open and the robot pushes it, which is equal to one, multiplied by 0.75, and then the second term, the probability that the door becomes open given that it was previously closed, which is 0.8, multiplied by 0.25. If you multiply this out, it's 0.95. So the robot becomes even more confident that the door is open, which again makes sense, because it pushed the door, and usually when you push a door it becomes open, and it was already reasonably confident, 0.75 confident, that the door was open after the first time step. All right, so the final step is to incorporate the new sensor measurement, and this is where Bayes' rule comes in. Step two is the measurement update. We said that at the second time step the robot receives a sensor measurement telling it that the door is open. Again, we'll represent this updated belief as Bel, the belief that the door is open, which is really shorthand for the probability that the door is open given that we received this sensor measurement at the second time step, namely that the door is open. This is where Bayes' rule comes in: we can again invert this conditional probability, so this is the probability that the door is sensed to be open given that the door is open, multiplied by the prior belief, divided by the total probability of receiving this sensor measurement. Yes, questions? [Student: isn't there an action here as well?] Sorry, yes, there's an action here as well, thank you. So u1 equals push: it's conditioned on two things, that we pushed the door and that we sensed the door open. Nice, okay. All right, so this first term we're again going to get from our sensor model, which was over there: the probability that the door is sensed to be open given that the door is in fact open, which was 0.6. I guess the interesting term is this one, the prior probability that the door is open. What do you think that's going to be? Go ahead. [Student: 0.95.] Yeah, 0.95: that's basically our most up-to-date knowledge about whether or not the door is open; the most up-to-date knowledge takes into account the dynamics of the door, and that was 0.95. So we can write out both the numerator and the denominator. The numerator is 0.6, which comes from our sensor model, multiplied by 0.95. The denominator we can again expand using the law of total probability: it's the probability that we sense the door to be open given that the door is open, multiplied by the prior probability that the door is open, plus the other possibility, the probability that we sense the door to be open given that the door is closed, multiplied by the prior probability that the door is closed. The first term is the same as the numerator (one of the terms in the denominator is always going to be the same as the numerator when you do this expansion), so that's 0.6 times 0.95, and the second term we get from our sensor model again: the probability that the door is sensed to be open given that the door was closed was 0.2, multiplied by our prior belief, before we incorporated the sensor measurement, that the door was closed, which was one minus 0.95, that is, 0.05 (0.95 was the belief that the door was open, so 0.05 is the belief that the door is closed). All right, if you then just take this ratio, the new belief at the second time step that the door is open is about 0.983. So the robot is now very confident that the door is open, and again this makes sense: initially it was just 50-50, then it sensed that the door was open, which made it 0.75 confident, and then there were two additional pieces of information that increased its confidence: one, it pushed the door, which made it believe even more that the door was going to be open, and two, it sensed that the door was open, and that made it about 98.3 percent confident that the door is open. All right, so let's think about the structure of what we did over here. There were these two steps in the second time step: one was the dynamics update and the second was the measurement update, and these are pretty analogous to the non-deterministic filter. The first step, the dynamics update, is basically propagating our previous belief. Sorry, questions? [Student asks whether the two steps could be done in the other order.] I see, so you would first take into account the sensor measurement, plugging in the prior belief without the dynamics, and then update that through the dynamics. Yeah, that's a good question; let me think about it. I think I would need to write it out, I don't know the answer off the top of my head. Is there an intuitive way to figure it out? I think the order matters, but I'm not absolutely sure. The reason I think it matters is the semantics of it: if we incorporate the sensor measurement, that's telling us something about time step two, right? We're saying that our belief that the
door is open or closed at time step two is something like: given that we sensed that the door was open or closed, do the propagation. Yeah, actually I don't think it quite makes sense to do it the other way, because how would you propagate that forward in time? We'd have already updated our belief at time step two, and we're not going to propagate it to time step three yet; we still care about going from time step one to time step two. So I think the only order that really makes sense is to take the previous belief, propagate it through the dynamics, and then update it with the sensor measurement. I think that's right; does anyone see it differently, or disagree with that? I think that's correct, because it doesn't quite make sense the other way: we're updating our belief at time step two, and we don't want to propagate that to time step three, we want to propagate the previous belief. So I think the other order doesn't make sense, but again I'd have to write it out to be absolutely certain. It's a good question. [Student asks about measurement updates arriving more often than actions.] Yeah, so right now we were assuming that the dynamics update and the sensor measurement update happen concurrently in time, simultaneously, and that the robot takes an action at every time step. I guess what you're saying is that maybe the robot takes actions at a lower frequency: maybe it takes a while for the robot to push the door, and in that time it receives multiple sensor measurements. In that case you could do a bunch of updates with just the sensor measurements, and then as soon as you get some information about the action the robot took, you incorporate that as well. Does that answer your question? Okay. All right, so let me just write down the steps generically, to abstract this away and make it more general. At each time step t we do two steps. The first one is the dynamics update, where we get this updated belief, which we represent with a bar, belief-bar, and actually we calculate this for all possible values of x_t. In our simple example there were just two possibilities, the door either open or closed; in general the state of the world might have many possibilities. So the first step is applying the law of total probability: we sum over the previous possible states of the world, summing the probability that the state at time step t is x_t, given that at the previous time step it was x_{t-1} and we applied control input u_{t-1}, multiplied by the prior belief, the belief at time step t minus one of x_{t-1}. The second step is to incorporate the sensor measurement, and this is done with Bayes' rule: it's the probability of the new sensor measurement z_t given the state x_t, multiplied by our prior belief having taken the dynamics into account, belief-bar, divided by P of z_t. The first step is again called the dynamics update, and the second step is called the measurement update. One point I want to emphasize again is that this is similar to what we were doing with the non-deterministic filter. With the non-deterministic filter, our representation of uncertainty was just a set, a set of possibilities the robot could be in; here it's more refined, it's a probability distribution. We take our previous probability distribution, our previous belief about the state at the previous time step x_{t-1}, and propagate it through the dynamics of the system, and then we incorporate our sensor measurement, and that's where Bayes' rule comes in; this is exactly Bayes' rule. All right, so this general algorithm is called the Bayes filter. If we have a discrete set of possibilities for the state of the world, so if the state x can take on finitely many (or, I guess, discretely many) values, then this makes sense as written: we're summing over those possibilities. In the continuous case, say you have a drone, where the states are not discrete but continuous, the only difference is that we replace every summation with an integral; the general structure is exactly the same. That's one comment. The other comment is that I haven't really given you a formal justification for why this makes sense; I think the intuitive justification is that it's pretty similar to the non-deterministic filter, where we do a dynamics update and then a measurement update, but if you want more details and justification, chapter 2.4.3 of the Probabilistic Robotics book goes through a more careful derivation of the Bayes filter. Another comment is that the Bayes filter is pretty computationally challenging to implement if you have continuous-valued states. For our simple example the state of the world could take on just two possibilities, so we could basically do everything by hand. In a slightly more general setting the state of the world can take on finitely many values, and then you have to calculate the summations; if the number of terms is relatively small, that's fine, but I think it gets really challenging when the summations get replaced by integrals, when you have continuous-valued states, because those integrals become pretty hard to calculate analytically. A question? [Student: don't you also need an initial belief?] Yes, that's a good point; I didn't write it here explicitly, but implicit here is that we have some initial belief, which for the door example we said was just 0.5 and 0.5. That's a good point: it's assumed that you start off with some initial belief that you then propagate to the first time step. Okay. The last comment I want to make is about this second point, the computationally challenging part. There are certain settings where one can actually implement these two steps, the steps in the Bayes filter, analytically and very computationally efficiently, even in the case where you have continuous-valued states or sensor measurements. Specifically, if you have linear dynamics and a linear measurement model (I'll say a bit more about exactly what that means) and you assume Gaussian uncertainty, it turns out that one can implement the Bayes filter computationally efficiently and exactly, with no approximation, and this leads to something called Kalman filtering, which we'll discuss in the next lecture. In general it's not possible to implement the Bayes filter exactly in a computationally efficient manner, so we have to make some approximations, and one of the really popular techniques for approximately implementing the Bayes filter is called particle filtering, which is very widely used in practice and which we'll also discuss in the next lecture. All right, any final questions? Cool, so I'll see you after your break then.
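To tie the lecture together, here is a minimal sketch of the discrete Bayes filter applied to the door example above. The function names and dictionary layout are my own choices, but the sensor model, dynamics model, prior, and resulting beliefs (0.75, then 0.95, then roughly 0.983) are the numbers worked out in the lecture:

```python
# Discrete Bayes filter for the door example. States: "open", "closed".
states = ["open", "closed"]

# Sensor model: p_sense[z][x] = P(Z_t = z | X_t = x), from the lecture.
p_sense = {
    "sense_open":   {"open": 0.6, "closed": 0.2},
    "sense_closed": {"open": 0.4, "closed": 0.8},
}

# Dynamics model: p_dyn[x_next][x_prev] = P(X_t = x_next | X_{t-1} = x_prev, u = push).
p_dyn = {
    "open":   {"open": 1.0, "closed": 0.8},
    "closed": {"open": 0.0, "closed": 0.2},
}

def measurement_update(belief, z):
    # Bayes' rule: multiply by the sensor likelihood, then normalize.
    # The normalizer is exactly the law-of-total-probability denominator.
    unnorm = {x: p_sense[z][x] * belief[x] for x in states}
    total = sum(unnorm.values())
    return {x: unnorm[x] / total for x in states}

def dynamics_update(belief):
    # Law of total probability: sum over the possible previous states.
    return {x: sum(p_dyn[x][xp] * belief[xp] for xp in states) for x in states}

belief = {"open": 0.5, "closed": 0.5}                  # prior
belief = measurement_update(belief, "sense_open")      # t = 1: sense open -> 0.75
belief_bar = dynamics_update(belief)                   # push the door   -> 0.95
belief = measurement_update(belief_bar, "sense_open")  # t = 2: sense open -> ~0.983
```

The same two functions implement the general discrete Bayes filter: for more states you only enlarge the dictionaries, and for continuous states the sums would become the integrals mentioned above.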
Introduction to Robotics (Princeton)
Lecture 5: Linear Quadratic Regulator (LQR)
Okay, let's go ahead and get started. Just a quick reminder that the first assignment is due tomorrow by midnight, and the first hardware lab assignment is going to be handed out tomorrow and will be due the Wednesday after. I'll say more about the logistics for the lab — 11:59? Yeah, yeah; I guess it would be pretty nasty if I said midnight and meant today at midnight. All right, I'll say more about the logistics for the lab at the end of the lecture today. So, to remind you of the material we covered in the previous lecture: we continued our discussion of feedback control, and we introduced some important definitions, some important concepts, trying to formally specify exactly what we want out of a feedback controller. The first, most basic definition we introduced was that of a fixed point, or equilibrium point. This is some state, which we usually refer to as x subscript zero, such that there's some control input u0 for which x dot is exactly equal to zero — and I'll put a zero bar to denote that this is a vector of zeros. A fixed point is somewhere where, if you start off at that state and apply that control input, nothing happens: the state doesn't change over time. Then we discussed some notions of stability — specifically, global and local asymptotic stability — and I mentioned that global asymptotic stability implies local asymptotic stability, always, no matter what your system is. For linear systems specifically, global and local asymptotic stability are actually equivalent, and this is something you'll get to prove in the assignment that goes out tomorrow; it's not a super difficult proof. Then we discussed this notion of stabilizability: can we even make our control system locally or globally asymptotically stable? And we saw an example of a linear system in the previous lecture that is not stabilizable. A question? Yes, of course.
It's the thing that the robot is directly controlling — so for the drone, it's the propeller speeds. Exactly, that's one choice; maybe let me just write that down. The other thing that we discussed is the PD controller — proportional-derivative controller. This is one specific form that the feedback controller could take, arguably the most popular form, and the general form is this: u as a function of x — this is a feedback controller — equals your nominal control input (for the drone, the control input that keeps it at the hovering configuration) plus some matrix K multiplied by x minus x0, and this is matrix-vector multiplication here. This matrix K is called the gain matrix, and we applied this form of feedback controller to the planar quadrotor system to stabilize it to its hover configuration. Yep — so the dynamics don't factor into this computation, but this controller impacts the motion, right? It impacts the dynamics. So we defined this notion of a closed-loop system, which is basically the dynamics you get once you plug in the feedback controller: we have x dot equals f of x and u, and if we make u a function of x, then the dynamics are purely a function of the state, and I can rewrite this as f closed-loop. So the feedback controller impacts the dynamics, impacts the motion. Question? Yes — exactly. You apply a control input, the drone does something, you (or the drone) look at the state, and so on. The goal we've set for ourselves is to make that process stable: no matter what initial state the drone starts off in, we want it to asymptotically converge to the hovering state — or, I guess, that's the global version: no matter where you start off — or, if
you start off close enough to the hovering state, then you want to asymptotically stabilize to the hovering state. Does that depend only on the dynamics? Yes, good question — stabilizability has to do with just the original dynamics. Stabilizability is asking: does there exist some feedback controller that will stabilize the system? It could be a really weird nonlinear function that stabilizes it, but the question is whether it's possible to stabilize the system at all. Okay, maybe one more question — yeah. Okay, so we'll talk about that today; that's going to be the rest of the lecture, talking about feedback control in much more detail, and why we linearize the dynamics. I haven't really said why we linearize — I just said it's a useful thing to do — and today we'll see why it's useful. And then, at the end of the lecture... yeah. Last week we discussed some other considerations beyond stability: we discussed the limitations of considering only stability, and we said maybe we want some kind of optimality — we want to expend as little energy as possible, for instance, to stabilize the system — so there are other performance criteria we want to think about as well. So this is the question we're going to consider today: how can we both achieve some kind of stability and also optimize some performance criterion, like energy or time, in a feedback control system? The high-level strategy is going to be to take our nonlinear system — the nonlinear dynamics x dot equals f of x, u — and come up with a linear approximation of it: linearize the dynamics about some reference point (again, canonically, think of hover for the drone), so that we get dynamics like A multiplied by x minus x0, plus B multiplied by u minus u0. And then we're going to design a
feedback controller for this linear system. The reason is that designing a feedback controller for a nonlinear system is a challenging problem; it turns out there are much more well-established and well-understood techniques for designing feedback controllers for linear systems. So we're going to take our nonlinear system, linearize it, and design a feedback controller — I can write down the form of the controller; we'll still end up with this kind of PD-control form, this kind of feedback controller — but then we're going to apply it to the nonlinear system. We're going to actually run it on the physical robot in the lab assignment that goes out tomorrow, and we're going to hope that this works. What we can guarantee is that the controller we design stabilizes the linear system; beyond that we're relying on the goodness of approximation of the linear system to the nonlinear system, and hoping that if we can stabilize the linear system, then, at least to some degree, for some reasonable set of initial states, that stabilizes the nonlinear system as well. That's the overall plan, and it maybe suggests why we're linearizing: it's mostly because for linear systems we know how to come up with feedback controllers that stabilize them. Okay, so the specific technique I'm going to discuss is what's known as the linear quadratic regulator, and this is a really popular feedback-control design method. It's going to achieve two things simultaneously for our linearized system. The first thing it's going to do is stabilize — actually, let me say globally stabilize — the linearized system, so we're going to achieve global asymptotic stability for the linearized system with the feedback controller we get from this LQR method. The second thing is that it's going to optimize some performance
criteria — some objective beyond just stability. We were saying in the previous lecture that you could achieve asymptotic stability but it could be really slow: the drone gets to hover in, like, a year, which is not super useful. So we're going to introduce this performance criterion that the LQR method will try to optimize. Okay, so we're going to start by assuming that we've already linearized our system — you know how to do this from this week's assignment. We take our nonlinear system and come up with the approximation x dot equals A times (x minus x0) plus B times (u minus u0) about some reference point, and — this is true for the hovering state — I'm going to assume that x0 is a fixed point. So we're trying to stabilize our linear system to this reference point x0, which is a fixed point. Okay, just to make things a little more notationally compact, I'm going to define a new variable x tilde, which is just x minus x0: the deviation of the state from the reference. I'm abusing notation a little — this thing is also a vector, the difference of two vectors, so maybe I should write x tilde with a bar, but that would make things messier than necessary; x tilde is still a vector even though I'm not writing the bar. Then I'll define u tilde, the difference of the control input from the nominal control input, u minus u0. With this new notation I can write down x tilde dot: from the definition, x tilde is x minus x0, and x0 is constant in time, so x tilde dot is x dot, and x dot is just the linear dynamics, A times (x minus x0) plus B times (u minus u0). By definition x minus x0 is x tilde and u minus u0 is u tilde, so we have x tilde dot equals A x tilde plus B u tilde. This is just to make things more compact: instead of dealing with the slightly longer equation, we're going to deal
with this equation — essentially we've just shifted the origin to x0 and u0. Okay, so the next thing to think about: we definitely want stability — we want a feedback controller that stabilizes this linear system — but I said we also want some performance criterion, some objective. So let's think about what a reasonable performance criterion might look like: beyond asymptotic stability, what might we want our feedback controller to do? One thing we might want is to minimize deviations of the state from the reference state. We could achieve asymptotic stability for the drone — say we want it to hover here — and I could say, oh, my feedback controller asymptotically stabilizes the system, but it goes 20 kilometers away before coming back. That would satisfy the definition of stability, but we want to penalize it: we want to penalize deviations of the state from the state you want to be in. Similarly, you might want to minimize deviations of the control input from the nominal control input, and loosely you can think of this as minimizing some kind of effort. If I want to stabilize my drone, I could ask it to apply control inputs that are extremely high — not achievable by the actual physical motors on the drone — so we want to penalize really high control inputs, or even really low ones: deviations from the nominal control input as well. So if we combine these — any questions on that general motivation? Okay. Combining these things, I can write down one potential option. Deviation of x from x0 — that's basically x tilde; deviation of u from u0 — that's u tilde. I'm going to define a function that I'll call J; J is going to be a function of the initial
state, essentially — the initial x tilde, x tilde at time zero; that's the parenthesis notation here. I'll define it to be an integral — let me write it down and then explain it: J of x tilde at time zero is the integral from time zero to infinity of x tilde of t, transposed, times x tilde of t, plus u tilde of t, transposed, times u tilde of t, dt. This notation x tilde of t means x tilde at time t, assuming you started off at this initial condition. The way to think about this: you come to me with some feedback controller, and I'm going to tell you how good that feedback controller is, evaluated from this initial state. Maybe I can draw a rough picture: say this is the state space, we pick some specific initial condition, and you come to me with a feedback controller. I'm going to run that feedback controller forwards in time, all the way up to time equals infinity — I'm going to run it forever, conceptually — and I'm going to see how much your state deviated from the nominal state. That's this quantity: at any time t we look at x tilde, the deviation of the state from the nominal state, and at u tilde, the control input your feedback controller is applying at that time, and we integrate these two quantities over time, going all the way to time equals infinity. Question? Yes — so we have a dot product: x tilde you can think of as a column vector, so its transpose is a row vector, and the quantity x tilde transpose x tilde is the norm squared — the squared distance of the state from the reference state — and similarly for u tilde, the norm of u tilde squared. Question: the cost function is only a function of the deviation at time zero? Yes — the way to think about it is that you're coming to me with some feedback
controller — some fixed feedback controller — and all I'm doing right now is defining a performance criterion: something that tells you how good your feedback controller is from this specific initial condition. The way to think about it: I take the system, start it off at that initial condition, and run your feedback controller — so now that's a closed-loop system — and I evaluate this quantity. I look at how much your state deviated from the nominal state and how much your control input deviated from the nominal control input at every point in time, and I integrate that for all time; the resulting number is how good your controller did from that initial state. This is just a definition — I'm just cooking up something that tries to capture this intuition. All right, questions on this? I want to make sure the definition is clear: we're integrating a scalar over time, from zero to infinity, and we're saying that's how good your controller is from that initial state. Questions on this, and maybe how it connects? Sure — the middle term? Yeah, I'll come to that in just a second. Oh, I see — you want to take the square root instead? It turns out the math doesn't work out as nicely if you take the square root, but that's a perfectly reasonable thing you could ask for as well; the math is just a lot cleaner for this version than the square-root version. Good observations. Okay, does anyone see other issues with this definition? Yeah — any thoughts? It doesn't weigh what we care about more — yep, perfect. We're just taking the distance of the state from the reference state and of the control input from the reference control input and summing them up, and that's your objective, but we might care about
certain states more than others. Maybe we're operating close to a table and we really don't want the height dimension to change much, so we want to penalize deviations of the height from the reference height much more than anything else; maybe I don't care that much about the speed of the drone as long as it's in the rough vicinity of speed equals zero, but I really do mind the height. That's something we could ask for — and the same goes for the control input. Right now we're assuming that deviations in state and deviations in control input are on the same scale, but maybe they're not; maybe you care about some control inputs more than others, and so on. So we can expand this definition. I'll write down another proposal that has the same general form but introduces two weighting terms — two matrices: J of x tilde at time zero equals the integral from zero to infinity of x tilde transpose Q x tilde plus u tilde transpose R u tilde, dt. Here Q is an n by n matrix, where n is the dimension of the state space, and — with little m the dimension of the control-input space — R is an m by m matrix. If you check the matrix multiplications: we have a 1 by n times n by n times n by 1, which is a 1 by 1 scalar, and similarly over here, 1 by m times m by m times m by 1, another scalar. These Q and R matrices are user-defined, and they have to satisfy some constraints to be meaningful: Q we're going to assume is positive semi-definite, R is positive definite — I'll remind you what that means — and both are also symmetric matrices, so Q transpose equals Q and R transpose equals R. Just as a reminder — hopefully something you saw in your linear algebra course — a symmetric positive semi-definite matrix (usually abbreviated PSD) is one that satisfies z transpose Q z greater than or equal to zero for all z — let me write z, to prevent confusion with the state. So a matrix Q is positive semi-definite if this condition holds: for any z, z transpose Q z is a scalar value, and if it's non-negative for all z — and the matrix is symmetric; we're assuming it's also symmetric — you call that a positive semi-definite matrix. Positive definite is when the inequality is strict: strictly greater than zero for all z not equal to zero. That's one definition; it turns out to be equivalent to looking at the eigenvalues of the matrix: if the eigenvalues are all strictly positive and the matrix is symmetric, that's equivalent to positive definiteness. Why do we need this? The justification for requiring these matrices to be positive semi-definite or positive definite is that we want to think of this function as a penalty. Our job as the feedback-controller designer will be to come up with a feedback controller that minimizes this quantity, so if these terms are non-negative — which is what the condition implies — then we're minimizing a non-negative quantity, which makes sense: we want to bring it as close to zero as possible. If we allowed it to be negative, it would act like a reward — we'd be rewarding the controller for deviating from the nominal state — which is not what we want. So that's the justification. Let me pause here for a minute — quite a number of things introduced. Any questions on positive semi-definiteness, or the justification for this performance objective, or anything? Okay. Let me just mention that in practice — I'll say a bit more about this later — the most common choice is to just
take the Q and R matrices to be diagonal, and if you write out the product, all we're doing then is weighting the different components of x tilde and u tilde — that captures the situation of putting a different weight on each component. This general form is the more general one, and the solutions we'll write down work in this general case, but in practice, LQR controllers for real systems almost always use a diagonal choice for Q and R. Okay, so what is our goal then? We're trying to develop a methodology for coming up with a feedback controller that achieves two things simultaneously: one, it globally stabilizes our linear system; two, it optimizes — specifically, minimizes — the performance metric I've defined over here. And we want to do this for every initial state: no matter where your drone starts off, we want the feedback controller that makes this quantity as small as it can possibly be. This is a pretty strong thing to ask for, but it turns out that, I guess magically, there's a pretty clean solution for linear systems that does both of these things simultaneously. Okay, let me talk about what that methodology is. The name of the technique is LQR, the linear quadratic regulator, and hopefully you see where the name comes from: linear because the dynamics are assumed to be linear, quadratic because the performance objective is quadratic in the state and control input. I'm not going to go through the derivation — I'll basically just quote the result — but the main claim is that we're going to be able to find a feedback controller with the form we've seen before: u as a function of x is the nominal control input plus K — I'll give it a little star here, K star — times x minus x0. So the controller
that we're going to come up with will have this form, and the star on K star is to remind us that this is an optimal controller — optimal in the sense that this feedback controller minimizes the performance criterion we defined, from every initial condition. If you haven't seen this before, it should probably blow your mind: it's amazing that such a thing exists — a linear feedback controller, in this simple form, such that no matter where you start off, it takes control inputs, takes actions, that make this quantity as small as it could possibly be. There's no other feedback controller that's better, in the sense that no one can make this quantity smaller than what this controller achieves, from any initial state. Okay, let me describe the process. I won't go through the derivation; I'll just tell you the procedure for finding this K star with the optimality property. It's a two-step procedure for finding this sort of magical gain matrix that optimizes the performance criterion. Step one is to solve the following matrix equation — I'll just write it down, and we'll discuss how exactly to solve it in a couple of minutes: 0 = Q + A transpose S + S A minus S B R inverse B transpose S. The matrices here: the Q matrix is exactly that Q matrix from the problem definition — we as the user specify the Q and R matrices that say how we want to penalize deviations in the different components of the state and control input, so Q and R are given by the user, by us. The A and B matrices are the matrices that define the linear dynamics. So the only unknown here is S; this is an n by n matrix, and you can check offline that the dimensions match up: the zero here is also a matrix, an n by n zero matrix, and Q
is an n by n matrix, so at least that checks out. The only unknown matrix here is S. I'll say exactly how to solve this equation in a little bit, but suppose somehow we can find some S that satisfies this matrix equation; we'll call the solution S star — denote it with a star. And just to give the thing a name: this matrix equation is called the algebraic Riccati equation, or just the Riccati equation for short. All right, let me write down step two. Step two is simpler: I'm going to define something I'll call K star, equal to minus R inverse times B transpose times S star, and this is just matrix multiplication — everything here is known, right? Assuming we've done step one and found some S star satisfying the Riccati equation, we can just plug it in: multiply by minus R inverse — R, again, is that matrix from the objective — then B transpose, which comes from the dynamics, then S star. We give that a name, K star, and our feedback controller is going to be: u, thought of first as a function of x tilde, is defined to be K star times x tilde at time t. We can expand this out: u tilde, by definition, is u minus u0, and x tilde, by definition, is x minus x0, so u tilde equals u minus u0 — bringing the u0 to the other side — gives u as a function of x equals u0 plus K star times x minus x0.
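The two-step procedure above can be sketched in a few lines of Python using SciPy's algebraic-Riccati-equation solver. The double-integrator system and the particular Q and R below are illustrative assumptions, not the drone model from the lab:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed toy system: a double integrator,
# x = [position, velocity], u = force.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # weight on state deviations (user-chosen)
R = np.array([[1.0]])  # weight on control deviations (user-chosen)

# Step 1: solve the algebraic Riccati equation
#   0 = Q + A^T S + S A - S B R^{-1} B^T S   for S*.
S = solve_continuous_are(A, B, Q, R)

# Step 2: K* = -R^{-1} B^T S*
# (lecture sign convention: u_tilde = K* x_tilde).
K = -np.linalg.inv(R) @ B.T @ S

# Sanity check: the closed-loop dynamics A + B K* should be stable,
# i.e. all eigenvalues have negative real part.
closed_loop_eigs = np.linalg.eigvals(A + B @ K)
```

Note the sign convention: `solve_continuous_are` returns S star itself, and the minus sign is applied explicitly here when forming K star, so the closed loop is A plus B K star.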
Right, so these are just equivalent — I've just expanded out the u tilde and x tilde to get to this. Okay, what I want to do is briefly recap the steps. I haven't yet talked about how we solve this equation, but just imagine we can: we get S star, we plug it in to compute K star, and that's our gain matrix, and we end up with this specific feedback controller that depends on four things: A and B, the dynamics, and Q and R, the matrices that define our objective. What I want to do is make sure the claim I'm making is clear — I haven't proved this claim, and the proof takes a lot of work, so I won't go through it here — but I want the claim itself to be clear. What I'm claiming is: if you go through this procedure, you find this feedback controller with this magical property that no matter what initial condition you start your drone — or whatever robot system — off at, at least for this linearized system, applying this feedback controller minimizes the performance criterion we defined, and there's no other, better feedback controller that anyone could come up with that makes this quantity lower. Questions? Yes — yeah, the A and B matrices are the ones that define the linear dynamics. Another question — yes, good: I'll spend a few minutes later talking about more intuition for how to specify those; for now just think of them as matrices that tell you how you want to weigh deviations in different dimensions. Let me give a quick explanation here. If we choose Q to be diagonal, and we multiply out x tilde transpose Q x tilde, what we get is q11 times x tilde 1 squared, plus q22 times x tilde 2 squared, and so on, up to qnn times x tilde n squared. And so this
matrix — the diagonal elements of this matrix — specifies the weights for how much I care about the first component of the state deviating from the first component of the reference state, the second component deviating from the second component, and so on. If I make q11 large, I'm basically saying: try to make x tilde 1 squared small — don't deviate too much in the first component of the state. That's roughly the intuition for what they mean; exactly how to choose them, I'll spend more time talking about. All right, other questions? Okay, is the claim clear — what I'm saying this feedback controller does? Anyone not clear on this, I'm happy to explain. All right. Okay, so a couple of comments on this. The first one: how do we go about doing step one? We have this matrix equation to solve. I won't go into the details; I'll just give you the process. There are built-in Python — or Matlab; you won't use Matlab in this course, but just in case — functions for solving this matrix equation, and then for doing step two as well. Essentially there's a function you can call, like lqr, and you give it four inputs: A and B, the matrices defining the dynamics, and Q and R, the matrices that define our objective, and it solves the Riccati equation for you and gives you the gain K star as well. Under the hood there are numerical techniques that go into solving this. The thing that makes it tricky is that the equation is nonlinear in S: if you look at the form, there's a quadratic dependence on S — there's an S and then another S in the same term — so this is not just some linear matrix equation you can solve easily, but there are nice numerical techniques for it that are implemented in Matlab and Python. Yeah, there's a bunch of
research on how exactly to solve this matrix equation — I think back in the '60s or so there was one paper that basically solved it, and my understanding is that no one really did much research after that because the numerical techniques are so nice. Okay, so that's the process we'll use: calling the built-in Matlab or Python functions. One note as a warning: Python — and I think Matlab as well — uses a slightly different sign convention for their controllers. Their convention is u tilde equals minus K times x tilde, so there's a minus sign over there; in the convention we're going to be using there is no minus sign, and the output from the built-in Python function assumes their form, so you just have to take the K that you get from the Python function and negate it. I'm mentioning this because if you don't flip the sign, that's going to make your controller unstable, which is not what you want. We'll remind you in the assignment, but beyond the course, if you're using these functions, just remember that the sign convention is important. Okay, one more thing I want to mention — I'll leave this up here. So far I've presented this as a kind of black-box technique: you solve this matrix Riccati equation, you get K star in this particular way. But there's actually a nice interpretation of this matrix S star — the matrix that solves, or satisfies, the Riccati equation. It basically gives us the optimal cost. Actually, sorry, there was one more piece first:
deviations in the normal sorry let me saved all the uh the control input um yeah so it turns out that the solution to the regard equation is f star allows us to calculate the optimal cost for any initial fit so you could do it the way I was telling you before like you take your controller uh you've started off you start off your system from some initial condition you run it for like all time like zero to infinity and then you do this integral but that's not uh at least the way I decided right now like to stimulate your system uh you have to simulate it for for all time um turns out there's a kind of shortcut which is to which comes from the solution to the Ducati equation so this is another claim I'm not going to prove it on the state the claim so the ga star so the optimal value of the cost function from this initial state is equal to uh X Fielder at Time Zero multiplied by F star multiplied by after that time zero so this Matrix has star basically it does the interval for you so you don't need to somehow figure out how to read that integral that's a claim I'm not I'm not proving it I believe it's true so you just take your initial condition you multiply that well transpose of that multiply that by F star and again by by the initial condition and this column is a scalar so s again is a n by and Matrix this is with the transporter so one by n this is a n by one so this is quantity on the right hand side is the cost that the controller this lqr controller achieves from this initial condition and that's the optimal cost as a capable so K star gives us the let's specifies the feedback controller so when we like actually run the like drone that's it we're going to program in that specific K star that we get by solving these two secular card equation and step two so there's going to be some actual like numbers so when you linearize the Dynamics for this specific drone you have a specific Like A and B uh you're bringing a specified specific q and R as well with that in a 
bit that's going to give us some specific numbers for k-star and that's the controller and we're going to run on that question marker yeah um yeah so this function so we're thinking of J as a function of the initial state the name for that function is cost function so I guess that's all I mean but yeah the reason it's called cost functioning is because the thing that it's measuring uh is like deviations of the state from the normal State and the control input from the normal controller question two questions yeah yes okay okay yeah so I haven't uh yeah that's a good observation so right at the beginning of the lecture I said we were going to do two things simultaneously so this lqr process is going to do two things simultaneously uh one which I kind of claimed explicitly is that it's going to minimize like optimize this performance objective performance checking in the previous the second thing I claimed was that it's also going to stabilize your non-linear sorry your linear system uh and that basically implies that the integral is always well defined so if you are if you're a feedback controller was starting off here and then blowing up like not antibiotically stabilizing systems and this integral could be infinite but yeah if your system is stabilizable uh then the this like process as your process will find this feedback controller that will stabilize your system in addition to minimizing the cost function second one is uh okay yeah yes uh good question um so I won't talk about that uh here um but uh yeah the idea of a discount function is maybe I care about things that happen soon in the future and I don't care too much about things that happen like far away in the future there are ways to modify this to take that kind of discounting into account as well and you can still solve uh like the cell QR process and get a controller um yeah it's just slightly more complicated but you can good other question okay all right I think I yeah let me just State this second I 
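Since a scalar system's Riccati equation can be solved in closed form, the claim about J star is easy to sanity-check in a few lines of Python. The system and weights below are made-up illustrative numbers, not the drone's:

```python
import math

# Scalar LQR sanity check of the claim J*(x0) = x0' F* x0.
# System: x_dot = a*x + b*u, cost J = integral of (q*x^2 + r*u^2) dt.
a, b, q, r = 0.0, 1.0, 1.0, 1.0

# Step 1: solve the algebraic Riccati equation 2*a*f - (b**2/r)*f**2 + q = 0
# for its positive root (closed form in the scalar case).
f_star = r * (a + math.sqrt(a**2 + b**2 * q / r)) / b**2

# Step 2: the optimal feedback gain, so that u = k_star * x.
k_star = -b * f_star / r

# Check the claim numerically: simulate the closed loop from x0 and
# accumulate the running cost with forward Euler over a long horizon.
x0, dt, T = 2.0, 1e-3, 20.0
x, cost = x0, 0.0
for _ in range(int(T / dt)):
    u = k_star * x
    cost += (q * x**2 + r * u**2) * dt
    x += (a * x + b * u) * dt

predicted = x0**2 * f_star        # the shortcut: no integral needed
print(f_star, predicted, cost)    # f_star = 1.0, both costs ~ 4.0
```

Simulating the closed loop out to a long finite horizon recovers the same number as the x0-transpose-F-star-x0 shortcut, up to integration error, which is exactly what the claim says.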
kind of said it verbally, but let me state it a bit more formally. Hopefully the "optimizing the performance objective" part is clear, but let me say the second part explicitly: the K star that we get from this LQR process stabilizes, and by stabilizes I mean globally asymptotically stabilizes, the linear system. So we can look at the closed-loop system. The original linear system was x dot equals A times (x minus x0) plus B times (u minus u0), or equivalently x tilde dot equals A x tilde plus B u tilde. If we plug in this feedback controller, and I'll do it for the tilde version, we get x tilde dot equals A x tilde plus B u tilde, where u tilde is K star x tilde. Combining terms: x tilde dot equals (A plus B K star) x tilde. This is our closed-loop system, the dynamics we get when we plug in u tilde star equals K star x tilde from the LQR process. What I'm claiming is that, assuming your system is stabilizable (if it's not stabilizable in the first place, there's nothing you can do to stabilize it), the K star matrix you get from LQR makes the closed-loop dynamics globally asymptotically stable. And again, I'll emphasize: this is for the linear system. We haven't said anything yet about the nonlinear system; we've just linearized our nonlinear system, and everything so far is for the linear system. Okay, any questions on the stability claim? All right, and I'll throw in a caveat again: I haven't proved any of this. I'm claiming a bunch of stuff and you're taking my word for it. If you're interested, the references go through some of the proofs; the proofs of all of these things would probably take multiple lectures if you wanted to see them.
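The stability of a closed loop x tilde dot = (A + B K) x tilde can be checked for any particular gain by looking at the eigenvalues of A + B K. As a dependency-free sketch on a 2-by-2 example (a double integrator with a hand-picked stabilizing gain, not an LQR gain): for a 2-by-2 matrix, "all eigenvalues in the open left half-plane" reduces to trace negative and determinant positive.

```python
# Closed-loop stability check for x_tilde_dot = (A + B K) x_tilde.
# Double integrator as a stand-in for one axis of the drone (illustrative).
A = [[0.0, 1.0],
     [0.0, 0.0]]
B = [[0.0],
     [1.0]]
K = [[-1.0, -1.0]]   # an example stabilizing gain (hand-picked, not LQR)

def closed_loop(A, B, K):
    """Return A + B K for these 2x2 / 2x1 / 1x2 shapes."""
    return [[A[i][j] + B[i][0] * K[0][j] for j in range(2)] for i in range(2)]

def is_hurwitz_2x2(M):
    """True iff both eigenvalues of the 2x2 matrix M have negative real part."""
    trace = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return trace < 0 and det > 0

Acl = closed_loop(A, B, K)
print(is_hurwitz_2x2(A))    # False: the open loop is not asymptotically stable
print(is_hurwitz_2x2(Acl))  # True: the closed loop is
```

For the real 12-state drone you would instead check `numpy.linalg.eigvals(A + B @ K)` and verify every real part is negative; the 2-by-2 trace/determinant test is just the smallest instance of the same idea.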
Okay. So let me come back to the question someone at the back asked: how do we actually choose these Q and R matrices? I just said they're user defined and gave you some intuition for what they mean, but how does it work in practice, and how is it going to work for the first lab? In theory, any choice of Q and R, as long as they're positive definite (let's say both positive definite for the sake of simplicity), will stabilize, globally asymptotically stabilize, the linear system. That's a theoretical fact: no matter what Q and R I pick, the LQR process gives you something that stabilizes the linear system. In practice it's not so simple, and that's kind of the point of the first lab: to see that practice is not as simple as the theory might make it seem.

In practice there's a whole bunch of factors that make things non-ideal. The first is that the system is not linear: the drone dynamics we wrote down are nonlinear, so we're relying on the linearized system being a good approximation of the nonlinear dynamics. It should be, in some region around the hovering state, but if you deviate too far, the first-order Taylor approximation is not necessarily going to be good. So that's one source of non-ideality. Another is that there are imperfections in our dynamics model. In the assignment you're doing now, you're calculating the thrust coefficient and the moment coefficient; those are empirically estimated. We did some experiments, putting the drone on a thrust stand, and fit those coefficients from data, so they're approximate: they don't exactly describe the full dynamics of the propellers and so on. So there's some model mismatch. And then there's also some state estimation error. To implement the LQR controller we're assuming that at every point in time the robot can exactly sense its state; the right-hand side over there requires the state x. That's not exactly true: there's an IMU, an inertial measurement unit, and a bunch of other sensors, like a downward-facing camera and so on, that are estimating the state, the speeds, the orientation, and those estimates are not exactly the physical state of the actual quadrotor. So there are estimation errors.

In practice, then, what you'll find is that some choices of the Q and R matrices are better than others, and that's basically what you're going to explore in the first hardware assignment. We've already linearized the dynamics of the drone; that part will be released by midnight tomorrow, so you'll have the A and B matrices. Then, in the first lab, you're going to tune the Q and R matrices. We'll give you some poorly tuned Q and R matrices that won't make the drone crash completely, but won't really stabilize it to a hover either, and your job is basically going to be to choose Q and R matrices that result in good performance on the physical drone. So I just want to give you some intuition for how that tuning process might work; that's why I kept these expressions up over here. In practice you're just going to choose diagonal Q and R matrices.

Let's look at the Q part for a bit. Imagine you've chosen some specific values for the Q matrix, maybe just the defaults we gave you, but then you see the drone drifting in z. I forget exactly what the target height is, maybe a meter or half a meter off the ground, but the drone needs to stabilize to it. Imagine it just keeps drifting up and up, and maybe it stops somewhere way above the one-meter height you wanted it to stabilize at. How would you go about updating the Q matrix? The Q matrix, again, is 12 by 12, and we're choosing it to be diagonal. Any thoughts on what you would try next? Yeah, exactly: increase the weight for z. And which component is that? Yeah, the third component, the (3,3) entry. If you remember the state vector for the 3D drone, it's x, y, z, roll, pitch, yaw, then x dot, y dot, z dot, then the angular velocity components. So the diagonal runs q11, q22, q33, all the way up to q12,12; you take the third element and crank it up, maybe by a factor of 10 or so, and see what that does. You should see that it stops exhibiting that drifting behavior, because it now penalizes the height error much more. But suppose that once you do this, you try it on the hardware and you start seeing the drone do this: it just oscillates. It's doing a reasonable job on average of staying around one meter above the ground, but it's oscillating. What's another thing you might try next, what update to the Q matrix? Yeah, good. Not quite acceleration, since the state vector only has the speeds, but that's exactly right: you penalize the z dot component of the state. The intuition is from the previous lecture: if you go back to the notes, when we had just the proportional controller, a feedback controller that depends only on the z position and not on z dot, there's no damping. It's like a spring-mass oscillator without any damping, so you add a term that penalizes z dot as well.
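For the double integrator (a rough stand-in for the drone's altitude dynamics z, z dot, not the actual 12-state drone model), the Riccati equation has a closed-form solution, so the effect of the Q and R weights on the gains can be seen directly. A sketch, with the cost q_pos·z² + q_vel·ż² + r·u² and feedback u = −K x:

```python
import math

# Closed-form LQR gains for the double integrator z_ddot = u with
# Q = diag(q_pos, q_vel) and R = [r]; solving the Riccati equation by
# hand for this system gives
#   K = [ sqrt(q_pos / r),  sqrt(q_vel / r + 2 * sqrt(q_pos / r)) ].
def lqr_gains(q_pos, q_vel, r):
    k_pos = math.sqrt(q_pos / r)
    k_vel = math.sqrt(q_vel / r + 2.0 * math.sqrt(q_pos / r))
    return k_pos, k_vel

# No velocity weight: some damping still appears, but not much.
print(lqr_gains(1.0, 0.0, 1.0))   # (1.0, ~1.414)

# Cranking up the velocity weight adds damping, as in the tuning story:
print(lqr_gains(1.0, 2.0, 1.0))   # (1.0, 2.0)

# Reducing r (cheaper control) raises both gains: the controller works harder.
k0 = lqr_gains(1.0, 1.0, 1.0)
k1 = lqr_gains(1.0, 1.0, 0.1)
print(k1[0] > k0[0] and k1[1] > k0[1])  # True
```

Increasing `q_vel` is exactly the "add damping" move described above, and shrinking `r` is the "bring R down" move: every gain grows, so the controller responds more aggressively.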
Penalizing z dot basically adds a damping term that damps the oscillation out. Writing the state down again, x bar is x, y, z, roll, pitch, yaw, then x dot, y dot, z dot, and then p, q, r, so it's the ninth component that's the z dot component; you crank that up. And that's the intuition: you watch roughly how the drone is behaving, you figure out what you should penalize more, and you penalize it. For the control input part, the R part, you can think about it the same way. Say the drone isn't really doing much of anything: it's deviating far from the state you want it to be at and not tracking anything particularly well. What would you do then, modifying the R matrix? Yeah: you reduce the control penalties, you bring R down, say divide R by a factor of ten. So that's going to be the process, and the goal is to get familiar with the fact that the theory with the linearized system doesn't quite match reality, because of all these factors and others. This is a bit of an art form: you'll see that it takes some tuning, and the tuning builds on the intuition for what LQR is doing. Okay, let me switch to the slides just to give you a sense of what's possible with LQR; I have a bunch of videos of LQR working in practice on different systems.

Anyway, let me also verbally describe the lab logistics. We have two lab spaces reserved for this course: one is G105, in the E-Quad, and the other is ACE012 in the Andlinger Center. Those are both on the slides, and I'll maybe send a Canvas announcement as well. They have netted arenas: the Andlinger space has two netted arenas, and the G105 space has one. In G105 we have four desks reserved for this course; they're on the left part of the lab, and it'll be pretty obvious which ones they are, since they're the ones that are clean. The other desks have equipment for other courses, so please use the four desks reserved for this course. Everyone should have access to both spaces. You might just have to go to a hotspot to update your TigerCard, but as soon as you do that you should be all set, and as long as you've completed the online safety trainings you should have access to the lab spaces. There are hours on the lab spaces: you and your team can go in whenever you want for the assignments, technically up to about 1 a.m.; from 1 a.m. to six or seven a.m. you're not supposed to use them, and from 7 a.m. onwards you can use them again. You'll do the assignments in teams, so go to the lab space with your team; in the Andlinger space I think around five or six teams can work relatively comfortably. There's no sign-up sheet. We tried a sign-up sheet the very first year we offered this course and found it was a mess: people were signing up five minutes before a time slot and then showing up and saying hey, we signed up, even though it was just five minutes before. So there's a kind of organic model we found works better: just show up, and if multiple teams are working at the same time, share the space. With this LQR lab it's an iterative process: you'll tune the controllers, go try them out on the actual system, think some more, and while you're thinking you can let another team try out their controller gains on the hardware. So that's roughly the process we'll follow. We've described exactly what you submit in the assignment: basically a video and some logs showing that you've successfully made the drone hover pretty well. All right, questions on this? After the semester? No, just for the semester. Other questions? Okay, I'll see you on Thursday.
Introduction_to_Robotics_Princeton
Lecture_4_Princeton_Introduction_to_Robotics_Stability_and_PD_Control.txt
All right, I think we can go ahead and get started. I'll try to end today's lecture a few minutes early, maybe five to ten minutes early, to hand out drones at the end, but for now we're just going to continue with the technical material we started in the previous lecture. To remind you where we left off: in the previous lecture we essentially completely discussed the equations of motion for the 3D quadrotor. If you're interested in going through all the equations in more detail, maybe understanding the physical meaning of all the terms that appear, I posted a reference on Canvas. This is not mandatory reading, just in case you're interested in the details; the material in that paper up to section 16.2.5 is relevant to what we're discussing right now in the course. One slight note: they use a different Euler angle convention. As I mentioned, in this course we're always going to use the space 1-2-3 convention, just for convenience. They use a different convention, so if you look at the actual form of the trigonometric expressions in the dynamics, they'll be slightly different, but the general form is exactly the same.

And the general form of the nonlinear dynamics we end up with when we analyze the equations of motion for a robotic or mechanical system looks like this: x dot equals f of x, u. x again is the state vector, u is the control input vector, and this is a set of first-order differential equations obtained by converting the second-order differential equations, which come from F equals m a, to first-order form, as we did a couple of lectures ago. Then, what we talked about in the last lecture is how to linearize the dynamics: linearizing the nonlinear dynamics about a nominal or reference point, really a nominal state and control input. We said that if we take dynamics of this form, we can linearize them about some x0 and some u0 (hover, for instance, if we're looking at that problem), and we get equations that look like this: x dot equals A times (x minus x0) plus B times (u minus u0), where x0 is the reference state, u0 is the reference control input, and the A and B matrices are matrices of partial derivatives evaluated at the reference state and control input. We went through the planar quadrotor and analytically computed these A and B matrices. In general it's a little tedious to do this; you'll go through one more example in the assignment that went out yesterday and calculate A and B matrices for a particular system, but in general we'll be using Python to calculate these matrices, doing the linearization in an automatic way. Finally, we started a discussion of feedback control, and we said that the goal of feedback control theory is to find a feedback controller, or control law (it goes by many different names), which is basically a function from state to control input, that achieves some desired behavior. We left it a little vague what "some desired behavior" means; that's something we're going to formalize in this lecture. But intuitively, if you're trying to make the drone hover, we want initial states that are not exactly at hover to approach the hovering configuration. One of the most common options for the functional form of a feedback controller is the following: u as a function of x, the control input as a function of state, equals the nominal (reference) control input plus some matrix K times (x minus x0).
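The "automatic" linearization mentioned above can be sketched with central finite differences. Here it's applied to a pendulum with unit constants (a simple stand-in system rather than the quadrotor, just to keep the check short):

```python
import math

# Numerical linearization of x_dot = f(x, u) about (x0, u0):
#   x_dot ~ A (x - x0) + B (u - u0),  A = df/dx, B = df/du.
# Stand-in system: pendulum theta_ddot = -sin(theta) + u (unit constants).
def f(x, u):
    return [x[1], -math.sin(x[0]) + u[0]]

def linearize(f, x0, u0, h=1e-5):
    """Central-difference Jacobians A = df/dx and B = df/du at (x0, u0)."""
    n, m = len(x0), len(u0)
    A = [[0.0] * n for _ in range(n)]
    B = [[0.0] * m for _ in range(n)]
    for j in range(n):
        xp = list(x0); xp[j] += h
        xm = list(x0); xm[j] -= h
        fp, fm = f(xp, u0), f(xm, u0)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * h)
    for j in range(m):
        up = list(u0); up[j] += h
        um = list(u0); um[j] -= h
        fp, fm = f(x0, up), f(x0, um)
        for i in range(n):
            B[i][j] = (fp[i] - fm[i]) / (2 * h)
    return A, B

# Linearize about the upright configuration (theta = pi, u = 0):
A, B = linearize(f, [math.pi, 0.0], [0.0])
print(A)  # ~ [[0, 1], [1, 0]]  (the upright pendulum is unstable)
print(B)  # ~ [[0], [1]]
```

In the course tooling this would typically be done with symbolic or automatic differentiation rather than finite differences, but the idea, evaluating the partial derivatives of f at the reference state and input, is the same.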
This is basically saying: compare the state your robot is actually in right now with the state you want to be in, look at that difference, multiply it by some matrix, and then add the nominal control input. So if x equals x0, just apply the nominal control input; if it doesn't equal x0, add an extra correction term. I'll say much more about this form later in today's lecture, but this is just an example of what a controller might look like. Finally, we ended the lecture with a discussion of the benefits, or potential benefits, of feedback control, and the main point there was that it allows you to deal with uncertainty, many different kinds of uncertainty. It probably wasn't obvious exactly how feedback control lets you deal with all the different kinds of uncertainty we discussed at the end of the lecture, but we'll start making some of that discussion more formal today.

Okay, so that's where we left off. The goal for today is to continue our discussion of control and try to formalize exactly what we want from a good feedback controller: what exactly are we trying to do with feedback control? We'll start with some important basic definitions. The first is what's known as a fixed point, also known as an equilibrium point. Let me write out the definition: a fixed point is a state x0, for a dynamical system whose equations of motion are given by x dot equals f of x, u, such that x dot, evaluated at x0 and some u0, equals zero, for some choice of u0. That's the formal definition, and this is the most important part: a fixed point is some state, which we're calling x0, such that there exists some control input, which we're calling u0, such that if you apply that control input at that state, nothing happens. "Nothing happens" means the time derivative of the state is zero. So intuitively, a fixed point is a state where you'll just remain at that state if you apply the corresponding control input. An example is what we discussed in the previous lecture: the hovering state and control input about which we linearized our dynamics for the planar quadrotor is a fixed point.

So here's a question, maybe just to test our understanding. Say we're looking at the planar quadrotor: can we have a fixed point of the following form? Say I told you I have a fixed point that looks like this: x0, y0, theta0, then x0 dot, y0 dot, theta0 dot, where these last three components, the fourth, fifth, and sixth, are nonzero. I can even make it slightly cleaner by zeroing out the first three components. Again, this is specifically for the planar quadrotor: can we have a fixed point of this form, is it possible or not? No. Why not? Yeah, exactly right. If we look at the equations of motion for the planar quadrotor (maybe refer back to the previous lecture), when we write down x dot equals f of x, u, the first three components of the time derivative of the state are the time derivatives of x, y, and theta: x dot, y dot, theta dot. If any of these are nonzero, then x dot is nonzero, and that contradicts the definition: the definition of a fixed point says that all the components of x dot, the whole right-hand side, are zero. Intuitively, the physical reason is that this kind of state has a nonzero velocity, nonzero x dot, y dot, or theta dot, and a fixed point is one where, if you start off there, you stay there. If you have some nonzero speed, then by definition you're not going to stay there. That's the intuitive reason you cannot have a fixed point of this form. Any questions on this definition or example? Yes? Right, so this one is for the planar quadrotor, and for the planar quadrotor the control input doesn't appear at all in the first three components. Typically, for a mechanical system, the things you control are related to the second derivatives; physically, the reason is that we apply forces or torques, which affect accelerations, which are second derivatives. You could cook up systems where you have a control input on the speeds as well, and in that case, yes, but at least for the planar quadrotor you can't. Good, another question? Okay.

All right, so that's the first definition: fixed point. Now let's talk a little about what we're trying to achieve with feedback control. Say we have some particular feedback controller; again, this is just a function from state to control input. We're going to say that we have some closed-loop dynamics, and specifically these are the dynamics we get if we apply this controller. If we have x dot equals f of x, u, and u is a function of x, then this whole thing is now just a function of x: the first argument is x, and the second argument is u, which is itself a function of x. So I can rewrite this as f_cl of x, where cl stands for closed loop. Intuitively, these are the equations of motion, the dynamics, you get if you apply this feedback controller. All right, so what do we want from feedback control? Intuitively, I think what we want is some kind of stability.
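Before moving on, the fixed-point definition and the in-class question above can be checked numerically. This sketch uses one common convention for the planar-quadrotor equations of motion, with made-up parameter values; sign conventions and parameter names vary between references:

```python
import math

# Fixed-point check for the planar quadrotor: f(x0, u0) = 0.
# State x = [x, y, theta, x_dot, y_dot, theta_dot]; inputs u = [u1, u2]
# are the two rotor thrusts.  m, g, L, I are illustrative values.
m, g, L, I = 1.0, 9.81, 0.25, 0.01

def f(x, u):
    _, _, theta, xdot, ydot, thetadot = x
    u1, u2 = u
    return [xdot,
            ydot,
            thetadot,
            -(u1 + u2) * math.sin(theta) / m,   # horizontal acceleration
            (u1 + u2) * math.cos(theta) / m - g,  # vertical acceleration
            L * (u1 - u2) / I]                  # angular acceleration

# Candidate fixed point: at rest, level, each rotor carrying half the weight.
x0 = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
u0 = [m * g / 2, m * g / 2]
print(max(abs(v) for v in f(x0, u0)))  # 0.0: a genuine fixed point

# The state from the in-class question, with nonzero velocities, is NOT
# a fixed point for ANY input: the first three components can't vanish,
# since no control input appears in them.
x_bad = [0.0, 0.0, 0.0, 1.0, 0.5, 0.1]
print(f(x_bad, u0)[:3])  # [1.0, 0.5, 0.1], not [0, 0, 0]
```

The second check makes the lecturer's argument concrete: the first three components of x dot are just the velocities, so no choice of u can zero them out when the velocities are nonzero.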
At least, this is one thing we might want. If we're trying to make the drone hover, in some sense we're trying to make it stable about the hover state: we want to make sure it doesn't fly away from the state we want it to be in. So what we're going to do is formally define exactly what we mean by stability, and hence what we want, or might want, from a feedback controller. It turns out there are many different definitions of stability, many different ways you could formalize exactly what stability means. I'll go through some basic definitions and maybe point you to some others; there's no one right definition, it just depends on exactly what we're trying to achieve.

The first kind of stability I'm going to define is what's known as attractivity, and I'll put a qualifier on it: what I'm going to define is global attractivity. I think this is easiest to explain with a picture, and in some sense it's the most intuitive definition of what one might mean by stability. Just for the sake of drawing pictures, I'm going to assume the state is two-dimensional: x1 and x2. You can think of x1 as some kind of position and x2 as a speed; it doesn't matter, these are just abstract pictures. Global attractivity says that no matter what initial condition, what initial state, my system starts off in, it's going to asymptotically approach some reference point. For these pictures, imagine that the reference point we care about, maybe the hovering configuration for the drone, is the origin. So no matter where we start, whatever initial condition we start in, we asymptotically converge to our desired reference, say a fixed point, which in this picture is the origin. That's the picture; let me also write it down more formally. For any initial state x of zero (the parenthesis notation means "at time zero"), the limit, as time goes to infinity, of the norm of x of t minus the desired reference state x0 equals zero. The norm here is a scalar: x of t is the state at time t, and x0 is the desired reference state; for the quadrotor, think of it as the hovering state. Hopefully the mathematical definition lines up with the picture: no matter what initial condition you start in, at time t you're at some state x of t, and if you look at the norm of the difference between that state and the state you want to be in, that distance goes to zero as time goes to infinity.

Any questions on this definition or picture? Maybe let me ask one: does this seem like a reasonable thing to ask for from a feedback controller, that it makes the closed-loop dynamics globally attractive? Go ahead. Okay, there might be uncertainties; there could be. You're saying it might not be possible, because there might be forces that make it so you cannot come up with such a controller. That's definitely possible. Go ahead. Yeah, the fact that we're looking at time going to infinity is a little weird, right? This might take a billion years: we're saying that in a billion years your quadrotor is going to get close to hover, and that would not be useful. So the "time goes to infinity" part is maybe not super satisfactory. Other thoughts? Yes. Okay, so there's no constraint here: for any initial condition, if your drone is traveling at 0.99 times the speed of light, we still want it to come back to the hover configuration, and that's not practically reasonable. So this definition has some practical issues, and we're going to propose a different one, which we'll call local attractivity, fixing that last point by putting a constraint on the set of initial conditions from which the controller must stabilize the system.

Here's the picture; again, imagine things are two-dimensional for the sake of pictures. We say there's some ball of radius r around the desired state in state space, such that if your initial condition is within that ball, then you're required to asymptotically approach the desired state. No matter where you start within this ball of radius r, you asymptotically approach the desired state; if you start outside, you're allowed to do whatever. Maybe you go off to infinity, maybe not; maybe there are some initial conditions outside from which you still converge, but you're not required to if your initial condition starts outside the ball of radius r. So we're just constraining the initial conditions we want to stabilize to be within some ball of radius r. Formally: for any initial state x of zero such that the norm of x of zero minus the desired state x0 is less than or equal to r, we have the same condition I wrote for global attractivity, namely that the limit, as time goes to infinity, of the norm of x of t minus x0 equals zero. Essentially the same definition, just with a constraint on the set of initial conditions required to asymptotically approach the reference point. Any questions on this? Yes. Yeah, that's exactly right: local attractivity is weaker than global attractivity, in the sense that one implies the other, global implies local. Actually, hold on, let me be slightly careful; I was going to say the global one implies local, but let me double-check whether there could be counterexamples, trajectories that leave a ball and come back. It's at least intuitively weaker; let me think about it. I guess that probably doesn't fully answer your question, but it addresses it. Okay.
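The convergence in these attractivity definitions can be checked numerically for a concrete closed loop. Here a double integrator with an illustrative stabilizing feedback (the same hand-picked example gain as before, not a drone controller), simulated from a few initial conditions inside a ball of radius 2 around the desired state at the origin:

```python
# Numerically checking attractivity: simulate the closed loop
#   pos_dot = vel,  vel_dot = -pos - vel   (i.e. A + B K for K = [-1, -1])
# from initial conditions in a ball of radius r = 2 and verify that
# ||x(t) - x0|| shrinks toward 0 (x0 = origin here).
def step(x, dt):
    pos, vel = x
    return [pos + vel * dt, vel + (-pos - vel) * dt]

def final_distance(x_init, T=30.0, dt=1e-3):
    """Distance from the origin after simulating for T seconds (forward Euler)."""
    x = list(x_init)
    for _ in range(int(T / dt)):
        x = step(x, dt)
    return (x[0]**2 + x[1]**2) ** 0.5

# A few initial conditions inside the ball of radius 2 around the origin:
for x_init in ([1.0, 0.0], [0.0, 2.0], [-1.0, 1.0]):
    print(final_distance(x_init))  # all tiny: trajectories approach x0
```

Of course, a finite simulation can only suggest, not prove, the limit-as-time-goes-to-infinity statement in the definition; proofs go through eigenvalue or Lyapunov-function arguments instead.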
could still stabilize, but not necessarily. Yes? — Yes, right: there's no stochasticity here, because we're just saying the equations of motion are of that form. It's just a differential equation — no uncertainty, no noise, anything like that. Okay. All right, so those are two definitions; let me give you one more. We're going to call this one stability in the sense of Lyapunov — usually abbreviated ISL, or sometimes just Lyapunov stability. Again, let me draw a picture first, since I think it's easier to explain that way, and then I'll write down the formal definition. The picture says: for any ball of states — again, this is our state space — of radius epsilon, we can find some ball of radius delta such that if our state starts off in the ball of radius delta, then it will remain within the ball of radius epsilon. Let me repeat that: for any ball of radius epsilon that I choose in the state space, I can find some other, smaller ball of radius delta such that anything that starts off in the ball of radius delta remains within the ball of radius epsilon — and I can do this for any epsilon. Okay, let me write down the formal definition, then I'll take questions. For every epsilon greater than zero, there exists some delta — which usually will be a function of epsilon; the value of delta will depend on the value of epsilon — such that we have this implication: if you start off in a ball of radius delta around the desired state, then the norm of x at time t minus x0 will be less than epsilon for all t greater than or equal to zero. All right, so hopefully this mathematical definition matches up with the picture I was drawing. The important thing here is that this condition needs to
hold for any epsilon greater than zero: whatever ball of radius epsilon we choose, we can find some ball of radius delta such that if you start off in the smaller ball, you remain in the larger one. Any questions on this definition? Go ahead. — Yeah, good question. The deltas — well, typically you want to determine them analytically; it's often really hard to prove that some nonlinear system is stable in the sense of Lyapunov. You could determine them numerically, by sampling, but I guess that wouldn't be a proof — there are entire papers dedicated to proving that some particular nonlinear system is stable in the sense of Lyapunov. Okay, so in some sense this is a weaker definition than local or global attractivity: we're not saying that things need to converge asymptotically to our desired state; it's just saying that things are not blowing up too much — that's the intuition. All right, so let's put some of these together and give them names. I'm going to define two other kinds of stability that just combine the definitions we introduced. The first is called global asymptotic stability: we say a system is globally asymptotically stable if two conditions are satisfied — first, the system is stable in the sense of Lyapunov, and second, it is globally attractive. If some system satisfies both of these, we call it globally asymptotically stable; we're just giving it a name for the sake of convenience. And the same thing with local asymptotic stability: if it satisfies two conditions — stable in the sense of Lyapunov, and locally attractive — then we call it locally asymptotically stable. — Question? Yes: it's saying that your trajectories are not going to blow up too much if you start off close enough to the desired state; that's the intuition. One exercise — ah,
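As noted in the discussion above, sampling trajectories cannot prove stability in the sense of Lyapunov, but it can at least sanity-check the epsilon–delta condition numerically. Here is a minimal sketch; the damped-oscillator dynamics and all the constants are my own illustrative choices, not from the lecture (for this particular system the norm is non-increasing, so delta = epsilon/2 works):

```python
import math

def simulate(x0, v0, dt=0.01, steps=4000):
    """Euler-integrate the damped oscillator x'' = -x - 0.5 x'
    and return the trajectory of states (x, v)."""
    x, v = x0, v0
    traj = [(x, v)]
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-x - 0.5 * v)
        traj.append((x, v))
    return traj

def norm(s):
    return math.hypot(s[0], s[1])

# epsilon-delta check around the equilibrium (0, 0):
# for epsilon = 1.0, try delta = 0.5 and sample starts on the delta-circle.
epsilon, delta = 1.0, 0.5
worst = 0.0
for k in range(12):
    a = 2 * math.pi * k / 12
    traj = simulate(delta * math.cos(a), delta * math.sin(a))
    worst = max(worst, max(norm(s) for s in traj))

print(worst < epsilon)   # sampled trajectories never leave the epsilon-ball
print(norm(traj[-1]))    # ...and they also decay toward 0 (attractivity)
```

Again, this only checks finitely many sampled initial conditions over a finite horizon — consistent with, not a proof of, Lyapunov stability.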
yes, question? — Yes, good question about delta: I wasn't super careful here; delta should be strictly greater than zero — a caveat I should have added. So delta is strictly larger than zero, and then it's not just one element, right — we're saying there's a ball of radius delta, and that ball could be super tiny, just not exactly zero. Good question. — Yes? Okay, that was going to be my next point. I won't go through it here, but the fact that we define things this way — saying global asymptotic stability requires both stability in the sense of Lyapunov and global attractivity — suggests that one does not imply the other. And indeed, global attractivity does not imply stability in the sense of Lyapunov. I think it's a useful exercise to, even just pictorially, find some two-dimensional example where that is the case — where you have global attractivity but not stability in the sense of Lyapunov. Intuitively, what happens is: even if you start off really close to the desired state, you have to go really far away before coming back in and approaching the desired state asymptotically. As an exercise, try to derive or draw something with that property. Good — other questions? Okay. All right, so we've defined global asymptotic stability and local asymptotic stability, and let me just explicitly clarify the confusion I initiated before: global asymptotic stability does imply local asymptotic stability. The first condition is identical, and global attractivity implies local attractivity as well, because the global version holds for any initial condition, while the local version only requires it for
initial conditions in some ball of radius r. So if you have global attractivity, then you also have local attractivity. In general, for nonlinear systems, the converse is not true: local does not imply global — you can find systems that are locally asymptotically stable but not globally asymptotically stable. But for linear systems — x dot equals A (x minus x0) plus B (u minus u0) — the two definitions are actually equivalent. Let me write it the other way: local asymptotic stability implies global asymptotic stability (and, as we said, global always implies local). So this is a special property of linear systems. It's not something I'll prove here — it will be on assignment two; the proof is actually not super difficult, just a couple of lines. — Question about the fixed point? Yes. In fact, when we were looking at the planar quadrotor in the previous lecture, the hovering configuration we chose — the state and control input about which we linearized the nonlinear planar quadrotor dynamics — is a fixed point even for the linear system. And we can see it just by inspection: if the state is equal to x0 and we apply u equals u0, then the A (x minus x0) term is zero and the B (u minus u0) term is zero as well, so x dot is zero. Good — another question? Okay. All right, so we've described what we want: we want our system to be stable in some sense — probably local asymptotic stability — and that's what we want from a feedback controller. There are other definitions of stability that are quite reasonable as well: get to some desired state in a finite amount of time, that's something reasonable one could ask for; or just don't blow up to infinity — if you start off in some ball of radius r, stay within that ball of radius r — that's another thing one could ask for. So there are different definitions of stability;
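For the linear case just mentioned, asymptotic stability can be read off the A matrix alone: all eigenvalues must have negative real part. For a 2×2 A this reduces to trace(A) < 0 and det(A) > 0 — the Routh–Hurwitz conditions, a standard fact that the lecture doesn't derive. A quick sketch, with example matrices of my own choosing:

```python
def asymptotically_stable_2x2(A):
    """For x' = A (x - x0), 2x2 case: all eigenvalues of A have
    negative real part iff trace(A) < 0 and det(A) > 0."""
    (a, b), (c, d) = A
    trace = a + d
    det = a * d - b * c
    return trace < 0 and det > 0

# Damped oscillator x'' = -x - 0.5 x' in first-order form: stable.
print(asymptotically_stable_2x2([[0.0, 1.0], [-1.0, -0.5]]))

# Undamped oscillator: purely imaginary eigenvalues (trace = 0), so it is
# Lyapunov stable but not asymptotically stable -- the check returns False.
print(asymptotically_stable_2x2([[0.0, 1.0], [-1.0, 0.0]]))
```

For larger systems one would compute the eigenvalues directly (e.g. with a numerical linear-algebra routine) rather than use this 2×2 shortcut.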
we won't deal with the other ones; we're primarily going to deal with local asymptotic stability and global asymptotic stability. Okay, so I guess the next thing we can ask is: is this even possible? We're saying we want to find some feedback controller that makes our system locally or globally asymptotically stable — but is that actually feasible? So here's an example, a two-dimensional system: d/dt of (x1, x2) — where (x1, x2) is the state — equals (x1, x2 + u). So the state is two-dimensional, the state space is R2, and the control input u is just a scalar, one-dimensional. So what do you think: is this system stabilizable, in the sense that I can find some feedback controller that makes the closed-loop system either locally or globally asymptotically stable? And I'll just point out that this is a linear system — it's linear in the state and the control — so by my claim over there, global and local are actually equivalent here. Can you see, maybe just by inspection, whether it's possible to stabilize this system — to find some feedback controller that makes the closed-loop dynamics locally or globally asymptotically stable? — Yeah, good. If we look at the two components: x1 dot, the time derivative of x1, equals x1; and the time derivative of x2 equals x2 plus u. These are completely decoupled, right? We just have two scalar states, x1 and x2, that basically have their own dynamics and don't affect each other — x2 doesn't appear in the first equation, and x1 doesn't appear in the second. So if we just look at the first component, x1 dot equals x1, this is just a scalar system. Let's say this point is x1 equals zero: if I look at some x1 over here, positive, then x1 dot is positive, so we're going to move in that direction; similarly if we start off
over here, even further out, we're going to move away even faster. And for negative x1, x1 dot is x1, which is negative, so we move in the negative direction, more and more negatively. So if we start off even slightly away from x1 equals zero, we just blow off to infinity, and there's nothing we can do — there's no control input in the first component, just by construction. There's no way we can affect what happens to the x1 component of the system. x2 we could stabilize, but that doesn't help with x1. Okay, does that make sense? Questions? Okay. All right: so even for linear systems, it may not be possible to stabilize. So we can define this notion of stabilizability — with respect to (wrt) some desired state x0. The condition is that any initial state x(0) can be asymptotically driven to x0 by choosing some feedback controller. And what I described over here is really global stabilizability; you can define local stabilizability just by restricting to initial conditions in some ball of radius r around your desired state. This formally captures what the issue was in the example: that system is not stabilizable. But if your system is stabilizable, then you can try to find some feedback controller that stabilizes it. All right — questions on this definition? Okay. Maybe just as a note: for linear systems, things are pretty nice and convenient — which is often why we linearize nonlinear systems. For linear systems it turns out we can check stabilizability by looking at eigenvalue conditions on the A and B matrices that define the linear dynamics. I won't go into that, but I guess it's good to know that it's possible: if I give you some linear system, you can just ask a computer to compute eigenvalues, or do it
by hand, and check whether that linear system is in fact stabilizable or not. All right, so let's look at some examples. So far we've just formally described what we want from a feedback controller: we want to stabilize our system, locally or globally. Of course the question now is how we actually do this — how do we find a feedback controller that stabilizes some system we care about? Probably the most popular kind of controller is what's known as a proportional-derivative — let me grab some new chalk over here; sorry, was there a question? yes — sorry — so the most popular kind of controller is what's known as a proportional-derivative, or PD, controller. To illustrate this, we're going to go back to the very first system we looked at, the planar quadrotor, but even more simplified: we constrain all the motion to be in the vertical direction. So all the motion is in the y direction, the center-of-mass x position is constrained to be zero, and there's no orientation — the orientation is also fixed at theta equals zero. If you remember the dynamics — this was just in the second lecture — they were given by y double dot equals u1 over m minus g, where u1 was the total thrust, the total force from the propellers; this just comes from F equals m a. The state vector — again, look at our notes for lecture two — is x equals (y, y dot). And suppose the desired state — say we want this thing to hover, to asymptotically converge to some desired state x0 — is some desired height y0 and zero speed, zero velocity. This is the state we want to stabilize to. So the question is — actually, sorry, just one more point first — which is that if we choose u0, some reference control input, to just be m g, then this state over here,
x0, is a fixed point. If we just cancel out gravity, we can look at our dynamics: y double dot is m g over m minus g, which is zero; and y dot at the desired state is already equal to zero. So this state, with the control input that just cancels gravity, is a fixed point. Okay, so the question is how we choose, or design, a feedback controller that stabilizes the system — and a feedback controller, we're just going to say, is some function: u as a function of the state. So here's one possibility that we can just cook up. I'm going to choose u — and actually in this case the control input is just a scalar, so I'm going to erase the bar — as a function of x, defined to be u0, the control input that cancels gravity, plus some constant that I'll call k subscript p (I'll say a bit more about where this notation comes from), times y minus y0 — the height minus the desired height. And let's say we choose this constant to be negative. So this is something I just cooked up; let's try to analyze what this kind of controller would do. With this controller, the closed-loop system — if I plug this controller into the equations of motion over there — is: y double dot equals the control input, which is u0 plus kp times (y minus y0), divided by m, minus g. Now u0 over m is m g over m, which cancels with the minus g over here, so the only term we're left with is this one: kp (y minus y0) divided by m. For the sake of convenience I'll define a new variable I'll call y tilde — the deviation in the height, y minus y0 — so I can then look at the dynamics of y tilde. y tilde dot, just from the definition, is y dot — right, y
naught is a constant, so if I take the time derivative I just get y dot — and similarly y tilde double dot is y double dot. So then y tilde double dot, which is what we have over here, is just kp over m times (y minus y0), which by definition is y tilde: y tilde double dot equals (kp/m) y tilde. So this makes it a little more compact, and maybe brings it into a form that's potentially familiar to you. All right, so let me ask: do these equations of motion remind you of any system you've encountered previously — maybe in high school, or your first-year engineering and physics courses? — Yeah, good: an oscillator, like a spring-mass system. So in principle you should be able to write down the solution to this differential equation; we won't do that, we'll just note that this looks like a spring-mass oscillator — a harmonic oscillator. So then we can ask: does the controller I just cooked up over there actually do what we wanted, which is to stabilize the system? — Heads shaking: no, it won't. Right — this is just an oscillator; a spring-mass system with no damping term, no term that depends on y tilde dot. So this thing is just going to oscillate forever: if you start the drone at some height away from the desired height, you'll just oscillate around that height forever and never asymptotically approach it. All right, so I guess we can fix this. Intuitively, we can have some damping term: we can add some term to the controller that depends on the speed. So I'm going to hook up a different controller — another option, based on this intuition that we need some damping: u as a function of x is u0 plus kp (y minus y0), same as over there, plus another term, kd times y dot. Right, so I just added this extra term to
the controller, and again we're just going to choose kd to be some constant, strictly less than zero. Now we can go through this again, getting the equation for y tilde double dot with this new controller — I won't go through the algebra; I guess you can double-check it — and what we end up with, with this additional term, is: y tilde double dot equals (kp/m) y tilde — that's the same as what we had over here — plus (kd/m) y tilde dot. And because we're choosing kd to be less than zero, this is a damping term: this is now a spring-mass-damper system, and that is something that does have the desired behavior. So from any initial condition we're going to asymptotically approach the desired height, and the speed is going to approach the desired speed, which is just zero. And I'll just point out that these dynamics are linear in the state and the control input, so local stability here is the same as global stability — we've made the system globally asymptotically stable. Okay, so this kind of controller that we wrote down is an example of a proportional-derivative controller. The kp term we chose is known as the proportional gain — proportional because it's the factor, the multiplier, on the deviation of the height (in this case y) from the desired height — and kd is known as the derivative gain. The general form: if we look at this controller that we chose — where did it go? oh yeah, right up there — you'll see, as is just my observation, that it has the kind of form I wrote at the end of the previous lecture: u equals u0 plus K (x minus x0), where K is the matrix (kp, kd). The state is (y, y dot), so the first component of x minus x0 is y minus y0, and the
second component is just y dot minus zero. Right — and if you write that out, you get exactly what I wrote up there. So this general form — and this is the general form of a proportional-derivative controller — is also known as a linear controller: linear because it's a linear (really affine) function of the state. We're just taking the deviation of the state from the desired state, multiplying that by some constant gain, and adding in our reference input. All right — any questions on this? Just one comment: if we pick kp and kd to be negative for this example, then any such choice — any choice of kp and kd negative — will result in global asymptotic stability; we're always going to end up with some particular spring-mass-damper system. The particular values of kp and kd just determine how quickly things are damped or not damped, but any negative values of kp and kd are going to satisfy our definition of global asymptotic stability. So I guess: does that seem reasonable — are we essentially done? Whatever kp and kd I choose, will we be happy, or do you think we might want more than just global asymptotic stability? — Yeah, good: depending on exactly what values you choose — so maybe if you choose kd to be gigantic, like a million or a billion — the control input that you get might just be unachievable. You might end up with a desired thrust of some magnitude that your propellers just cannot produce, or maybe they can produce it but you're going to waste a lot of energy trying to stabilize the system. — Yep: you might be interested in time efficiency or energy efficiency or different other notions of efficiency. So yeah, so far we've just talked about stability as one particular criterion, but ultimately we might want other performance criteria that we want to optimize somehow — and that's what we're going to talk
about in the next lecture. All right — questions before we hand out the drones? Okay, good. So that's the end of the technical material. I guess — where are the drones? yeah, they should have brought them — oh, there, right there at the back. Okay, good. So if one person from each team, or at least one person from each team, can just come down, we're going to form a production line and hand them out.
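The P-only versus PD comparison from the lecture above can be reproduced numerically. A minimal sketch of the vertical-only quadrotor ÿ = u/m − g under u = u0 + kp (y − y0) + kd ẏ with u0 = m g; all parameter values (mass, gains, time step) are my own choices, not the lecture's:

```python
def simulate(kp, kd, m=1.0, g=9.81, y0=1.0, dt=0.001, steps=5000):
    """Euler-simulate y'' = u/m - g with the feedback law
    u = m*g + kp*(y - y0) + kd*ydot; returns the height trajectory."""
    y, ydot = 0.0, 0.0          # start 1 m below the desired height, at rest
    ys = []
    for _ in range(steps):
        u = m * g + kp * (y - y0) + kd * ydot
        y, ydot = y + dt * ydot, ydot + dt * (u / m - g)
        ys.append(y)
    return ys

y_p  = simulate(kp=-10.0, kd=0.0)    # proportional only: harmonic oscillator
y_pd = simulate(kp=-10.0, kd=-4.0)   # proportional + derivative: damped

# P-only keeps oscillating around y0 = 1 even late in the run;
# PD converges: the height error dies out.
late_p_error   = max(abs(y - 1.0) for y in y_p[len(y_p) // 2:])
final_pd_error = abs(y_pd[-1] - 1.0)
print(late_p_error)    # remains on the order of the initial 1 m error
print(final_pd_error)  # tiny
```

As in the lecture, any kp < 0, kd < 0 gives a (globally) asymptotically stable spring-mass-damper; the specific values only set how aggressively — and at what control cost — the error is damped.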
Introduction_to_Robotics_Princeton
Lecture_9_Princeton_Introduction_to_Robotics_Differential_flatness.txt
all right, let's go ahead and get started. I'll start off with a reminder of what we covered in the previous lecture. We're continuing our discussion of motion planning, and specifically motion planning in continuous spaces. In the previous lecture we discussed randomized algorithms for planning, and the main algorithm we discussed in this class of randomized algorithms is the RRT — the rapidly-exploring random tree. The nice thing about this algorithm, and its many variants, is that it operates directly in the continuous configuration space. This is in contrast to the previous algorithms that we discussed — BFS, DFS, Dijkstra, the graph search algorithms — which basically required some a priori discretization of your space. These algorithms don't require that; they operate natively in the continuous configuration space. And they're extremely useful and powerful in practice — extremely widely used — and, as you'll see in the assignment that goes out tomorrow, you'll have a chance to implement the RRT algorithm. They're quite straightforward to implement, and I think the fact that they're relatively straightforward to implement is one of the reasons they became so popular. (Let me just turn the light up one more — there you go.) Yeah, so the fact that they're relatively easy to implement, I think, contributed to their popularity, and you'll see that in the next assignment: you'll get a chance to implement the RRT algorithm on the Crazyflie drone to make it avoid obstacles in the form of PVC pipes that we're going to hang from the ceiling of the lab space. Okay — so those are the benefits, the things that are really powerful about the RRT and, more generally, randomized algorithms for motion planning. But we made one really big assumption, when thinking about not just the graph search algorithms but
also these randomized algorithms that operate in the continuous configuration space — which is that we can perfectly execute any trajectory the planner outputs. So whatever motion plan the RRT comes up with, our robot can execute it perfectly. In practice it's not so simple, and I think there are two main considerations we need to think about regarding this assumption. The first consideration is that your robot has some dynamics, which you need to take into account: motion plans in the configuration space need to be physically executable by the robot — the motion plan needs to be dynamically feasible, meaning consistent with the dynamics of your robot, something it can actually execute. One example of this is a car — just think about a standard car: if you ask the car to move sideways, that's not something it can do. So you might have to go from one configuration, where the car is oriented a particular way, to another configuration that is directly sideways with the same orientation — and the car cannot just translate sideways; it needs to go through some potentially more complicated motion to get between those two configurations. So that's one consideration: the dynamics of the system. The second one is uncertainty, which, as I guess we discussed in the first lecture, is going to be a recurring theme throughout this course. Even if you come up with some motion plan to get from some starting configuration A to some configuration B that is dynamically feasible — something the robot can actually physically execute — there might be sources of uncertainty that prevent you from following the trajectory exactly. For instance, for a drone: maybe you start off following the trajectory, but then a wind gust comes and blows you away from the trajectory you're trying to follow. How can we get back close to the trajectory
that we're trying to follow? The hint here is to use feedback control. This is a slightly different problem than what we had seen before: with hovering, there was a particular state we wanted to make stable; here we're trying to track a trajectory. But similar techniques to what we discussed when we talked about LQR, for instance, are going to be applicable here as well. So the plan is basically to talk about the first point — coming up with motion plans that are dynamically feasible — for today's lecture and, I think, most of the next lecture as well; and then at the end of the next lecture we'll spend some time thinking about different ways to handle uncertainty, like wind gusts. All right, so let's go back to our RRT algorithm and think through the implications of this assumption we made, and maybe how to fix it. If you remember how the RRT algorithm operates: it goes through multiple iterations in which it incrementally and randomly builds a tree to get from some starting configuration A to some goal configuration B. Just as a quick reminder: in every iteration of the algorithm, you randomly sample some configuration in your configuration space — maybe this point over here; you look at the line that connects that randomly sampled point to the closest vertex in your existing tree (I'm drawing the first iteration here); and then you extend toward that randomly sampled point. And then you keep iterating — maybe just another iteration here: you sample some point over here, look at the line connecting it to the nearest vertex, and extend in that direction — and so incrementally you build a tree. What we're going to think about is this extension operation. Right — the extension operation in the RRT algorithm that we described in the previous lecture was really simple: just move in the
direction of the randomly sampled configuration — just a line segment. But in practice that kind of extension may not be dynamically feasible. So in the previous lecture — just to spell it out explicitly — when we did this extension, we looked at the line segment from q_near, the nearest vertex in the existing tree, to q_rand, the randomly sampled configuration. And the problem with this is that this extension — this little segment of the trajectory — may not be dynamically feasible: it might just be physically impossible for the robot to move in this way. I think a nice example is the one I described before; let me just draw it out. Imagine you have a car — this is a top view, these are the wheels of the car, and let's say it's pointing in this direction. This is a three-degree-of-freedom system: x, y position, and theta, the orientation. So let's say you want to get from the starting configuration to this goal configuration, which has the same orientation. To match the RRT notation, maybe think of one of these as really being q_near — let me relabel this: all right, there's q_near, and then maybe this is q_rand. So the standard version of the RRT algorithm would connect these two configurations with a line segment in configuration space, and that line segment is just going to be "move sideways": keep the orientation constant and just move the x, y location from here to here. And that's not something that a standard car can do — unless you have something like omnidirectional wheels. So that's the main challenge. It might still be possible, right, to get from q_near to q_rand — just not via a line segment; you might have to do something more complicated, where the car moves forwards, kind of arcs like this, and then reverses back — and that would get
you from q_near to q_rand, just not by moving directly sideways. Okay, so the question of course then is: how do we find this kind of extension — this maneuver that will get us from q_near toward q_rand in the cases where the line segment is not feasible? Questions? — Okay, good. Yeah, that is one possibility; we're actually going to think about that variant in the next lecture. What we're going to do today is think about a particular class of robotic systems for which this extension operation becomes relatively straightforward. So today we're going to look at what are known as differentially flat systems, and this is a relatively broad class of systems; examples include some of the systems that we've thought about in this course. The planar quadrotor is an example of a differentially flat system — I haven't defined it yet, of course; that's what we'll do for the rest of the lecture — as are the 3D quadrotor, the kind of car example I sketched on the board over there, and a car with a trailer attached to it. Yeah, there's a whole bunch of vehicles that are going to satisfy this notion of differential flatness. And for these systems, it's going to turn out that the extension operation required for the RRT can be done in a relatively straightforward — at least computationally straightforward — way. Okay, so I'm going to first go through an example for the planar quadrotor before providing the general definition of differential flatness, and then we'll look at the general definition. So yeah — just to keep things relatively simple, conceptually, at least for now, we're going to ignore obstacles in the environment and just focus on this extension operation for the RRT: basically getting from some configuration q_near toward some
configuration moving towards some configuration like qrand um Okay so yeah let's go through this example so this is a reminder the planar Quadrunner it's a three degree of Freedom system six-dimensional State space and two control inputs um so the control inputs here um I'm gonna use F1 and F2 as our control input so these are trust forces produced by their propellers we could also use U1 and U2 like the total thrust and the total moment but yeah I'm just going to use the individual like trusts produced by their Motors as our control inputs uh X and Y uh let's say the center of mass location and then Theta is the the orientation of the Drone and if you look back at our equations emotions I think this was from lecture two probably we have these three second order differential equations so with all this equation one U1 plus U2 cosine Theta divided by m minus G and then Theta double dot U2 minus U1 times l divided by I I is the moment of inertia L is the this length from the center of Mass to the propeller axis whether the philosopher is produced okay so all right so what does differential flask between in the context of the planar quadrotor [Music] so suppose um let's say you come to me with a trajectory foreign for the center of mass of the the drones [Music] so you basically give me X as a function of time and why as a function of time for let's say there's some finite period of time some finite time Horizon like the um zero to account will be and then the same for why so what I'm going to claim foreign [Music] two things so let me write it down and then we'll make sure that the claim is clear and then we'll actually approve this so the first thing I'm going to claim is that I can yeah [Music] uh whoops sorry yeah I meant to use I mixed up my Edition here uh this should be F1 and F2 and then this was F2 minus F1 yeah thank you um yeah you want you to usually we refer to that of total trusts and the dot moment uh F1 F2 other trust from the individual propellers okay 
so the first claim I'm going to make is that if you give me some arbitrary trajectory for the center of mass of the drone — there will be some technical conditions, which we'll come to later, but for now pretend it's completely arbitrary — then I can recover a control input trajectory, F1 as a function of time and F2 as a function of time, that achieves the center-of-mass trajectory you gave me. That's claim number one. The second claim is that I can recover the full state trajectory — the trajectory of the full six-dimensional state of the quadrotor. Just to remind you, the state of the planar quadrotor is (x, y, theta, x dot, y dot, theta dot).

Let me pause and make sure the claim is clear. You come to me with an arbitrary trajectory for the center of mass of the drone, and I'm claiming two things: first, that I can come up with a control input trajectory — propeller thrust commands over the same time period, 0 to capital T — that actually makes the center of mass do what you specified; and second, that I can recover what the entire six-dimensional state will do along the way. Questions? "What are the restrictions?" I'll come to the exact restrictions, but they have to do with the smoothness of the trajectory: the only thing we'll require is that the x and y trajectories you give me are sufficiently differentiable in time, and we'll see exactly how many times we need to be able to differentiate them. As an extreme example, suppose you come to me with a trajectory where the center of mass is over here at time zero and at the very next instant it teleports over there. Obviously I can't come up with a control input trajectory that makes the drone teleport — that trajectory isn't even continuous. So we'll require the center-of-mass trajectory to be sufficiently smooth: not just continuous, but differentiable to a particular order that we'll specify in a bit. Other questions? "Is the recovery unique?" Yes — I can uniquely recover the full state trajectory from the center-of-mass trajectory you provide. It's not that I give you a bunch of options, one of which is correct; I give you exactly the full state trajectory. "But couldn't the orientation do different things for the same center of mass?" That's a good point — for instance, if you put some thrust on propeller two and negative thrust on propeller one, the drone would just fall — so let me put a little asterisk on uniqueness; I think the math will make clear whether it's unique, and we'll come back to it.

Okay, let's actually prove the claim. I'm going to prove the second part first: recovering the full state trajectory. Again, what's given is a center-of-mass trajectory, and from that we'll recover the full state. So — proof of part two for the planar quadrotor, using equations (1) and (2), and later (3). Look at the tangent of theta — theta again is the orientation angle — which is sin(theta) / cos(theta). Multiply the numerator and the denominator by F1 + F2. From equation (1), (F1 + F2) sin(theta) = -m x double dot, and in the denominator, from equation (2), (F1 + F2) cos(theta) = m (y double dot + g). The m cancels, leaving tan(theta) = -x double dot / (y double dot + g), so theta = arctan( -x double dot / (y double dot + g) ). What we've done here is recover theta as a function of time from the center-of-mass trajectory: since x(t) and y(t) are given, we can differentiate them twice to get x double dot and y double dot at any point in time, so the right-hand side is completely specified and we can recover theta at any time. Notice that we need the center-of-mass trajectories to be at least twice differentiable in time. So I've recovered the first three components of the state — x and y are given, and this shows we can get theta as a function of time as well — and now I can differentiate x(t), y(t), and theta(t) to get the fourth, fifth, and sixth components of the state: x dot, y dot, and theta dot. Let me pause — any questions on the math or the meaning? I want to convince you that I've proved the second part of the claim: we've recovered the full state trajectory given the center-of-mass trajectory.

Okay, back to the uniqueness question. The answer may depend on the specific trajectory. In your example, x and y are constant, so x double dot and y double dot are both zero, and you were asking whether we're then free to choose theta — say, to rotate around while keeping the center of mass fixed. With the g in the denominator, though, we still exactly recover theta: arctan(-0 / (0 + g)) is perfectly well defined. So it may not always be unique in every situation, but looking at this expression tells you when you can uniquely back theta out: as long as the arctan is uniquely defined — as long as y double dot + g doesn't vanish — we uniquely get theta as a function of time.
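Since this recovery step is just a formula, here is a small numerical sketch of it. The trajectory, amplitude, and frequency below are made up purely for illustration; the only thing taken from the derivation above is theta = arctan(-x_ddot / (y_ddot + g)), written with atan2 to pick the quadrant correctly when y_ddot + g > 0:

```python
import math

g = 9.81  # gravity

# Made-up flat-output trajectory: x(t) = a*sin(w*t), y(t) = constant.
a, w = 0.5, 2.0

def x_ddot(t):
    # Second time derivative of x(t) = a*sin(w*t)
    return -a * w**2 * math.sin(w * t)

def y_ddot(t):
    # Second time derivative of a constant y(t)
    return 0.0

def theta(t):
    # Recovered orientation: theta = arctan(-x_ddot / (y_ddot + g))
    return math.atan2(-x_ddot(t), y_ddot(t) + g)

print(theta(0.0))          # 0.0 -- no lateral acceleration, drone hangs level
print(theta(math.pi / 4))  # positive tilt (~0.2 rad) to generate the x-acceleration
```

The point of the sketch is just that theta(t) is pinned down by the second derivatives of the center-of-mass trajectory, exactly as in the arctan expression above.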
All right, good — other questions on this? Okay, so that's the second part of the claim; let's look at the first part now. (Sorry about the board — this blackboard never fully erases, even though someone cleans it every Tuesday. If you can't see something, just ask.)

So, the first part of the claim: given the center-of-mass trajectory, how can we back out, or recover, the control input trajectory that makes the center of mass do what we specified? Let me call equation (4) the equation for theta that we've now recovered from the center-of-mass trajectory. From equation (4) we can differentiate twice in time to get theta double dot. (Let me also put equations (1) through (3) back on the board.) Once we have theta double dot, the left-hand side of equation (3) is specified, so we can combine equation (3) with either equation (1) or equation (2) — let's say equation (1) — to get two equations in two unknowns. The first equation is theta double dot = (F2 - F1) L / I, and the other, equation (1), is x double dot = -(F1 + F2) sin(theta) / m. Looking at these two equations, everything other than F1 and F2 is known: theta double dot comes from twice differentiating equation (4), which in turn came from the center-of-mass trajectory; L and I are physical parameters of the drone; x double dot comes from twice differentiating the center-of-mass trajectory; theta came from equation (4); and m is a physical parameter. So we have two unknowns, F1 and F2, and two equations that are linear in those unknowns, so we can just solve them — even by hand, analytically: rearrange to get F2 in terms of F1, plug into equation (1), and so on. That gives F1 and F2 at any time in the window we're looking at. And the claim is that this control input trajectory — the thrusts F1 and F2 as functions of time — will make the drone's center of mass execute the trajectory you specified.

The caveat, as I mentioned at the beginning, is that we need some assumption on the center-of-mass trajectory, and we can see exactly what it is from these equations. To calculate theta double dot we differentiated equation (4) twice, and theta itself depends on the second time derivatives of x and y; differentiating twice more means we need the fourth time derivatives, d^4x/dt^4 and d^4y/dt^4. So we need fourth-order time differentiability of the x and y trajectories we're given — and that is the only assumption on the center-of-mass trajectory. Let me pause and see if there are questions, on either the math or the meaning. Okay.

So why is this powerful? Because for any sufficiently smooth center-of-mass trajectory, we can figure out how to make the drone move so that the center of mass actually executes it. And we can combine this with the RRT algorithm. Does anyone see how one might combine differential flatness with the RRT? Yeah — that is one way to do it; other thoughts? The hint is that we can use differential flatness for the extension operation in the RRT algorithm: this gives us a way to do the extension. Go ahead — right, so far we're ignoring obstacles, but once we combine this with the RRT we can take obstacles into account as well. And yes, exactly — differential flatness is the idea that we can recover the inputs and the state from this output trajectory.

Let me define some terminology; I'll give the fully general definition of flatness in a bit. The center-of-mass position — x and y, the first two components of the state — is known as the flat output, or the flat outputs, of the planar quadrotor system. Where the name comes from, I'm honestly not sure — why "flat" — but that's the terminology people use. So, going back to the RRT algorithm: one idea is to use differential flatness for the extension operation, and the one restriction we have to make is that we grow the tree — the RRT — in the flat output space.
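Going back for a moment to the two-equations-two-unknowns step in the proof: at each instant, the thrusts follow from a tiny linear solve. Here is a hedged sketch with made-up physical parameters. Note that I pair equation (3) with equation (2) rather than equation (1), since equation (1) degenerates near hover where sin(theta) = 0; the lecture's point — that the system is linear in F1 and F2 — is the same either way:

```python
import math

m, I, L, g = 1.0, 0.01, 0.2, 9.81  # made-up drone parameters

def recover_thrusts(theta, theta_ddot, y_ddot):
    """Solve for (F1, F2) at one time instant from
       equation (2): y_ddot = (F1 + F2) cos(theta) / m - g
       equation (3): theta_ddot = (F2 - F1) L / I
    Both are linear in F1 and F2, so we get the sum and difference directly."""
    total = m * (y_ddot + g) / math.cos(theta)  # F1 + F2
    diff = I * theta_ddot / L                   # F2 - F1
    return (total - diff) / 2, (total + diff) / 2

# Sanity check: a level hover (theta = 0, no accelerations) should need
# each propeller to carry half the weight, m*g/2.
F1, F2 = recover_thrusts(0.0, 0.0, 0.0)
print(F1, F2)  # 4.905 4.905
```

A nonzero theta double dot simply shifts thrust from one propeller to the other through the difference term, which matches the intuition from equation (3).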
For the planar quadrotor, that's the x-y space. So when would this work — when can we grow the RRT in the flat output space, and what assumption or approximation do we need to make to allow for that? We're ignoring the orientation of the drone: if we treat the drone as a sphere — ignore its geometry and take some sphere that encapsulates it — that's the case where it's actually okay to grow the RRT in the flat output space, the center-of-mass x-y space, because then the obstacles can be translated into that space. We have physical obstacles in x and y; treating the drone as a sphere (a circle, in 2D), we can inflate the obstacles by the radius of the drone, as discussed in the previous lecture, and then grow the RRT in the flat output space. Then everything is hopefully relatively clear: when we do the extension operation, going from some vertex q_near towards a q_rand, we look at the line segment, which is now a line segment in the center-of-mass space, the x-y space. And we said that for any sufficiently differentiable center-of-mass trajectory, I can tell you what control inputs make the center of mass follow it, and moreover I can recover the full state. So if we do the extension in the flat output space, we can use differential flatness to actually execute the extension, by going through this process of backing out the control inputs. Does that make sense? Questions? I'll show you videos of this on an actual drone. And yes — we can interpolate however we want, as long as the result is sufficiently smooth, where "sufficiently smooth" for the planar quadrotor just means we can time differentiate four times.

Okay, so let me give the general definition of flatness. Suppose you have some general dynamical system with equations of motion x dot = f(x, u) — our general form — and, to make the dimensions concrete, x belongs to R^n and u belongs to R^m. The system is said to be differentially flat if two conditions are satisfied, analogous to what we saw with the planar quadrotor. First, there must exist some function, which I'll call alpha, defining the flat outputs: z = alpha(x, u, u dot, ..., u^(b)), where u^(b) denotes the b-th time derivative of the control input. So the flat outputs z are given by some function alpha of the state, the control input, and potentially higher-order time derivatives of u up to order b. The system is differentially flat if, for some such alpha, we can write the full state as x = beta(z, z dot, ..., z^(q)) — some function beta of the flat outputs and their time derivatives up to some order q — and the control input as u = gamma(z, z dot, ..., z^(q)), some function gamma of the same quantities.

Let me make the connection to the planar quadrotor. There, I said the flat outputs are x and y, the first two components of the state, so alpha was just a function of the state — you pick out the first two components, and that's what we called the flat output space. The definition allows a much more general dependence: the flat outputs can in general be a function of the state, the control input, and up to b time derivatives of the control input. For most examples I've seen, the flat outputs are just a function of the state — typically not even of the control inputs — but in principle the definition allows it. And the two conditions are analogous to the claims I made for the planar quadrotor, just in reverse order: the first says we can recover the full state trajectory — some function beta takes the flat outputs and their time derivatives and gives you the full state — and the second gives some function gamma that takes the flat outputs and gives you the required control input. Question — "do the flat outputs have a physical significance?" It depends on the system; there's no general answer. You'd say a specific system x dot = f(x, u) is differentially flat with some specific flat outputs, and what we proved over there is that the planar quadrotor is differentially flat with the center-of-mass position, x and y, as the flat outputs. Does that make sense?

Okay, let me draw a picture that will hopefully clarify things further. Just to illustrate the process: someone comes to you with a trajectory in the flat output space, z as a function of time — I'm drawing z as one-dimensional just so I can draw pictures, but in principle it's higher-dimensional — and it's sufficiently smooth, meaning you can time differentiate it up to the q-th derivative. What differential flatness says is that we can recover u, the control input, as a function of time.
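A natural follow-up is where a sufficiently smooth z(t) comes from in practice — for instance in the RRT extension step, where we want to move a flat output from one value to another. One common construction (a sketch; this particular polynomial is not from the lecture) is a seventh-order "smoothstep" segment, which starts and ends at rest with zero velocity, acceleration, and jerk, so it is comfortably four-times differentiable as the planar quadrotor requires:

```python
def flat_extension(z0, zT, T):
    """Smooth interpolant from z0 to zT on [0, T] with zero velocity,
    acceleration, and jerk at both endpoints (a 7th-order smoothstep).
    Hypothetical helper for one flat-output coordinate, not from the lecture."""
    def z(t):
        s = t / T
        # 35s^4 - 84s^5 + 70s^6 - 20s^7: its first three derivatives
        # vanish at s = 0 and s = 1, and it rises from 0 to 1.
        blend = s**4 * (35 - 84*s + 70*s**2 - 20*s**3)
        return z0 + (zT - z0) * blend
    return z

# Extend one flat-output coordinate from 0 to 1 over two seconds.
z = flat_extension(0.0, 1.0, 2.0)
print(z(0.0), z(1.0), z(2.0))  # 0.0 0.5 1.0
```

Running one such polynomial per flat-output coordinate gives a candidate extension segment; flatness then converts it into control inputs and a full state trajectory as described above.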
That control input is what's required to make the flat outputs do whatever you specified: you specify some flat-output trajectory, I can come up with a control input trajectory that makes the flat outputs do exactly that, and moreover I can recover what the full state is going to do as a function of time as well, just from the flat-output trajectory. Question? Good question — let's think about it: do you think the planar quadrotor is differentially flat if I define the flat output to be x, y, and theta? Would that satisfy the two conditions needed for flatness? Okay — and why not, what's the intuition? The answer is no: you're not going to be able to specify an arbitrary trajectory in x-y-theta space and make the drone actually follow it. A non-example here is actually similar to the car. So: the planar quadrotor, with candidate flat outputs x, y, and theta. Does someone see a trajectory in this would-be flat output space for which we cannot recover a control input sequence that makes the drone follow it? Go ahead — yeah, that's a good example, though it assumes the thrusts can only be positive. If you allow negative thrusts you could do it; so far we haven't put any limits on the control inputs, but you could — say the thrusts must be positive, and then if the drone is inverted and you want y to increase, that's not feasible. Good example. Others? "The orientation being 90 degrees" — exactly, that's a great example: the drone is rotated 90 degrees, propellers pointing sideways, and you say that x and y remain constant over time and theta remains constant too. That's not something the drone can physically do — it's just going to fall, because the control inputs correspond to thrusts that are now perpendicular to gravity, pointing in the horizontal direction. So not every choice of alpha — not every function of the state and control inputs — makes your system differentially flat; what the flat outputs are depends on the actual system you're looking at.

All right — questions on the picture or on the definition of flatness? Yes, go ahead — "what is q for the planar quadrotor?" There, z was x and y, the center-of-mass position. We recovered theta from x double dot and y double dot, so we needed at least twice differentiability — z double dot — in the function beta; and to get the control input we needed theta double dot, so we differentiated twice more, requiring fourth time derivatives. So q in that example is 4. Does that make sense? Okay, good.

All right, let me state some facts about differential flatness. I mentioned that a number of systems are differentially flat; we proved that the planar quadrotor is. Probably the best-known example is that the full 3D quadrotor is also differentially flat, with flat outputs that are again just a function of the state — specifically four components of it. The state of the full 3D quadrotor, if you remember, is 12-dimensional, and the flat outputs are x, y, z — the center-of-mass position — and the yaw angle. So this is pretty powerful.
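Coming back for a second to the hover-at-90-degrees non-example: you can read the infeasibility straight off the recovery equations. From equation (2), hovering (y double dot = 0) forces (F1 + F2) cos(theta) = m g, and the required total thrust blows up as theta approaches 90 degrees, where cos(theta) = 0. A quick numerical sketch with a made-up mass:

```python
import math

m, g = 1.0, 9.81  # made-up mass; standard gravity

# Total thrust needed to hold the center of mass still at tilt theta,
# from equation (2) with y_ddot = 0:  F1 + F2 = m*g / cos(theta).
for deg in (0, 60, 85, 89.9):
    total = m * g / math.cos(math.radians(deg))
    print(f"theta = {deg:5.1f} deg  ->  F1 + F2 = {total:10.1f}")
# At theta = 90 deg, cos(theta) = 0: no finite thrust keeps the drone up.
```

This is the algebraic version of the intuition above: with the thrusts horizontal, nothing opposes gravity, so the candidate trajectory cannot be realized by any control input.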
What I'm claiming — I haven't proved it here, but there's a reference in the notes if you want to see the proof of flatness for the 3D quadrotor — is that no matter what trajectory you specify in this flat output space, if you come to me with any x, y, z, and yaw trajectory over some period of time that's sufficiently time differentiable, I can give you a control input trajectory — I can tell you exactly how to change the propeller thrusts over time — to make x, y, z, and yaw do exactly what you specified, and moreover I can tell you exactly what the full 12-dimensional state of the system will look like over that period. That's the claim for the 3D quadrotor.

There are a bunch of other examples: in the assignment going out today I reference a paper with a long list of systems known to be differentially flat. The car I mentioned is one example — you'll get to prove that it's differentially flat in the next assignment. People have also looked at things like a car with multiple trailers attached to it by rigid links; the trailers can move, so they give you more degrees of freedom, and I believe the flat outputs there correspond to the trajectory of the trailer — you can make the trailer do pretty much whatever you want. So there's a whole family of systems known to be differentially flat.

One theorem that's useful to keep in mind — again, not something I'll prove, I'll just state it — is that the number of dimensions of the flat output space equals the number of control inputs. Both of our examples satisfy this: for the planar quadrotor the flat output space was two-dimensional, x and y, and the number of control inputs was also two; for the 3D quadrotor the flat output z is four-dimensional, and the 3D quadrotor has four control inputs. This is a general fact: for any differentially flat system, the dimensionality of the flat output space — the number of components of the flat outputs — is always exactly equal to the number of control inputs. The intuition is that you need enough control authority — as many dimensions of control — to steer the flat outputs exactly: we're saying that no matter what flat-output trajectory you give me, I can make the system follow it, and to do that I need enough control inputs. That's the rough intuition, which can be formalized into a theorem. All right, questions on this? Okay.

One question you might ask is how we prove that a system is flat. Say you build your own robot and want to show it's differentially flat — how would you go about it? Unfortunately there's no general recipe, no algorithmic procedure for proving a system is differentially flat. Usually it's a bunch of algebraic trickery: you stare at the equations of motion, try to recover the control input and the state, and if you manage it, you write a paper saying your system is differentially flat, everyone likes your paper, and so on. As far as anyone knows, there's no general algorithm for taking an arbitrary system x dot = f(x, u) and finding some z that satisfies the flatness conditions. The good news is that lots of people have studied differential flatness for many systems, and chances are, if you're working with a traditional robotic system — a quadrotor, or even a fixed-wing drone; many models of fixed-wing airplanes are also known to be differentially flat — you can rely on the hard work of others who have demonstrated flatness.

Okay, let me show you some videos of what we can do with differential flatness. This paper is from Daniel Mellinger and Vijay Kumar at the GRASP Lab at UPenn; I think it came out around 2010, so about 12 years ago. It's called "Minimum Snap Trajectory Generation and Control for Quadrotors," and it really popularized the idea of differential flatness for quadrotors and demonstrated what one can do with it. Here's the original video — the narration: "We can precisely track trajectories with large accelerations and velocities. We developed a method for generating trajectories to optimally fly through any number of waypoints. We can quickly generate trajectories to react to dynamic objects." In the different clips you'll notice that the obstacles are not super tight, and in particular everything is arranged so that treating the drone as a sphere is an okay approximation. Remember, the flat output space for the 3D quadrotor is x, y, z, and yaw — you're not allowed to directly specify roll and pitch. You can specify any trajectory you want for x, y, z, and yaw, and differential flatness tells you that you can make the center of mass and the yaw angle do what you specified, but roll and pitch will do whatever they need to, and you don't have direct control over them. So if your obstacle arrangement required you to roll or pitch to a specific angle, that's not something you can do directly with flatness.

I think the most interesting clips are the ones where the drone flies through a thrown hoop. What's happening there is that the hoop carries motion-capture markers — small markers that the motion-capture cameras track. The hoop's flight is just ballistic motion, so given an initial snapshot of its trajectory you can predict exactly where the hoop will be at future times, and then plan an x, y, z, yaw trajectory for your drone that takes it through the hoop. That's exactly what's happening in the video.

The last video I wanted to show is one from the previous lecture. This paper, from Charlie Richter and Nick Roy at MIT, combined RRT — RRT*, I think — with differential flatness; it's the same video of a drone navigating through an indoor environment. They used RRT* for the planning and differential flatness for the extension operation — to make the drone's center-of-mass trajectory actually do whatever the RRT tells it to do. So these are pretty powerful ideas. But not every system is differentially flat, so we need some recipe for handling more general dynamics constraints, and that's what we'll look at in Thursday's lecture. I'll see you then.
Introduction_to_Robotics_Princeton
Lecture_6_Princeton_Introduction_to_Robotics_Discrete_Planning_BFS_and_DFS.txt
All right, I think we can go ahead and get started. The plan for today is to start a new topic, which is motion planning, but before I do that I just wanted to show you a couple of the videos that I couldn't show at the end of the last lecture on LQR. Here's a video of a drone with an LQR controller implemented. Basically what's happening is that the drone operator is giving the drone different set points, so different hovering states, desired states, and the LQR feedback controller is stabilizing the system to these different states as they switch, as you can see on the plot above. This is a very well tuned LQR controller, so you can see the performance is super solid: it's getting right to the location and hovering completely steadily. For our lab we're not expecting you to get this level of performance; we could get to this level of performance, but it's just going to take much more time, so we have looser requirements for this lab. But this is just to give you a sense for what you can achieve with an LQR controller. Here's a different system; this is maybe a more fun example, and it's actually work that I was involved with during my PhD. This is a fixed-wing airplane, a standard airplane with a single propeller (well, actually two counter-rotating propellers) at the top, and it's doing what is known as prop hang, propeller hang. It's essentially acting as a helicopter, and it's using its wings, its ailerons, its rudder, and its elevator to stabilize itself in this prop-hang configuration. You might ask why you would want to do this with an airplane, and the reason is that it's a kind of safety maneuver: if your drone is flying fast and there's a wall in front of it, you could transition to this propeller hang and basically just hang out there. This is also using an LQR controller.
Here's a completely different example, not drone related. This is a humanoid robot that is using an LQR controller to stabilize itself. This was actually a hand-tuned, I think, PD controller, a proportional-derivative controller, which failed, and then they had an LQR controller that I think worked slightly better on this example. Okay, so these are just the kinds of things one can do with LQR. It's a pretty powerful and really popular technique for doing feedback control, and you'll see some of that power yourselves in the lab that you're currently doing. Let me go over some of the logistics for the lab really quickly. This is the space in the Andlinger building; this is where we're going to have office hours throughout the duration of this assignment, so just make a note of where to show up for office hours. And this is the other space that we have in the EQuad; this is G105. For the lab you're welcome to use either one of these spaces. This one is slightly smaller, but the lighting is actually a bit better. We're not using cameras yet, but later on in the course, when we work with cameras, the lighting here is just slightly better, though it's a little bit more cramped, so there's a bit of a trade-off. Anyway, everyone should have access at this point; John confirmed that everyone has access to both spaces, in Andlinger and in G105, so your Tiger Card should work. If you hit a hiccup and don't have access, let us know. I want to emphasize this; I mentioned it in the PDF, but I want to mention it here in lecture: we take safety really seriously. This is the reason we have you do the safety training. The drones are obviously really small, so there's a limited amount of physical harm they can cause you, but they can cause physical harm, especially if
the propellers get close to your eyes; I think that's the biggest thing we worry about. So when you're operating in the drone cage, close to the drone, please wear safety glasses at all times. We've kept a bunch of safety glasses in this space and also in the EQuad space. Anyone who's operating inside, or standing or hanging out inside the drone cage, should wear safety glasses. For this lab you really shouldn't need to get too close to the drone: you can set the drone up, come out, run the code, something will probably fail in the beginning, and then you go and switch it off. Switching it off might be the only case where you want to get close to it. Question? Oh, that's a good question; yes, they are shared, I didn't think about that. I guess you're worried about COVID, I imagine, right? Yeah, no, that's a great point, I didn't really think about this. Okay, we're going to try to maybe have some disinfectant, or just try to have enough safety glasses for everyone. I think we don't quite have enough for everyone, but thank you for bringing that up; I'll try to sort it out. Other questions? Yes? One more time? You could bring your own glasses, yes, but I don't want everyone to have to spend money to do that. But if you do have them, you're welcome to use them. Okay, got it. Does the school give them out in general? Oh, okay. Maybe just a show of hands: how many people have their own, maybe from a different class? Oh, it's actually quite a few, okay. All right, so if you bring your own, combining that with the ones we have, that might be enough, but I'll try to sort it out. Thank you. Okay, so that's it, I think, for the LQR component. I want to switch topics
and talk about motion planning. Just to motivate this: if you recall the overall goal that I mentioned in the very first lecture that we're trying to build towards, we're trying to get the drone to navigate through different cluttered environments completely autonomously, and the first module, on feedback control, was just getting it to hover. We're going to try to move beyond hovering in this module on motion planning. So for the next five lectures we're going to focus on a particular aspect of this problem of getting a drone to navigate autonomously through a cluttered environment, and that topic goes by the name of motion planning. Here's a concrete problem; it goes by the name of the piano movers' problem. The setup is as follows: imagine that you have some geometric model of a grand piano; someone comes to you with a CAD model, for whatever reason, of a grand piano. The question for you is: how can we get the piano from point A, let's say that's the door of some small apartment, to point B, which is where you want to place the piano at the end? How can you get it from point A to point B without collisions, without the piano colliding with anything that's in the apartment? Maybe you have a CAD model of the apartment as well. This is a fairly complex geometric problem, especially if the apartment is small and constrained. Just to make this very concrete, here's a video visualizing this piano movers' problem. Someone has a CAD model, or some kind of geometric model, and they're taking it through this apartment-like environment. Not super realistic, obviously, but just to give you a sense. I think the main point I want to emphasize here is that the orientations can matter: you can't necessarily treat
the piano as a sphere. The geometry of the piano actually matters, because to get through tight spaces you might need to change the orientation, the roll for instance, to get it where you want without colliding with anything. The piano movers' problem has a really long history in robotics, at least dating back to the 1980s, maybe even before that, and there was a lot of work, especially early on in the robotics literature, trying to understand the computational complexity of this problem. Suppose someone gives you a model of the piano and a model of the apartment; can you find a path from configuration A to configuration B? That is some computational problem, and it turns out that it is PSPACE-complete, polynomial-space complete. Maybe just a quick show of hands: how many people have heard the term PSPACE-complete? Okay. So, roughly, the implication is that it's computationally not tractable, so we shouldn't expect to find exact, efficient algorithms to solve the piano movers' problem. Nowadays the term piano movers' problem is essentially synonymous with motion planning, so if you hear someone saying "oh, I'm working on the piano movers' problem", they don't literally mean they're trying to move a piano; they just mean that they're working on motion planning. Okay, so here's the motion planning problem, then. The goal is to find a path, some trajectory, from some configuration A to configuration B for your robot, without any collisions with the environment. And just to emphasize again: we're assuming that we have some geometric model of the environment that's provided to us. How to actually get that geometric model, how to get a map of the environment, we're not going to worry about right now. We're just going to assume that someone has come to you
with a model of the robot and a model of the environment, and our goal then is just to solve this motion planning problem. So here are some examples of planning. One of the clearest ones has to do with autonomous vehicles. This is a video from a startup company, Realtime Robotics, that's trying to speed up motion planning. As I mentioned, it's a complex geometric problem, and on an autonomous car it needs to run pretty quickly, so this company is trying to speed up these computations. I'll just play the video, maybe just for a minute. "Motion planning is a key component of autonomous car technology. Motion planning algorithms compute a path for an autonomous car to follow that avoids collisions with both static obstacles, such as potholes, and dynamic obstacles like other cars, bicyclists, and pedestrians. Motion planning is critical but slow. Most implementations use one or more GPUs, which consume hundreds of watts of power, and are only capable of finding a single plan in about 100 milliseconds. This is fine for wide open highways, but it is not sufficient for much denser and less structured urban areas." You can watch the whole video if you like; the rest of it is not super relevant for this lecture anyway. Here's a different one that's not actually related to robots: these are those mechanical puzzles that maybe you've played around with. You can think of this as a motion planning problem. There's some initial configuration of these puzzle pieces that's given to you, and there's some final desired configuration where the pieces are separated; essentially any configuration where the two pieces are separated, you could say, achieves the goal of solving the puzzle. So you can think of this as a motion planning problem, and there's actually a planning algorithm running under the hood that generated this plan to get these puzzle pieces
apart, as in the video. Here's another example; this might be slightly less obvious to think of as a planning problem, but it really is. This is Rubik's Cube solving, so let me play the video and then I'll say a bit about it. Three, two, one... Yeah, there are whole teams of people working on solving Rubik's Cubes really, really fast, autonomously. The reason this is a planning problem is that there's some initial configuration that the Rubik's Cube has, some random shuffled configuration; there's some final desired configuration where all the faces have the same colors; and the goal is to get the Rubik's Cube from that initial configuration to the desired configuration. You can think of that as a motion planning problem. The motion here is literal: there's actually a robotic or mechanical system that's turning the faces of the cube. Okay, so as with feedback control, if you remember, there were two criteria, two things that we wanted... oh sorry, question? "I don't really know how Rubik's Cubes work, but I'm wondering: is there a certain sequence of moves that will always get you to the correct solution, or does the robot actually have to have some sense of the cube?" Oh yes, definitely the latter. This is not what would be called an open-loop plan; it's not a set sequence of operations. The robot is actually looking at the specific initial configuration of the Rubik's Cube and then choosing actions accordingly to get it to the final goal configuration where all the faces are the same color. So it's not a set sequence; the sequence of actions depends on the initial configuration. Great question. Okay, so when we were talking about feedback control, we said that we want a feedback controller to do two things, or at least two things. The first one was to stabilize your system, so we
talked about different notions of stability: local asymptotic stability, global asymptotic stability, and so on. And then we had said, okay, stability by itself may not be enough; maybe we also want to optimize some objective, some performance criterion, so we want some notion of optimality as well. There's a similar set of criteria here when we're thinking about motion planning. At the very least, we want the motion planning algorithm to get your robot from configuration A to configuration B without colliding with anything. We call that feasibility: find me any path that gets from A to B without collisions. But of course that path could be really inefficient; to get from here to here I could do all sorts of weird things and still get to my goal. So sometimes I want to find a path that is the best, or maybe close to the best, according to some performance criterion that I, as a user or robot designer, want to be able to define. One example of a performance criterion might be distance: get from point A to point B with the shortest distance. Another one could be time, another could be energy, and so on; you can come up with many different reasonable performance criteria that you might want to optimize. It turns out that just getting feasible plans, just avoiding collisions, is pretty challenging by itself, and that's the problem we're going to start off thinking about. Optimality, I think, is nice to have, but it's not always computationally tractable to get the optimal path, and it's not always clear that you want the optimal path; really you want something that's good enough, that doesn't do something completely ridiculous in terms of distance or time or energy. But yes, we want some kind of optimality as well. Okay, so just to make the assumptions really explicit, I think it's important to state these clearly and
make sure we understand what we're assuming. As we go through the course we're going to relax some of these assumptions, get rid of some of them, but at least for now, as I mentioned before, we're going to assume that the geometry of the robot and the geometry of the environment are provided to us in some format; think of a CAD model, for instance. The other assumption we're going to make is that any path our motion planning algorithm comes up with is actually executable by the robot. This is not necessarily true. For instance, imagine you have an autonomous car and my motion planning algorithm says "just move sideways"; this is not something at least a normal car is able to do, literally move sideways. For now we're just going to assume omnidirectional motion: the robot can execute any continuous path that comes out of a motion planning algorithm. We'll relax that assumption in just a couple of lectures. The third assumption is that any path given by a planning algorithm can be perfectly followed by the robot. Again, this is not really true, but this is someplace where feedback control really helps: you can correct for some deviations, at least, from a target path using feedback control. It's not going to be perfect, and thinking about the imperfections, again, we'll get to in a couple of lectures, but at least for this lecture and the next, we're going to make these three assumptions. Any questions? Yes? "Do you plan everything at the beginning of the robot's journey?" Yeah, good question. For now we're thinking about the second version of the problem, where we have some initial configuration, some final desired configuration, some obstacles, and we're just going to do the motion planning all at once. The assumption there is that everything is static, so the obstacles are not moving around, and that
everything is known beforehand. If those assumptions are violated, then it might make sense to re-plan: you find some plan to the end, you execute it a little bit, you see where things have moved around, you plan again, and so on. We won't think about that for now; I'll mention a little bit about how to do that closer to the end of the planning module, but for now it's beginning to end, all at once. Question? "You mentioned this, but what do you mean by perfectly?" What I mean is, basically, in terms of the distance to the desired path. Let's say this is my starting configuration and this is my ending configuration. If you have some imperfections in the dynamics model of your robot, or some external disturbance, you might start off here trying to follow this, but you might end up deviating a little bit. I've exaggerated the deviations here; maybe there's a wind gust that came and blew your drone away from your target trajectory. For now we're basically going to ignore that, ignore the possibility that your drone might move away from the desired path. We're just going to say that somehow, magically, your drone is going to exactly follow the path that the planning algorithm gives it. Good. Okay, so what we're going to focus on in this lecture and the next is what's known as discrete planning, and the reason we use that term is that we're going to discretize some continuous environment. Here's a cartoon version of the planning problem. Let's say you just have a point robot, a robot with no physical extent, that starts off at this point A and wants to get to point B; these are the obstacles in blue. What we're going to do with these discretization methods is take this continuous space, R^2, and discretize it into grid-like regions, as illustrated on the
slide, and then we're going to solve the planning problem for this discretized environment. The reason we're going to do this is that if we discretize our continuous environment, then we can use some pretty powerful algorithms for graph search, which we're going to discuss today, to solve the planning problem for the discretized version of the problem. There are a number of different choices I've made just in drawing this one picture. I chose a uniform grid: if you look at each cell in the grid, they have exactly the same size. In principle you could use a non-uniform grid; far away from obstacles you could have large cells, and close to obstacles you could have a higher resolution, so smaller cells. For simplicity we're just going to choose a uniform grid, all cells having exactly the same area. I've also chosen things to be four-connected. What that means is that the robot can go north, south, east, and west, the cardinal directions, but in this picture the robot cannot move diagonally. That's just the choice I made; it's somewhat arbitrary. I could have made this eight-connected, so the robot can move north, south, east, west, but also in the diagonal directions, northeast, northwest, southeast, and southwest. So that's another choice we need to make. And in your mind, I think it's useful to extend this to three dimensions; you can see that things start getting a bit more complicated in 3D. We also chose some resolution for the grid; I just somewhat arbitrarily chose the size of the cells. I could have made this coarser or finer, and that's going to have some impact on the amount of computation it takes to solve this planning problem. If I make it very fine, the amount of computation is going to get higher, but we can traverse very narrow gaps. If I make it very coarse and you
have narrow gaps in the environment, then in the discretized version of the planning problem you're not necessarily going to be able to find a path through that narrow gap. Okay, so I mentioned graph search; just a reminder of what a graph is. When we talk about a graph in the context of motion planning, what I mean is a collection of vertices, also known as nodes, that are connected with edges, and at least for now we're going to think about undirected graphs, so no arrows. Essentially the edges define connectivity: your robot can go from this vertex to this vertex, or this vertex to this vertex, but cannot jump between two vertices that are not connected by an edge. That's what I mean by graph in this context. So we're going to think about the motion planning problem for discretized environments as a graph search problem, and the way we're going to do this is by associating a vertex of our graph with each cell in the grid, so each of these empty spaces is going to correspond to a vertex in the graph. A point of confusion which sometimes occurs is to think of the vertices as the corners, where the lines meet; those are not what I'm going to call vertices. Each cell, each empty space, corresponds to a vertex; I'll illustrate that to make it more clear in a couple of slides. We're going to connect vertices with edges, based on either four-connectivity or eight-connectivity as I mentioned before, and then we're going to delete any vertices, and corresponding edges, that contain an obstacle. So we're just going to lay down a uniform grid, associate vertices with each cell in the grid, connect them using four- or eight-connectivity, and then take out vertices and corresponding edges where there are obstacles. I'll go through that process in more detail for one specific example. Okay, so here's the example, and a
couple of references as well. Planning Algorithms, a textbook by Steve LaValle, is a really amazing reference for all things related to motion planning. I think the textbook was written around 2006, but it's still pretty up to date and really, really good, and I've written out specific chapter references in case you're interested in digging into some of the details here. Okay, so here's the example. In this case we have a five-by-five grid that we've discretized our continuous environment into. Again, the goal is to get from point A to point B without colliding with this blue obstacle region, and I've written out the process that I sketched a couple of slides ago: associate a vertex with each cell, connect the vertices using some convention like four- or eight-connectivity, and then remove vertices and edges that are occupied by obstacles. Let me go through that process explicitly. Here is the first step: each cell is associated with a vertex. For simplicity I'm just going to choose four-connectivity, so the robot can go north, south, east, and west, except at the boundaries. The next step is to remove the vertices that contain some obstacle: anything that was in the blue region, we take those vertices out, and any edges that were connected to those vertices, we take out as well. We're then left with this graph. Hopefully that process was clear, but I'll pause for a second and see if there are questions on that. Okay. All right, so we're going to have to use some labeling convention. I'm going to label vertices with (i, j); this is (x, y), so column comma row, not the usual matrix convention, but I think it's simpler to think about it as x, y. So, for instance, the starting configuration A is (2, 3), the second column and the third row. We're indexing starting from one, not from zero; I think it's easier to think about one-indexing in this context. And (5, 5) is the goal configuration. Okay, so we're going to start off at the starting vertex A, incrementally explore the graph, expanding out from A, until we've arrived at the goal vertex, which is B, and then we're going to stop. As I mentioned, today we're just going to focus on finding feasible plans, just some way to get from vertex A to vertex B without colliding with the obstacles; we're not going to think explicitly about optimality. We will do that in the next lecture. All right, so here's the general algorithm, written out explicitly. I'm not expecting you to parse through this in detail; I'm going to walk through an example, and I think it'll become much clearer. What I want to highlight is just the structure of the algorithm. We're going to maintain one key data structure, which we're going to call Q, short for the word queue, q-u-e-u-e; we're going to add things to and take things out of that queue. There's a while loop, which ends essentially when you reach the goal configuration, and there's a for loop here, which is the incremental exploration part: we're going to start off with some node, look at its neighbors, look at the neighbors of the neighbors, and so on, until we finally reach the goal configuration. Okay, so let's walk through this example; we've already discretized it in the last couple of slides. We're going to start by initializing this Q data structure with the starting vertex, which again we're calling A, the same as (2, 3).
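Before walking through the iterations, the grid-to-graph construction described on the last few slides can be sketched in a few lines of Python. This is my own illustrative code, not the course's implementation; the function name and the representation (an adjacency list keyed by (x, y) cells, 1-indexed as in the lecture) are assumptions.

```python
# Sketch of the lecture's construction: lay down a uniform width-by-height
# grid, label each cell (x, y) with 1-based column/row indices, connect cells
# with 4-connectivity (E/W/N/S), and delete obstacle cells along with their
# edges by simply excluding them from the free set.

def grid_graph(width, height, obstacles):
    """Return an adjacency list {cell: [neighbor cells]} over free cells."""
    free = {(x, y) for x in range(1, width + 1)
                   for y in range(1, height + 1)} - set(obstacles)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # E, W, N, S
    return {(x, y): [(x + dx, y + dy) for dx, dy in moves
                     if (x + dx, y + dy) in free]
            for (x, y) in free}
```

On the five-by-five example, the cells inside the blue obstacle region would go into `obstacles`, and the resulting dictionary is the graph the search runs on. Switching to eight-connectivity just means adding the four diagonal moves to `moves`.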
We're going to maintain another data structure, which is just a list of all the vertices that we've already visited. A is where we're starting, so we're going to mark that as visited, and all the visited cells, the visited vertices, we'll mark in red in the picture. Okay, so that's the zeroth iteration of the algorithm, just the initialization. In the first iteration, we're going to take something out of the queue; there's this Q.get_vertex function. Right now there's only one thing in the queue, the starting vertex, so we don't have any choice; we're just going to take that out, and anything we take out, we're going to call x. So we've taken (2, 3), the A vertex, out of the queue. We're then going to look at the neighbors of x. There is a maximum of four possible neighbors, since we're looking at four-connectivity. If we look at the (2, 3) vertex, the neighbors are (1, 3), that's this vertex over here, and (2, 2), that's this vertex over here; I'm just listing the neighbors of (2, 3). We're then going to mark these as visited, coloring them red, and add them to the queue. So at the end of this first iteration, we've taken the A vertex out of the queue and we've put two vertices, the neighbors (1, 3) and (2, 2), into the queue. All right, any questions on this first iteration? Yes, exactly: we were originally given the obstacle geometry and the starting and end locations, and from that we can derive this graph, which encodes all the connections, all the jumps that the robot can make in one step. Question: can we visit a vertex again? So, that's a good question, I'll get to
that in just a second, and the short answer is no, we're not going to repeatedly visit vertices that we've already visited. Okay, other questions on this first iteration? Question: "just at the very beginning, we put (2, 3) into the queue and then removed it immediately?" Yes, in the first iteration we initialize the queue with (2, 3) and then we're just taking it out immediately. It gets more interesting at the next iteration. So at the end of the first iteration, this was our queue: we took out (2, 3), we put in its neighbors (1, 3) and (2, 2), and we marked those as visited in red. Now we're going to take something out of the queue again. Here we're going to implement what's known as a first-in, first-out convention for taking things out of the queue: things that were put into the queue first are the ones we prioritize when we take things out. There's some tie-breaking here: when we list the neighbors in the algorithm, we can choose some particular convention for exactly what order the neighbors get added to the queue. It doesn't really matter that much, but for instance we could go clockwise or anti-clockwise; I think I've picked some particular convention, but maybe I'm not being super careful. Just imagine some particular convention, like starting with the neighbor on the west and then going clockwise around, adding things in that order. Okay, so we're going to take the first thing in the queue, (1, 3), out of the queue; again, anything we take out, we're calling x. Then we repeat the same process: we look at the neighbors of x. (1, 3) is this vertex over here, and it has three neighbors: it has (1, 4), that's this vertex; it has (1, 2), that's this one; and it has (2, 3), which was our original starting vertex A. And this hopefully answers your question about whether we're going to revisit vertices: since we've already visited that neighbor, A, which was marked in red, we're not going to revisit it, so we won't add it to the queue, because in a sense we've already explored it. That's why I listed it but then crossed it out. So we mark the new neighbors as visited and then add them to the queue. Just to reiterate: I took out (1, 3) from the queue, so (2, 2) is still there, now first in line, and then we added (1, 4) and (1, 2). Question? Yes, it's exactly breadth-first search; I guess I'm reintroducing it. If you've seen breadth-first search, this is exactly breadth-first search; you've probably seen it if you've taken computer science courses, and if you have a different background you may not have. As I mentioned, depending on your background, things might be more or less familiar. Other questions? Yes, people definitely still use graph search. You mentioned the Bellman recursion, dynamic programming; those operate with a similar structure, but they have more heuristics to guide the search, and that's something we'll actually spend some time thinking about in the next lecture. Even though modern search algorithms build on ideas from graph search, I think it's useful to understand the graph search basics to get to the more advanced algorithms. Okay, all right, so let's go through maybe one or two more iterations. Iteration three: I've just copied over the queue
from the last iteration. Again we're implementing first in, first out, so we take out two comma two, which is the first thing, and we look at its neighbors. So two comma two is this vertex over here; again it has three neighbors: one comma two, two comma three, and then two comma one. Two of these neighbors have already been visited (they were the ones that are marked in red), so the only new thing that we need to add is two comma one. We mark that as visited and then add it to the end of the queue, and then we repeat. I guess I'll go through one last iteration. So this was again the queue, or sorry, the queue from the end of the previous iteration, from which we take out a vertex: the first one, which we're calling x, which is one comma four, which is this vertex over here. We look at its neighbors; there are just two of them. One of them we've already visited, so we don't go back into that; the new one, one comma five, we mark as visited and add it to the queue. And we, yeah, keep going until we've reached the goal vertex. And I guess when I say reach the goal vertex, what I mean is: when x, so the thing that we take out from the queue, we can check if it's equal to the goal vertex, and if it is, then we stop, we terminate the algorithm. Okay, I guess any questions on the algorithm itself? Yeah, and you can maybe look back at the fully fleshed-out algorithm that I had written out on the slides (it's going to be posted on Canvas) and compare that with the example. But yeah, I guess, questions on the algorithm? Okay. So one thing we didn't do is actually find the path, right? Like, we just had this algorithm that started somewhere and ended when it reached the goal, but I didn't tell you explicitly how to get the actual path from A to B out of this algorithm. So we have to make a small kind of modification, or addition, to the algorithm, which is that we keep track of parents of
vertices. Um, so basically every vertex that we explore was the neighbor of some vertex that we were calling x, right? So for every x we were looking at its neighbors, and for every neighbor we could say: this is the vertex that was sort of my parent, that I came from. And we can keep track of that for every vertex, and then we can backtrack: so we can look at the parent of the goal configuration, and then we can look at the parent of the parent, and then the parent of the parent of the parent, and so on, and then we're guaranteed to get to a, which is where we started off. Okay, I guess questions on that? All right, so yeah, just a few notes. So we implemented this q.get_vertex using this first-in-first-out methodology, so anything that gets put into the queue first has priority when we take things out. And yes, as someone mentioned, this goes by the name of breadth-first search, and the reason it's called breadth-first search is that we're prioritizing breadth rather than depth. So essentially what BFS, breadth-first search, is doing is it's searching for all paths of length k before moving on to searching for paths of length k plus one, and that's kind of the structure of the algorithm: so we're exploring here, and then we're exploring here, and then exploring paths of length two, and then length three, and so on. Okay. Um, so I guess there's a question: how do we store the parents? Yeah, yes, I think that's the way, or at least that's the simplest way to do it. There might be more sophisticated data structures, but at least for the size of planning problems we're going to deal with here, you won't have to worry too much about memory efficiency. The simplest thing is just, for every vertex, you could create a dictionary or a list or something; yeah, nothing more sophisticated than that. All right. Okay, so I guess there are other techniques
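As a concrete sketch of the procedure just described (FIFO queue, visited set, parent map, backtracking), here is one possible Python version. The names `bfs` and `grid_neighbors`, the 5-by-5 obstacle-free grid, and the particular neighbor ordering are illustrative assumptions, not something fixed by the lecture:

```python
from collections import deque

def bfs(neighbors, start, goal):
    """Breadth-first search with a FIFO queue, a visited set, and a
    parent map so the path can be recovered by backtracking."""
    queue = deque([start])
    visited = {start}
    parent = {start: None}
    while queue:
        x = queue.popleft()              # FIFO: take out the oldest entry
        if x == goal:                    # reached the goal: backtrack
            path = [x]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))  # path from start to goal
        for n in neighbors(x):
            if n not in visited:         # skip already-explored vertices
                visited.add(n)
                parent[n] = x
                queue.append(n)
    return None                          # queue exhausted: no path exists

# 4-connected neighbors on a 5x5 grid, added in one fixed order
def grid_neighbors(v, size=5):
    r, c = v
    steps = [(-1, 0), (0, 1), (1, 0), (0, -1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < size and 0 <= c + dc < size]

path = bfs(grid_neighbors, (2, 3), (0, 0))
```

Because BFS explores all paths of length k before paths of length k plus one, the returned path uses the minimum number of edges.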
other methods we could have used to prioritize what we take out from the queue. So another option: LIFO, or last in, first out. So this, I guess if you've heard of the term depth-first search, that's what this corresponds to. So I'll go through the algorithm relatively quickly, maybe, because it's just a small modification: the basic structure of the algorithm is identical to breadth-first search; the only thing we're going to change is what we take out from the queue at every iteration of the algorithm. Um, so iteration one is identical, right? We initialize the queue with the starting vertex a, and in the first iteration there is no choice; there's only one thing in the queue, we have to take it out. So we take out x, we look at its neighbors, mark those as visited, add them to the queue. So this is identical to the steps we took for breadth-first search. Yeah, so at the end of the first iteration, these are the things that are in the queue: two comma two, one comma three. Um, so there's again some specific convention, maybe just think about clockwise or anti-clockwise, just pick something and stick to it, for exactly how you add neighbors to the queue. So here we're taking out one comma three, which is this vertex; we're looking at its neighbors, so two comma three, which we already visited, and one comma four, which is new, and then we add those to the queue. So I guess it gets more interesting in this situation, right? So in this iteration, this is the queue: two comma two, one comma two, one comma four. So one comma four is what we added most recently, and so that's the thing that we're gonna take out from the queue: the last thing we added, we're going to take out, and we're going to call that x. So x equals one comma four, so that's this vertex. We look at its neighbors, one comma three and one comma five; one comma three we already visited, one comma five was new, so we marked that as visited and added it to the queue. Just one last iteration, maybe: so one comma five again was the
most recently added thing, so we take that out when we implement q.get_vertex, and we look at its neighbors: one comma four, which we already visited, and two comma five, which is new. So we add in two comma five, and so on. All right, I guess questions on this version of the algorithm? So the last one in is the first one to get taken out? Yes, essentially, yeah, you can think about it that way. So the queue is storing, like, where to go next; yeah, just, what should I explore next? That's the way to think about the queue. Good, other questions? Right, um, between DFS and breadth-first search? Yes, okay, good, so I was gonna get to that maybe in a slide or so. Okay, I'm gonna skip this; okay, the illustrations: I guess it's kind of just making a beeline towards the goal and getting to the goal. So yeah, let's think about this question. So I guess, what is your intuition? BFS does so many searches, okay; but also I guess it kind of also depends, because depth-first, in the worst case, could keep going; if it's the last trail that gets you to b, then that would probably be even worse. Yeah, so there's no general answer to which one is better. It really depends on the specific environment that your robot is trying to solve the motion planning problem in. And yeah, the intuition is: I guess depth-first search is going to keep exploring some particular path really deeply, until it either gets to the goal or hits some sort of dead end, basically because there's nothing else to explore, nothing unvisited to explore; whereas breadth-first search, like I said, explores paths of length k before exploring paths of length k plus one. So yeah, if depth-first search hits a dead end, it's just going to pick some other avenue to explore, and it's going to go all the way until it gets to the goal or hits another dead end. Um, so yeah, DFS could work well essentially if the path from the starting location to the goal location is
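The LIFO variant just walked through differs from the breadth-first sketch in a single line: the queue is popped from the end instead of the front. A minimal sketch, again assuming an obstacle-free 5-by-5 grid and made-up function names:

```python
def dfs(neighbors, start, goal):
    """Depth-first search: same skeleton as BFS, but the queue is used
    last-in-first-out, so we take out the most recently added vertex."""
    stack = [start]
    visited = {start}
    parent = {start: None}
    while stack:
        x = stack.pop()                  # LIFO: take out the newest entry
        if x == goal:                    # reached the goal: backtrack
            path = [x]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
        for n in neighbors(x):
            if n not in visited:
                visited.add(n)
                parent[n] = x
                stack.append(n)
    return None

# same grid and neighbor convention as before
def grid_neighbors(v, size=5):
    r, c = v
    steps = [(-1, 0), (0, 1), (1, 0), (0, -1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < size and 0 <= c + dc < size]

path = dfs(grid_neighbors, (2, 3), (0, 0))
```

Unlike BFS, the path DFS returns is feasible but generally not the shortest one.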
really long, and there are only a few ways to get from the start to the goal. So that's kind of the intuition I have: yeah, it's a long path, but maybe there's only one, or a few, long paths that you could take to the goal. Then, if you just explore all the way to the end, you might find the path to the goal more quickly than if you were searching all paths of length k before searching paths of length k plus one. But yeah, there's no general answer; it just depends on the specific environment that you're operating in. I guess, does that make sense? Questions on that? All right. Um, so yeah, thoughts on these algorithms, breadth-first search and depth-first search: other ideas on maybe how to improve them? I think you hinted at it. All right: maybe add a heuristic, yeah; first check out the directions that take the biggest step closing your distance to the goal, yeah. Yeah, so you could guide the search. We're not really exploiting any particular structure of the environment here, right? We're just solving this as a black box with the graph search algorithm. But the graph has some structure, because it's coming from a discretization of some continuous environment, and that continuous environment is a two-dimensional or three-dimensional environment. So there's an underlying geometry, which we've essentially ignored when we're thinking about how to guide the exploration of the graph. So yeah, other ways of implementing the q.get_vertex could involve biasing, like some heuristics for exploration. So maybe you should explore vertices that are closer in terms of euclidean distance, so the distance in the configuration space in which the motion planning problem is actually happening. So you choose the element in the queue that is the closest in terms of euclidean distance, and you prioritize that. So that's
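One way to sketch that euclidean-distance prioritization is with a priority queue, so that q.get_vertex returns the queued vertex closest to the goal. This greedy best-first variant is one possible reading of the idea, not the lecture's exact algorithm, and the names and the obstacle-free 5-by-5 grid are illustrative assumptions:

```python
import heapq
import math

def greedy_best_first(neighbors, start, goal):
    """Graph search where q.get_vertex returns the queued vertex that is
    closest to the goal in euclidean distance."""
    def h(v):  # euclidean distance to the goal in the underlying 2-D space
        return math.dist(v, goal)
    pq = [(h(start), start)]
    visited = {start}
    parent = {start: None}
    while pq:
        _, x = heapq.heappop(pq)         # take out the most promising vertex
        if x == goal:
            path = [x]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
        for n in neighbors(x):
            if n not in visited:
                visited.add(n)
                parent[n] = x
                heapq.heappush(pq, (h(n), n))
    return None

def grid_neighbors(v, size=5):
    r, c = v
    steps = [(-1, 0), (0, 1), (1, 0), (0, -1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < size and 0 <= c + dc < size]

path = greedy_best_first(grid_neighbors, (2, 3), (0, 0))
```

On an obstacle-free grid this heads fairly directly toward the goal; with obstacles, a pure distance heuristic can be misled, which is part of why more careful heuristic searches exist.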
one sort of reasonable way you could go about biasing the search; there are other ways as well to speed up the search. Question? I mean, wouldn't that sort of end up being something like reinforcement learning, where you have a reward function that rewards you the closer you get? Um, so the way I've described it so far, there isn't necessarily any learning, but you are right. We'll talk about RL towards the end of the course; it's good that we're already thinking about RL. But you could use learning-based techniques to come up with a heuristic. So if you've solved, let's say, a whole bunch of different motion planning problems, you could say: okay, what are some strategies for prioritizing what to explore, and can you learn that from having solved a whole bunch of different motion planning problems? So it turns out something like that is what AlphaGo, and I guess other similar go-playing or chess-playing algorithms, do. Because if you think about chess or go, that's a really gigantic search problem: the initial configuration is the state of the board, the go board or the chessboard; the goal configuration is winning, right, like any configuration where you win. So you can think of that as a planning problem. It's a discrete planning problem, because the number of actions, the number of options you have, is finite, but it's a kind of gigantic discrete search problem, because the number of sequences of actions that can happen grows really, really quickly. So you have to really be clever about using your heuristics, about choosing what to explore in the graph, and that's where learning can come in. But yeah, I won't say much about learning now; we'll spend a fair bit of time talking about learning closer to the end of the
course. Yeah, good question. Are there other questions on this? Okay, so yeah, we're gonna see, not the learning part, but at least some way of biasing the search in the next lecture. Another thing that we haven't talked about in this lecture is optimality. Again, we just focused on feasibility, like finding some way to get from point A to point B; we're going to discuss optimality in the next lecture. But in some sense, even though we didn't directly or expressly think about optimality when we were thinking about breadth-first search or depth-first search, it turns out one of those algorithms actually gives you paths that are optimal with respect to some performance criterion. I guess, can someone see which algorithm, and what performance objective or criterion? Yes? Yep, yes, yeah, exactly right. So BFS, breadth-first search, as I mentioned, searches for all paths of length k before looking at paths of length k plus one. Right, so if you think about optimality as being defined by the path length, so if the optimal path in terms of path length has length k-star, then BFS is gonna explore all paths up to length k-star before moving on beyond that. So it's going to give you a path that's optimal in terms of the length; but there might be other performance criteria that you want to optimize, and that's not something that we've discussed yet. All right, yeah, I guess other questions or thoughts? Yeah, good question: so, does the algorithm ever fail to find a path? Another way to phrase it: the technical term here is called completeness of the algorithm. Completeness means: if there is a path from A to B, then your algorithm is guaranteed to find it. Or, I guess, let me say it a different way: an algorithm that has that property, that it's going to find a path if one exists, is called a complete algorithm. And if there is no path, then the algorithm will terminate in a finite number of iterations, like a finite amount of computation time, and tell you
there's no path. It turns out these algorithms, BFS and DFS, are complete for the discretized problem. Now, there's a question about how to translate the solution of the discrete motion planning problem to the continuous one, and that's something we haven't touched upon yet, so it's not necessarily complete for the continuous version of the planning problem. And the reason, I guess the kind of simple intuition, is: if you chose too coarse a discretization, and your environment has a small gap that you need to get through to get to the goal, then of course the discretization is not going to give you a feasible path. But at least for the discretized version of the problem, these algorithms are complete: if there is a path, they're going to find it for you. I didn't prove this; if you are interested in the proof, I think it's not super complicated actually, but it's in the Planning Algorithms book, in one of the chapters. Good question. All right, other questions? Cool, I think that's pretty much all I had today, so we'll end slightly early and I'll see you next week.
Introduction_to_Robotics_Princeton
Lecture_2_Princeton_Introduction_to_Robotics_Dynamics.txt
I think we're gonna go ahead and get started. So the plan for today is to start off with the main kind of technical meat for this course. As I mentioned in the previous lecture, the main motivating example that we're going to use as a unifying theme throughout this course is quadrotors. So just to give you a sense for what modern quadrotor or drone systems can do: this is a video from Skydio, which is the largest US-based drone manufacturer, and what it's doing is basically aerial photography and aerial inspection applications. So the video is showing some of the capabilities that the systems today are capable of. The final project for the course, as I mentioned, is going to be to do vision-based navigation, completely autonomously, and we're going to slowly build up towards that ultimate goal. Here's another video, for motivation: this is some work that I was involved with at Stanford. This was in a lab setting, where the goal was to take this drone, this quadrotor, and make it navigate through these kinds of hoops at relatively high speeds. All right, so we're not gonna do that quite yet; we will get to that kind of thing closer to the end of the course. For now, just for the first module on feedback control, we're gonna basically concentrate on what might be the simplest behavior you could feasibly ask for from a drone, which is simply to hover. So just hover in place; do nothing, in a sense, or nothing interesting maybe, but we'll see, actually there's quite a lot of interesting stuff that goes on just to make a drone hover. So for the next four lectures we're gonna develop techniques from feedback control theory to make a drone hover in place, and the approach that we're gonna take you can think of as kind of a two-part approach. The first part has to do with dynamics, so figuring out
how the drone moves, how the drone behaves, when you apply different commands to its propellers. So if you ask the propellers to move at different speeds, what impact does that have on how exactly the drone moves? That's the study of dynamics. And then the three lectures after that, we're going to think about feedback control: once we understand the dynamics of the system, once we understand exactly how the drone behaves, can we figure out some mechanism for correcting for deviations from your target goal? So if you want your drone to hover here, let's say there's a little wind that comes and blows the drone away, or maybe there's some imperfection in your model of the dynamics of the system: how can we make sure that we correct for those kinds of disturbances, or some uncertainty that you have in the dynamics? So that's roughly the plan for the next four lectures, which are going to focus on dynamics and feedback control. I'll just mention a couple of comments here. There are entire courses that can be devoted, and are devoted, to each of these topics, to dynamics and control. So if you're in MAE, then you've probably taken MAE 206, which covers a lot of the material that we're gonna briefly sketch here in this course. There's also MAE 542, which is being taught this semester; that's the advanced dynamics course, which covers Lagrangian and Hamiltonian mechanics. And we have a number of feedback control courses, MAE 433, 434, 544; so if you're in MAE, again, maybe you've taken 433 for instance, and some of the material we might cover in these next lectures might seem familiar, but then we're going to cover other things that might be less familiar to MAE students. We're only going to cover enough material on dynamics and control just to understand how the feedback control system for a typical robotic system works, and to develop enough techniques to allow us to reliably control the drone to this kind of hover
configuration. One benefit of starting off with dynamics and feedback control is that it allows us to establish some of the notation and terminology that we're going to use throughout this course. So I'll try to emphasize that: again, some of that terminology might be familiar to you depending on what major you're coming from, some of it might be unfamiliar, but either way, we're just going to make sure we're clear in terms of the terminology, and we're going to use that terminology throughout this course. Even though I'm using drones as a motivation for a lot of the technical material that we're going to cover, many of these techniques actually apply to other robotic systems as well. In particular, when we're thinking about feedback control, the techniques we're going to introduce can also allow things like this: a humanoid robot balancing on one leg. It's not quite doing it perfectly here; in fact, it falls completely. But if we think about this task, it bears some similarity to the task of making a drone hover: you're essentially correcting for deviations from some target goal that you want your system to be at. So yeah, that's kind of the plan. We're going to start off relatively simply; we're not going to look at the full 3D quadrotor just yet. We're going to start off with a planar model of the quadrotor. So it's really a quadrotor, with four propellers, but just imagine that you're looking at it from this side-on perspective, and that the motion is constrained to be in the plane. That's what I've illustrated here on the slides. And we're gonna say there are just two propellers, or maybe four propellers where you aggregate two propellers each, so you end up with two thrusters, essentially, and you can control the thrust forces of these propellers. So F1 and F2, we're gonna call those our control inputs, so we can directly command these motors to
produce a certain thrust, F1 and F2. So F1 is the thrust produced by this motor, F2 is the thrust produced by the right motor. Okay, I guess any questions on the motivation or setup? I'm just gonna switch to the blackboard now. Maybe just a quick handwriting check: my handwriting is not always super clear, so maybe folks at the back, can you read this? Okay, all right, okay, good. If at some point something is not clear, just call out, because like I said, it's not always going to be completely clear when I write; just ask, and I'll be happy to write slightly larger or just say whatever it is out loud. Okay, so we're actually going to start off with an even more simplified version of the planar quadrotor than I presented on the slide. This version only moves in one direction: imagine the planar quadrotor with its two propellers, but imagine that the orientation, so the angle of the quadrotor, is completely fixed. It's constrained to be in this horizontal configuration, and the only allowable motion is in the vertical direction, so in the y direction, if you like. Let's say this is the origin of some inertial reference frame, some reference frame that's attached to the ground or to your lab, and you have two directions, x and y, but there's going to be no motion in the x direction: all the motion is going to be purely in the y direction. And again we have these thrust forces F1 and F2. So this system, the way I've described it right now, has one degree of freedom. We'll abbreviate degree of freedom as DOF, and roughly, just intuitively, degrees of freedom are (let me write it out) the number of independent ways a system can move. In this case we can just look at the system and say there's one degree of freedom: we can only have motion in the y direction, so I can use just one number, the y height if you like, away from the origin, to completely describe the configuration of the drone. Right, so again, the
orientation is completely fixed, so if I just tell you what height the drone is at, that fully specifies the configuration that the drone is in. We're going to look at systems with higher degrees of freedom in just a little bit, but hopefully it makes sense that, because there's only one independent way this thing can move, this has one degree of freedom. Okay, so the next step will be to try to understand the dynamics of this one-degree-of-freedom system: how does the system behave, how does it move, when we apply different thrust forces from the propellers? So we're going to use Newton's second law here to get equations of motion, equations that describe how this system moves. So yeah, Newton's second law: mass times acceleration. Let's say the mass of the drone is little m; y double dot, that's the acceleration of the drone (again, everything is just in the y direction, so y double dot is the acceleration), equals the total forces acting on the system. There are three forces in total: there's F1 and F2, the thrust forces from the propellers, and also the force of gravity, so let's say we have gravity pointing downwards. So we can write m times y double dot equals F1 plus F2 minus the force of gravity. We're just gonna define a quantity that we'll call U1, and just a note on notation: an equals sign with a triangle, that's what I'll use for a definition. So I'm defining a quantity U1 to be F1 plus F2; this is basically the total thrust from the propellers of the drone. So we can rewrite this equation: we have m y double dot on the left-hand side, and F1 plus F2, which we're gonna call U1, just to make the equation slightly more compact, and because we'll use this notation later as well. So m y double dot equals U1 minus mg, which gives y double dot equals U1 divided by m, minus g. Okay, and I'll label this as equation one. So this is a differential equation, and this basically tells us how the system is going to behave: so if
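To make the idea of integrating equation (1) forward in time concrete, here is a minimal forward-Euler sketch; the mass, thrust, initial conditions, and step size are all made-up illustrative numbers, not values from the lecture:

```python
# Forward-Euler integration of equation (1): y'' = U1/m - g.
m, g = 1.0, 9.81           # mass [kg], gravitational acceleration [m/s^2]
u1 = m * g                 # total thrust chosen to exactly cancel gravity
y, y_dot = 0.5, 0.0        # initial height [m] and vertical speed [m/s]
dt = 0.01                  # integration step [s]
for _ in range(1000):      # simulate 10 seconds
    y_ddot = u1 / m - g    # equation (1)
    y_dot += dt * y_ddot   # update speed
    y += dt * y_dot        # update height
# with u1 = m*g the net acceleration is zero, so the drone stays at 0.5
```

Choosing u1 larger than m times g would make the drone accelerate upward, smaller would make it accelerate downward; a library integrator (e.g. scipy.integrate.solve_ivp) could replace the hand-rolled Euler loop.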
I prescribe a particular value for U1, so if I prescribe a particular total thrust that the propellers are applying, and if I give you some initial conditions, if I tell you where the drone starts off, so what height it starts off at and what speed, what y dot, it starts off at, you can, at least in principle, integrate this differential equation forwards in time, and that will tell you exactly what height the drone will be at at any given time. All right, any questions on this so far? Okay, good. So yeah, this was kind of a warm-up, just a one-degree-of-freedom system; now let's look at the planar quadrotor system. Again, we're gonna draw some axes. This is going to be an inertial reference frame, a reference frame attached to the ground or the room that the drone is operating in, and this is going to be my cartoon for the planar quadrotor. So often the first question we're going to ask when we encounter any new robotic system is: how many degrees of freedom does the system have? So how many independent ways can the system move? Another way to think about it is: how many numbers, how many scalars, do I need to prescribe in order to completely define the configuration of the system? So I guess, yeah, does anyone want to say how many degrees of freedom this has, and maybe give a justification? Okay. I might have maybe not been totally clear on something, but I saw a couple of other hands go up at the back there. Okay, good, yeah. So I should have clarified this: so yeah, the system that we're going to consider has three degrees of freedom, and as you said, we can think of the center of mass location as corresponding to two degrees of freedom, so that's two independent ways this thing can move: you can move horizontally, you can move vertically. There's also the orientation, and that's something we're going to consider; maybe I didn't clarify that the drone can now change its orientation. So this
orientation angle, and that provides another degree of freedom. And these are all independent, right? So I can move purely in x, keeping y and theta fixed; I can move purely in y, keeping x and theta fixed; I can move in theta, so just the angle, while keeping x and y fixed. Three independent ways of moving, three degrees of freedom. Okay, so the next step: again, we'll kind of mimic this process and get equations of motion for this planar quadrotor. Again, we can apply Newton's second law. I'm just going to write down the equations, what they look like; I'll give you a brief justification, but we won't really go into the details of the derivation. So these are the equations. So x double dot (x and y, again, give you the location of the center of mass of the drone): x double dot is minus U1 over m, sine theta. U1 has the same meaning as what we defined over there, so U1 is the total thrust, so F1 plus F2; as far as the board goes, yeah, U1 is just defined as the summation of those two thrust forces. y double dot is gonna be U1 over m, cosine theta, minus g. And then finally the equation of motion for the orientation: theta double dot equals F2 minus F1, multiplied by L, divided by I. So I and L are things that I haven't defined yet; let me do that. I'm going to move the x, y here. L is going to be, or rather 2L is going to be, the length of the drone; so from the center of mass to the axis at which the propeller thrust force acts, I'm going to call that L, and then the same thing over here. And I is another physical quantity: this is the moment of inertia, which has to do with the geometry and mass distribution of the drone. How about other questions? Yeah, so let's double check: let's say theta is positive; as we have drawn it, this is theta positive. In this case the thrust forces are acting in this direction, so if we look at the x component, we see that we have a force in the negative x
direction. So yeah, we can check that as well: let's say theta is zero, so in that case the drone is completely horizontal, and in that case the thrust force is completely vertical, and so there's no x component, right? Sine of zero would be zero, and so that checks out. And if theta is slightly positive, then the thrust force in the x direction is slightly larger, but in the negative direction. And I think this is the same in the y direction. Good, other questions? Yes, exactly, yeah: so the thrust force in the x direction is in the minus direction, in the left direction here. Good. Okay, so yeah, this moment of inertia: I guess maybe you've seen this in a physics class; I just think of it as some physical parameter. If you have a CAD model for your drone, the CAD model is gonna tell you, well, if you have an accurate CAD model for the drone, it'll tell you what the moment of inertia is. There are also experiments that one can do with your drone to figure out what the moment of inertia for the drone is. Okay, I'm gonna just define one more quantity here, just to make the equations a bit more compact, and again because we're going to use this notation later on. So, just as I defined U1 before, I'm going to define U2 as F2 minus F1, multiplied by L. This is the net moment, or torque, from the propellers, so from our two thrust forces. All right, so with this new notation I can rewrite these equations: x double dot equals minus U1 over m, sine theta; y double dot equals U1 over m, cosine theta, minus g; and finally theta double dot, it's just U2 divided by I. And I'll call these equations 2a, 2b, and 2c. All right, so I won't in general expect you to derive these kinds of equations of motion; if you're an MAE student, then you've again taken dynamics, and you've actually seen the derivation of these equations of motion for a planar quadrotor. But yeah, I'm not gonna ask you in homeworks or exams to use
Newton's laws and derive equations of motion; we're going to assume that someone, maybe your MAE friends, have told you exactly how to go through this process. What I do want you to understand, though, is the meaning of these differential equations. At a high level, these differential equations are telling us how the system moves, how it behaves. At a slightly more concrete level, we're saying that if I tell you what thrust forces are being applied, so that fixes U1 and U2, and I give you some initial conditions, so I give you some initial x, initial y, some initial theta, and also some initial speeds, some initial x dot and y dot, you can integrate these equations of motion forward, maybe using some computer software like Python, and that will tell you exactly where the drone is and what orientation it is in at any point in time. Right, so I guess that's the main thing I hope, from your differential equations course, that you are familiar with. All right, any questions on that? Okay, sounds good. (I need to squeeze this board out; I haven't taught in this classroom before, so it's gonna take some adjusting, I think. Okay.) All right, so we have these differential equations. I guess one property of these differential equations is that they're second-order differential equations. So second order just means that they involve second derivatives, right, so double derivatives in time on the left-hand side. One thing we can always do is go from a system of second-order differential equations to a system of first-order differential equations, and in particular, you're going to double the number of differential equations you end up with when you go from second order to first order. So this is something you've in principle seen in your ordinary differential equations course. I've found that people maybe don't always remember exactly how that process works; again, if you're in MAE, then you've seen this in dynamics, you
should be familiar with it, but if not, I'm just going to walk through that process so everyone is familiar with it. It's something we're going to use quite a bit in this course, so I think it's good to just go over it once in this lecture. So yeah, the goal will be to go from second-order ordinary differential equations to first order, and the way we do this is by introducing some auxiliary variables, some new variables. I'm going to define three new variables. The first one we're going to call v x, and again I'll put a little triangle on the equals sign to denote that I'm making a definition here. So v x is just going to be x dot, the time derivative of x, the x position of the drone. v y is going to be defined as y dot, and omega is going to be defined as theta dot, the time derivative of the orientation. With these definitions we can rewrite these three differential equations in first-order form, and we're going to end up with six first-order differential equations (three times two). The first three equations: x dot, just by definition, is v x, so we're just flipping these definitions around; y dot is v y; and theta dot, by definition again, is just omega. The more interesting ones are when we look at the time derivatives of these new variables that we introduced. So v x dot, by definition, is x double dot (v x is x dot, so v x dot is x double dot), and we can look at this differential equation: x double dot is minus u1 over m sine theta, so we can write that down, minus u1 over m sine theta. v y dot is u1 over m cosine theta minus g. And finally omega dot, which again by definition is theta double dot, comes from this last equation: it's just u2 divided by I. Okay, so we went from three second-order differential equations to six first-order differential equations, first order just because all of these equations
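The six first-order equations above can be sketched in code. This is a minimal sketch, not the course's official implementation, and the mass, inertia, and gravity values below are placeholder numbers I made up, not the Crazyflie's actual parameters.

```python
import numpy as np

# Placeholder physical parameters (assumed values, not a real drone's).
m = 0.03    # mass [kg]
I = 1.4e-5  # moment of inertia [kg m^2]
g = 9.81    # gravity [m/s^2]

def planar_quad_deriv(state, u1, u2):
    """Right-hand side of the six first-order ODEs.

    state = [x, y, theta, vx, vy, omega]
    u1 = f1 + f2 is the total thrust, u2 = (f2 - f1) * l is the net moment.
    """
    x, y, theta, vx, vy, omega = state
    return np.array([
        vx,                          # x-dot     = vx    (by definition)
        vy,                          # y-dot     = vy
        omega,                       # theta-dot = omega
        -u1 / m * np.sin(theta),     # vx-dot    = x-double-dot    (eq. 2a)
        u1 / m * np.cos(theta) - g,  # vy-dot    = y-double-dot    (eq. 2b)
        u2 / I,                      # omega-dot = theta-double-dot (eq. 2c)
    ])

# Sanity check: at theta = 0, with thrust exactly balancing gravity and no
# net moment, every derivative is zero, i.e. the drone hovers.
deriv = planar_quad_deriv(np.zeros(6), u1=m * g, u2=0.0)
```

Note how the first three rows are pure bookkeeping (the definitions of vx, vy, omega) while the last three carry the actual physics from equations 2a, 2b, and 2c.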
only involve one time derivative, one dot. Basically, what this allows us to do is write the equations of motion in a really standard form that we're going to use throughout the course. So it's purely for convenience; there's nothing particularly deep about going from second order to first order. We're just always going to write equations of motion in first-order form, and that's going to make things notationally convenient. So we're going to define the state of the system. This is some terminology that we're going to use really throughout the course. I'm going to define this quantity x bar; bars are going to denote vectors, and I'll try to be pretty consistent with this throughout the course: any variable that has a bar is going to correspond to a vector quantity. So I'm going to define x bar, which we're going to call the state, to be x, y, theta, and then v x, v y, and omega. The first three variables, x, y, theta, we're going to call configuration variables, because if I specify what these numbers are, that specifies the configuration of the system. And in general, the number of configuration variables we have is going to correspond to the number of degrees of freedom; each of these configuration variables gives us one independent way that the system can move. The quantities v x, v y, omega in this case happen to be the time derivatives, but basically they're everything that appears without the dot on the left-hand side. What we can then do is just rewrite these equations in a standard form that we'll use, like I said, throughout the course. Actually, sorry, just one more piece of terminology here: u1 and u2. I'll define another vector concatenating these, and we're going to call this the control input. These are things that we, or the robot, can directly control; the robot can directly control the
propeller forces, which directly control u1 and u2. So u1, again, was the total thrust force, and u2 is the net moment applied by the propellers. Okay, so with this introduction of the state and the control input, we can rewrite these first-order differential equations in the following form: x dot, the time derivative of the state (which by definition is d over dt of x, which I defined over here as x, y, theta, v x, v y, omega), is the right-hand side of these equations, and I'll just copy that over: v x, v y, omega, minus u1 over m sine theta, u1 over m cosine theta minus g, and then u2 over I. Looking at these equations, what we see is that they have the following form: x dot is some function, which we can call f, of x and u. And this is going to be the general form of equations that describe the dynamics of our robotic systems. Maybe just convince yourself of this: basically, what I'm saying is that everything that appears on the right-hand side over here is purely a function of x and u. Every variable that appears here is some component of either the state or the control input vector that we've defined. All right, again, we haven't done anything mathematically profound; we just went from second-order differential equations that came from F equals ma to a system of first-order equations. This is purely for convenience in the future. For instance, when we encounter some new robotic system, we'll just say, let the equations of motion for that system be described by x dot equals f of x and u, and any techniques that we then introduce for doing feedback control and so on will just take this general form as a starting point. All right, so let me just pause for a minute and see if there are questions. I'm definitely happy to answer anything to do with the process of going from second order to first order, or just the motivation for
this, or the general form. Any questions on any of this? Could I go through the motivation again? Is it more than just convenience? Yeah, nothing more than that. This is convenient, it's the standard form, and maybe I'll just add one more convenience argument: many software toolboxes assume that things are specified in first-order form. If you use MATLAB, for instance, it has a nice function called ode45 which allows you to integrate equations of motion, and the standard format in which you describe equations of motion there is first-order form. Similar functions exist for Python, which I think we'll use to some degree in this course as well. So yeah, it's purely for convenience; there's nothing much beyond that. Right, questions? Yeah, okay. So, about the function notation: you're correct, maybe I should put a bar here as well, but I won't do that. It's going to be implicit: the left-hand side here is a vector, and so we can implicitly say that the function is mapping to a vector quantity. You're correct, but just to reduce the number of bars and so on, I won't put bars on functions. Good, other questions? Okay. So the next thing I want to talk about, actually, let me do that here since we have the picture up. So far we've been assuming that these thrust forces F1 and F2 are things that we can directly specify. I can just tell my drone: produce some thrust F1 in this propeller, produce some thrust F2 in the other propeller. That's not quite how motors work; you don't directly get to specify a thrust. There's some mapping from what we actually control to the thrust that gets produced by the motor, and that has something to do with the aerodynamics: the propeller shape affects the thrust that you produce if you spin the propeller at a certain rate. So I just
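As an illustration of that toolbox point, here is a sketch of integrating the planar-quadrotor first-order system with SciPy's `solve_ivp`, Python's rough analogue of MATLAB's `ode45`. The parameter values and the constant hover input are assumptions for the example, not course-supplied numbers.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, I, g = 0.03, 1.4e-5, 9.81  # assumed placeholder parameters

def f(t, state):
    # Constant control input for illustration: thrust balancing gravity,
    # zero net moment.
    u1, u2 = m * g, 0.0
    x, y, theta, vx, vy, omega = state
    return [vx, vy, omega,
            -u1 / m * np.sin(theta),
            u1 / m * np.cos(theta) - g,
            u2 / I]

# Initial condition: level drone drifting right at 0.5 m/s.
x0 = [0.0, 0.0, 0.0, 0.5, 0.0, 0.0]
sol = solve_ivp(f, (0.0, 2.0), x0, rtol=1e-9, atol=1e-12)
# With theta = 0 and hover thrust there is no net force, so the drone
# simply coasts: x(t) = 0.5 * t.
```

The solver only asks for the first-order right-hand side `f(t, state)`, which is exactly why writing dynamics in the standard form x dot = f(x, u) is convenient.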
want to briefly talk about that mapping, which we're going to call the motor model. Actually, even this is some approximation, but to a very good degree of approximation, the thing that we directly control is the propeller speed, and by the speed I mean the rotational speed, how quickly the propeller is rotating. Really, what's going on is that we're applying some current, and the motor spins at a rate that's proportional to that direct control input, the current. We're going to ignore that part of it; we're just going to say that we directly get to control the speed of the propeller. And it turns out that there's a nice relationship between the rotational speed of the propeller and the thrust, the force that gets produced by the propeller. This comes from some aerodynamic analysis. What's really happening, one way to think about it, is that this is an application of Newton's third law: the propellers are applying a force on the air, the air is applying an equal and opposite force on the propeller, and that's where the thrust force comes from. The exact force relationship has to do with the geometry of the propeller, exactly how the propellers are shaped, but it turns out you can abstract away some of that aerodynamics and end up with this really nice relationship, which comes from the aerodynamics but which you can also test empirically: the thrust force F equals some constant, which I'm going to call k F (we're going to call this the thrust coefficient), multiplied by the square of the control input, which is the rotational speed. The particular value of this thrust coefficient depends on the geometry of your propeller. So if you change the propeller, if you change the size for instance, if you make it bigger for instance,
you might intuitively think that k F increases, but the specific proportionality constant depends on exactly the geometry of the propellers. All right, so let's think about a question here: how can we measure what this thrust coefficient is? That's something that's important, because like I said, what we really control is not directly a thrust force; you get to control the rotational speed of the propeller, but our equations of motion, the dynamics, involve F1 and F2 through u1 and u2 over here. So if you actually care about predicting the motion of the drone, or controlling the drone, you need to know what the thrust coefficient is, what this constant k F is, for your particular propellers, for your particular drone. So, does someone have ideas for how one might go about measuring this empirically? Go ahead. Yeah, exactly right. Let me just draw the picture; you described it beautifully. So you could basically place the drone on a scale, turn on the propellers at different speeds, and measure what the scale is reading. You have to be slightly careful about the orientation relative to the scale: if I do it the way you described, just keep the drone on a scale and turn on the propellers, it's just going to fly away, right? So maybe you invert it and make it push down, that's one way to do it, or you just don't spin the propellers fast enough to exceed the mass times gravity of the drone. There are a couple of ways of doing it, but this is the basic idea, which is to collect a bunch of data where you set different propeller speeds. I'll put the propeller speed squared on the x-axis and the measured thrust on the y-axis, say. And we'll have a bunch of points. You might have some error in your measurements: maybe the scale is not perfect, maybe the thing that you're commanding
is not exactly mapping to the propeller speed, or maybe you have some measurement error in the propeller speed itself. Actually, this should pass through zero: if you're applying no propeller speed, that should correspond to no force. And then you can fit a line, the best-fit line that you get from this data, and the slope of that line exactly corresponds to the thrust coefficient for your particular propellers. This is something that you'll get to do in the first assignment that goes out next Wednesday, so about six days from now. You won't do this experiment physically; we've done it for you. Our past TAs have done this experiment, where they placed our Crazyflie drones on a weighing scale with sufficient sensitivity and collected this data. We'll give you this data, and you can see that this relationship holds empirically and calculate the thrust coefficient for our particular drones, which are these Crazyflies. All right, any questions? Okay. So for the model that I've described, the planar quadrotor model, we actually have three physical parameters that we need to know in order to fully specify the dynamics for our particular drone. The first one is the mass of the drone; that's easy to measure, you just put it on a scale. The second one is the moment of inertia, and that one is slightly more tricky to measure. Like I said, if you have a good CAD model, your CAD software is going to tell you what the moment of inertia is. There are also physical experiments that you can do; I can describe those later if you're interested, and that's actually what we did for the Crazyflies to get their moment of inertia. And then the thrust coefficient. Right, so if I give you these three physical parameters, that completely specifies the dynamics, at least for the planar quadrotor. Okay, so let's make it slightly more interesting now. So we pretty
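The fitting step described above can be sketched with synthetic data. The thrust coefficient and noise level here are invented for illustration, not the Crazyflie's measured values. Since F = k_F omega^2 has no intercept, the best-fit line is constrained through the origin.

```python
import numpy as np

k_true = 2.5e-8                          # made-up thrust coefficient
omega = np.linspace(1000.0, 2500.0, 10)  # rotor speeds [rad/s]

# Simulate noisy scale readings of the thrust at each speed.
rng = np.random.default_rng(0)
force = k_true * omega**2 + rng.normal(0.0, 1e-4, omega.size)

# Fit F = kF * omega^2, i.e. a line through the origin in the
# (omega^2, F) plot: the least-squares slope is sum(x*y) / sum(x*x)
# with x = omega^2.
x = omega**2
k_fit = np.sum(x * force) / np.sum(x * x)
```

With real bench data you would plot `x` against `force` first to check that the relationship actually looks linear before trusting the fitted slope.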
much fully understand the dynamics of the planar quadrotor. Again, these equations of motion allow you to predict exactly what the drone's motion will be, depending on what the initial conditions are: the initial x, y, theta, and the x dot, y dot, theta dot as well, so v x, v y, omega. If I give you those initial conditions and the physical parameters, you can in principle integrate the equations of motion forward to predict the drone's motion. What we're going to do next is start thinking about the dynamics of the 3D quadrotor. That, of course, is what we really care about; we ultimately care about controlling the Crazyflie drones in our first lab. Let's again think about how many degrees of freedom this system, the 3D quadrotor, has. Degrees of freedom, again, is just the number of independent ways a system can move, or equivalently, the number of scalars I need to give you in order to fully specify the configuration of the drone in space. So, anyone want to take a crack at this: how many degrees of freedom? Go ahead. Six. Good, and why? Okay, perfect. Yeah, so we have six degrees of freedom. The position of the drone I can specify with three numbers, x, y, z; these three numbers specify the center-of-mass location of the drone. I still have to resolve the orientation, and I can do that with three numbers. Hopefully the position part is clear to everyone; the orientation part is a bit more tricky, so I'm going to describe exactly what these three numbers are, what the options for the three numbers are. The specific concept we're going to use to describe orientation is Euler angles, and Euler angles are really common for describing orientations for drones. There are some other possibilities as well, like quaternions, axis-angle conventions,
and so on. In this course we're just going to stick with Euler angles, and I'm going to give you a brief crash course on Euler angles. Maybe just a quick show of hands: how many of you have seen Euler angles before? Okay, not everyone, maybe 60 percent or so. Okay, good, so I won't be wasting everyone's time with this discussion, and if you have seen it, maybe it's a good refresher, because it does cause a fair bit of confusion sometimes. Okay, like I said, this is one way to specify the orientation of some 3D object, and the way this is going to work is we're going to first define some reference frame. So these are three directions, e x, e y, and e z, that form a right-handed frame; you can see that e x cross e y is e z. I'm going to label this frame as I, and so I is our inertial reference frame, some reference frame that is attached to the ground, attached to the room that the drone is flying in. The more interesting reference frame is the one that's going to be attached to the drone. Just imagine a plane, the plane that the four rotors are in. I'm going to define three directions, b x, b y, and b z. B just stands for body: these are three axes, three directions, that are attached to the body of the drone. b x is in the forward direction, the direction the drone is facing; b y is to the left; and b z is just straight up from the perspective of the drone. This is the convention we're going to use. So just imagine that forward is here. The drone actually has a little indicator that tells you which way is forward; there's a little piece of the PCB that's sticking out. You'll read about that in the instructions when we actually start working with the hardware, but just imagine this is forward. So b x is an axis that's attached to the drone and points in its forward direction, b y
is to the left, and b z is up. Again, this forms a right-handed frame: b x cross b y, you can double-check using the right-hand rule, is b z. And this frame is attached to the drone, so as the drone moves, these directions move with the drone, and the origin, let's say, is at the center of mass of the drone. All right, so with these conventions we can introduce Euler angles, which then allow us to describe the orientation of the drone. So here's one way to describe the orientation of the drone in three dimensions. We're going to follow a process that allows us to describe, or specify, the orientation of frame B, the body reference frame, relative to frame I, the inertial reference frame (relative to the room, if you like to think about it that way). We're going to describe this process where we first, maybe step zero, start off by aligning B with I: the three body directions b x, b y, b z aligned with the inertial directions. Just imagine that you start off by placing the drone so that b x is aligned with e x, b y is aligned with e y, and b z is aligned with e z. The first step is then going to be to rotate the B frame about the direction e x, the inertial x direction, by some angle which I'm going to call phi, and we're going to call this angle the roll angle. Physically, what this means is: we start off with the drone in this configuration (again, imagine that e x is this way, e y is this way, e z is up, and b x, b y, b z are aligned with those axes), and you rotate the drone about the inertial x-axis, using the right-hand rule, by some angle phi. So you align your thumb with the e x direction, you rotate the drone, and it ends up rolled like this. So that first step makes sense? Any questions on that? Okay, so let me describe the second and third steps. The second step is
going to be rotating B about the e y direction by another angle, which we're going to call theta. I'll give this a name as well: we're going to call this the pitch angle. And finally, the third step is to rotate B about the e z direction by another angle, which we're going to call psi, and we call this the yaw angle. All right, so let me just show you with the physical drone. I also have this Wolfram Alpha demonstration, which I think should be pretty useful if you haven't seen this before. So again, you start off here; you rotate about the inertial x-axis, that's the roll angle. You then rotate about the e y direction (e y, remember, was to the left), so I align my thumb along the e y direction and apply the right-hand rule, so it goes this way. And then e z goes up; I align my thumb with the e z direction, apply the right-hand rule, and it goes like this. So, any high-level questions? Then I'll walk through that process again with the Wolfram Alpha demo, which might make it more clear. Let me actually just do that; maybe while I'm pulling it up, I can see if there are any questions. These three angles, phi, theta, and psi, are known as Euler angles. So let me just go through the process. Just a caveat: the specific convention that I'm going to illustrate here doesn't quite match exactly this convention; I'll say a bit more about the different conventions in a bit. This illustration is just to give you a picture of what this general process looks like. So we have two frames. Right now they're both aligned, so you can't tell the difference. One of the frames, the inertial one, is the one labeled with the little arrows as the pointers; the body frame, the one that's attached to the body of the drone, has little spheres at the end. And the x, y, and z directions are labeled with the
different colors. So we just start off with the two frames aligned, and then we rotate about one of the inertial axes. In this case we're rotating about the vertical axis; you can think about that as e z, for instance. All right, so I do this one rotation, let's say I go all the way up to here. I then rotate about the inertial green axis, that's what this is, and then finally I rotate about, I guess, which one is this? Can someone tell? This is the inertial z-axis again, right? So I did three different rotations: the first one was about the inertial z-axis, then the inertial green axis. Which one is that? You can think of that as y (maybe x is red, green is y, and z is blue), and then the inertial z-axis again. Let me just go through that process one more time. All right: inertial blue, the inertial z-axis, that's this rotation; inertial green, that's inertial y, that's this; and then inertial z again. So that's one particular Euler angle configuration. The claim I'm going to make is that no matter what orientation the drone is in, however it's configured in orientation, I can describe that orientation by giving you three numbers, which are these Euler angles. So, does that claim make sense, and are there any questions again on this process of rotations? Okay, yes, go ahead. Yes, good. Okay, so that was going to be the next thing I emphasized: the fact that there are multiple conventions. I actually illustrated two different conventions here. In the first one, we were doing these rotations in a specific order: first about e x, then about e y, then about e z. That has a name; let me just turn on the lights again. What I described on the board over here is called the space one-two-three Euler angle convention. The reasoning for where this name comes from: "space" is because we're doing the rotations about the inertial axes, which are fixed in space, not
fixed to the body, and "one-two-three" has to do with the order: we first do the rotation about the e x direction, that's the first direction, then the e y direction, the second, and e z is the third. So maybe just as a check of whether we understand the terminology: what name would you give to the convention that I illustrated here? Right at the back. Maybe space three-two-three? Okay, good. So, space three-two-three: "space" again because we're doing the rotations about the spatial axes, the inertial axes; "three-two-three" because we did z, y, z, so that's three, two, and three. That's where the naming convention comes from. If you count up the number of these different conventions, that's 12 different space conventions. To make things even more interesting, or confusing, depending on your point of view, there's another way to do these rotations, which is to not do the rotations about the spatial directions, the inertial directions, but to do the rotations about the body directions. Let me see if I can illustrate that over here. Here we're rotating; you can think of this as doing a rotation about the body z direction. We then rotate about this; you can see that this rotation is happening about the body green direction. And then finally this one is again happening about the body blue direction. Let me maybe go through that again: first one, body z, the z direction attached to the body (the body, again, is labeled with the little spheres at the end); then body y, that's this one; and then body z again. So this convention that I, sorry, yes? So with this one we didn't actually rotate about the red one; this was just blue, green, blue. There is a way to do it about the red one as well; that would be yet another convention. I don't think this Wolfram Alpha demonstration quite does that; it illustrates a couple of different conventions. Again, maybe just as
a check: what would you call this convention that I just illustrated? It's still three-two-three, but it's not space, it's body, right. So what I just illustrated, you would call the body three-two-three convention. And that's 12 more of these conventions, where you do the rotations about the body axes, and in some of the conventions you would do one of the rotations about the red one as well. Okay, all right, let me just synthesize. I'm going to switch back again; sorry, the projector is taking a little bit of time. Yes, exactly right. Maybe I'll just repeat that: for the space conventions, you're doing the rotations about these fixed axes, the fixed inertial axes; with the body conventions, you're doing them about axes that are changing, the intermediate axes. Once you do the first rotation, that changes where b x, b y, b z are; you do the next rotation about that new direction, and then the third one about the newest direction. So I'll just make a couple of high-level comments here. The first one is: in this course, we're not going to be worried about the multiple different conventions. The space one-two-three is in some sense the cleanest one; all the rotations are happening about the spatial axes, the order is clear, one-two-three, x-y-z, and this is the one we will use for this course. Another comment is that in practice, beyond this course, you might encounter these different conventions, and in practice this causes a massive amount of confusion and errors. Just as a quick anecdote: I spent, all right, wasted I should say, something like three months as a PhD student because I wasn't completely sure what convention was being used. I was working with a fixed-wing aircraft, a small drone, trying to make it avoid obstacles. I had a motion capture system,
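The space one-two-three convention can be written out as rotation matrices. For rotations about the fixed (space) axes, applying x first, then y, then z means each later rotation multiplies on the left, so R = Rz(psi) Ry(theta) Rx(phi). This is my own sketch of the convention described above, with a made-up sanity check, not course-provided code.

```python
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def space123(phi, theta, psi):
    # Space (extrinsic) 1-2-3: rotate about inertial x by phi, then
    # inertial y by theta, then inertial z by psi.  Later rotations about
    # the *fixed* axes compose on the left.
    return Rz(psi) @ Ry(theta) @ Rx(phi)

# Sanity check: a 90-degree roll maps the body-y axis onto inertial-z.
R = space123(np.pi / 2, 0.0, 0.0)
```

Changing the multiplication order here is exactly the kind of convention mismatch the anecdote warns about, so a small check like this is cheap insurance.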
a Vicon motion capture system, if you've seen those. It's supposed to track the position and orientation of the drone in real time, at about 120 hertz, and it gives you the orientation in some Euler angle convention. The dynamics model I'd written down for the drone was in a certain Euler angle convention. I thought I had double-checked that those two conventions were identical; it turns out I should have triple-checked, because they weren't identical, and that led me to wasting about three months, because I wasn't fully careful. So if you're working with a team beyond this course (in this course we're just going to use this one convention), or if you're working with another piece of equipment like a motion capture system, really double-check, triple-check, that you know exactly which Euler angle convention is being used by what, or by whom, because it causes a really massive amount of confusion and errors in practice. All right, questions? Okay. So, we have maybe 10 more minutes. What I'm going to do in the last 10 minutes is just set up some of the terminology we'll use for describing the 3D dynamics of the drone, the dynamics for the 3D quadrotor. We'll just say what the states are and what the control inputs are; I won't write down the equations of motion yet, we'll do that, briefly, in the next lecture. Maybe just before I do that, I'll re-emphasize the point that we will be able to write down the dynamics in the same standard form, which is x dot equals f of x and u, where x is again the state and u is the control input. So we'll say what the states are and what the control inputs are, and then in the next lecture I'll briefly go over what the equations of motion look like for the 3D drone, which will then allow us to use feedback control techniques, which we'll talk about, to make the drone hover. Okay, let's move on
to the states for the 3D quadrotor, the state vector x. Actually, maybe someone wants to guess: what's going to be the dimensionality of the state vector? With the planar quadrotor, it was three degrees of freedom; we ended up with three second-order differential equations, which we then converted to six first-order differential equations. Here we said this was a six-degree-of-freedom system. So does someone want to guess what the dimensionality of the state vector is going to be? Six? Okay, why would you say that? Okay, I guess that doesn't quite match what we saw with the planar quadrotor: for the planar quadrotor we said it was three degrees of freedom, and the state vector was six-dimensional. Here it's going to be the same kind of relationship; we're going to end up with a 12-dimensional vector that describes the state. Roughly, what that corresponds to: the first six dimensions of the state vector are going to be the configuration variables, the variables that correspond directly to the degrees of freedom; the remaining six are going to have to do with the rates, the time derivatives of the configuration variables. So let me just say what the states will be. We'll go through these dynamics more carefully in the next lecture, but just to set things up: x, y, z, this is the center-of-mass position, so these are the variables that correspond to the translational degrees of freedom. Phi, theta, psi, these are the Euler angles, again using the space one-two-three convention. So those are the first six. And then we're going to have x dot, y dot, z dot, the time derivatives of the positions. And then finally there's a choice here: we could go with phi dot, theta dot, psi dot, the time derivatives of the Euler angles, but it turns out it's slightly more convenient to use
some other variables, p, q, r, which are going to be the angular velocity vector. If you haven't seen angular velocity before, don't worry too much about it; just think of them as something that's related to phi dot, theta dot, and psi dot. It tells you the angular velocity, the rotation rate, of the drone: how quickly the orientation of the drone is changing in different directions. I'll say a bit more about that in the next lecture, but this is what we'll use as our state vector. And then the control input vector: we could use the four propeller thrusts, F1, F2, F3, F4, and that's a perfectly reasonable choice, but it turns out that it's slightly more convenient (and by convenient I just mean the equations you end up with look more compact) to work with a slightly different control input vector. It's still going to be four numbers, but they have a slightly different meaning. The first component of the control input vector I'm going to call F total; this is the total thrust, the magnitude of the total thrust produced by the four propellers. The other three components I'm going to call M1, M2, and M3: M1 is the moment, the torque, produced by the four propellers about the body x-axis, b x; M2 is the moment about b y; and M3 is the moment about b z. This is similar to what we did with the planar quadrotor. If you look back at the equations of motion we wrote down for the planar quadrotor, we wrote them down in terms of two control inputs, u1 and u2: u1 we defined to be the total thrust, u2 we defined to be the total moment. So this is analogous: the first component, F total, of the control input vector is the total thrust produced by the four propellers, and now, because we're working in 3D, the net moment, the
net torque, is not just about one axis but about three different axes: b_x, b_y, b_z. So the control inputs M1, M2, M3 are going to correspond to the net moments, the net torques, produced by the propellers about these three axes. I'll go into more detail about this in the next lecture, but just as a quick point: it turns out you can go back and forth between these two representations. If I give you F1, F2, F3, F4, you can calculate what the net thrust and the net moments are, and if I give you these four numbers, it turns out you can invert that relationship and get the individual thrusts back. So in some sense they're equivalent, but like I said, it's slightly more convenient — the equations look a bit more compact — if we go with this representation. All right, so in the next lecture we'll use this state and control input and write down the equations of motion for the 3D quadrotor. I'll see you next week.
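To make the "go back and forth" point concrete, here is a minimal sketch of that invertible mapping in NumPy. The "+" frame geometry, the motor numbering, the arm length `L`, and the moment-to-thrust ratio `gamma` are all illustrative assumptions, not the course drone's actual parameters — the exact signs depend on how your motors are numbered and which way each spins.

```python
import numpy as np

# Assumed "+" configuration: motors on the body x/y axes at distance L,
# with gamma = km/kf relating drag moment to thrust for each propeller.
L = 0.1        # arm length [m] (placeholder)
gamma = 0.01   # km / kf ratio (placeholder)

# Each column is one propeller's contribution to (F_total, M1, M2, M3).
MIX = np.array([
    [1.0,    1.0,    1.0,    1.0],     # F_total = F1 + F2 + F3 + F4
    [0.0,    L,      0.0,   -L],       # moment about body x (b_x)
    [-L,     0.0,    L,      0.0],     # moment about body y (b_y)
    [gamma, -gamma,  gamma, -gamma],   # yaw moment from aerodynamic drag
])

def thrusts_to_inputs(F):
    """[F1, F2, F3, F4] -> [F_total, M1, M2, M3]."""
    return MIX @ np.asarray(F)

def inputs_to_thrusts(u):
    """Invert the relationship: [F_total, M1, M2, M3] -> [F1..F4]."""
    return np.linalg.solve(MIX, np.asarray(u))
```

Because the 4x4 matrix is invertible (for this geometry), the two representations carry exactly the same information, which is the lecture's point.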
Introduction to Robotics (Princeton)
Lecture 3: Feedback Control
all right, I think we can go ahead and get started. Just a couple of quick logistical announcements. The first problem set is going to go out tomorrow, which is Wednesday, and it's going to be due one week later — the Wednesday after — at midnight. You'll submit it through Gradescope; we'll have instructions for how to do that. The other announcement is that Sasha is holding a Python tutorial later today over Zoom. This is not mandatory, so you don't have to attend, but the hope is that it will get people up to speed with basic Python syntax, NumPy, SciPy, and so on. If you've seen some of that, great — we'll post the recording later on Canvas; if you haven't seen it, feel free to attend, and hopefully it will be useful to you. And the final reminder is to form teams by the end of the day tomorrow, so by the end of the day Wednesday. We're going to hand out the drones in class at the end of Thursday; those will be used for the next assignment. All right, so just to remind you of what we're doing: the overall goal we're building towards, the final project, is to get a drone to autonomously navigate through cluttered environments purely based on vision, and we're slowly building up to that ambitious goal. The first goal, for the first four or so lectures, is basically just to make the drone hover. This is the most basic thing you would want from a drone — to hover in place — so that's what we're developing the techniques for. The overall approach we're taking to do this, as I mentioned in the previous lecture, is a two-part approach. The first part is to think about the dynamics of our system: how does the drone move if you apply different control inputs or different propeller commands — how does that affect the drone's motion? And the second part is to
think about feedback control. In the last lecture we started tackling the first part, the dynamics part, and specifically we wrote down the equations of motion. This was the simplified model of a quadrotor that's only allowed to move in a plane, so it's a three degree of freedom system: x and y corresponding to the center of mass location, and theta, which is the orientation. Then, towards the end of the previous lecture, we started thinking about the 3D quadrotor dynamics. For the planar quadrotor we wrote down the full equations of motion; for the 3D quadrotor we just set up the theme for today's lecture — we said what the state vector is and what the control input vector is. I'll remind you of that, then we'll write down the full equations of motion for the quadrotor, and then we'll start talking about feedback control at the end of the lecture today. All right, so for the 3D quadrotor — this is a reminder from last lecture — we wrote down what the state vector is going to be. The state is a 12-dimensional vector, and specifically it corresponds to: the x, y, z location of the center of mass of the drone; three angles, which we're going to call phi, theta, and psi — these are the Euler angles, which we discussed in the previous lecture, and again I'll just remind you that there are multiple conventions one could follow for the Euler angles; here we're going to use the space 1-2-3 convention that I introduced in the previous lecture. Then we have x dot, y dot, and z dot — this is the velocity of the center of mass, since x, y, z is the center of mass position of the drone. And finally we have three additional components, which essentially correspond to the body rates. We could choose to use phi dot, theta dot, psi dot — the time derivatives of
the Euler angles — but it turns out it's slightly more convenient, in terms of the compactness of the equations we end up with, to work with p, q, r, which correspond to the angular velocity vector, specifically expressed in the body frame. If you've seen angular velocity — I guess specifically if you're an MAE major — then that's good. If you haven't seen it, one way to think about it, loosely, is that it tells you something about the rate of change of the orientation of the drone. More specifically, it's a vector with three components: the direction of that vector corresponds to the axis about which the drone is instantaneously rotating, and the magnitude of the vector tells you the rate of rotation. That's a physical interpretation, but you don't have to worry too much about the specific interpretation — this is just the choice we're making for our state vector. Then the control input vector: again, there are a couple of options here. We could just treat the propeller commands — the speeds of the propellers, or the thrust produced by each propeller — as our control inputs. Again, just for the sake of compactness of the equations, it turns out it's slightly more convenient to work with a different representation, which is still a four-dimensional vector: F_total is the total thrust from the propellers, and M1, M2, M3 are the moments, the torques, about b_x, b_y, and b_z, which are the three axes that define the body frame. So just to remind you, maybe I'll draw a picture over here. We have our quadrotor, drawn like this. There's some forward direction that's going to be marked on the physical drone — we're going to call that the x direction of the body frame. Left is going to be b_y, the y direction, and then z is just the cross product of those, pointing vertically. So just
physically on the drone: let's say this is the forward direction — that's what we're calling b_x; the direction to the left is b_y; and then z is straight up. This is a frame, the body frame, which we denote by B; it's attached to the body and moves with the body as the drone moves around. All right, any questions on this? This is where we left off in the previous lecture. Okay. So the other thing we talked about in the last lecture is the motor model. If you remember, we said that the thing we actually get to control is the speed of the propeller, the angular speed of the propeller, but the expression that enters the dynamics has to do with the force each propeller is producing. There's a nice relationship between the angular speed of each propeller and the thrust that gets produced by that propeller — a relationship you can derive from some basic aerodynamics, or just take for granted and measure empirically — which is that the thrust produced by propeller i equals some constant, which we call k_f, times omega_i squared. So F_i is the thrust from propeller i; the constant k_f is called the thrust coefficient, and it has to do with the geometry — for instance the size — of the propeller; and omega_i is the angular speed of propeller i, which we're squaring. This holds for all four propellers, i = 1, 2, 3, 4. So there's this nice quadratic relationship: the thrust force scales linearly with the square of the angular speed of each propeller. For the planar quadrotor setting we just had the two propellers, so i was just 1 and 2. For the 3D setting it's a little more interesting. We still have that the thrust force
produced by propeller i scales linearly with the square of the angular speed of the propeller, but there's an additional effect, which is a moment that gets produced by the spinning of the propeller. As the propeller spins, because it's interacting with the air, you get a moment in the opposite direction — a moment due to the aerodynamics. So just as a sketch: say this is one propeller. We have a thrust force F_i produced by the spinning of the propeller — that's what basically keeps the drone up — but there's also a moment, which I'm going to call M_i^aero, "aero" for aerodynamic drag. The direction of the moment is basically the b_z direction, the direction coming straight vertically out of the drone's body. There's a nice relationship again between this moment — the torque, the aerodynamic drag moment — and the angular speed of the propeller: it looks very much like the relationship for the thrust force, just with a different proportionality constant. This constant is called the moment coefficient, and again its specific value depends on the geometry of the propeller — the pitch angle of the blades, the size of the blades, and so on. But if you fix the propeller geometry, then that fixes these two coefficients, the thrust coefficient and the moment coefficient. The reason the moment coefficient is important is that it's what allows the quadrotor to turn in place, to basically change its yaw. Let me try to illustrate: imagine the drone is facing this way, and all you want to do is turn it this way, purely in place. The moment coefficient — the aerodynamic moment — is what allows the drone to do that.
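The two propeller relationships can be sketched in a couple of lines of Python. The coefficient values below are made-up placeholders for illustration, not measured values for any particular drone:

```python
# Motor model sketch: both thrust and aerodynamic drag moment scale
# with the square of the propeller's angular speed.
kf = 1e-6   # thrust coefficient [N / (rad/s)^2] (assumed placeholder)
km = 1e-8   # moment coefficient [N*m / (rad/s)^2] (assumed placeholder)

def propeller_thrust(omega):
    """F_i = kf * omega_i^2, along the body z axis."""
    return kf * omega**2

def propeller_drag_moment(omega):
    """M_i^aero = km * omega_i^2, about the body z axis,
    opposing the propeller's spin direction."""
    return km * omega**2
```

Note that fixing the propeller geometry fixes both `kf` and `km`, exactly as the lecture says — the code treats them as two independent constants for one given propeller.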
Someone had a question — yes, on this picture here? Well, I mean, there's only one direction that really matters, I guess — this is the b_z direction, and b_x and b_y are fixed to the body of the drone, so they don't spin with the propeller. Does that make sense? The perpendicular direction of the propellers is always aligned with the perpendicular to the drone, so this is the b_z direction. Does that make sense? [Inaudible student question.] Yep — so actually, I guess, helicopters don't have a single rotor, right — they have another one. Yeah. So let's think about this yawing motion. I guess — can someone figure out how you would make the quadrotor yaw in place using this aerodynamic moment effect? Yes? Okay, good. So from a top-down view — maybe let me draw it over here; actually, is this portion of the board still visible to people in the front? Yes — if you look at the drone from a top-down view, you get propellers spinning in opposite directions: this one spins clockwise, say, this one spins anti-clockwise, this one spins anti-clockwise, this one spins clockwise. So adjacent motors — if you look at any adjacent pair of propellers — are spinning in opposite directions. And if you change how quickly they spin — say the clockwise ones (these two, on opposite corners, are both clockwise) are spinning faster than the anti-clockwise ones — then you're going to get a motion in the clockwise direction, right? What I'm drawing here, you can think of as corresponding to the moments, the aerodynamic drag moments. But there's an interesting question here, which is: how does the drone stay aloft? You might think, okay, these propellers are moving in the clockwise direction, this propeller is moving in the anti-clockwise direction, so maybe the thrust
force is going to be opposite, right — so it's going to cause the drone to go down. Can someone provide an explanation for why that isn't happening, or how you can fix that potential issue? Yep — yeah, perfect. So if you look at the pitch of the blades, the angle of the blades, they're basically in opposite directions, so these propellers are always producing thrust in the correct direction — it's not like one is pulling the drone the opposite way — and that's happening because the angle of the blade, the geometry of the blade, is flipped. That's something you should look at: when we hand out the drones on Thursday, take a look at adjacent propellers and see that this is the case. Also, when you're doing the labs, sometimes the propellers will break — we'll give you some spare parts, so you'll have to put the propellers on — and this is super important: if you don't put the propellers on in the correct orientations, your drone is just going to flip and do something totally weird. If you see that happening, double-check that the propellers are on correctly. I will say all of that in the instructions, so you don't have to remember it, but I'm flagging it now. Questions? Yeah — yes, so just turning in place: imagine that it's aligned like this, and you just want to turn, yeah, just in place. It's a top-down view, so really it's like turning in place like this. Good, other questions? Yes — exactly, yep, good question. Just to clarify the direction: is the moment in the same direction the propeller is spinning? It's actually the opposite direction, but you don't have to worry too much about which direction the propeller itself spins — at least for figuring out the motion, what's important is the direction of the moment. So what I'm drawing here, you can think of as corresponding to the direction of the moment, but it's the opposite direction from the propeller's spin.
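A quick numerical sanity check of the yawing argument, assuming an alternating +,-,+,- sign pattern for the four drag moments (the actual signs depend on your motor numbering and spin directions, so this is a sketch): with all four propellers at equal speed the drag moments cancel, and speeding up one pair produces a net yaw moment.

```python
km = 1e-8  # moment coefficient (assumed placeholder value)

def net_yaw_moment(omegas):
    """Net aerodynamic drag moment about body z for four propellers
    with alternating spin directions. The +,-,+,- sign pattern is an
    assumed numbering, not the course drone's actual layout."""
    signs = [1.0, -1.0, 1.0, -1.0]
    return km * sum(s * w**2 for s, w in zip(signs, omegas))
```

Equal speeds give zero net yaw moment (hover without spinning); raising the speed of the two same-direction propellers while lowering the other two changes yaw without changing total thrust much, which is exactly the turn-in-place maneuver being described.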
Okay. All right, so that's the motor model for the three-dimensional quadrotor — slightly more complicated than the planar one. The next thing I want to do is give a sketch of the full equations of motion for the 3D quadrotor. I'll mention that since we have a pretty broad set of backgrounds — students from MAE, computer science, ECE, and so on — if you're an MAE student, then this is something you should be able to do, because we've done it in MAE 206: derive the equations of motion for the 3D quadrotor. If you're not an MAE student, the point here is just to give you a sense for what the equations look like, their physical meaning, and their general form. The general form, if you recall from the previous lecture, for equations of motion in general is x dot equals f(x, u), where x is the state vector — which for the 3D quadrotor I wrote down over here, this 12-dimensional thing. Usually we think of the state vector as a column vector, just by convention; that's why I have the little transpose over here. Same thing with the control input vector — usually we think of it as a column vector, and it's more compact to write it as a row and then put a transpose. So this is the general form of the equations of motion, not just for the quadrotor but for other mechanical systems as well. You'll have a chance to go through and type out some parts of the equations of motion for the 3D quadrotor in the assignment that goes out tomorrow, but I just want to give you a sketch of what they look like. The first thing we'll do is define a vector which we'll call r — again, the bar above a variable denotes that it's a vector. This is just the position of the center of mass, a three-dimensional vector: x, y, z. So r dot is the velocity of the center of mass: x dot, y dot, z dot. So we can then use
Newton's second law, F = m a, to write down an expression for r double dot: that's F divided by m — the total forces acting on the system divided by the mass. There are basically two kinds of forces, two sources of forces, acting on the drone. One is the force of gravity: that force divided by m is just (0, 0, -g) — there's a minus because it acts in the negative z direction. The second source of forces comes from the propellers. The propeller forces, the thrust forces, are always acting in the b_z direction — b_z, again, is perpendicular to the plane of the drone, so no matter what orientation the drone is in, the propellers always produce a thrust perpendicular to that plane, always aligned with the b_z direction. That's what this is: F_total, if you recall from over here, is the total force from the propellers, and we're dividing by m because we're doing a = F/m. Question? Yes — yeah, I was just going to get to that. This vector is expressed in the body frame, right — the frame attached to the body of the drone, the B frame I drew over here. So this R is a matrix, a three-by-three rotation matrix, that takes a vector expressed in the body frame and changes it to a vector expressed in the world frame. Specifically — maybe I'll make it a bit more explicit here — R depends on the orientation of the drone, so it depends on phi, theta, and psi. I won't write the specific rotation matrix down here; you'll see it written out fully in the first assignment. You can just think of it as some rotation matrix, something that takes a vector expressed in the body frame and changes it to a vector in the world frame. Does that make sense? Oh, sorry — the world frame, the inertial frame: the frame of reference that's attached to the room that the drone is flying in.
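The center-of-mass equation just described — r̈ = (0, 0, -g) + R·(0, 0, F_total)/m — translates into a small function. This is a sketch that assumes you are handed the body-to-world rotation matrix R (the assignment writes out its entries in terms of phi, theta, psi):

```python
import numpy as np

def com_acceleration(R, F_total, m, g=9.81):
    """Translational dynamics of the center of mass:
    r_ddot = [0, 0, -g] + (1/m) * R @ [0, 0, F_total],
    where R (body-to-world rotation) is assumed given."""
    gravity = np.array([0.0, 0.0, -g])       # already in the world frame
    thrust_body = np.array([0.0, 0.0, F_total])  # along b_z in the body frame
    return gravity + (R @ thrust_body) / m
```

As a sanity check: for a level drone (R = identity) with total thrust equal to its weight, the acceleration is zero — the hover condition.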
Okay, questions on this part? All right, I'll leave that up there because we'll refer back to it; I'm just going to erase this. So that's the first part of the equations of motion — the equations of motion for the translation of the center of mass. The second part has to do with the orientation. For this we're going to define another vector that I'm going to call omega, with the subscript BW. Let me write down what this actually is: it's just p, q, r — the last three components of the state vector, the angular velocity vector. The BW subscript is reminding us that this is the angular velocity of the body frame, the B frame, relative to the world frame (world, again, is the same as inertial). Okay. So then, as I mentioned briefly, there's a relationship between the Euler angle rates — the time derivatives of the Euler angles, so phi dot, theta dot, psi dot — and the angular velocity vector. Again, I don't want to write it down fully, I just want to give you a sketch of what these equations look like, but you can basically go from one to the other using a matrix multiplication. So take p, q, r, the angular velocity vector, and multiply it by a three-by-three matrix: the (1,1) element is 1, and then we have sin(phi) tan(theta) and then cos(phi) tan(theta) — that's the first row — and then there are two more rows; it's a three-by-three matrix. Again, you'll see the specific form of the elements written out fully in the first assignment, but this matrix relating the angular velocity vector to the Euler angle rates depends on the Euler angles themselves.
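As a sketch, here is the standard roll-pitch-yaw form of the matrix relating (p, q, r) to the Euler angle rates. As the lecture emphasizes, the exact entries depend on the Euler convention you choose, so treat these expressions as illustrative rather than as exactly what was written on the board:

```python
import numpy as np

def euler_rates(phi, theta, pqr):
    """Convert body rates [p, q, r] to Euler angle rates
    [phi_dot, theta_dot, psi_dot]. Standard roll-pitch-yaw form;
    the entries depend on the Euler convention, so this is a sketch.
    Note the tan/sec terms: the map is singular at theta = +/- 90 deg."""
    s, c = np.sin(phi), np.cos(phi)
    t, sec = np.tan(theta), 1.0 / np.cos(theta)
    T = np.array([
        [1.0, s * t,   c * t],
        [0.0, c,      -s],
        [0.0, s * sec, c * sec],
    ])
    return T @ np.asarray(pqr)
```

At level attitude (phi = theta = 0) the matrix reduces to the identity, so the Euler rates equal the body rates — a useful check, and the reason the two choices of state variables feel interchangeable near hover.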
One point I'll make here: the specific elements that show up — the trigonometric expressions you get in this matrix — depend on the choice of Euler angle convention that you use. So again, it's super important, as I tried to emphasize in the previous lecture, to make sure you know what convention you're using, and then you can either derive, or typically just look up, what this matrix is — what the specific expressions are. All right, so that's the penultimate part of the equations of motion. The final part tells us the rate of change of the angular velocity vector, omega_BW dot. Let me just write it down and then I'll explain. Everything here is familiar except for this quantity I — written with a calligraphic I here. I is the inertia matrix, with three diagonal elements. Similar to the planar quadrotor equations we saw in the previous lecture — there we just had a scalar corresponding to the moment of inertia of the drone — here we have a three-by-three matrix, and the specific values of its elements depend on the geometry of the drone and the mass distribution of the drone. So if you have a CAD model of the drone — an accurate CAD model — the CAD software will give you the moments of inertia; you can also do some physical experiments to estimate what the moments of inertia are. All right, so that's pretty much it in terms of the equations of motion. Again, if you have the background and you've seen this before, hopefully it's familiar. If not, the main things I want to emphasize are really two: one is that that's the state vector — that 12-dimensional vector over there — and this is the control input vector; and the equations we've written down — these equations over here, and the ones over here — you can combine to get equations in this general form: x dot equals f(x, u), where x dot is a 12-dimensional vector.
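The "final part" being described is, in the standard rigid-body treatment, Euler's rotation equation in the body frame, ω̇ = I⁻¹(M − ω × Iω). A minimal sketch under that assumption (the diagonal inertia values below are placeholders, and whether this matches the board exactly depends on the convention the lecture uses):

```python
import numpy as np

def omega_dot(inertia, omega, M):
    """Euler's rotation equation in the body frame (assumed form):
    omega_dot = inertia^{-1} (M - omega x (inertia @ omega)),
    where M = [M1, M2, M3] are the control moments about b_x, b_y, b_z
    and inertia is the 3x3 inertia matrix (diagonal if the body axes
    are principal axes)."""
    omega = np.asarray(omega)
    gyro = np.cross(omega, inertia @ omega)   # gyroscopic coupling term
    return np.linalg.solve(inertia, np.asarray(M) - gyro)
```

At zero angular velocity the gyroscopic term vanishes and each moment simply divides by the corresponding diagonal inertia, mirroring the scalar U2/I term of the planar model.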
And the something on the right-hand side is exactly the things we've written down over here, here, and over here. That may or may not be obvious just from inspection, but you'll see these equations written down in the assignment; I'd encourage you, if you haven't seen this before, to go through and double-check that you can write these equations in this general first-order differential equation form. So — questions on that? Yes, go ahead. Yes, exactly. And that R depends on the orientation of the drone — the Euler angles, phi, theta, psi. From our discussion in the last lecture, those allow us to specify the orientation of the drone, and all that three-by-three rotation matrix is doing is taking a vector expressed in the frame of the body, the drone frame, and rotating it to the inertial reference frame. And this bar — right, yeah, these are the rates of the Euler angles. Yes, exactly. Does that make sense? Okay, so let me be slightly careful here. The rotation matrix is this matrix over here — that's the one I was saying takes a vector in the body frame and translates it to a vector in the inertial reference frame. Gravity, for instance, is already expressed in the inertial frame. This other three-by-three matrix is not a rotation matrix — it's just some other matrix, not the same as that. The role of this matrix is to relate the Euler angle rates — how quickly the drone is rotating — to the angular velocity vector, the p, q, r. And the reason this is useful is that if we try to write down the equations of motion in that general form, x dot equals
f(x, u): the fourth, fifth, and sixth components of that x dot are exactly phi dot, theta dot, psi dot, and these equations here allow us to write phi dot, theta dot, psi dot in terms of some components of the state vector — specifically, the Euler angles phi, theta, psi, and the last three components, p, q, r. Does that make sense? Okay. So you can basically go through each of these three pieces. (x, y, z) dot is just x dot, y dot, z dot — the seventh, eighth, and ninth components of the state. That's the one we were just going over; it's given by these equations over here. x double dot, y double dot, z double dot — that's given by this, because r double dot is exactly x double dot, y double dot, z double dot. And finally p dot, q dot, r dot — that's omega dot, given by these equations. If you just concatenate each of these components, you'll get equations in that general form. Good, other questions? Go ahead. Yes — yeah, that's a great question. I am assuming here — I didn't go into the details — that the body frame directions are aligned with the principal axes of the drone. The principal axes basically allow you to diagonalize the inertia matrix. Here, just by choice, I chose the three directions to be principal axes, because I'm saying the x direction goes forward, the y direction is left, and z is up — so just by construction, by choice, they are principal axes, and you get a diagonal inertia matrix. In general these may not be zeros — you might have some non-zero elements there if you don't choose the body frame carefully. Yeah, good question. Other questions? Okay. All right, so we can write down the equations of motion — they look slightly complicated, but hopefully not too complicated. And the reason we're doing this, again, is to build up towards
feedback control — developing techniques to correct for deviations that the drone might have from its target configuration. The next step towards that is to take these equations, which are nonlinear — they have these trigonometric functions of the state, for instance, a bunch of nonlinearities — and come up with a linear approximation of them. I'm not going to do it for the 3D quadrotor, because those equations are complicated — I'll tell you how we would do it for the 3D quadrotor — but for now we're just going to go through the exercise of taking the nonlinear dynamics of the planar quadrotor, which we wrote down in the last lecture, and coming up with linear approximations of them, which will then basically allow us to do some feedback control. So let me go through that process. From the previous lecture — again, we're using the planar quadrotor as our example — we wrote down the equations of motion for the planar quadrotor in this general form, x dot equals f(x, u), and I'll just remind you of the specific form of those equations. x dot is d/dt of x, the state vector, which was (x, y, theta, x dot, y dot, theta dot), and it equals (x dot, y dot, theta dot, -(u1/m) sin(theta), (u1/m) cos(theta) - g, u2/I). Just as a reminder, u1 is the total thrust from the propellers — the summation of the two propeller thrusts, F1 plus F2 — and u2 is the total moment from the propellers, and we're treating these as our control inputs. Okay, so this was exactly what we did in the previous lecture. What I want to do is take these equations of motion, which are nonlinear — they have the sine and cosine, for instance; I guess those are the only nonlinearities — and come up with a linear approximation of them, which will then allow us to do some feedback control.
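The planar quadrotor dynamics just restated translate directly into code. A self-contained sketch — the mass, inertia, and gravity values are placeholder numbers for illustration, not the course drone's parameters:

```python
import numpy as np

def planar_quadrotor_f(x, u, m=0.03, I=1e-5, g=9.81):
    """Planar quadrotor dynamics x_dot = f(x, u), with state
    x = [x, y, theta, x_dot, y_dot, theta_dot] and control input
    u = [u1, u2] (total thrust and total moment). Parameter values
    m, I are made-up placeholders."""
    _, _, theta, xd, yd, thetad = x
    u1, u2 = u
    return np.array([
        xd,
        yd,
        thetad,
        -(u1 / m) * np.sin(theta),          # horizontal acceleration
        (u1 / m) * np.cos(theta) - g,       # vertical acceleration
        u2 / I,                             # angular acceleration
    ])
```

Note that at theta = 0 with u1 = m*g and u2 = 0, every component on the right-hand side is zero — which is exactly the hover equilibrium the lecture is about to use as the linearization point.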
So we have these equations in this general form, where f is a nonlinear function of x, the state vector. To come up with a linear approximation of these dynamics, we need to find some point — some state and some control input — about which we do the linear approximation. If we have some nonlinear function of one variable, something like this, we can choose some particular point — say this one — and then say: all right, I'm going to find some linear approximation of it that's going to be a reasonably accurate representation of that function just around that nominal point, that reference point. Since we're interested in controlling the drone just to make it hover, we're going to choose the hover configuration as our reference point about which we do the linearization. We'll call that reference point x subscript 0. The first two components correspond to the desired location of the drone; we can just take those to be zero by shifting the origin of our reference frame, so without any loss of generality we can choose x0 = y0 = 0, just to make our lives a little easier. The other components of the state are zero because, again, we're looking at the hover configuration: when the drone is in the hover configuration, the orientation is zero — that's the third component — and then x dot, y dot, theta dot are zero as well, because we're saying the drone is not moving. The other part: we've chosen a nominal reference point for the state about which we're going to do the linearization, but we also have to choose some reference point for the control input vector. We'll call that u0, with the bar again. So — can someone figure out, or maybe guess, what we should choose that reference point to be? Again, if you're just
interested in making the drone hover? Go ahead. Yes, exactly — it's the force we need to keep it aloft. So the first component of u — the total thrust from the propellers — is just m times g, the mass of the drone times the gravitational constant. And the second component, which is the total moment, we're going to choose to be zero: we don't want the drone to have any moment when it's exactly at the hovering configuration. So these are the reference values of u1 and u2 — total thrust and total moment. Does that make sense? Any questions on why we chose these specific values for the state and the control inputs? Okay, so let's go ahead and try to linearize these nonlinear dynamics about this reference point. The first thing to note is that f(x0, u0) is the zero vector. If I plug in x0 — again, just to keep it simple, let's say x0 and y0 are zero; it doesn't matter here — and then plug in this reference control input, you can check: plug the nominal reference state and control input in over here, and you'll see that it's exactly zero. We can quickly check that. x dot, y dot, theta dot — those are the last three components over here — are zero; that was our choice. Since we're setting theta to zero, sine of theta is zero, so that zeroes this component. And the last component involves u2, which we chose to be zero, so that zeroes this component. The only remaining one is over here: u1 over m times cosine theta, minus g. Cosine of zero is one, and u1 we're choosing to be m g, so we have m g divided by m, which is g, minus g over here — so that's zero as well. So all of the expressions on the right-hand side evaluate to zero. I guess — what does this mean intuitively? We're saying that
if the Drone is already at the hovering configuration at this reference economical State and if you apply uh this control input u0 which we chose to be this then nothing is going to change right so the Drone is going to remain exactly as is and that's what we want so the Drone is already hovering exactly where we wanted to hover then do nothing right I just I don't don't or do nothing in the sense of like don't make the Drone move away from that configuration this will have to do something I guess to do nothing in the sense that you have to the propellers have to contract the force of gravity but if you contract the force of gravity then the Drone will not do anything in the sense that it's not gonna move around okay so that's that's this kind of an observation here so let's go ahead and linearize the Dynamics [Music] um so yeah I guess what we're gonna do is start off in that with that general form so x dot equals F of person and you and we're gonna just do a first order uh Taylor series approximation of f so first order Taylor series approximation of the Dynamics um so this is approximately equal to F I guess this is something from your multivariate calculus course I guess hopefully everyone has been uh Taylor series for automations uh so we have F we evaluate the function f at the point about which we're doing the linearization so that's x0 u0 Plus a matrix of partial derivatives so I'll write that as DF d x evaluated at x equals x0 U equals U zero so the reference points multiplied by x minus X naught and then Plus a matrix of partial the evidence with respect to their their control input Factor so that's you again evaluated at the reference point U minus u0 um all right I guess maybe just a quick show off and this does this make sense maybe just we have your hand if it does people are not it maybe I'll take better but yes okay good all right so yeah this is uh this is a multimedia like DLR series expansion I'll write out exactly what these matrices are a bit 
more carefully. This first term, f(x0, u0), is zero as we mentioned, so only the two derivative terms remain in our Taylor series expansion if we're doing it up to first order. I won't erase the dynamics because we're going to need them.

For the planar quadrotor specifically, these equations are not too messy, so we can actually write down exactly what these matrices are, and I'll give them names: the matrix of partial derivatives with respect to the state, evaluated at the nominal state and control input, I'll call A, and the one with respect to the control input I'll call B.

So what is A? We look at each component of f — we've written f down explicitly over here — and take partial derivatives with respect to each component of the state. The first row is the partial of x dot with respect to each state component: d(x dot)/dx, d(x dot)/dy, d(x dot)/d(theta), then d(x dot)/d(x dot), d(x dot)/d(y dot), d(x dot)/d(theta dot). The second row looks at the second component of f, y dot, and again takes partials with respect to each component of the state vector: d(y dot)/dx, d(y dot)/dy, d(y dot)/d(theta), and so on. The third row is the partial of theta dot with respect to each component of the state. I won't write all of them out since they follow the same pattern, but the fourth row looks slightly different: it's the partial of the fourth element of f, which is minus u1 over m times sine theta, with respect to x, then with respect to y (be careful with the parentheses here), and so on.

Hopefully the general pattern is clear: we're just taking the partial of each component of f with respect to each component of the state vector. This looks super messy — at least with my handwriting it looks messy — but it turns out to be quite neat when you evaluate these expressions at x = x0 and u = u0. Maybe let's just look at the first row; can someone see what it ends up being? All zeros for the first row? Almost — let's go through them one by one. The partial of x dot with respect to x is zero, since x dot doesn't involve x directly. The partial of x dot with respect to y is zero, and with respect to theta is zero. The fourth one is the interesting one, the one that's already not trivial: the partial of x dot with respect to itself — and the partial of any variable with respect to itself is one. The remaining ones, the partials of x dot with respect to y dot and with respect to theta dot, are zero as well. These are partial derivatives, not total derivatives, which is why everything in the row ends up being zero except that fourth entry. Does that make sense? Questions? Yes — why is the third one zero? The third one is the partial of x dot with respect to theta, and theta doesn't appear explicitly in the expression "x dot": we're thinking of x dot as a variable in its own right, the fourth component of
the state vector. So we're taking the partial of this variable x dot with respect to theta, and that expression doesn't involve theta — theta doesn't appear in x dot. It does appear in x double dot: x double dot is d/dt of x dot, and when we get to the terms where x double dot appears, that's where we'll get things that are non-zero. Here it's just zero. Any other questions on this?

Okay, so we can look at the second and third rows and come to a similar conclusion. The second row is all zeros again except for the fifth element, the partial of y dot with respect to y dot, and the third row is the same idea: everything is zero except the sixth element, the last one, the partial of theta dot with respect to theta dot. And it turns out that if you go through all the remaining elements, they're all zero except for one — everything else just happens to be zero. You can verify the zeros on your own offline; let's just verify the one I'm claiming is non-zero. It's in the fourth row — one, two, three, four — and it's the third element of that row: the partial of the fourth row of f, which is minus u1 over m times sine theta, with respect to the third element of the state, theta, evaluated at the reference state and reference control input. The factor minus u1 over m doesn't depend on theta, and the partial of sine theta with respect to theta is cosine theta. Plugging in the nominal state and control input: cosine of zero is one, and u1 — which, remember, we chose to counteract gravity, so u1 = mg — gives minus mg divided by m, which is minus g. So the only non-zero element over here is this minus g. Any questions on this calculation?

Okay, so that's the first matrix, the partials with respect to the state. We can do the same thing for the control input; I won't go through it in the same detail, I'll just tell you what it ends up being. The B matrix is the partials of f with respect to u. It's a similar idea: we look at each component of f and take partials with respect to each component of u — d(x dot)/du1, d(x dot)/du2, then d(y dot)/du1, d(y dot)/du2, and so on. There are six such rows, so this is a six-by-two matrix, and again it ends up being mostly zeros: all of the first four rows are zero, and then we have [1/m, 0] and [0, 1/I]. It might be a useful exercise to go back and compute these partial derivatives, check that most of them are zero, and see that the only two non-zero entries are these ones.

So just to summarize: we have these two matrices, computed by taking the partial derivatives and evaluating them at the point about which we're linearizing. The final result is that we can approximate the original equations, x dot = f(x, u), as the A matrix times (x − x0) plus the B matrix times (u − u0). Strictly speaking this is affine rather than linear — we have these constant terms — but it's linear in the state and the control input, and this is the approximation we'll use when we talk about feedback
control. All right, any questions on this linearization process? Yes — is this approximation used beyond just operating at the hover state? Yes, exactly, it's more general. The reason this is going to be helpful is when we think about the motion when the drone is not already perfectly hovering. If the drone is truly, perfectly hovering, then just applying the control input that counteracts gravity with no torque satisfies the original equations exactly; these linear approximations help us understand the motion when we're not exactly at hover but just slightly away from it. The goal is going to be to get to the hover configuration, and that's where these equations are useful.

Other questions on the linearization process? So — x0 is not its own variable: the x0 vector, the one that's all zeros, is the reference point for the state about which we're doing the linear approximation. And x dot: when we looked at, for instance, the partial of x dot with respect to x or y or theta, we were thinking of x dot as its own variable, because it is its own variable in the state. This six-dimensional vector is our state vector, and each of its components — the first, second, and third, and then the fourth, fifth, and sixth — is its own variable; that's why we take things like d(x dot)/dx. And x double dot is not its own variable because it doesn't appear in the state vector — the only variables are the ones we've written down over here. Does that make sense? Okay. Other questions?

All right, let's leave the dynamics part here for now; I'll just make a couple of comments. The first is that we did this linearization by hand for the planar quadrotor, where the equations are not super messy and we can write them down relatively compactly. You could imagine doing this by hand for the 3D quadrotor, but it's going to be really messy: there are a lot of partial derivatives to calculate, a bunch of matrix multiplications, and the trigonometric expressions that show up in those equations are also complicated. It's definitely not something you'd want to do by hand — you can try maybe just one element of the A matrix and one element of the B matrix and see how far you get. So in practice, we'll use Python: we'll use the symbolic functionality of Python to write down these expressions, and sympy will basically give us the linear approximation, the A and B matrices. Having said that, I think it's really good to understand how this process works, so I'd recommend going through it for the planar quadrotor — I think we have one more exercise where you'll linearize some dynamics by hand — but once you've understood how it works in principle, when the expressions get too complicated you don't want to be doing this by hand. That's where Python is useful, and you'll see an example of that in the assignment that goes out tomorrow.

Okay, so we have these nonlinear dynamics and we came up with these linear approximations; the goal, as I mentioned, is to do feedback control. For the last ten minutes or so, I just want to start the discussion of feedback control — we'll say much more about it in the next lecture.
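As mentioned above, the symbolic functionality of Python can carry out this linearization for us. Here's a minimal sketch using sympy for the planar quadrotor (the choice of sympy and the symbol names are mine; the dynamics are the ones written above, with I the moment of inertia):

```python
# Symbolic linearization of the planar-quadrotor dynamics about hover.
import sympy as sp

x, y, th, vx, vy, om, u1, u2, m, g, I = sp.symbols(
    'x y theta xdot ydot thetadot u1 u2 m g I')
state = sp.Matrix([x, y, th, vx, vy, om])
ctrl = sp.Matrix([u1, u2])

f = sp.Matrix([vx, vy, om,
               -(u1 / m) * sp.sin(th),     # x double dot
               (u1 / m) * sp.cos(th) - g,  # y double dot
               u2 / I])                    # theta double dot

# Reference (hover) state and control input: x0 = 0, u0 = (mg, 0).
hover = {x: 0, y: 0, th: 0, vx: 0, vy: 0, om: 0, u1: m * g, u2: 0}

print(f.subs(hover))                 # the zero vector: hover is an equilibrium
A = f.jacobian(state).subs(hover)    # 6x6: identity block for velocities, -g entry
B = f.jacobian(ctrl).subs(hover)     # 6x2: only nonzero entries are 1/m and 1/I
sp.pprint(A)
sp.pprint(B)
```

Running this reproduces the hand computation: f(x0, u0) = 0, the A matrix with the single −g entry in the fourth row, third column, and the B matrix with 1/m and 1/I in the last two rows.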
So one way to think about feedback control is in terms of the sense-think-act cycle that I mentioned in the very first lecture. The way it's going to work is that the robot senses: imagine the robot has sensors that continuously tell it what its state is. We compare that with the desired state — which, if we're thinking about hover, is our x0 — and then choose some control input based on the difference between the state at time t and the state we actually want to be at. So again, thinking about hover: say we want the drone to be at some point over here, but the drone is actually here. The drone has sensors that let it measure its state, so it knows it's not exactly at the configuration we want — say its y is too high. The drone looks at where it actually is and where it wants to be, and chooses some control inputs that try to get it to where it wants to be — in this case maybe just lower the propeller speeds, so it goes down a little. And this cycle repeats: at every point in time, or at least quickly enough, the drone looks at where it is and where it wants to be and chooses a control input. For a drone, this cycle typically runs at about 500 Hz — 500 times a second it reads its state and computes some control input.

A little more formally, what we want to do with feedback control is find what's known as a feedback controller — or just a controller, or a control law; it goes by a bunch of different names. This is a function, u as a function of x, that instantiates the sense-think-act cycle I drew. Here's one canonical form for what the feedback controller might look like: we could choose u as a function of x to be u0, the nominal control input, plus some matrix, which I'll call K, times (x − x0). This is just an example, and what it says is: if x equals x0 — if your state is already exactly the desired state, if you're already exactly at hover — then the second part is zero, and we apply only the reference control input, which keeps us at the hovering configuration. If the state is not equal to the desired state — if your drone is somewhere else — then do something extra, given by that second term, in addition to the nominal control input. Of course, the whole game in feedback control is to find what this extra part should be: what should this matrix K be, and why did we choose this specific form for the additional component? We'll talk about that in the next lecture; feedback control gives us a principled set of tools to come up with these equations.

I think we have maybe just two minutes left, so I want to end with a final question: what is the point of feedback control? Why do we need it — why do we need to do the sense-think-act cycle? The reason, basically, is to deal with uncertainty. By implementing this sense-think-act cycle, this feedback controller, we can deal with several different forms of uncertainty. Can someone say what kinds of uncertainty feedback control might help with? Perfect, yes: a gust of wind — some external disturbance that comes and blows you away from your hover configuration, and you want to somehow counteract that and come back to where you want to be. Other thoughts? Yes, sensing uncertainty. Others? Good — model uncertainty: maybe you didn't measure the length of the arm or the inertia matrix exactly, and that's something feedback control could potentially counteract. Other ones? Yes, hardware failure — that's a good one as well, and actually one that's pretty tricky to handle in practice: for instance, a propeller just stops working. In principle feedback control could help with that, but it can be challenging. And there's one more, which in some sense is the most direct application: uncertainty in the initial conditions of the drone. If you start your drone in some initial condition that's not already the hover configuration and you want to get to the hover configuration — maybe you don't know exactly where someone will place the drone — that's another kind of uncertainty feedback control lets you handle. It's actually implicit in the gust of wind: the gust comes and blows you somewhere else, you can think of that as a new initial condition, and you want to get back. All right — we'll say much more about feedback control and how to find these controllers in the next lecture.
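As a concrete sketch of the example control law u(x) = u0 + K(x − x0) from this lecture: the gain matrix K below is hypothetical and untuned — choosing K well is exactly the subject of the next lecture — and the parameter values are made up.

```python
# Sketch of the feedback law u(x) = u0 + K (x - x0) for the planar quadrotor.
m, g = 1.0, 9.81                 # hypothetical mass and gravity
x0 = [0.0] * 6                   # desired hover state (x, y, theta, xdot, ydot, thetadot)
u0 = [m * g, 0.0]                # nominal input: thrust cancels gravity, zero moment

K = [[0.0, -1.0, 0.0, 0.0, -1.0, 0.0],   # corrects the thrust u1
     [-1.0, 0.0, 2.0, -1.0, 0.0, 1.0]]   # corrects the moment u2 (gains not tuned)

def controller(x):
    """One think step of the sense-think-act cycle: u0 plus a state-error correction."""
    return [u0[i] + sum(K[i][j] * (x[j] - x0[j]) for j in range(6))
            for i in range(2)]

print(controller(x0))                    # exactly at hover -> applies just u0
print(controller([0, 0.2, 0, 0, 0, 0]))  # drone 0.2 too high -> thrust drops below m*g
```

At x = x0 the correction term vanishes and the controller outputs only the nominal input; when y is too high, the negative gain reduces thrust so the drone descends — exactly the "lower the propeller speeds" behavior described above.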
Introduction to Robotics (Princeton)
Lecture 24: Robotics and the Economy, Ethics, and Laws
All right, maybe we can go ahead and get started — welcome to the last lecture of the semester. Here's the plan for the day: I'm going to mention some technical topics that I haven't covered or haven't done justice to; then we'll zoom out a little and talk about some big-picture stuff — robotics and how it relates to the economy, to ethics, and to laws; and then I'll briefly mention some cutting-edge research challenges that we're thinking about in our research group and beyond.

These are the topics we've covered in the course. We started off with feedback control — how do we control our quadrotor to make it hover, for instance: the equations of motion, feedback controllers, LQR, and so on. We then talked about motion planning: how can we get a robot from point A to point B, or configuration A to configuration B, without colliding with obstacles — discrete planning, both feasible and optimal, and also planning in continuous spaces. We then talked about state estimation, localization, and mapping: Bayesian filtering and its many variants, with specific instantiations and applications to localization and mapping. And finally, in the last module, we talked about computer vision, machine learning, and a tiny bit of reinforcement learning.

The goal for this course, from my perspective, was to give you an introduction to the fundamental theory and algorithms for robotic systems, with some hardware implementation to see where theory and algorithms break down. I tried to cover most of the relevant topics — at least from my perspective — without making the course too survey-ish, and as a result, to balance breadth of topics against depth, there's a bunch of topics that I either didn't cover at all or didn't really do justice
to: robotic manipulation, legged locomotion, reinforcement learning, human-robot interaction, and robot design. I'll spend a couple of slides on each of these and give you pointers if you're interested in exploring these topics further.

In this course we've mostly thought about objects in the world as things to be avoided — even the final project is based on the premise that you want your drone to get from point A to point B without colliding with obstacles. But often objects are not things to be avoided; they are things to be manipulated. This has been a major area of research in robotics for many decades and continues to be today. Here's a video — this is the Amazon picking challenge from a few years ago — that explains some of the basic challenges behind manipulation and some specific approaches that teams were taking to this challenge:

"Simple as it would seem, it turns out that one of the hardest things to do in the world of robotics is something any child takes for granted: simply identify an object and pick it up. … 'See how I can collapse the hand so I can get it into where it is.' A robotic hand — 'I haven't seen anything with that sort of dexterity.' Ty Brady is chief technologist at Amazon Robotics, where they're trying to inspire some of the greatest minds in the industry. Ty looks at the human hand in ways you probably never thought of: 'We have 27 degrees of freedom, as we call it — one degree of freedom is the ability to translate in one direction. We have this amazing knuckle here, here, and here; there's so much going on there.' Ty and his colleagues have come to Nagoya, Japan for a sort of robot Olympics. It's called the Amazon Robotics Challenge, where teams from around the world compete to come up with a robot that can identify objects on its own, with its own cameras and processors, then pick them up and put them in a specific place. It is not easy. 'We'll probably fail more times than we succeed.' Alberto Rodriguez leads a team from MIT and Princeton, a scrappy bunch up against teams backed by major corporations. Their robot has a suction cup, two fingers, and a high-fidelity tactile sensor. 'It's gonna try a grasp — it's not going to work — wait, it might actually work — this is kind of a fancy move, though — there you go.' Teams are competing for a quarter million dollars in prize money in front of a live audience."

So yeah, that team was a collaboration between MIT — Alberto Rodriguez's group — and a team at Princeton; Tom Funkhouser and his PhD students were leading that effort.

Here's another video of robotic manipulation: a demonstration of a home robot — a potential home robot — cleaning up a home, slowly, but doing a pretty good job. It's identifying different objects, picking them up, and placing them in target locations. Again, this is a nice example of manipulation: the robot has to figure out what can be manipulated — what counts as clutter in the home versus furniture. It shouldn't try to pick up large pieces of furniture; it should identify the things that are clutter, pick them up, and place them in some location.

There are also other potential applications of manipulation on the horizon, for instance in warehouses. This example from Boston Dynamics is the Handle robot — an interesting robot morphology with vacuum suction grippers — doing a pallet-loading task: picking up pretty heavy objects, manipulating them, and placing them to fill up the pallet. And one final one — this is a fun
example from another Boston Dynamics robot — I guess many of you have probably seen it. The person in the video is Andy Barry; he's actually a former labmate of mine at MIT, though he's no longer at Boston Dynamics. He worked on the Spot robot, and in this video he's being really mean to it — in real life he's a really nice guy, he's not that mean.

There are lots of potential applications where robotic manipulation can be really useful, and it's already being used — Amazon in particular has invested a lot to automate various parts of its pipeline. Many of the techniques we've covered in this course also apply directly to manipulation: computer vision for detecting what an object is and its pose — the location and orientation of the object; Bayesian filtering and particle filtering for estimating the positions and orientations of objects; motion planning for getting a robot arm from some location to the grasping location. But there are also a number of techniques that are pretty specific to manipulation, like figuring out where on an object the gripper should go — what a good grasp is. There's a bunch of math for quantifying what a good grasp is. Intuitively, I'm not going to spill my coffee holding the cup like this, but a grip like this is not necessarily a good grasp, while something like this is, and you can quantify that by thinking about the robustness of a given grasp to forces and torques: a grasp is good if there's a large set of forces and torques it can counteract. At least, that's one definition of a good grasp — there are lots and lots of others as well. Nowadays, learning-based approaches — approaches based on reinforcement learning in particular — are becoming pretty popular for robotic manipulation, but there's still a fair bit of planning that's useful in manipulation as well.

If you're interested in learning more about manipulation, there's a really nice review article by Matt Mason — one of the pioneers of robotic manipulation, who has done a lot of work over the last three or four decades — called "Toward Robotic Manipulation," in the Annual Review of Control, Robotics, and Autonomous Systems, from 2018. There's obviously been progress since 2018, but I think it's a good starting point. There's also an entire course on intelligent robotic manipulation at MIT, with a bunch of the course materials available online, so if you want to really dig deep, that could be a good resource. Any questions on manipulation?

Okay, another topic is legged locomotion. Again, Boston Dynamics has a lot of super impressive videos — I guess many of you have probably seen their Atlas humanoid robot doing all sorts of amazing things: leaping over obstacles, running over rough terrain, and, at the end of the video, doing a backflip. One argument for legged robots — which is not something we've really explored in this course; we mostly used the drone as our motivating example — is that legged robots are really versatile in terms of the terrain they can handle: rough ground, stairs, and so on, which are sometimes hard for wheeled or even tracked robots to navigate. Many of the techniques we've discussed in this course are directly applicable to legged robots: feedback control for balancing, for instance, or for controlling the legs of the robot to track trajectories; motion planning to get a high-level plan for the robot to navigate
through obstacles, or even doing some lower-level planning for the legs as well. But again, legged locomotion has some specific theory and algorithms that we didn't really have a chance to discuss, and — as I'll show in a couple of slides — there's an infusion of reinforcement learning and learning-based techniques into legged locomotion that's pretty exciting. There's a nice review article of the more model-based techniques, "Modeling and Control of Legged Robots," from 2016. That article takes the perspective we took at the beginning of this course: write down a dynamics model, write down an optimal control problem, and do feedback control, motion planning, and so on. I'll mention some more recent work using learning-based techniques for legged locomotion.

As for RL: we spent just one lecture — the previous lecture — talking about reinforcement learning. The basic idea, just to remind you, is that you have some robot, some agent, operating in an environment whose dynamics we don't necessarily know — or maybe we know them because we have a simulator, but they're hard to take gradients through, for instance. At each point in time the robot receives some cost or reward, the state gets updated, and the robot chooses a control input to try to minimize cost — or, equivalently, to maximize reward.

Here's some recent work on using deep reinforcement learning for legged locomotion, from Pulkit Agrawal's group and Sangbae Kim's group at MIT. This is the MIT Mini Cheetah quadruped robot; this was the highest linear and angular speed they obtained for the robot, and the video shows robustness to different terrains, like slippery ice and rough, patchy grass.
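As a quick sketch of the RL interaction loop just recapped — an agent repeatedly choosing control inputs to maximize accumulated reward — here's a minimal toy example; the interfaces and the hand-written policy are hypothetical, not any particular library's API, and no learning actually happens here.

```python
import random

def rollout(env_step, policy, x, steps=100):
    """Run the agent-environment interaction loop and accumulate reward."""
    total_reward = 0.0
    for _ in range(steps):
        u = policy(x)           # agent picks a control input from the current state
        x, r = env_step(x, u)   # environment updates the state and emits a reward
        total_reward += r       # RL algorithms try to maximize this running sum
    return total_reward

# Toy 1-D "environment": a noisy integrator, rewarded for staying near zero.
def env_step(x, u):
    return x + u + random.gauss(0.0, 0.01), -abs(x)

policy = lambda x: -0.5 * x     # hand-written stabilizing policy, stands in for a learned one
print(rollout(env_step, policy, x=1.0))
```

An RL algorithm would replace the hand-written policy with a parameterized one (e.g. a neural network) and adjust its parameters to make the accumulated reward as large as possible.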
[Music] so I think in this video if I'm not mistaking the high level planning is not uh automated the the thing that is uh that the resource learning is learning is the other kind of gate uh and the lower level uh yeah control [Music] the feedback control for like recovering from Falls and other kinds of failures and adapting to diverse terrains [Music] and this kind of work I guess uh is pretty similar in spirit to uh to what we saw in the previous lecture with the reinforcement learning um so there's some reinforcement learning that's happening in a simulator using lots and lots of uh kind of parallel simulations uh and a bunch of the the tricks or implementation uh kind of details or strategies that we discussed in the previous lecture are also useful for for this work so things like domain randomization where you randomize different properties like friction and so on lots and lots of simulation capabilities neural networks and some of the the algorithmic kind of techniques that we discussed in the previous lecture uh here's another actually even more reason for just from a couple of months ago where this is from Deepak partner I can get a dramatic group where they were using reinforcement learning to do some of the high level higher level uh kind of control so like footstep planning for the legged robot question yes yes due to connection yeah good question so uh I think this video actually uh or even this Frame gives you some idea of the the kind of computation so uh this I believe is this is a unitary uh quadruped or a quarterback robot built by unitary so they've attached a little bit uh of extra computational power so the thing right in the the front is a depth sensor so that's not a computation the robot itself has some depth sensing but they I guess they found it useful to have a depth sensor kind of higher up on the robot and they've attached some CPUs and gpus to the robot but I think with these additions uh like it's all the computation and sensing is 
on board. So this one is untethered. Typically, the techniques that use deep reinforcement learning require some GPU to be on the robot, or, if you have a tether, on your off-board computational platform. But nowadays, especially with NVIDIA's smaller GPUs, like the Jetson NX for example, you have pretty powerful GPUs that you can put on these kinds of platforms, so it's possible to do everything on board. Good. Other questions? Okay, let me just play this video again. On the top right is the depth image that the robot is getting from the camera, and you can see it's not perfect. Let's look at it again: it's mostly getting the stool-like surface correct, but there are gaps that show up every now and then, so you have to be robust to those kinds of sensing errors if you want this kind of performance. Many of the tricks we discussed in the previous lecture, where you simulate sensor noise, simulate changes in friction, change the terrain, all of those implementation strategies are being used to make this happen.

It's also interesting to see some of the emergent behavior: the robot slips off the stool itself, and trips, and in some of the clips it falls and gets back up again. It's able to handle a pretty diverse set of terrains. All right, question? That's right: why are the knees bent backwards compared to a human knee? I think for stair climbing it is easier, I believe, to have it this way. If you look at one of the stair videos later, the bottom-left one, that gives you a sense for why this configuration might be good. I guess I can't demonstrate it
with my legs, because they don't bend that way, but making the leg go this way gives you more clearance, and then you can lift it up and place it down again. So I think that's the reason for this particular morphology. Good question.

Another good question: was this trained on the real hardware? So this is the final output; the training was in simulation. Similar to what we discussed in the last lecture, they do all the training in simulation with a lot of diverse terrains. The one extra piece here is that they use the sensor measurements the robot is collecting to estimate different properties of the environment, something that has to do with friction, or the terrain, and so on, so there's a kind of online adaptation that happens, and that leads to additional robustness. But all of that training happens in sim, and then they deploy it on the real hardware system. In principle you could keep fine-tuning, so you could keep learning on the actual hardware system: you get something that's pretty good in sim, and then you get something that's even better by continuously learning.

Yeah, that's a good question, about how natural the motion looks. A lot of that comes from the fact that the reinforcement learning algorithm is just trying to maximize some expected reward, and the reward function is something someone specified to be mostly reasonable, but it's hard to capture all the nuances of natural motion in a really simple reward function. One potential strategy is to use imitation learning, so you try to get some reward that mostly captures, or tries to capture, the behavior that you get from humans or animals. I don't have a video on the slides here, but
there's some work on doing imitation learning from dogs and other quadrupedal animals, and then transferring the policies learned from that imitation onto a robotic quadruped platform. I think the motivation there was, one, that you don't need to do reinforcement learning, at least in the beginning: you can get a decent policy just via imitation learning. And I think it can potentially lead to more natural kinds of motion or gaits.

I mentioned resources for reinforcement learning in the previous lecture: the textbook that's a classic, and a couple of other resources as well. I'll just add a couple more items to that list. In this course, especially at the beginning, we talked about optimal control, and a lot of the notation and terminology we used was from optimal control. There are pretty close links between optimal control and reinforcement learning: you can think of them as trying to solve the same problem, but with a slightly different emphasis, specifically on what things are assumed to be known, like whether the dynamics are known or not. This book by Dimitri Bertsekas, from 2019, does a really nice job of providing a unified perspective on reinforcement learning and optimal control, and he has a table of the different notations that people in the different communities use, and a bridge for what notation in one community translates to in the other. There's also a course at Princeton, Foundations of Reinforcement Learning, ECE 524; that's a lot of theory, so if you're really interested in understanding the theoretical underpinnings of reinforcement learning, I would recommend checking that out.

All right, another topic we haven't really done any kind of justice to is human-robot interaction. Ultimately, of course, we want our robots
to inhabit the real human world, and this makes human-robot interaction really relevant and extremely important. I showed this video in the very first lecture: this was one of Google's autonomous vehicles back in 2016, trying to merge into the lane of a bus driven by a human, and I think this was the first documented collision, in real-world deployment, between a human-driven vehicle and an autonomous vehicle. There are lots and lots of different topics in human-robot interaction. Predicting human motion, that's something where machine learning, deep learning in particular, has had a lot of impact over the last few years: how is a car going to move, how is a pedestrian going to move, and once you have those predictions, you can use some of the planning techniques we've discussed in this course to avoid the future places where these other agents, cars or pedestrians, will be. Collaborative manipulation, so getting a robot and a human to pick up some large object, like a table, for instance. Communication via natural language, imitation learning: there are lots of fun topics, and there are entire courses devoted to human-robot interaction. I'm pointing to the one here at Georgia Tech, taught by Andrea Thomaz; they have a nice set of references, so if you want to dig deep into human-robot interaction, I think this is a decent place to start.

The last technical topic that we didn't really cover at all is robot design, so hardware design. I chose to focus the course on theory and algorithms; we only have 24 lectures, there's a finite amount of time, so we had to focus on something. Here's one potential justification for this choice of not really covering hardware design that much. This is a video from a few years ago, I think maybe more than a decade ago now, of this really
impressive demonstration. So we have this home robot; I think this was a predecessor to the PR2, so this was the PR1, cleaning up this room. It's sped up, of course, as you can see, but it's cleaning up the room pretty nicely: it picks up all the blocks and puts them in this bin. And this could be useful, right? Of course, the robot is really large, and maybe not something you'd want in your house, but you can imagine replacing it with a smaller form-factor robot, and this could potentially be pretty useful. There's one caveat, though. Does someone know, or can guess, what was going on in the video? This is often something that robot videos leave out; in this case it was intentional, to make a point. Yeah: this robot is not autonomous. What's in the video is not autonomous; it was teleoperated by a human. The human is seeing a video feed from the perspective of the robot, and that's all they're seeing, and they have a joystick or some kind of interface that allows them to control the robot.

You can think of this as an argument that what we're lacking is not necessarily hardware. We don't necessarily need some fancy robotic gripper to do what we saw in the video; what we're lacking is algorithms for detecting objects, picking them up, manipulating them, avoiding obstacles, and all of that kind of stuff. So, at least to me, this is evidence that a major bottleneck is algorithms and theory, and that hardware design, at least for these kinds of tasks, is not necessarily the main bottleneck. But design of robots is still really important. It could be the case that you co-design the robot's hardware and the algorithms, and it often is the case that better hardware makes the algorithms,
or the job of the algorithms, easier. I think we've seen this quite a bit in the domain of legged locomotion: for quadruped robots in particular, over the last five or more years, maybe ten years, there's been a massive leap in the maturity of the hardware. Nowadays you can buy really good quadruped robots for on the order of ten thousand dollars or so, which is pretty good if you have an academic research budget. And there's lots of really amazing work on actuator design, sensor design, mechanism design, and battery technology, in many different areas, and specifically in soft robotics; that's an area with a massive amount of interest, and if we really have good soft robots, then potentially that could make some of the algorithms more robust, because the hardware itself is partially responsible for the robustness of the system.

All right, so those are a few topics; there are other topics we haven't really done justice to either, but let me just pause and see if there are any questions on this. Okay.

So the next thing I want to do is zoom out a little bit and ask how robotics connects to some broader societal questions: robotics and the economy, ethics, and laws. We've done this a little bit; at three points in the course we talked about some of the broader implications of the technical material we've covered, and I just want to say a bit more about that. If you read the popular press, there's lots of coverage of how automation, robotics and automation more broadly, in terms of AI, is potentially going to have a large, maybe really detrimental, impact on our economy and society. There are estimates, with very large error bars I think, that automation could take over 20 million jobs by 2030, and other
estimates of the probability of particular sectors of jobs being impacted in some way by robotics or automation more broadly. So I think it's pretty clear that robotics and automation are going to have a huge impact on our economy and on employment, with particular sectors that seem more likely to be impacted in the short term: factory work, where we've already seen a lot of automation in the factory and warehouse domains, and truck driving, which is potentially another area, where autonomous vehicle companies are pouring a lot of investment into doing long-haul trucking autonomously. Other jobs might be indirectly impacted even if they're not directly impacted. Possible effects of this are increased unemployment, that's the direct one, and increased inequality, since lower-wage jobs are often more at risk from automation than other jobs.

Here's the other side, the party line if you're a roboticist: the positive spin is that dangerous jobs are maybe the ones that are going to be replaced, and that's maybe a good thing. I think the second bullet point is also worth seriously considering: it's often easier to think about existing jobs that are going to be negatively impacted by automation, and much harder to predict the new jobs that are going to be created, and we've often seen that increased autonomy, or increased technology, also creates other jobs, like maintaining the robots, programming robots, and so on, and it's not clear how those balance out. Like I said, there are high error bars on predictions for exactly how the economy will be affected; I think there's no question that there's going to be some impact, but it's hard to say exactly what that impact will be. A number of policy people
and economists are thinking about different strategies for mitigating some of the negative impacts of automation. These are four strategies I've listed here. Retraining people with potentially at-risk jobs. Rethinking education right from the beginning, so trying to steer students towards directions that are maybe not as adversely impacted by automation. Universal basic income: there are a couple of really prominent politicians advocating for UBI, so everyone gets a basic income, and that can maybe mitigate some of the risks from automation. And robot taxes, that's another interesting one; I think Bill Gates is a proponent of this one, if I'm not mistaken, so some kind of tax on automation and robots. Any questions on this? Go ahead.

Yeah, it's tricky. I guess the direct answer is majors that are represented here, for instance; that's the sort of lazy answer, and I think it makes sense if we're thinking about education from an earlier stage. The much harder question is for people whose jobs are going to be impacted now, maybe someone in the middle of their career who doesn't have a technical education in computer science or mechanical engineering or electrical engineering: how do you retrain them? I think that's a hard thing, and part of that has to come from the government, to promote that kind of retraining, and part of it has to come from the companies as well. Really large companies can do this: if there's a portion of a company that's going to be impacted by automation, they can say, okay, for these employees who are going to be impacted, maybe we're
going to have some kind of training or retraining program, and then they can move to a slightly different role. I guess it's harder for smaller companies to invest that kind of money in education or retraining. I don't necessarily have a good answer; I don't know if you or other people have specific thoughts on this. It's good to think about, though. A lot of the questions I'm going to mention are ones I don't have good answers to; they're mostly meant to make you think.

Okay, so another set of questions we don't have good answers to is robotics and law. Here's a thought experiment to get us started: if an autonomous vehicle or car gets into an accident, who should be held responsible? This is a question I think we're going to have to answer one way or the other. Here are a few reasonable options. Maybe the passenger, so whoever happens to be riding in the car pays the price, or some portion of it. The owner of the car, that's another possibility; these are just logical possibilities. The manufacturer, who built the car. The car itself; it's not totally clear what that means, but it's actually kind of interesting to think about that possibility. Or no one, which is not a particularly satisfying answer, but at least one logical possibility.

A lot of people think the manufacturer. Does someone want to argue for the programmer? I was lumping that into the manufacturer, but it's actually not totally clear, right? What does "programmer" mean? There's not one programmer, one person, so it could be separate: there could be an autonomy-stack team, there could be the manufacturer of the hardware itself, there could be the manufacturers of the sensors or of other components in the car, and they're potentially different legal entities, not
necessarily the same entity. But sorry, go ahead, what were you going to say? Yeah: you should find where the fault was, what did not function, what went wrong; if it was due to a human's error, or an error of one of the sensors, then the sensor's manufacturer. But we could also say, well, they decided to use that sensor. Yeah, I think if you can trace it back, then it makes sense, but it can be tricky to trace it back, and like you said, if there's a sensor failure, maybe the car manufacturer just chose to use a bad sensor, and so it shouldn't be the sensor manufacturer, who's maybe not trying to develop sensors for autonomous vehicles directly; maybe it's the car manufacturer's fault for using a bad sensor, or for not putting enough redundancy into the system with multiple sensors. Other thoughts, or maybe counterarguments? Will someone argue for any of the other options, the non-manufacturer options? Go ahead.

Yep, good, that's right: the nature of the tool changes with scale. If someone stabs someone else, is the knife manufacturer responsible? At a greater scale, think of the atomic bomb: are the scientists responsible for the creation of the atomic bomb? So in that sense, if robots become hugely capable and dangerous, liability might shift to the manufacturers; I think there needs to be a spectrum. Yeah, that's a good point. I think I saw one more hand, go ahead. The owner, because they bought it and they knew the risk. Yeah, that's a good point as well. I think we're seeing some of that play out with the lawsuits involving Tesla's semi-autonomous vehicles, because there is a warning, right, that
the human driver is supposed to be responsible, that they should be monitoring, but often that's not the case, and there's a question about how good the warning was to begin with. So I guess we'll see some resolution to some of these questions in the next few years. Good point. Go ahead. Yeah, that one is harder to enforce, maybe, but I think that's a good point as well. Sorry, go ahead. You wanted to add an argument for no one? Okay: if you look at a lot of data, you can see that these vehicles actually prevent a lot of accidents, but some still, for some reason, inevitably happen; it's such a net positive that it's hard to blame anyone.

Yeah, I think that's a good point, and it brings up the question of when we should adopt this technology at scale. I think the bar for autonomous vehicles is probably going to be higher than for humans. Currently there are thirty-something thousand fatalities, roughly, in the US per year from car accidents, and the number of fatal or serious accidents from autonomous vehicles is probably going to need to be significantly lower than 30,000 in the US for society to accept this technology. But if we get to that point, then maybe there is a case for no one to be held responsible, especially in cases where there's no clear component failure, no clear party that was responsible: the data used to train the system somehow was not sufficient to prevent an accident, and it's hard to blame any specific person.

I was actually presenting this to the Princeton ethics society, and someone argued for the car itself, which I thought was pretty interesting. The argument was that it gives us some catharsis,
basically some kind of satisfaction, to punish some agent, and that agent could be the car. So maybe we have some kind of trial where we hold the car responsible, and some kind of punishment for the car, and maybe that gives us, as humans, more closure. I thought that was an interesting argument for the car itself.

Okay, so current laws are written mostly for human drivers, and there's a shift happening towards increasing regulation and legislation for autonomous vehicles. There's an article from 2016, which I think is still quite relevant today, describing the legal landscape around autonomous vehicles; there's been more activity since, but not a resolution of a lot of the questions we discussed. The NHTSA, the National Highway Traffic Safety Administration, clarified that the occupant is not going to be responsible under current laws, which I think makes some sense: if you just happen to be sitting in an autonomous vehicle, the current laws are not going to hold you responsible, unless you fall into one of the other categories. Currently the manufacturer is probably going to bear a large liability, but, as we discussed, how could the manufacturer possibly predict every traffic scenario? Maybe there's going to be some small proportion of crashes that we're going to have to be satisfied with, especially if the technology significantly reduces the number of fatalities compared to the situation now. And, maybe this connects to what you were saying, maybe the federal government needs to take some of the financial burden off the manufacturers to help us adopt this technology. Current laws are going to need to be updated, and it's not entirely clear how.

More broadly, it's not just about autonomous
vehicles; we're seeing increasing adoption of robotics technology in many other domains, and this rise in adoption raises serious questions about legal liability, safety regulations, and certifications. Even before an accident happens, what are some tests or things that an autonomous vehicle company or autonomous drone company needs to do to be able to deploy their robots out in the real world? These are important questions to think about.

There's an interesting case for a Federal Robotics Commission. There's a Federal Aviation Administration, which is responsible for regulations around flying vehicles, airplanes, drones, and so on; you could broaden that and think about a Federal Robotics Commission that's responsible for setting regulations around autonomous systems in general. The argument here is that autonomous systems across different application domains share a lot of the same underlying technical challenges, and maybe thinking about these legal frameworks and regulations in a unified way is the way to go, as opposed to thinking about them piecemeal, separately for each application domain.

There's a counterargument, in a paper from 2018, The Future of Legal and Ethical Regulations for Autonomous Robotics, which argues the opposite. The argument there is that we shouldn't necessarily think about autonomous systems as just one bucket: different application domains have different challenges and different requirements, and we should think about the different domains, drones separately, factory robots separately, and so on. The other piece of the argument is that autonomous systems are often based on older technologies for which there are pretty mature and well-established regulations and
legal frameworks, and maybe we can start with those as a good starting point and then adapt them, rather than wholesale rethinking legislative frameworks. I think there are good arguments on both sides; maybe there's something in the middle, where there's some broadly applicable legal framework for autonomous systems, plus some more specific regulations across the different application domains.

The last broad topic I want to talk about is robotics and ethics. When we think about robotics and ethics, there are two different kinds of questions one could ask: one is ethically deploying robots, military robots for instance, and the other is ethical robots, how do you get robots to be ethical? I'm mostly going to talk about the second question: how do we build some notion of ethics into robots? The starting point for this discussion is usually the trolley problem. Maybe you've seen this problem before; if you haven't, there's a nice video that does a good job of explaining it. The Moral Machine... sorry, I think I skipped ahead, never mind, there you go.

A runaway train is heading towards five workers on a railway line. There's no way of warning them, but you're standing near a lever that operates some points. Switch the points and the train goes down a spur. Trouble is, there's another worker on that bit of track too, but it's one fatality instead of five. Should you do that? Many people think the right thing to do would be to switch the points, to sacrifice one to save five, since that produces the best outcome possible. Now imagine the train heading for the workers again. This time it can only be stopped by pushing a very large man off a bridge; his great bulk would stop the train, but he'd die. Should you do that? People say no. But why not? Both thought experiments are cases of
sacrificing one to save five. What the trolley problem examines is whether moral decisions are simply about outcomes, or about the manner in which you achieve them. Some utilitarians argue that the two cases are not importantly different from each other: both have similar consequences, and consequences are all that really matter. In each case one person dies and five are saved, the best option in each harrowing situation. But lots of people say they would switch the points yet wouldn't push the man off the bridge. Are they simply inconsistent, or are they onto something?

All right, so the connection to robotics is explained a bit in the second video: a platform for gathering the human perspective on moral decisions made by machine intelligence, such as self-driving cars. Moral dilemmas where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable; you can then see how your responses compare with other people's, and if you're feeling creative, you can also design your own scenarios for you and others to view, share, and discuss. Moral Machine is a project by the Scalable Cooperation group at the MIT Media Lab: help us learn how to make machines moral. The music is somewhat incongruous to the gravity of the topic.

Go ahead. Isn't there also a question about the creation of the robot in the first place, like what data you train it on, imitation learning for instance? Yeah, that's a good point. I was lumping that into ethical robots, but you can think of it separately: how do we make the robots ethical, in the sense of embodying the values we care about? And maybe there's a separate question, which is whether you develop the robot at all, whether there are some kinds of technologies that you don't try to
develop. Is that where you were going, or were you thinking, say, that the data was gathered in some very bad way, like a bunch of people died to get this data? Oh, I see, okay. Interesting; that's a good point, I hadn't thought about that. I'm trying to think of examples where that kind of data collection might come up. Yeah, good point: I think in that kind of scenario, for medical robots for instance, we could probably rely on some of the existing regulations around the development of medical devices; especially in the US there's been evolving scrutiny and thought given to that question. Go ahead. Yes, explicit safeguards, yeah, definitely, and I think this relates to the trolley problem question as well.

So what do people think about the trolley problem? Is it worth thinking about? Any criticisms of it? Go ahead. Okay, I see; yeah, I guess that makes it even more challenging, and forces us to think about questions of equality. That's a good thought. Go ahead. Yeah, true: it's more about how we feel, right, as humans, and the question is, if you're programming robots, should we be baking in the biases that we have in terms of answering this trolley problem, or should we bake in some kind of purely utilitarian reasoning process, and leave the human biases out? Go ahead. Is there value in having responsibility beyond something we can just relate to, the same way we would want from a person making the same decision? I would argue that it does matter, because if we're inhabiting the
same space as these autonomous systems, or if we're using them, there's a question about trust, and if these systems embody some of the basic values we care about, or we know they have these basic values baked in, then we're more likely to trust them, use them, and just be happier with them. I saw another hand, go ahead. Yeah: suddenly you're building in values, building in different morals, and it's kind of interesting because you get into the question of whether morals are absolute or relative, right? If we program a car like that in the US, whose morals do we put into it? That's a good point. I don't remember whether it was this project or some follow-on work, but they looked at the question of how people in different cultures answer these questions; age, for instance: in some Asian countries there's more of a respect for age, this is from the studies, I'm not making this up, and in some Western cultures it's flipped, and some of that was seen in the data that was collected. I don't have the exact study here, but I think it was related to this project.

Are they trying to use this to just get a majority rule, are they using it to judge, or what? Yeah, I think it's more the latter. If we took it very seriously and baked in exactly these values, that seems questionable; I think it's more to force us to think about what the values should be, and to try to come to some kind of consensus if we can. There's also a criticism of the trolley problem specifically. I think it's a good starting point, but there's an article in Fast Company whose title is something like: why the trolley dilemma is a terrible model for trying
to make self-driving cars uh safer and I guess that the main criticism is that it's too simplistic right so we're assuming that the autonomous vehicle or the person in the trolley problem has a forced like binary Choice like you have to select one or the other and most situations where uh yeah robot uh I guess it's like pretty rare for uh for a robot to encounter a situation where it's a forced like binary Choice okay okay driving self-driving car yeah so if you're given nothing and I think that that's a pretty reasonable point I guess you could say like just like don't even worry about like these these kinds of questions just yeah just drive and like just don't collide with anything right just don't collide with uh with like humans at all uh and I think that that that that is like part of the the argument here is that like some of these kind of problem questions are not necessarily like directly relevant to the engineering problem of like how should I make my autonomous vehicle safe um but I think I it like I think it's good to think about these questions and uh I guess maybe the more like direct uh like manifestation of these questions that we're seeing like now uh like have to do with uh like fairness right so that we talked about I guess a couple of lectures ago so people found uh that image recognition uh tools or or just uh kind of models that are based on machine learning on biased data can uh like embody some of the other biases that that we have uh against like different uh demographic groups and I think their uh these questions I would argue are are like pretty directly relevant like we should think about what we mean by fairness uh and like we have to sound like mathematically Quantified to even assess like whether uh like these machine learning kind of models are uh like fair or not and of course ultimately to train uh systems that uh that are like fair by our definitions um go ahead [Music] [Music] yeah if [Music] just going on the path like 
unrestricted the question is then when you know like if car knows to solve the red light and that's the only time this you know people are walking yeah and I feel like that's a lot like that's very easily avoidable yeah you if you change it yeah yeah I agree with you I think with some of the the really like safety critical um like applications of Robotics the ethical questions are a bit clearer in a sense because you're just trying to to like avoid uh like collisions or avoid scenarios um but yeah I guess with with some of the the emerging applications of of learning in particular uh the questions are a bit more more or the answers maybe are a bit more working on the questions design robot robot because we're getting more diverse as time goes on so we'll have to like yeah yeah I mean I think the I guess the maybe the high level point of uh that we should be thinking about these these things right these biasis and how like our Engineering Systems like reflect these biases and I mean maybe surprisingly I guess this was not something that uh was part of the mainstream conversation uh even yeah until about like five or six years or so ago where there was like a Resurgence of or not a surgeons I guess an increase in interest in in like thinking about these questions um yeah like different like biases of machine learning models so I guess maybe that that's ultimately the uh the main kind of message I I want to convey but you're right like I guess these things are going to evolve um but try to quantify what we mean by candidates or trying to be just more precise about uh what we mean by values and and uh uh yeah like just taking those in or figuring out ways to make those into our robotic systems is uh something we should all be thinking about other I guess thoughts or questions all right okay so the last thing I want to talk about is some uh Grand challenges at least from from my perspective in robotics uh so in my opinion I think there's two uh kind of uh areas like two 
uh classes of problems uh that that I find particularly exciting and uh challenging so the first one uh relates kind of directly to all the things that we've talked about in this course so trying to ensure safety robustness and generalization of our robotic systems uh the other one which uh I guess we've talked about a little bit but I want to spend uh maybe just a couple more minutes talking about is thinking about like some of those theoretical foundations of of robotics uh so the first one um uh kind of relates to this this theme of uncertainty that we've seen over and over again in this course so I mentioned just back in kind of lecture one that the uncertainty was going to be a fundamental importance when we think about Robotics and we've seen this in all the different like topics that we've covered in this course so on certainly the Dynamics of your system uncertainty in the initial state state estimation environment geometry and I guess hopefully you have a more kind of visceral maybe appreciation for these different sources of uncertainty having worked on Hardware uh where you see a lot of the the challenges really kind of manifest themselves and be pretty painful um so uh I think one kind of important question is how do we try to make some formal mathematical like guarantees on on safety and performance for our robotic systems and there's lots of like technical challenges that we need to somehow address if we want to like formally gather 20 safety or performance of our robots so thinking about these challenges of non-linear and uncertain Dynamics High dimensional sensing like particularly when you have like Vision or lidar and you're like trying to process that in real time how do we do that while maintaining uh some kind of safety guarantees how do we incorporate learning like how do we incorporate deep learning in particular so if you have a neural network as part of your perception or control Loop that you've trained on some amount of data like how do 
you make sure that that trained system is going to perform well when you deploy it in scenarios that you've not kind of explicitly trained it to do well on and I think it's kind of maybe useful to think about some of these questions in the context of your final project so how would you kind of guarantee let's say even probabilistically that your robot is going to successfully navigate through some previously unknown unseen obstacle configuration and yeah it's pretty challenging to to try to to make these kinds of guarantees but it like forces you to think about some of the assumptions that you've made in designing the algorithms and I think that by itself can be a useful exercise to try to go through true and of course if we're deploying safety critical systems we want to have some assurances some like formal assurances that the systems that we deploy are going to be safe under some assumptions so this is the kind of thing I guess we work on in our research group so we think about enabling agile robotic systems to operate with guaranteed safety and performance in complex environments and we have a number of different uh kind of Hardware platforms that we have you'll be using our lab so the top left is a drone that's flying in our forest Style Lab space the bottom left is a robotic manipulator that we have in the knee quad and the bottom right is actually a collaboration with the Jaime fisax group where we were using a quadruped kind of navigating around these different indoor environments I guess if you're interested in learning more I've linked to our lab website and there's descriptions of our research and recent papers and and so on and you should talk to the AI as well to learn more about what we're doing the other uh I guess point I wanted to make is that there's a lot of really kind of basic theoretical questions that we don't have good answers to in robotics I've listed a few examples of these so how much memory does your robot need in order to perform a 
certain tasks maybe it's like navigating for a long time Horizon in a large environment how much information do you need from your sensors what kind of specific sensors do you need to get to a certain level of performance how much computation do you need and so on um other questions like when is one task harder for a particular robot then another task how do we even Define like what is harder even mean uh when is one robot better for a given task than another I guess these are like pretty basic questions you could imagine maybe like middle school or high school students like asking some of these questions and I think we have almost like no clue of trying to uh like thinking about how to formalize these questions mathematically and and I guess let alone like trying to to answer them so robotics borrows a lot of ideas and Technical tools from many other related areas like feedback control organization machine learning and so on but I think ultimately hopefully maybe we're going to have our own theoretical foundations and I think it's useful to compare the state of Robotics with the state of a slightly more mature area like computer science so many of these questions have analogs like analogous versions in computational complexity theory for instance I guess for those of you who are in in computer science hardness has a has a very specific mathematical meaning better has a mathematical meaning efficiency has kind of concrete mathematical definitions and I guess taking inspiration from that and like porting some of those ideas to robotics uh I think could be an enabler for a long-term progress in in robotics all right I guess any any questions on uh on those thoughts all right so yeah just just to end a couple of a couple of things so there's a a new course or I guess new newish course um so ECE uh 346 uh so this is meant to be a sequence uh so this course and and uh Physics course in ECE uh subtitled intelligent robotic systems um so it's uh I guess more advanced 
treatment of some of the the topics that we've seen in this course and also some other topics that we haven't seen in this course uh so he covers the topics like planning under uncertainty active perception uh learning Waste Control multi-agent uh decision making yeah which we haven't really covered much and some human robot interaction I believe as well and the hardware assignments around these mobile robots so it I think like autonomous trucks that move around and they use Ross so that's the Robot Operating System that's pretty useful to learn about which again we haven't covered in in this course yeah and I guess I would strongly encourage you to at least consider this course and take this course uh if you're interested in pursuing robotics further I think this is going to be the second time it's uh it's all for uh so yeah it should be a fun uh course to take um so yeah I guess last slide uh things I I hope I've convinced you uh throughout this course so the first thing is that there's this tremendous amount of excitement around robotics uh massive investment from industry government universities and we've seen examples of that in different domains so drones robotic manipulation and many other application domains as well I guess hopefully I've convinced you that there's a lot of really fascinating technical challenges we've seen some of that in this course in our various Labs that that we've done so thinking about feedback control motion planning localization mapping machine learning and there's lots of beautiful connections with with all of these uh different areas some of which we've seen in this course and many I guess which you'll see if you continue to to pursue robotics and there's lots of important legal economic and ethical considerations uh which I feel like we haven't really done Justice it was hard to do justice to but it's important to think about these these questions and I guess the most important thing from my perspective so if I I feel like if 
I've convinced you that robotics is really really cool and exciting then I think I've done my job uh and yeah if even a fraction of you pursue your robotics then I think that's a win I guess any last thoughts or questions all right I think that's all I had so I guess good luck with the final exam period And I will see all of you on demo day [Music] [Music]
Introduction to Robotics (Princeton)
Lecture 14: Localization
All right, let's go ahead and get started. In the last two lectures we've been looking at this concept of Bayes filtering, and we focused on the problem of state estimation using Bayes filtering. Two lectures ago we discussed the general form of the Bayes filter, which followed on from our earlier discussion of the non-deterministic filter — one way to think of the Bayes filter is as a probabilistic version of the non-deterministic filter. And what we said was that if you have discrete state, observation, and action spaces — if everything is finite — you can implement the Bayes filter exactly, and by exactly I mean without any approximation. In that case all the expressions that arise when you implement the Bayes filter have summations instead of integrals, so in principle you can do everything exactly, although it might be computationally intensive: if you have many, many states, or your observation space or action space is very large, it can be expensive, but at least in principle you can implement everything exactly.

But in most robotic settings, the systems we care about don't have finite state, observation, or action spaces — typically these spaces are continuous. For that case, in the previous lecture we discussed two different settings. The first is linear-plus-Gaussian systems, usually abbreviated LG: the state space is continuous, the observation space is continuous, the control-input space is continuous, and it turns out you can still implement the Bayes filter exactly; this exact implementation is known as the Kalman filter. More generally, though, if your dynamics are nonlinear, or your initial belief is not Gaussian, or your sensing model is nonlinear, then there's no general tractable exact implementation of the Bayes filter, so we need to make some approximations. The main algorithm we discussed in the previous lecture to handle general settings is the particle filter — we went through the algorithm, showed some videos, and in the assignment you'll have a chance to implement it and see it working in practice.

So what we're going to do today is look at a specific application of the Bayes filter to the problem of localization. At a high level, localization boils down to answering, from the robot's perspective, the question "where am I?" in some space. The general structure of today's lecture is going to be exactly the same as in the previous two lectures — this is just an application of Bayes filtering to the specific problem of localization. The main extra piece of information we're going to assume the robot has access to is a map of the environment it's placed in; its job is then to figure out where exactly in that map it is, or at least to maintain some belief about where it is relative to that map. This came up at the end of the last lecture too, when we watched the video of the particle filter working: there, it was assumed that the robot knew the map of the environment. We're going to make that more explicit today.

So the first question is: what exactly is a map? (This is Chapter 6.2 in the Probabilistic Robotics book, if you're interested in reading more.) Roughly speaking, a map is some representation of the robot's environment, and there are many different kinds of maps, depending on exactly what the robot is trying to do or just what information you have access to. But roughly speaking we can categorize maps into two kinds.

The first kind are location-based maps, also known as volumetric maps. A location-based map assigns some "property" — I'll put property in quotes and make it more formal in a second — to every location in the environment. Probably the most common example of a location-based map is what's known as an occupancy grid, or occupancy map. An occupancy map assigns the property of occupancy — occupied or not — to every point in the map. These are the kinds of maps we were looking at when we studied motion planning: here I've drawn a grid — an occupancy grid map — and every point in the environment has a property corresponding to whether or not that location is occupied. So the property here is occupancy: is there an obstacle at a given location? And with the occupancy grid, I can query any point in the environment: look up the cell that point corresponds to, and check whether that location is occupied or not. Like I said, this was the kind of map we used for the graph-search algorithms — BFS, DFS, Dijkstra, A* — and when we looked at continuous motion planning, with the RRT for instance, the maps were not necessarily grids, but they were still occupancy maps: for any location in the environment you can query the map and ask, is it occupied or not?
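To make that kind of occupancy-grid query concrete, here is a minimal Python sketch. The grid contents, cell resolution, map origin, and function names below are made-up illustration values of my own, not anything from the lecture:

```python
import math
import numpy as np

def world_to_cell(x, y, origin, resolution):
    """Convert a world coordinate (meters) to grid indices (row, col)."""
    col = int(math.floor((x - origin[0]) / resolution))
    row = int(math.floor((y - origin[1]) / resolution))
    return row, col

def is_occupied(grid, x, y, origin, resolution):
    """Query the occupancy property at a world location.

    Points outside the mapped area are treated as occupied
    (a common conservative choice: unknown = unsafe)."""
    row, col = world_to_cell(x, y, origin, resolution)
    if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
        return bool(grid[row, col])
    return True

# A made-up 4x4 map: 1 = occupied, 0 = free; each cell is 0.5 m wide,
# with the map's corner at world coordinate (0, 0).
grid = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])

print(is_occupied(grid, 0.75, 0.75, origin=(0.0, 0.0), resolution=0.5))  # → True
```

The same query interface works whether the underlying representation is a grid or something continuous — the planner or filter only ever asks "is this location occupied?"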
I guess the main thing I want to emphasize here is that, whatever the property is, it's assigned to every location in the environment — for every location you can query the value of the property. Occupancy is one example, but there are other properties you could assign. Color is another: for every point in the environment you could ask what the color of the object located there is, and if there's no object, you might assign some extra symbol for empty locations. Another example is a height map. This is used for locomotion in particular: it assigns to every point — typically in a 2D environment like the ground — a height relative to some nominal ground height, and it's useful when you're doing footstep planning for a legged robot. So those are different examples of location-based maps. Any questions on them?

All right, the other kind of map is what's known as a feature-based map. These maps contain the properties and locations of some specific features in the environment (I'll abbreviate environment as "env"). As an example, you could have different features corresponding to different objects in the environment, and in the localization and mapping literature the objects in a feature-based map are often known as landmarks — these maps also go by the name of landmark-based maps. The map here is basically a list of different objects, a list of landmarks, with some properties associated with each of them. For instance, my list could contain a door with some property — the canonical property being its location: there's a door here, at this location. Maybe there's a window with its corresponding location, and maybe some other objects, like a table with its location. And you could have other properties too: say, the size or geometry of the table, the mass of the table, things like that. These are different from location-based maps because we're not specifying a property for every location in the environment — we're just picking some distinct objects, some landmarks, and associating properties with them. If you ask "what's over here?", there's no real information you necessarily get from a feature-based, landmark-based map. Question?

[Student question about the relationship between occupancy maps and feature-based maps.]

You can think of it the other way around: an occupancy map can be seen as a feature-based map with extra conditions — one that associates a feature with every location. In that sense any occupancy map is a more exhaustive version of a feature-based map, but the converse is not true: a feature-based map does not associate properties with every location, as an occupancy map — as a location-based map — is required to do.

[Student question about when you'd use each kind of map.]

I think it really depends on what you're trying to do. Maybe you only care about certain landmarks — you want to find those landmarks, or do something else with them. For planning, clearly a feature-based map may not be sufficient: it doesn't necessarily give you information about where all the obstacles are, which you might need for motion planning. So, given these kinds of maps,
we're going to discuss how a robot can operate in its environment and, specifically, localize itself in that environment.

As usual, we'll let x_t denote the state of the robot at time t. With localization we typically don't include the velocity components — the time-derivative components — in the state. If you think back to our earlier discussion of feedback control, the state of a planar quadrotor or a 3D quadrotor contained the position, maybe the orientation, and also the time derivatives — some velocity information. With localization you usually care only about the location and orientation, not the velocities, although in principle you can also try to estimate the robot's velocity. Essentially, when people say localization, they typically just mean the location and orientation of the robot. I'll just use the word "state": the robot is trying to figure out its state relative to the environment.

So let's write down an algorithm for localization. It's basically just the Bayes filter, with some extra steps — extra information. For every time step t, and for all states x_t — I'm going to mess up the indentation here, but the two steps below are inside the second for-loop — there are two steps. I'm going to write everything in terms of summations rather than integrals, so we're implicitly assuming the set of possible states is finite, but you can extend everything to the continuous setting by replacing the summations with integrals. The first step — this is the dynamics update (I'll switch to a different color chalk here) — is

  bel̄(x_t) = Σ_{x_{t-1}} p(x_t | x_{t-1}, u_{t-1}, m) · bel(x_{t-1}),

and the second step is the measurement update,

  bel(x_t) = η · p(z_t | x_t, m) · bel̄(x_t),

where η is a normalization constant. So this is exactly the Bayes filter — sorry, question?

[Student question about what the belief with the bar means.]

Yes — the belief with the bar, bel̄, is the distribution you get after the dynamics update but before you take into account the robot's sensor measurement. The first step just propagates the previous belief, bel(x_{t-1}) — the belief from the previous time step — through the probabilistic dynamics model; that gives you bel̄. In the second step we use Bayes' rule to incorporate the new sensor measurement z_t that the robot receives.

OK, so this is exactly the Bayes filter with just one very small modification: our dynamics model and our sensing model now depend on the extra information we're assuming the robot has access to — a map m of the environment, either a location-based map or a feature-based map. So you can think of localization as just an application of Bayes filtering. If I tell you the dynamics model and the sensor model — that is, suppose we specify these three things: first, p(x_t | x_{t-1}, u_{t-1}, m), the probability of the next state given the previous state, the previous control input, and the map m of the environment — this is our dynamics model; second, p(z_t | x_t, m), the probability (or density) of receiving sensor measurement z_t from state x_t given map m — this is our sensor model; and third, some initial belief bel(x_0) over the state at time zero — if I give you these three things, then we can use the techniques we discussed in the previous lecture. In principle you could use the Kalman filter, assuming the assumptions for the Kalman filter are satisfied: linear-plus-Gaussian dynamics, a linear-plus-Gaussian sensor model,
and a Gaussian initial belief. Typically with localization you wouldn't use a Kalman filter, though — you'd use a particle filter, because the assumptions for the Kalman filter are usually not satisfied in a localization setting. But in principle, once you've specified these three things, you can use any of the techniques from the previous lecture to perform localization. Any questions on this? Yes?

[Student question about which Kalman-filter assumptions break in localization.]

Yeah, definitely — typically all three assumptions break. I'll describe what the dynamics can look like when you have a map, but if there are any obstacles in the map, your dynamics are probably not going to be linear: if you collide with something, the dynamics look pretty different from the robot just moving around in free space. So you typically don't have linear dynamics unless you make additional assumptions — if you somehow know a priori that you never collide with any obstacles, then the dynamics could be linear. The sensor model is often nonlinear as well; it depends on the specific sensor, and I'll describe some common sensors and their associated sensor models. The initial belief could be Gaussian — that one you can satisfy — but, as we saw in the previous lecture, if your environment has certain symmetries, like a symmetric corridor, you don't want the belief at some time t to be unimodal; typically you want it to be multimodal, to capture the fact that you have uncertainty about which region of the environment you might be in. So yes, typically particle filtering is the way to go for localization. Question?

[Student] How does a feature-based map help at all with the filter updates?

So a feature-based map would feed into the sensor model. If you're in some state and you receive a sensor measurement that, say, corresponds to the robot telling you "I think there's a table with a certain size" — and there are multiple tables, and you know where those tables are — then that gives you some information about where the robot might be, because it observed a table of that particular size. That's the kind of situation where it helps.

OK, so let's look at some examples of these models — the dynamics model and the sensor model; the initial belief doesn't change much from what we looked at in the previous lecture. The main distinction is that we now need to account for the map: when we specify the dynamics model and the sensor model, we need to take into account this extra information we're assuming is given to us.

So let's look at the dynamics model first: a dynamics model with a map. To be concrete, consider the problem of grid localization: we have an occupancy-grid representation of the environment, and we assume, again, that the robot knows this map — it's given this map information. So the question is: how can we specify a reasonable dynamics model? One thing you could do is first try to understand the dynamics of the system in the absence of the map — in the absence of any obstacles, what do the robot's dynamics look like? That part is usually relatively straightforward; we've done it for the planar quadrotor, the full quadrotor, and so on. But how do we incorporate this extra information — the fact that the robot is operating in the presence of obstacles? One thing we can do is basically say that the probability that the robot enters a cell which is occupied is zero — right, that's essentially what it means
for something to be occupied with a with an obstacle uh we're making an assumption here which is that the obstacle is not movable uh so the maybe the the Drone is like light enough that it doesn't like knock obstacles uh away from their location uh so under that that assumption we can say that the probability that the robot uh enter some occupied cell is equal to zero so formally the probability that the robot is at State XT given that it was at State XC minus one took some control input UT minus one in some environment for the map and is equal to zero if the cell XD is occupied all right so that's one way to modify questions uh yes okay so I guess I'll bring slightly fuzzy here so uh in the grid localization I'm assuming that you've already like crystallized everything so in that case it's really just a cell um one way to get to describe Dynamics is from like continuous Dynamics and then uh you can basically propagate The Continuous State forward in time and then ask is that state occupied or not um yeah but I guess there's four Simplicity here I was saying everything is discrete so you just have you've already like discretized the Dynamics and you have a discrete number of possible States um so okay I guess let me just let me just let's think about uh exactly how to do that step so suppose I gave you some Dynamics uh in the absence of any uh obstacles um and let's say these are like discretized Dynamics so for any cell uh and any neighboring cell I give you some probability that you end up at the neighboring cell given that you started off at the actually it's easier to draw it probably yeah so let's say in the in the absence of any any obstacles uh for every cell uh and a neighboring cell um let's say we're here I give you the probability that if you take some control input you transition from this cell to this cell um so if I give you that information Dynamics without any obstacles how would you modify those Dynamics in some kind of reasonable way to take into 
account the obstacles? One thing we know is that the probabilities have to sum to one, right? So if we impose the constraint that in the presence of obstacles you're not allowed to occupy something that has an obstacle, and we set the probability of transitioning from, say, cell one to cell two to zero because cell two is occupied, then we need to change the probabilities for the other cells: we need to increase those probabilities to make sure they still sum to one. So if you impose this constraint, you need to adjust the other probabilities, the probabilities of transitioning to some unoccupied cell, such that the probabilities still sum to one when you sum over all the possibilities for the next state. That's the kind of adjustment you might make: understand the dynamics without a map, and then make some modification to take the map into account. All right, so we can do something similar for continuous state spaces as well, when we're not necessarily working with grids and we don't want to discretize our space. We could specify some dynamics model, so write down P(x_t | x_{t-1}, u_{t-1}, m), such that the probability associated with transitioning to some state that's occupied is equal to zero. There are some relatively standard probability distributions that you could use; it gets a little clunky, but it's doable in principle. Just for the sake of simplicity, imagine that your robot has a one-dimensional state, just its location, so x_t here is on the x-axis and P(x_t), given some specific previous state, some specific control input, and some specific map, is on the y-axis, and imagine that there's an obstacle over here. So you could associate some probability distribution like a truncated
Gaussian distribution; well, it's not quite a Gaussian, but something that looks like this. You take the Gaussian distribution and basically say that anything occupied by the obstacle has zero probability density, so it's zero over here, and then you renormalize the remaining distribution: the portion of the distribution that has non-zero probability you multiply by some constant to make sure it integrates to one. But this kind of thing is pretty hard to do for multi-dimensional state spaces. For a single dimension you can write down a truncated Gaussian distribution, but it becomes more complicated if you have complicated 3D geometries. All right, any questions on either of these? Okay. So what do we do about the difficulty of writing down a distribution like this, one that assigns zero probability to entering an occupied state? One observation is that when we're implementing the particle filter, we actually never need to. If you look at the particle filtering algorithm, you'll notice that you never explicitly need to write down this distribution. Maybe a quick reminder of the particle filtering algorithm: at every point in time, the particle filter maintains some finite set of particles, some hypotheses for where the robot could be, and let's say there's some obstacle over here. What the particle filter does is take each of these particles and sample some potential next state from the probability distribution corresponding to the dynamics. So for every particle over here, if we call it x_{t-1}, all we need to do is sample from this distribution, and so in principle we don't need to explicitly write down the probability density function for this distribution.
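As a concrete sketch of the one-dimensional truncated-Gaussian idea, here is what the zero-out-and-renormalize construction could look like in code. The function name, mean, standard deviation, and obstacle interval are all illustrative choices of mine, not anything fixed by the lecture:

```python
import math

def truncated_gaussian_pdf(x, mean, std, obstacle):
    """Gaussian density zeroed out on an occupied interval and renormalized.

    `obstacle` is an interval (a, b) of occupied space that gets zero density;
    the remaining density is scaled up so it still integrates to one.
    """
    a, b = obstacle
    if a <= x <= b:
        return 0.0  # the robot cannot end up inside the obstacle
    # Standard Gaussian CDF via the error function.
    cdf = lambda t: 0.5 * (1.0 + math.erf((t - mean) / (std * math.sqrt(2.0))))
    removed = cdf(b) - cdf(a)  # probability mass that fell on the obstacle
    gauss = math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2.0 * math.pi))
    return gauss / (1.0 - removed)  # renormalize the surviving mass
```

As noted above, this is manageable in one dimension, where the removed mass is a single CDF difference, but computing that mass for complicated 3D obstacle geometries is exactly where the approach becomes clunky.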
To make it more concrete: suppose I give you a dynamics model, P(x_t | x_{t-1}, u_{t-1}), without a map, so again in the absence of any obstacles. Can you think of ways of using this to sample from the distribution we really care about, the one in the presence of a map, in the presence of obstacles? How can we sample a possible next state x_t given that you're at x_{t-1} and you took control input u_{t-1}? Question? Yes, very good. We can use what sometimes goes by the name of rejection sampling. I'll write down some pseudocode. We'll call this function sample_with_map, which is what we want: we want to sample from the conditional probability distribution that has the map information built in, and we're given sample_without_map. So sample_with_map takes as arguments some previous state, some previous control input, and the map, and then we basically just have a while loop; we loop through these lines of code. First, sample x_t from the distribution without the map; we're assuming that's given to us, since we understand the dynamics of our robot in the absence of any obstacles. Then we calculate, I'll call it pi; a simple way to put it is that pi is true if x_t is unoccupied. And then, if pi is true, return x_t. So all this is doing is sampling a whole bunch of times from the dynamics without a map, ignoring the obstacles, and then checking whether the sample you get is occupied or not. If it's not
occupied, then you return it; if it is occupied, then you just keep sampling until you get something that isn't occupied. All right, so this is a simple way to take the dynamics without a map and then incorporate the map as well. Any questions on that? Okay. So let's look at some ways of specifying sensor models. What I'm giving you here are just examples, right? In practice you have an actual system with some actual dynamics and maybe there's a better way, but hopefully this gives you a sense of what you could do generically to incorporate maps into your dynamics and sensor models. So let's look at some examples of sensor models with maps. One of the most common sensors that robots, specifically mobile robots, are equipped with are what are known as rangefinders. An example is a laser rangefinder, a lidar (lidar is an acronym, but it's basically just a laser rangefinder); another is an ultrasound sensor. Lidars are heavier and more expensive but significantly more accurate; ultrasonic sensors are much cheaper, much lighter, much less power-hungry, but also much less accurate. Most autonomous vehicles have lidars on them, I guess except for Tesla, which wants to do everything purely based on vision, but many of the other companies have laser rangefinders, lidars, on them. So what does a rangefinder do? It basically gives you distances: distances to the closest object, the closest obstacle. If you have a bunch of obstacles in your environment and the robot is at a certain location, it's going to give you distances along various rays. I've drawn, I guess, seven or eight different rays; if you look at these different directions and you propagate a ray forward from the robot into the environment, the number, the scalar, you get associated with
each of these rays is the distance to the closest obstacle. Now, it could be that there's another obstacle behind this obstacle, but the rangefinder is just going to give you the distance to the closest one. And you can imagine 3D versions of this: you could have rays propagating in 3D and you get distances along those rays. So what we're trying to do with a sensor model is write down something like P(z_t), the probability of receiving some distance measurement z_t, given that your robot is at state x_t and given some map of the environment. To write down some reasonable model of a rangefinder, we need to account for the kinds of errors that the rangefinder makes. You would hope that the rangefinder gives you exactly the correct distance, but in practice that's never the case; typical rangefinders have some errors associated with them. There are four main factors that you need to somehow measure, for your particular rangefinder, to write down this probabilistic sensor model. The first factor is just the actual range, the actual distance along some ray. If we know the map, and I'm assuming here that we have a location-based map like an occupancy map, we can figure out the true distance along any particular ray using something known as ray tracing. Ray tracing just means that, if I tell you where these obstacles are and we're looking at one particular ray, I can propagate the ray forward and see where it collides with the first obstacle, and that's the distance. There are many computationally efficient algorithms for doing that intersection test, that collision checking, and these things are called ray tracing. So if I know the true state x_t
and if I know the map, then I can figure out the actual distance along the different rays just using some geometry. The second factor we need to take into account is unexpected obstacles in the environment. It could be that the map you have is not completely accurate, not an accurate reflection of reality. If you think about autonomous vehicles, one of the things they use to localize themselves in a city is some map of the environment. Waymo, for instance, Google's autonomous car company, or really any of the other car companies, has pretty rich maps of the different cities they operate in. But of course things are not completely static in those cities, right? If one of their cars went down a street to map it out, it could be that there's now a new car on that street, or that some construction has moved things in the environment. So there can be unexpected obstacles, obstacles that were not present in the map but are actually present in the real environment, and those might give you an unexpected range. If there's an obstacle, say over here, that wasn't in the original map you were given, then there's some probability that you're actually going to receive a smaller distance along this ray than you would have predicted if you just relied on the first factor, the actual distance to the obstacles given the map. So we have some probability that there's an unexpected obstacle, one not represented in the map, along any given ray. The third factor is kind of the opposite: failure to detect obstacles. Maybe there really is an obstacle over here, but for some reason your rangefinder just misses it altogether. With a good range sensor like a lidar that's not going to happen, but if you have a really bad range sensor you can miss obstacles altogether, or if you're using computer vision, which we'll talk about later, to figure out distances using a stereo depth algorithm, then it's possible that you just miss obstacles altogether. So that's another kind of error. And the fourth factor is randomness: some unexplained uncertainty, unexplained errors around the actual distance. We can take all of these different sources of uncertainty into account to write down a probability model. Let's say we're looking at just one particular ray, so we pick one particular ray for which we're going to write down P(z_t | x_t, m), where z_t is the range measurement, the distance measurement from your sensor corresponding to a particular direction. So we write down the probability of receiving some particular range measurement given that your robot is at state x_t operating in an environment with map m. Here's one reasonable model: P(z_t | x_t, m) = w_hit * P_hit(z_t | x_t, m) + w_unexp * P_unexp(z_t | x_t, m) + w_fail * P_fail(z_t | x_t, m) + w_rand * P_rand(z_t | x_t, m). These w terms are weights that sum to one, and the specific weights depend on your specific sensor. So we're taking each individual probability model here, P_hit, P_unexp, P_fail, P_rand, and forming a weighted combination of them. P_hit is the probability of receiving sensor measurement z_t, given that your robot is at state x_t with map m, if the ray actually hits an obstacle correctly; P_unexp is the probability of seeing some measurement if your ray encounters some unexpected obstacle, one that's not present in the map; P_fail if you fail to see an obstacle that was
there on the map; and P_rand is just some, let's say, Gaussian uncertainty around the true distance. Maybe we can think about some of these. Let's look at P_fail. Can you come up with a candidate for what this distribution might look like? So just this distribution over here, P_fail(z_t | x_t, m): what form could it take, what might it look like? In words, this is the probability of receiving some sensor measurement, assuming that your robot is at state x_t and is operating in an environment with map m, if along this ray that we're focusing on you miss the obstacle that is in fact present in the environment, present in the map. Right, we could have it equal to the probability of seeing z_t given x_t without knowledge of... yeah, exactly. So it typically looks something like this: if this is the sensor measurement z_t, say just a scalar, then you get something like a Dirac delta function at some max distance, call it z_max. What I mean is that with probability one you receive the maximum possible distance. The maximum possible distance just depends on your sensor; it could be, say, 10 meters. So if you miss an obstacle altogether, the rangefinder just says there's nothing there and returns whatever the maximum distance is for that sensor; that could be 10 meters or 100 meters depending on the exact kind of rangefinder you're using. Does that make sense? Okay, maybe we can look at one more: P_rand, the probability of this kind of unexplained randomness, unexplained noise in the distance measurements. What would be a reasonable distribution for that, again just along
one specific ray that we're looking at? All right, so something that's really reasonable is a Gaussian distribution around the true distance. Again, we're assuming that we know the map; we have some specific x_t, some specific location we're looking at, and some specific ray, so from that we can calculate the true distance; say it's over here. So we could just have some Gaussian distribution whose mean is that true distance, but with some variance accounting for the random measurements that your sensor is giving you. All right, if you're interested in the details, in some actual models for these things written out explicitly, chapter 6.3 in the Probabilistic Robotics book has a nice description of reasonable choices for these probability distributions. Any questions on this? Okay. The last thing on this topic I want to discuss concerns sensor models for feature-based maps. Mostly I've been focusing on location-based maps, but you could also think about feature-based maps, landmark-based maps, and instantiate different sensor models for them. An example: let's say you know that there's a particular landmark at a certain location. The sensor model could be over the distances to, say, multiple landmarks: the probability of receiving some z_t, say with dimension equal to the number of landmarks, given x_t and m, where m contains the different landmarks and their locations. This is actually similar to what we looked at with the non-deterministic filter, right? We had distances from different landmarks. So the sensor model could correspond to the true distance of the robot from a given landmark, maybe with some additional uncertainty. One interesting thing with landmark-based maps is what's known
as the data association problem. An example of this: suppose you have two doors in the environment, similar to the example we were looking at in the previous lecture, and suppose the doors in reality look slightly different; maybe there's some visual feature over here that's not present on the other door, but it's minimal, they look basically the same, just slightly different. A perfect sensor, in the setting where we're assuming a landmark-based map, would tell you: I'm exactly this distance from door one, and exactly this distance from door two, and then you can localize yourself perfectly. But what happens pretty often, especially if you're using vision to detect the landmarks (something we'll see when we discuss vision later), is that you mix up the doors: you think you're looking at door two when in fact you're looking at door one. And so you have to account for that source of error. That's data association: data association means correctly associating the sensor measurement I'm receiving with the landmark I'm actually looking at. So reasonable sensor models for landmark-based maps account for the probability of incorrect association, of thinking the robot was looking at a particular landmark when in fact it's looking at a different landmark. And again, if you want to look at some specific examples of these probability distributions, that chapter in the book has some nice examples. All right, the point of this discussion is just to give you a flavor of the kinds of probabilistic models you might use in practice, depending on your dynamics, your sensor, and so on. In practice it's really going to depend on
your specific system, but these are good starting points. Laser rangefinders, or rangefinders in general, are extremely common, and you might find yourself opening up Probabilistic Robotics a couple of years down the line if you're working with a system that has a rangefinder. So let me show you some videos of this working in practice. One example is the one we looked at in the previous lecture; we briefly discussed it when we watched the video. This is an example where we're just doing localization, right? The robot is assuming that it knows the geometry of the map. Let me connect it with some of the discussion today. If we look at one of these frames, okay, this is a good one: the robot actually has a rangefinder over here, and these are the different rays along which the robot is receiving distance sensor measurements. And this is an actual robot, right, not a simulation; someone is actually running a robot around in this environment. You see some of the errors we were discussing. For some of the rays, like this one over here, you're getting basically the exact correct distance, the actual distance to the wall; that's what the sensor is reporting, and what the sensor reports is represented by the blue lines. So this one is pretty good, this one is also pretty good; this one misses altogether, right, it doesn't actually see this obstacle, and so it's reporting what's probably the maximum distance; and this one as well misses the obstacle altogether and is just giving you the maximum distance. And there's some randomness as well: this one is giving you roughly the correct distance, but with some additional error
on top of that. So in practice what you would do is characterize the sensor, the kinds of errors it makes, the probability of missing something, the probability of seeing something that's not there, bake that into your probabilistic model, and then run a particle filter like they're doing in this example. Here's another example: this is a humanoid robot, the Aldebaran NAO, and there's, I think, a laser rangefinder stuck on the head of the robot. Let's see... yeah, so you see a little laser rangefinder on the humanoid. This robot used to be, I don't know if it still is, but it used to be used for the RoboCup competition, the robot soccer competition, so it used to be pretty popular; I've seen fewer of them these days. Again, here they're using a particle filter to localize the robot: the robot is trying to figure out its pose, and pose here means position and orientation, not the velocities, just the position and orientation. It has some map of the environment that's being represented over here; this is the actual environment, and this is the map of the environment. And you see that it gets the location, the particles coalesce, very quickly. Let me just play that again. In the beginning there's a lot of uncertainty; these red arrows, I think, correspond to where the robot could be, and then once the robot starts operating, pretty quickly it realizes that it must be here, so all the particles concentrate close to the actual location and orientation. And I think at some point it goes... yeah, it doesn't climb the stairs, I don't remember. Let's see... robots have gotten better, I guess, or our algorithms and the hardware have gotten better, and it's localizing itself pretty well, right, so it knows it's
climbed the first stair. I'm not playing the whole video; it's going to take a while to get up. Here's another cool example: this is a drone, actually a fixed-wing airplane. The video's narration: "...without the use of an external motion capture system are typically limited to slow and conservative flight; as a consequence, almost all of this research is done with rotorcraft in the hover regime. In the Robust Robotics Group in CSAIL at MIT, we've developed a fixed-wing vehicle capable of flying at high speeds through obstacles using only onboard sensors. The vehicle is equipped with an inertial measurement unit and a laser range scanner. All the computation for state estimation and control is done on board using an Intel Atom processor, similar to what is found in a commercially available netbook. We designed a custom airplane to carry the sensing and computation payload while still being able to maneuver in confined spaces. Our platform has a two-meter wingspan and weighs approximately two kilograms. At any given time the laser can only see a two-dimensional picture of the environment; laser scans are depicted with yellow points representing obstacles and blue representing free space. Even with a pre-computed map, individual 2D scans don't contain enough information to uniquely determine the 3D position, velocity, and orientation of the vehicle. To overcome this difficulty, we aggregate successive scans and combine laser information with the inertial measurement unit to perform state estimation. Another technical challenge is efficiently generating trajectories for the vehicle to follow, given the complicated vehicle dynamics." So this was from about 10 or so years ago, and the then-PhD students who did the work, Adam Bry and Abraham Bachrach, working with Nick Roy at MIT, ended up founding Skydio. How many people have heard of Skydio, just out of curiosity? All right, a few. It's, I think, the largest U.S. drone manufacturer.
They built these drones for videography, for outdoor activities, and it's one of the relatively small number of commercially reasonably successful robotics companies. And the work they did during their PhDs was on localization and planning for drones. I want to emphasize again that all of the videos I showed assume a map, right? The map of the environment is given to the robot; the map is being visualized over here, and the robot is just figuring out where it is, what its location and orientation are. In the next lecture we're going to think about the problem of mapping: if the robot knows its location but doesn't know the map of the environment, how does it figure out what the map is? That's where some of the background material from today's lecture will be useful. And in the lecture after that we're going to do simultaneous localization and mapping. So the next lecture we'll assume the robot knows its location but doesn't know the map, and then we'll assume that neither is known and put the techniques together. Question: how does it know where to go? So yeah, I guess that might actually be the next part of the video; just for time I won't play it, but I have a link to the video where they discuss some of the motion planning techniques they used. I forget exactly what they were using, whether it was RRT or some variant of it, but they're using some planner to actually figure out where to go, knowing where it is and knowing what the environment looks like. Good. All right, so I'll see you next time.
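As a recap of the rangefinder discussion above, here is a sketch of a four-component beam model for a single ray, roughly in the spirit of chapter 6.3 of Probabilistic Robotics. All the numbers (weights, noise scales, max range) are illustrative placeholders, and z_true would come from ray tracing against the map. Note that the book's convention puts the Gaussian around the ray-traced true distance in p_hit and uses an exponential for unexpected obstacles and a uniform density for p_rand, which is what this sketch follows:

```python
import math

Z_MAX = 10.0  # maximum sensor range; depends on the rangefinder (illustrative)

def beam_likelihood(z, z_true, w_hit=0.7, w_unexp=0.1, w_fail=0.1, w_rand=0.1):
    """P(z_t | x_t, m) for one ray; the four weights should sum to one."""
    # p_hit: Gaussian around the true (ray-traced) distance.
    std = 0.2
    p_hit = math.exp(-0.5 * ((z - z_true) / std) ** 2) / (std * math.sqrt(2.0 * math.pi))
    # p_unexp: unexpected obstacles make short readings more likely
    # (exponential density, cut off at the true distance).
    lam = 0.5
    p_unexp = lam * math.exp(-lam * z) if 0.0 <= z < z_true else 0.0
    # p_fail: a missed obstacle makes the sensor report its max range
    # (a point mass at Z_MAX, modeled here as a narrow spike).
    p_fail = 1.0 if abs(z - Z_MAX) < 1e-9 else 0.0
    # p_rand: uniform, unexplained noise anywhere in the sensor's range.
    p_rand = 1.0 / Z_MAX if 0.0 <= z <= Z_MAX else 0.0
    return w_hit * p_hit + w_unexp * p_unexp + w_fail * p_fail + w_rand * p_rand
```

In practice you would fit the weights and noise parameters to logged data from your particular sensor, then evaluate this likelihood per ray inside the measurement update of a particle filter.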
Introduction_to_Robotics_Princeton
Lecture_13_Princeton_Introduction_to_Robotics_Particle_filters_and_Kalman_filters.txt
All right, maybe we can go ahead and get started. Yeah, hopefully you had a good break. We're going to start off pretty much where we left off in the previous lecture. We're covering this topic of state estimation, localization, and mapping, and in the previous lecture we discussed a general framework for doing state estimation known as the Bayes filter. This was a follow-on to the technique we had discussed previously, the non-deterministic filter. The Bayes filter maintains what we call a belief, which is really just a probability distribution over the state space, and as the robot takes control inputs and receives more sensor measurements, it updates this belief iteratively. Just to remind you of the basic structure of the Bayes filter: at each time step we have two steps that we do for all states, so for all x_t. The first step is what we call the dynamics update: we propagate our previous belief, the belief from the previous time step, to the next time step through the dynamics of the system. We obtain a new belief, which we call bel-bar, at every state, and it's calculated by summing over all previous states the probability that you land at x_t, given that you were at x_{t-1} and took control input u_{t-1}, multiplied by the previous belief you had for the state x_{t-1}. So that's the dynamics update. The second step is to take into account the new sensor measurement the robot receives and update your belief using that, and that's where Bayes' rule comes in: the belief at x_t is P(z_t | x_t) times bel-bar of x_t, divided by P(z_t). This is the measurement update. And one more piece of terminology: this new belief, this new probability distribution we obtain once we've done the dynamics update and incorporated the sensor measurement, is called the posterior belief.
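The two steps just described can be sketched in code for a finite state space. The function name and the dictionary representation of the belief are my own illustrative choices, not notation from the lecture:

```python
def bayes_filter_step(belief, u, z, states, p_trans, p_meas):
    """One Bayes filter iteration over a finite state space.

    belief:  dict mapping each state to its prior probability
    p_trans: p_trans(x_next, x_prev, u) = P(x_t | x_{t-1}, u_{t-1})
    p_meas:  p_meas(z, x) = P(z_t | x_t)
    """
    # Dynamics update: bel_bar(x_t) = sum over x' of P(x_t | x', u) * bel(x')
    bel_bar = {x: sum(p_trans(x, xp, u) * belief[xp] for xp in states)
               for x in states}
    # Measurement update (Bayes' rule): bel(x_t) = P(z | x_t) * bel_bar(x_t) / P(z)
    unnorm = {x: p_meas(z, x) * bel_bar[x] for x in states}
    p_z = sum(unnorm.values())  # P(z_t), the normalizer
    return {x: p / p_z for x, p in unnorm.items()}
```

Dividing by P(z_t) is what makes the posterior a proper probability distribution; in the continuous case both the sum in the dynamics update and this normalizer become integrals, which is the computational difficulty discussed in this lecture.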
And the previous belief, at the previous time step, is called the prior belief. Okay, so that was the general structure of the Bayes filter. What I'm going to discuss today is motivated by a comment I made at the end of the previous lecture, which is that when your state space is continuous, your control input space is continuous, and your measurement space is continuous, these summations become integrals, and doing these computations, actually performing the dynamics update and the measurement update, computing the integrals in place of the summations, gets computationally intractable really quickly in general. So that's our starting point for today. We're going to discuss two things, focusing on robotic systems that have continuous state spaces, action spaces, and measurement spaces. The first is a particular setting, a particular set of assumptions, where the Bayes filter can be implemented exactly, and by exactly I mean without any approximation; moreover, we're going to be able to do this computationally efficiently. The integrals that show up in the Bayes filtering steps, it turns out, we can calculate exactly, analytically, in a setting that I'll describe. The second thing we're going to describe is basically an approximation to the Bayes filter that is broadly applicable beyond the specific assumptions we make for part one. Part one leads to something called the Kalman filter; if you've taken a course in control theory you may have seen it, and I'll give you an overview of it. The other approach, which works more broadly, so doesn't make the same restrictive assumptions that the Kalman filter makes, is called the particle filter. And these two, the Kalman filter and the particle filter, are probably the two most commonly used
instantiations of the Bayes filter for systems with continuous state spaces, control input spaces, and observation spaces. Any questions on this setup? Okay, let's start with the first part: what are the assumptions under which we can exactly and computationally efficiently implement the Bayes filter? It turns out that for linear Gaussian systems we can implement the steps of the Bayes filter exactly; the reference for this is chapter 3.2 in the Probabilistic Robotics book. If you think about what we need in order to implement the Bayes filter, we basically need to specify three things. One is the dynamics model: we need to say, if the robot is at state x_{t-1} and takes control input u_{t-1}, what's the distribution over the next state? That's this quantity over here. So that's one thing we need; the dynamics model is a probabilistic dynamics model. The second thing we need is some kind of measurement model, something that says: if the robot is at state x_t, can we write down a distribution for the sensor measurements it's going to receive? That's this quantity over here. And the third thing we need is some way to initialize the Bayes filter: some initial belief that the robot has. Once we have those three things, we can, at least conceptually, run the Bayes filter and do all the computations. So those are the three things we need to make assumptions about to make the Bayes filter computationally tractable, and I'll describe each of them in this linear Gaussian setting one by one. We'll start with the dynamics model. We're going to assume that the dynamics model has the following form: the state at some time t, x_t, equals some matrix A_t multiplied by the previous state, plus some other matrix B_t multiplied by the previous control input, plus an additional
term that I'll describe in a bit. Let me call this equation one. Okay, so this looks very much like the setup we had when we discussed LQR, right? When we talked about LQR we looked at continuous-time dynamics: we said x dot equals Ax plus Bu. Here we're looking at discrete-time dynamics; things are a bit easier in this setting. You can also do everything I'm going to do in the continuous-time setting, but just to keep things a bit more straightforward I'll assume things are in discrete time. So this is basically the discrete-time analog, at least if you ignore this plus-epsilon term, of the dynamics we assumed when we talked about LQR. The main new thing here is this epsilon_t. That's an additive term on the dynamics: your state nominally evolves linearly, so A x_{t-1} plus B u_{t-1}, but then there's this kind of uncertainty or disturbance term. We're going to think of epsilon_t as a random variable; it's something that describes our uncertainty about how the state will evolve. Moreover, we're going to make some assumptions about it: it's a random variable drawn from a normal distribution, and by normal I really mean Gaussian, a multivariate normal Gaussian distribution, and it has the same dimension as the state. So the dimension of epsilon_t has to match the dimension of the state. I guess just to quickly recall, hopefully this is something you've seen before, but just a reminder: a Gaussian distribution in the univariate case looks something like this, it's a bell curve, and it has some mean mu and some standard deviation, so plus or minus sigma. In the multivariate case, more generally, we have a probability density function:

p(epsilon_t) = det(2*pi*Sigma)^(-1/2) * exp( -(1/2) (epsilon_t - mu)^T Sigma^(-1) (epsilon_t - mu) )

So that's the probability density function in the multivariate case. Mu here is the mean of the Gaussian distribution and Sigma is a matrix, the covariance matrix. If you have a diagonal covariance matrix it means that all the elements of epsilon are independent of each other; in general you could have a covariance matrix that's not diagonal, so there might be some dependence among the different components of the random variable epsilon_t. The notation we'll use is N(mu, Sigma): a normal distribution with mean equal to mu and covariance equal to Sigma. Just to be explicit, mu is some n-by-1 vector, where n is the dimension of the state and also the dimension of epsilon, and Sigma is an n-by-n matrix. Again, in the univariate case mu is just a single number, the mean of your normal distribution, and the covariance is really just the variance, the standard deviation squared. Okay, so that's going to be the assumption on epsilon: at every time step you draw some noise, some uncertainty, from a Gaussian distribution given by some mu and some Sigma. We're actually going to make some further assumptions. Usually, and this is not strictly necessary, but just for convenience, we're going to assume that the mean mu is zero, zero-bar meaning the zero vector, and there's some covariance matrix which we'll denote as R_t. So instead of Sigma we're going to call it R_t, the same dimension, n by n, and potentially changing with time: at every time step you might have a different Gaussian distribution. All right, any questions on this? The main takeaway is that the dynamics are basically linear, but we have this additive uncertainty.
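As a quick sketch of what this dynamics model means in code (my own minimal example, not from the lecture; the matrices and noise covariance below are made-up toy values):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_state(A, B, x_prev, u_prev, R):
    """Sample x_t = A x_{t-1} + B u_{t-1} + eps, with eps ~ N(0, R)."""
    eps = rng.multivariate_normal(np.zeros(A.shape[0]), R)
    return A @ x_prev + B @ u_prev + eps

# Toy 2D system (position and velocity), with made-up numbers
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
R = 0.01 * np.eye(2)   # covariance of the additive Gaussian disturbance

x = np.array([0.0, 0.0])
u = np.array([1.0])
x_next = sample_next_state(A, B, x, u, R)
```

With R set to the zero matrix the sample collapses to the deterministic linear update A x + B u, which is exactly the "nominally evolves linearly" part described above.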
That additive uncertainty is described by a Gaussian. All right, so that's the first thing we need; this specifies our dynamics model. For any state and control input, any x_{t-1} and u_{t-1}, we now have a probability distribution defined by the dynamics plus the Gaussian random variable, which we're calling epsilon. The second thing we need to specify is some kind of measurement model, a measurement or sensor model, and again we're going to assume that the measurement model has a linear form. So the sensor measurement at some time t, that's z_t, we're going to assume is a linear function: some matrix C multiplied by the state, plus some uncertainty delta_t, where again delta we're going to assume is a Gaussian random variable, with zero mean and covariance equal to Q_t. This is a matrix with the dimension of the sensor measurement squared: if the sensor measurement is m by 1, say, then Q_t is an m-by-m matrix. What this is saying is that nominally, if you ignore the uncertainty, the sensor measurement your robot receives is a linear function of your state: you just take your state, multiply it by this matrix C, which of course depends on the specific sensor that your robot has, and that serves as the measurement. Except that we have some uncertainty, so we don't know exactly which sensor measurement we're going to receive; on top of the nominal measurement we have some Gaussian distribution which describes our uncertainty about the sensor measurement. All right, maybe just as a quick check: what's the mean of the random variable corresponding to the sensor measurement? Sorry, go ahead. Not quite; by sensor measurement I mean this part. So what's the mean of this random variable? I see some answers, maybe in the first row, go ahead. Yes, it's C_t times x_t. Right, we're just
taking this, which is some vector, and we're adding a zero-mean Gaussian on top of it. If I take a Gaussian and add something to it, I'm just shifting it, and so the mean shifts from zero to C_t x_t. Okay, so what this is saying is that we have a probabilistic model of our sensor measurements: at any given state x_t we can calculate the probability of receiving sensor measurement z_t given that state x_t, and that's coming from this model, linear plus Gaussian. The last thing we need to implement the Bayes filter is the initial belief. Before the robot takes any actions, before it receives any sensor measurements, it has some belief, or needs to have some belief, about its state, to basically kick-start the Bayes filtering process. And we're going to assume that this belief, the belief at time step zero, is also a Gaussian: its mean we're going to call mu_0 and its covariance we're going to call Sigma_0. All right, any questions on any of those three things, the dynamics model, the measurement model, the initial belief? Okay, so the main message, the main trick that makes everything work in this setting, maybe I'll call it the observation. The proof of this observation, if you're interested, is in chapter 3.2.4 in the Probabilistic Robotics book. It's a bunch of linear algebra, but the observation itself is relatively easy to state: under the assumptions that we've made, so the dynamics being linear plus Gaussian, the measurement model being linear plus Gaussian, and the initial belief being Gaussian, the belief at some time step t, so the belief over the state at time step t, is also a Gaussian. And this is the main fact that makes things nice in this setting. We start off with a Gaussian belief at time step
zero, we propagate it through the dynamics of the system, which are linear plus Gaussian, and it turns out that when you do that propagation, so when you do the dynamics update of the Bayes filter under these assumptions, you get another Gaussian distribution. You then get some sensor measurement, you apply Bayes' rule to get the posterior belief, and once you do the measurement update, that belief is also a Gaussian. So at every time step you maintain a Gaussian belief on the state. All right, so this instantiation of the Bayes filter, when you make these assumptions on the dynamics, the measurement, and the initial belief, has a special name: it's called a Kalman filter. It was developed, I think, back in the 1960s and remains, like I said, one of the most popular techniques for doing state estimation. I'll write down what the steps are. I'm not going to go through a derivation, that could take a while, and this is in chapter 3.2.1 in the Probabilistic Robotics book. Like I said, it's an instantiation of the Bayes filter, it is the Bayes filter, just under the assumptions that we made on the dynamics, the measurement model, and the initial belief. It's going to have two steps that we implement at every time step t: the first step is the dynamics update, the second step is the measurement update. I'm going to just write down the equations and then we can analyze them for a bit. All right, so this is the first step, the dynamics update step. At the previous time step we had some belief; say at time zero, before the robot did anything, we assumed it had some Gaussian belief with mean mu_0 and covariance Sigma_0. This part is before the robot takes into account any sensor measurement that it receives at the new time step. So this is the dynamics update, and specifically this belief-bar: belief-bar, again, is when you just do the dynamics update, no measurement
update. This belief-bar, like I said, is going to be a Gaussian distribution with mean equal to this mu-bar_t and covariance equal to Sigma-bar_t. So what's the main message here? The specific terms are not super important; the main thing I want to convey is that you can do this computation, right, you can literally type this up in like two minutes in Python. These are just matrix multiplications and a matrix transpose, and that's it. At the previous time step you have some belief, which is a Gaussian distribution with mu_{t-1} and Sigma_{t-1}; we're assuming that we know the dynamics and that they're linear plus Gaussian, so A_t and B_t are known, mu_{t-1} is known, the control input that the robot executes, u_{t-1}, is known, Sigma_{t-1} is known, and R_t, which was the covariance matrix corresponding to the dynamics noise, is going to be known as well. So everything here on the right-hand side is known, and we can just do these matrix multiplications and get our updated belief, which is propagating the previous belief through the dynamics. The second step is the measurement update; again I'll just write down the equations. This has three steps: we calculate this intermediate variable K_t, which is called the Kalman gain, and then from that we calculate mu_t and Sigma_t. Again, the specific operations here are not super important to the main point I want to make; the main point is that you can again just implement this in Python, you can use numpy to do these matrix multiplications and the inverse, and that's it. Everything on the right-hand side is known or assumed to be known, including the mean and the covariance once you do the dynamics update. At the end of this measurement update you get a belief which is a normal distribution with mean mu_t and covariance Sigma_t.
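To make that concrete, here is a minimal numpy sketch of one full Kalman filter iteration, using the standard predict-then-correct equations under the lecture's notation; the toy 1D system at the bottom is made up for illustration:

```python
import numpy as np

def kalman_step(mu, Sigma, u, z, A, B, C, R, Q):
    """One Kalman filter iteration: dynamics update, then measurement update."""
    # Dynamics update (prediction): propagate the belief through the linear dynamics
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Measurement update: compute the Kalman gain, then correct the prediction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Toy 1D example: x_t = x_{t-1} + u + eps, z_t = x_t + delta
A = np.array([[1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
R = np.array([[0.5]])   # dynamics noise covariance
Q = np.array([[0.1]])   # measurement noise covariance
mu, Sigma = np.array([0.0]), np.array([[1.0]])
mu, Sigma = kalman_step(mu, Sigma, u=np.array([1.0]), z=np.array([1.2]),
                        A=A, B=B, C=C, R=R, Q=Q)
```

Because Q is small relative to the predicted covariance here, the posterior mean lands close to the measurement z = 1.2 and the posterior variance shrinks, matching the intuition about trusting an accurate sensor.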
You do this at every time step: you take this new belief, you propagate it through the dynamics for the next time step, so you do the dynamics update, and then you do the measurement update once you receive a new measurement. All right, so just pictorially, it might be helpful to have a picture in your head instead of the equations. Yes, this one is the sensor measurement, exactly, and the sensor measurement appears here, as you would expect; that's the z_t. All right, I'm going to draw pictures in a kind of univariate setting, just a one-dimensional state space. Let's say this is the state x_t, which is assumed to be a scalar. For the sake of concreteness, actually let me make it simple: x_t equals x_{t-1} plus some positive constant a plus some randomness epsilon. At the initial time step, at time equals zero, let's say our mean is something positive, so it's over here, and we have some standard deviation, some variance, for the Gaussian that describes our uncertainty. So at the next time step, when we do the dynamics update, what is the Gaussian going to look like under this assumption? We're saying that the state x_t is the state x_{t-1} plus some constant a plus some randomness. If we take this initial belief, so let's call this the belief at x_{t-1}, what is our dynamics update going to do, just pictorially? Yes, exactly, we shift it by a, and this uncertainty comes in as well: if there were no uncertainty you would literally just shift it by a, but because we have some uncertainty it's going to inflate the Gaussian a bit. So for the next time step, once we do the dynamics update, let me just do it on the same picture: we just take this
and we shift it, so that's going to be belief-bar at x_t, the belief that we get before we've taken into account a sensor measurement but after we've propagated the previous belief through the linear dynamics. And, by that observation which I didn't prove (if you want to see the full proof it's in that chapter), the claim is that once you do the dynamics update, and also once you do the measurement update, if you start off with a Gaussian you always maintain a Gaussian belief. So we started off with a Gaussian, we just shift its mean and inflate it a bit, depending on what epsilon is, and the specific mean and covariance that you calculate come from these equations; that's how you would implement it. Say at the next time step you receive a sensor measurement. So at time t you receive some sensor measurement, maybe it's over here, I'll call this z_t. What would you expect the new belief to look like, just qualitatively, pictorially? What would the mean look like, for instance? Say again? Yes, it's going to be in between, basically, right? Before we take the sensor measurement into account we have this belief, the dotted line, and now our sensor measurement is saying: well, really, the sensor thinks that this is the state. But of course we know that the sensor is not perfect; there's some uncertainty associated with the sensor measurement. Still, the sensor is giving us a hint, essentially, that the true state is actually further to the right. So once we do the update we're going to get something, in this orange chalk, that looks something like this: we shift belief-bar a bit to the right, and we might shrink our uncertainty a bit. Maybe we can think about that as well. Consider the sensor model, which is up there; part of the sensor model is this delta_t, so
that's our uncertainty on the sensor measurements. Let's say Q_t, which in this case is just a scalar, so it's a variance instead of a covariance, is very small, so the variance is very small. What's going to happen to the standard deviation and also the mean in that case? Go ahead. Exactly, yes. What we're saying is that we really trust our sensor measurement: we have very little uncertainty about it, so if the sensor is telling us we're here, we probably are here. The mean is going to shift more towards the sensor measurement and we're going to reduce our uncertainty significantly; this orange curve will have a mean that's close to z_t and a standard deviation that's very small. Conversely, if we're very uncertain about our sensor measurements, so the sensor model has a high variance or covariance, then we're not going to trust the sensor that much, and our update from belief-bar to belief, when we receive a sensor measurement, is not going to be that drastic, because we're saying the sensor is not really giving us that much information and we shouldn't trust it that much. So that's the intuition. Question? Yes, good question. So the sensor measurement does reduce the uncertainty, but actually there could be other factors: even with an epsilon, so some uncertainty on the dynamics, we could potentially reduce the amount of uncertainty we have, the standard deviation. Maybe it's not totally obvious, but can you see an example? I see a hand, go ahead. Yes, exactly. If you have a stable dynamical system, where all the states are converging towards zero, then your uncertainty shrinks; you can kind of just sit back and do nothing, in a sense. You can be confident that, because all states are converging to the origin, say, after a while you should believe
that you're close to the origin. So even with this epsilon it's not necessarily the case that you're going to increase the amount of uncertainty; if things are stable, if things are contracting, even the dynamics can help you reduce uncertainty, and then yes, the sensor measurement update can further help you reduce uncertainty. The specific transformations, going from the belief at the previous time step to the updated belief after the dynamics and then the updated belief after the measurement, the specifics of how you update the mean and the standard deviation, those are given by these equations; these equations capture the iteration that I sketched out. Other questions? Okay. All right, so that's the Kalman filter. The Kalman filter has some pretty strict assumptions, so do people have thoughts on when the Kalman filter is maybe not a good... or, before that, a question? That's a good question: how much does the measurement update... yes, K is this intermediate thing we calculate to do the update. K is the thing multiplying z_t and C_t mu-bar_t, and K itself depends on the measurement model, so on Q_t. And let me see, I think I got those matrices mixed up earlier: which one was Q_t, the dynamics or the measurement? It should be the measurement, okay, good. So Q_t is the measurement noise covariance; that makes sense, K_t depends on the measurement covariance, and this update that you do on the mean and the covariance depends on K_t, which itself depends on Q_t. So yes, the change that you make from belief-bar to belief depends on K_t, but K_t, like you said, depends on Q_t, exactly. I think that's the main intuition. If you trust your sensor model a lot, so if delta_t, let's say in the extreme case where you're absolutely certain, has no standard
deviation, and let's say C_t is the identity matrix, then you can actually forget about the dynamics, right? You can just trust the sensor: you know your sensor is certainly correct, and in that case you would collapse all your uncertainty onto the sensor measurement that you get. As a sanity check you can go through that and compute K_t; you might get some divide-by-zero issues, so you can take limits as Q_t, this covariance, goes to zero. All right. Okay, so when do you think the Kalman filter is not useful? It is useful when the dynamics are approximately linear: for hovering, for example, we implemented the LQR controller by linearization, and if we can approximate your dynamics with a good linear approximation then the Kalman filter is good. But what about the initial belief, the assumption that you have a Gaussian belief on the state? Does that seem reasonable, or can you think of examples where it might break down? Maybe I'll sketch out an example. The main limitation, at least for robotic settings, is that the Gaussian distribution is unimodal. Actually, before that, let me just make a quick comment with pointers to additional techniques if you're interested. If the dynamics are not linear, we can approximate them with linear approximations, which leads to something called the extended Kalman filter. There's a chapter on it in the Probabilistic Robotics book if you want to see that, but it's basically taking nonlinear equations, linearizing them, and also updating the linearizations as the state changes. But yes, one of the main limitations of the Kalman filter is that the Gaussian is unimodal: we cannot, fundamentally, capture beliefs that have two or more modes. So as an example, just to help us think things through, imagine that
you have a robot with a door sensor. What I mean by a door sensor is: if the robot is close to any door it can detect that, so maybe it has a camera, it sees that there's a door, and it can basically sense its proximity to some door. And let's say we know that the robot is going to operate in a specific environment: we have this corridor, and we know that we're going to place the robot somewhere in it. Imagine that we have some Gaussian belief, maybe with very high uncertainty, so very wide, a very large variance, and let's say the robot at some point, maybe at the very first time step, detects that it's close to a door. Just qualitatively, what should its belief look like once it receives that measurement update? Let's say it doesn't even move, so there's no dynamics update, just a measurement update. What would its belief look like, given that it senses that it's close to some door? Maybe, go ahead. Yes, exactly. What we want our robot's state estimate to be is maybe something like this, right? It knows that it's close to some door, so it should be pretty confident that it's either close to this door or close to that door, and it shouldn't really believe that it's somewhere in the middle; it should assign low probability to being in the middle, because the middle is not close to either of the doors. If you have a Kalman filter then you're fundamentally committing to Gaussian beliefs, and this kind of bimodal belief, or multimodal beliefs in general, we cannot capture well. We can capture it clumsily, but we can't really capture it well with a Gaussian. With a Gaussian there are a couple of options. One option is to do
something like this: you can place a Gaussian whose mean is at one of these doors (this is not to scale), but that's not really satisfactory, because we could equally well have been close to the other door. We could pick the other door, but that has the same problem. Or we could split the difference and say, okay, maybe I should think that I'm in the middle. But in some sense that's the worst thing you could do, right? Taking the average of the two doors and putting your probability mass in the middle is almost exactly wrong: you know you're not there, so you shouldn't believe that you're in the middle. So yes, this inability to capture multimodal distributions is a pretty big limitation for robotics problems; I'll go through some concrete examples. That's what we're going to try to fix with the next algorithm. We're going to describe a different way to implement the Bayes filter. This time it's not going to be exact; we're going to make some approximations. It's called the particle filter, and this is chapter 4.3 in the Probabilistic Robotics book. The main idea behind the particle filter is to give up on exactly representing this belief distribution. What we're going to do instead is approximately represent beliefs using samples, which in this context are known as particles. Let's go back to this door example and say we want to represent some distribution that looks like this. We know that we can't do it with a Gaussian. We could try to parameterize this distribution somehow, maybe write it down as a mixture of Gaussians, but I can come up with other examples where even that becomes a bit clumsy. So we're not going to capture this probability density function exactly; what we're going to do is approximate this distribution
uh with a bunch of particles so just a bunch of samples so yeah maybe something that has a lot of density over here so a lot of particles a lot of samples over here and then you kind of thin out so the number of particles you have elsewhere is very small so we're never going to so with the particle filter I'll describe the steps in a bit but we're never going to represent this kind of exact probability density function the probability distribution we're always going to be working with samples so you can think of samples as kind of candidates for where the robot could be um and the the density of samples so if you have a lot of density over here a lot of density over here uh that will represent confidence that your robot is like in fact over here or over here so I guess a slightly more formally foreign [Music] we're going to maintain a I believe or the the state with with the wp filter but this belief will be represented foreign [Music] which again in this context are called particles so the set of samples that is at any time T we can denote by s t and it's going to be this list of of candidates up to capital M where m is going to say as large that's exactly what large means depends on the specific problem but just think maybe 10 000 samples or something like that really in practice with whatever fits on your computer uh like computationally efficiently um but all right so yeah I guess the main point is that we're going to change our representation of the belief so instead of saying that our belief is going to be gaussian uh like with the color filter we're going to say that our belief is represented by some set of particles and we're just going to keep updating that set of particles uh at every time step we're going to do an NX update or sensor management update but at every time step We're Not Gonna explicitly kind of represent some complicated distribution we're just gonna approximate some potentially complicated distribution with a finite number of particles um 
Yes, I'll go through the steps of the algorithm, but any questions on that point? Question about the initial distribution: it kind of depends. If you really know nothing, then you could just spread the particles uniformly across the environment. Usually we know something, maybe that the robot is likely to be on the ground floor rather than the first floor, things like that, so you can initialize the particles to roughly match that expectation. It's pretty problem-dependent; I think it's rare that you really know nothing, but if you truly know nothing, then just spread them uniformly across the state space. Question: as you go through the trajectory, the optimal number of particles that you might want to use to represent... yes, good observation, I'll come to that in a bit. So let me describe the algorithm, and then we'll describe some of the challenges and the variations. Okay, what I'm going to do first is actually play a video. This video is narrated by one of the main proponents of particle filters in the robotics context, Sebastian Thrun, who's also one of the authors of the Probabilistic Robotics book. In it he gives a qualitative explanation of particle filters, which hopefully will give you the main idea, the main intuition, and then I'll describe the formal steps of the algorithm off of that. It's about three or so minutes long, so I'm just going to play it, then maybe we can discuss it and describe the actual algorithm. [Video] Actually, can you hear that at the back? "So this is a great segue to one of the most successful... again, the topic here is robot localization, and here we're dealing with a real robot with actual sensor data. The robot is lost in this building; you can see little rooms and you can see corridors, and the robot is equipped with
range sensors. These are sonar sensors that measure the range to nearby obstacles, and its task is to figure out where it is. The robot will move along the black line over here, but it doesn't know this; it has no clue where it is, and it has to figure out where it is. Now, the key thing in particle filters is the representation of the belief. Whereas before we had discrete worlds, like our sun-and-rain example, or we had a histogram approach where we cut the space into small bins, particle filters have a very different representation: they represent the space by a collection of points, or particles. Each of these small dots over here is a hypothesis of where the robot might be. It's a concrete value of its x location, its y location, and its heading direction in this environment, so it's a vector of three values. The set of all those vectors together forms the belief. So particle filters approximate the posterior by many, many, many guesses, and the density of those guesses represents the posterior probability of being at a certain location. With this framing, let me run the video, and you can see in a very short amount of time that the range sensors, even though they're very noisy, force the particles to collect in the corridor. There are two symmetrical clusters, this one over here and this one over here; that comes from the fact that the corridor itself is symmetric. But as the robot moves into the office, the symmetry is broken: this office looks very different from the office over here, and those particles die out. Now, what's happening here, intuitively speaking: each particle is a representation of a possible state, and the more consistent the particle is with the measurement, so the better the sonar measurement fits the place where the particle says the robot is, the more likely it is to survive. This is the essence of particle filters: use many particles to represent a belief, and then let those particles survive in proportion to the measurement probability, and the measurement probability here is nothing else but the
consistency of the sonar range measurements with the map of the environment, given the particle's place. Let me play this again. Here's the maze, the robot is lost in space, and again you can see how, within very few steps, the particles consistent with the range measurements all accumulate in the corridor. Toward the end of the corridor only two particle clouds survive, due to the symmetry of the corridor, and then the spurious particles finally die out. This algorithm is beautiful, and you can implement it in less than 10 lines of program code. So given all the difficulty of talking about probabilities and Bayes networks and hidden Markov models, you will now find a way to implement one of the most amazing algorithms for filtering and state estimation in less than 10 lines of C code. Isn't that amazing?" [End of video] All right, I think it actually is amazing. I won't set it as a challenge to do it in 10 lines, but you will implement it in the next assignment in Python, and you'll see that it's actually not that complicated. All right, questions on the video before we describe the algorithm? Yes. Okay, good question. We're just focusing on the state estimation question; in this context it's called localization, the robot just wants to figure out its x, y, and theta. We are assuming that it has a map of the environment; we'll relax that assumption in about two lectures, but at least for now it's just its own state that the robot doesn't know. Question: yes, good question, the black curve with the arrow is the ground truth, the actual motion of the robot; it moved along that curve, so it's not doing a random walk, it's going left and then into that room in this example. I think someone was just remote-controlling the robot in this case. Good. Okay, so let's look at the algorithm, the steps of the algorithm. It has
the general structure of a Bayes filter: there's going to be a dynamics update step and a measurement step. At each time step we do these computations. Capital M, again, is the number of samples that represents our belief. For every sample — or let me call it a particle — so for every particle, indexed by little m, we're going to sample a candidate next state for that particle. We're assuming access to some dynamics model: some model that assigns the probability of getting to state x_t given that the robot was at state x_{t-1} and took control input u_{t-1}. So this line up here, that's the dynamics update: we take every particle and we sample some candidate for the next state, given that you started off at the previous time step at that particle. Additionally, for every particle we compute w_t. This is a weight — otherwise known as the importance in this context — and this is what the video was referring to as the consistency. So w_t for every particle is the probability of receiving sensor measurement z_t, where z_t is the actual sensor measurement that the robot received at time step t. So w_t is the probability, or likelihood — the density function — of obtaining the measurement z_t given that you are at location x_t^m. We're doing this for every particle: for every little m, one through capital M, we're asking what the likelihood is of receiving the measurement that we received, z_t, if we assume that we're at the location specified by this particle. So if w_t is high, it means there's a high likelihood of seeing the sensor measurement that the robot saw, given that it was at location x_t^m. All right, so that's the first block, the first for loop. Then we initialize S_t — S_t is going to be our new particle representation at this new time step — so we just initialize that as an empty array. Then, again for every particle index, going from m equals one to capital M, we draw some index i with a probability that's proportional to w_t. Right, so before this last for loop, for every m going from one to capital M we had a w_t; you can normalize those, and you can think of that as a probability distribution over the indices, over the particles. In this step we're basically throwing away particles that have a low likelihood, a low w_t. So, more details on this: we're drawing capital M samples with replacement — we're not throwing things away once we draw a particle — so we're drawing capital M particles with replacement from the set of particles that we had after we did the dynamics update. And so the main intuition is that we're getting rid of particles, with higher probability, if they have low importance: if they were not consistent with the sensor measurements, if w_t was small, then there's a low probability that they're going to be sampled in the second block. All right, questions on this? Yeah — so since the number of particles we represent is always constant, we're going to get more and more duplicates, right? So you might have multiple particles at the exact same location, and that's perfectly fine, that's allowed by this. And that means we're very confident: if we have lots of particles at the exact same location, it means that we're very confident that the robot is around that location. Question? Yes — then you know your belief pretty precisely, exactly. Yeah, so the belief would pretty quickly coalesce, pretty quickly concentrate around one location, and I guess that's what we want, ideally. I think it's interesting that the particle filter has this ability to represent
multiple beliefs, which might be important at the first few time steps that the robot operates in; but then you would hope that if the sensors are good and if your environment doesn't have a lot of symmetry, then you get pretty confident about where you are pretty quickly. All right, other questions on the algorithm? I'm happy to go over it again or clarify anything. Yes — yeah, uniformly from the state space. So at the initial time step, before you see any measurements, you have some initial belief about where your robot could be, and I guess that was related to the question about how you choose that initial belief. If you have some knowledge about where the robot is likely to be, then you could have more particles there; if you have no knowledge really about where the robot is, then you just sample uniformly. And then you go through these steps: you take all of your particles, you propagate them through the dynamics, you compute these weights — which tell you how likely the sensor measurement was to be received from a given location, from a given particle — and then you throw away particles using this resampling step: you sample particles that have a high weight with high probability. Question? Yes — so in this second block, the second for loop, we're sampling with replacement. We have capital M particles, and we're drawing samples from that set capital M times, but we're drawing with replacement, so we can draw one particle more than once. Let's say this particle, say particle 10, has a very high w; then we might sample particle 10 a bunch of times, and if we sample particle 10 a bunch of times, it must mean that we're not sampling other particles, the ones with a low weight — so we're throwing those away. That's the mechanism by which the particle filter concentrates our belief. Does that make sense? Good. Other questions? Okay, um, yeah, so
I guess if you're interested in more details, chapter 4.3 is the reference for this. I just want to clarify some notation as well, which in the past has caused some confusion: the difference in notation between the tilde and the equals. What I mean by the tilde is that you sample something from a distribution. So if you assume that your robot was at x_{t-1} superscript m — the previous time step — and took control input u_{t-1}, we're assuming that we have some probabilistic model, so linear plus Gaussian or something more complicated, and we can sample some particular state from that conditional distribution. That's the tilde notation. The second thing is the computation of the w: what we're doing there is evaluating the density function of the sensor measurement model. We're asking how likely it is that we received the sensor measurement z_t if the robot was in fact at state x_t superscript m. So these have different semantics: one is sampling from a distribution, the other is evaluating the probability density function. Questions on that? Okay. All right, so now we can get back to this question about variants of the particle filter, and maybe changing the number of particles we use at every time step. What I described is the vanilla version, the most basic version, which makes sense to start off with if you're encountering a new problem. As I mentioned, this is one of the most popular algorithms in robotics for doing state estimation, and particularly in the context of localization, which is the problem of figuring out where a robot is in some environment. Someone asked whether we're assuming we have a map of the environment — in this video the robot had some knowledge of the geometry of the environment that it's going to operate in, how the rooms look, and so on — and all it's doing is
updating its belief about where it is as it receives more sensor measurements. Later on we're going to extend this to doing mapping as well: the robot doesn't know what the environment looks like and doesn't know where it is, so it has to both figure out where it is and also what its environment looks like. So the particle filter is simple to implement and works quite well, but it has some limitations. Specifically, if the state space dimension is very large, or if the beliefs are extremely complicated — many different modes in a high-dimensional space — then you might need lots and lots of particles to get a good approximation of the belief. There are clever variants where you resample or regenerate particles, or increase the number of particles you represent, as time moves forward: if you see that the belief is getting more and more complicated, then you can adaptively increase capital M, the number of particles that represent your belief. So that's definitely one of the popular variants, and there's a bunch of tricks that people have come up with to get particle filters to work well; some of them are described in the chapter I referenced. I guess that sort of addresses your question. Okay, good. Any other questions? All right, so I'll see you on Thursday.
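The vanilla algorithm from this lecture — sample each particle through the dynamics, weight by the measurement likelihood, then resample with replacement — can be sketched in a few lines of Python. The robot model here is hypothetical: a 1-D robot with Gaussian dynamics noise and a Gaussian sensor that measures position directly (the noise values and sensor model are illustrative assumptions, not the sonar setup from the video).

```python
# Minimal sketch of the vanilla particle filter described above.
import math
import random

def gaussian_pdf(x, mean, std):
    """Density of N(mean, std^2) at x — used to evaluate the weights w_t."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def particle_filter_step(particles, u, z, motion_std=0.1, sensor_std=0.5):
    """One time step: dynamics update, importance weighting, resampling."""
    # Dynamics update: sample a candidate next state for every particle.
    proposed = [x + u + random.gauss(0.0, motion_std) for x in particles]
    # Weights: likelihood of the actual measurement z given each particle.
    weights = [gaussian_pdf(z, x, sensor_std) for x in proposed]
    # Resample M particles with replacement, proportional to the weights.
    return random.choices(proposed, weights=weights, k=len(particles))

# Usage: the robot truly sits near x = 5 and repeatedly measures its position.
random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(1000)]  # uniform initial belief
for _ in range(20):
    particles = particle_filter_step(particles, u=0.0, z=5.0)
estimate = sum(particles) / len(particles)  # belief concentrates near x = 5
```

Note the duplicates that `random.choices` produces are exactly the mechanism discussed above: high-weight particles get copied many times, and the belief concentrates around them.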
Introduction_to_Robotics_Princeton
Lecture_18_Princeton_Introduction_to_Robotics_Optical_Flow.txt
pretty much — all right, let's go ahead and get started. The plan for today, as I mentioned in the previous lectures, is to spend the whole lecture talking about optical flow. This is one particular problem in computer vision that has particular relevance to robotics, as I'll explain in a bit. So here's what we mean by optical flow. We have some sequence of images — a video that's playing on the right. The camera here is static, and what we're looking at, at every pixel, is the apparent motion in the image. If you look at the corners of the video, these vectors have very small magnitude; if you look at the portions that correspond to the cars or the trucks that are passing by, you can see that the arrows at those pixels have larger magnitude. The direction corresponds to the direction of the apparent motion, and the magnitude corresponds to the magnitude of the apparent motion. On the left, it's just the magnitude of the motion that's being visualized: as you can see, portions where a car is moving have a higher magnitude of optical flow. And yeah, as we mentioned a couple of times before, optical flow is really crucial for the operation of drones: all of our Crazyflies — other drones as well — have this downward-facing camera that is doing a computation similar to what we saw on the previous slide, to estimate the drone's velocity. As part of the assignment that's due next week, you'll see exactly how that computation works, and how you can use optical flow to figure out the drone's velocity. Any questions? Yeah — so you could use accelerometers; what you'd need to do then is integrate the acceleration to get the velocity, and then if you want position you integrate that again. So if your acceleration measurements are a little bit noisy or slightly wrong, and you do that integration twice, that compounds the errors, and you get a larger error in your velocity and your position. With optical flow you're measuring the velocity more directly, so there's one less step of error propagation: if you want positions, first you want velocities, and you get those directly out of optical flow. Yep, good questions. Okay. All right, so here's a pictorial description of what we're going to do. We have some image at some time t — let's say it's a grayscale image; it could be an RGB image — so I(x, y, t) is the intensity of light at pixel location (x, y) at time t, if we're looking at a grayscale image. Then we have a corresponding image at the next time step, t+1, and in this sequence from t to t+1 some of the objects in the scene have moved around. What we're trying to estimate, for each pixel, is where things in the scene seem to have moved from the previous time step to the current one. So we want to do this for every pixel: estimate the pixel motion from the image at time t to the image at time t+1. Okay, so what I really want to emphasize is that optical flow is the apparent motion of objects — it's not necessarily the actual motion of objects in the scene. And just to hammer this home, let's think about a couple of examples. Can you think of examples where there's no optical flow — no apparent motion — but there is actually relative motion? So you have a camera that's looking at some environment, the camera is moving relative to the environment, but there's no optical flow — zero optical flow, let's say, everywhere, or maybe in some portion of the image. An example would be a camera inside of a car looking at the inside of the car? Yeah, that's fair. So in that case there's no relative motion between the camera and the car, but there is relative motion
between the camera and the outside world. Other examples? Yes — like going down the hallway? Yeah, exactly. So if your camera is moving in a completely textureless environment — an extreme version of this is you place your camera in front of a black wall and move the camera around — then no matter where the camera is, it sees exactly the same scene, because there's no texture. Okay, so let's think about the other case: there's no relative motion between the camera and its environment, but there is optical flow. Go ahead — yes. Yeah, perfect: a changing light source, changing intensity — the sun or some other light is moving around. In that case the camera is fixed relative to the environment, but you could still see changes in the image because of the changes in the lighting. Okay, so apparent motion could be due to lots of different factors: the most direct one is that the camera is actually moving relative to the environment, or the environment is moving relative to the camera, or lighting changes. Okay. So optical flow actually seems to be a really crucial element not just for robots but also for biological systems; there's a couple of really interesting videos. Whether soaring through the trees or landing gracefully on a tiny telephone wire, a bird can seem like it's performing a great acrobatic feat — just how does it pull off those complicated maneuvers at such high speeds? It turns out that birds rely on a trick of the eye that even humans can perform: they're using something called optic flow. Optic flow is the way our eyes perceive motion as we travel through a landscape — for example, it's the illusion that trees and buildings are passing by as we drive down a city street. The greater the optic flow, the faster things appear to be moving. Along with his colleagues at the Queensland Brain Institute, Partha Bhagavatula, then a graduate student at the Australian National University, performed a series of experiments testing how birds take advantage of optic flow. A recreation shown here was performed by a starling, but they originally used budgerigars. First they trained a bird to fly through a narrow corridor with either horizontal or vertical lines painted on the walls. Flying by the vertical lines should give the appearance of more movement, creating greater optic flow. In both cases the birds wound up flying straight down the center of the corridor, but when the lines were vertical they flew slower, indicating that they adjusted their speed based on the amount of motion they perceived. The scientists then let the birds fly through a corridor with one vertical wall and one horizontal wall. In this case the birds didn't fly straight down the middle of the corridor; instead they flew closer to the wall with the horizontal stripes. But why? It turns out that flying farther away from the vertical stripes decreased the apparent speed on that side, so the scientists think that, in order to avoid collisions, birds try to keep the same amount of optic flow in both their eyes. But the researchers didn't stop there: they then had the birds fly through yet another corridor, this time with one vertical wall and one completely blank wall. The birds still steered clear of the striped wall — but this time so much so that they occasionally collided with the blank one. The results of these experiments show that birds use optic flow to fine-tune their navigation when maneuvering through narrow spaces at high speeds. Understanding how birds make these delicate modifications could have major implications in the real world: it could help us build better flight navigation systems for man-made flying machines, or it could help us create structures like wind turbines and skyscrapers that are more visible to birds, so they can keep on soaring. Yeah, so it's pretty interesting that birds basically balance the amount of optical
flow and use that for doing obstacle avoidance or navigation, and you'll see an example — not exactly that, but something similar — in the current assignment. The other thing that optical flow allows you to do is figure out the time to collision with an obstacle: if you're moving toward some obstacle, the optical flow around that obstacle lets you figure out how quickly you're going to hit it if you keep moving in that direction, and that's something you'll also see in this assignment. There's actually another really interesting example of optical flow in biology. This is David Attenborough, one of his videos; it's about two minutes, so I'll let it play straight through. Bee workers are able to send complex messages. In the wild their nests are sometimes out in the open, but mankind has persuaded them to live and store their honey in hives. The colony's heart is its queen; she is just a little bigger than her subjects. In spring, when food stores run low, the workers get busy collecting nectar, and they have a remarkable method of telling one another where to find the flowers. This returning bee has just found a new source and is going to tell the others. First she gathers a crowd: to do that she climbs on her sisters' backs and vibrates her abdomen. Now that she's got their attention, she begins her dance, using a code of movements. The duration of her waggle indicates the distance to the nectar source — the longer the waggle, the further the flight — and the angle over which she waggles across the comb tells them the direction to the flowers in relation to the sun. Her instructions are remarkably accurate. So I guess they're using a polar coordinate system, r and theta: the amount of waggling corresponds to r, and the waggle direction corresponds to theta. The way people figured this out is also pretty interesting — there are decades of experiments on bees trying to understand exactly how they communicate. In one paper, one thing they did was change the amount of optical flow that the bees see. So the scout bee goes somewhere and comes back, having seen a certain amount of optical flow; but before the other bees go on their flight, the experimenters change the amount of texture in that area. If you increase the amount of texture, the other bees travel a smaller distance to see the same amount of optical flow. So that's how they figured out that bees are communicating the amount of flow to tell the other bees where to go. Questions on either of these? So — sorry, the experimenters changed the amount of texture, yeah, to figure out that the bees are communicating optical flow. Yeah, I guess that would also be impressive, but yeah. Okay, other questions? Okay, so let's switch to the blackboard; I'll switch back and show some videos later. All right. So just for simplicity we're going to work with grayscale images — you can do exactly the same thing with RGB images as well, but things are a bit simpler with grayscale. Grayscale, again, means each pixel has a scalar associated with it, corresponding to the intensity of light at that pixel, and we're going to represent an image by I(x, y, t). Basically, an image is a function that takes in three parameters — the x location of the pixel, the y location of the pixel, and the time — and outputs an intensity value. So I(x, y, t) is the intensity — maybe I'll just write it down here: intensity — and (x, y) is the pixel location. The goal is to estimate the optical flow at every pixel, so at each pixel location we want to end up with a vector, and that
corresponds to the apparent motion at that pixel, going from time t to time t+1. And so, if we have some object in the scene over here, another object over here, and another object over here — and let's say between times t and t+1, in one time step, this object moves over here, this object moves here, and this object moves over here — then at these three pixels, these are the vectors we want to estimate. And in this scene, if you assume nothing else has moved, then at all the other locations, away from these objects, the optical flow is just equal to zero. Okay, we're going to make two assumptions; let me write them out over here. As I mentioned, this lecture we're doing things in an old-school way — we think hard about the problem, make some assumptions, do some math, and come up with an algorithm — as opposed to the learning-based approaches, which we'll start discussing in the next lecture. The first assumption is what's known as intensity constancy — or, if you're working with color images, color constancy — and intuitively it means that if you have some point in the scene at time t, it looks the same at time t+1. So basically, if you have some object — let's look at this particular object that's moved from this location to that location in a single time step — then it looks the same: it's not like the intensity of light here is very different from the intensity of light at the corresponding location at the next time step. This assumption could be violated if you have very drastic changes in the lighting — if an object goes from shadow to not-shadow, this assumption might be violated — but we're assuming that it's valid. The second assumption is going to be an assumption of small motion: basically, the amount of motion between two consecutive frames, t and t+1, is not that large, so within a single time step objects are not moving massive amounts. This can be a reasonable assumption if you have a high-frame-rate camera; then it's reasonable to assume that within a single frame things have not moved all that much in the scene. All right, so these are the main assumptions we make — we'll add one more later on. Yeah? Yeah, definitely — actually, I think it might be easiest with a picture, so I'll redraw the picture over here. Let's say we're just looking at one particular object, this disk over here, and at time step t the location of the object is at (x, y) — you can think of the center of the object as being at pixel location (x, y). At the next time step, let's say it moves a bit. I'm obviously exaggerating the amount of motion; in practice, assumption 2 is going to be valid, or reasonably valid. But let's say the amount of motion is this much. So the optical flow vector at this location (x, y) corresponds to this change: where did this object go from time step t to time step t+1? The location at the next time step is (x+u, y+v), where (u, v) is the optical flow vector at that location (x, y). The first assumption, this intensity constancy or color constancy — mathematically, what it says is that I(x, y, t), the intensity of light at location (x, y) at time t, is equal to the intensity of light at this location at the next time step, I(x+u, y+v, t+1). So if you look at the intensity of light over here at time t, then the intensity of light over here at time t+1 is the same as it was over here at time t. Does that answer your question? Okay, good. Other questions on the assumptions? Good. Do you make any assumptions about the motion, or are you assuming no changes in depth? Okay — so the motion, the relative motion of objects in the real world relative to the camera, could be anything.
What we are trying to do is just figure out the apparent motion, and that's purely in the two-dimensional plane of the image. So we're not going to be able to — at least directly from this — figure out how the object is moving in 3D; all we're trying to do is figure out where the object seems to have moved, just in the image plane. Good. Other questions? Yes — thank you. Good, yeah, the frame rate is high enough, and I guess "high enough" depends on how quickly things in the scene are moving: you could have a slow frame rate, but then you'd have to assume the things in the scene are moving very, very slowly, too. Okay. All right, so let's label this equation — I'll call this equation one. So, if assumption 2 is satisfied — objects are not moving that much between consecutive images — then we can Taylor expand the right-hand side of equation one. So let's do that: I(x+u, y+v, t+1), if we Taylor expand it — thinking of I as a function of x, y, and t — is approximately equal to the function value at (x, y, t), plus I_x times the change in x, plus I_y times the change in y, plus I_t times the change in t, where I'm just using this shorthand: I_x is the image intensity derivative — the partial derivative along the x direction — evaluated at (x, y, t); similarly, I_y is my shorthand for the intensity derivative along the y direction, and I_t is the partial derivative of the intensity with respect to time. Let's just think about how one would obtain these, and intuitively what they mean. Let's say we have some image that has some object over here. How would you compute or estimate this quantity, I_x? Let's pick some particular pixel location — let me draw this a bit bigger — some pixel over here. If you had a particular image, or two images at times t and t+1, how would you calculate or estimate I_x? Intuitively, again, it's the partial derivative of the intensity along the x direction — where x is this way and y is this way — so how do you estimate that for a particular image? Yep — yeah, exactly: you can look at the difference in the intensity values at neighboring pixels. You can take the pixel you're interested in, the pixel on the left, and the pixel on the right, and look at the difference in the intensity of light — say R minus L, the right pixel minus the left pixel. That's an approximation of this intensity derivative along the x direction: it's basically asking how my intensity changes if I move a little bit in the x direction. And there are other ways to do this — you can look at the pixel on the right minus the pixel you're interested in, or other approximations where you look at larger windows; it's the same kind of thing we were doing in the previous lecture. And you can do the same thing along y: calculate the differences in the intensity values along y. Maybe we can do one more: I_t, the partial derivative along time at pixel location (x, y) — how would you calculate or estimate that? Right — oh, sorry, not for y; for y you do the same thing, but I meant the time version — how do you calculate that one? Good, you have the answer — yeah, exactly: you look at the image intensity at that pixel location at time t, you look at the corresponding image intensity at time t+1, and the difference is this partial derivative. All right, so that's our Taylor approximation, which is going to be valid, or approximately valid, if our second assumption holds. So we can combine these two equations — actually, let me just label that equation over
there: let's call this equation two. All right, so from two we can rearrange a bit. I'm going to bring the first term over to the left-hand side, so this is now approximately equal to I_x times x — or, sorry, I had a typo there, oops. This is partly why things were confusing: it should be u and v, the components of the optical flow, not just x. It's the change in x — the x at the second time step minus the x at the first, so (x+u, y+v) minus (x, y) — and that's why u and v appear when you do the Taylor expansion. Sorry about that. Okay, so it's I_x times u, plus I_y times v, plus the partial derivative with respect to time — and the change in time is just one, t+1 minus t, which is why there's no term multiplying it. And then the left-hand side over here — our assumption one tells us that it's equal to zero, or we're assuming it's equal to zero. So, rearranging, we have I_x u + I_y v + I_t approximately equal to zero; or, rearranging a bit more, the gradient of I dotted with (u, v) transpose, plus I_t, is approximately equal to zero, where this gradient is just defined to be the vector (I_x, I_y). Any questions on this? Okay. All right, just to recap: what are we trying to do? We're trying to figure out u and v — that's the optical flow vector at pixel (x, y) — and we're trying to do this separately at every pixel, so for now just imagine that we're focusing in on one particular pixel location (x, y). We get this equation. So how many unknowns do we have? We have two unknowns, right — u and v are scalars, two unknowns. And this is one scalar equation: on the left-hand side we're taking a dot product, that's a scalar; I_t is a scalar. So we have two scalar unknowns but only one scalar equation, so this is not enough for us
A question from the audience about whether (u, v) is a row or a column vector: yeah, I was thinking of it as a column vector. I wrote it as a row vector, but it doesn't really matter; it's just a two-dimensional vector with components u and v. Good. Other questions? Okay.

So what exactly is going on? We have some ambiguity in figuring out u and v: given just the assumptions we made, we can't pin them down uniquely, and this ambiguity is known as the aperture problem. I'll show a video in just a minute that I think will visualize this, but let me go through the math first. Let's call this equation over here equation three. Suppose the vector (u, v) satisfies equation three. The claim, which I'll prove in a second, is that (u + u', v + v'), where (u', v') is any vector perpendicular to the gradient vector, also satisfies equation three. Pictorially: say the gradient vector points in this direction, (u, v) is some vector over here that satisfies equation three, and (u', v') is any vector perpendicular to that gradient direction; then adding in (u', v') still satisfies the equation. Let me prove this quickly, and then we'll see intuitively what it means. Plug it in: the gradient of I dotted with (u + u', v + v'), plus I_t, and we want to show this equals zero. Expanding the dot product, we get the gradient of I dotted with (u, v) plus I_t, which is equal to zero by assumption, since we're assuming (u, v) satisfies equation three; plus the gradient of I dotted with (u', v'), which is also equal to zero, because (u', v') is perpendicular to the gradient. So we've shown that the vector (u + u', v + v') also satisfies equation three.

What does this mean? It means that motion perpendicular to the gradient is not resolvable, given the assumptions we made. Questions on the math? Okay. What do I mean by "perpendicular to the gradient"? The gradient vector is what we defined over here, the partial along x and the partial along y, so it's some vector like this. If you have apparent motion that's perpendicular to that, pointing in this direction, we're not going to be able to resolve that component of the optical flow vector, because we can add in any such (u', v') and it still satisfies equation three. So there's an ambiguity in the optical flow in the direction perpendicular to the gradient. A video will hopefully make this clearer.

All right, this is the barbershop illusion; you've seen these things outside barbershops. I'm going to play this clip three times, and I want you to focus on a different part of the video each time. The first time, focus on this portion of the video, the top part of the L shape. Let me just play it. What you should notice is that things seem to be moving in a particular direction. Which way do people see the motion? Yeah, to the left. Okay, that's the first one.
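The proof above can be checked numerically. Here is a tiny sketch (all the specific numbers are made up for illustration) showing that once (u, v) satisfies equation three, adding any multiple of a vector perpendicular to the gradient leaves the residual at zero, which is exactly the aperture-problem ambiguity:

```python
import numpy as np

# Made-up values at a single pixel: the spatial gradient (I_x, I_y)
# and a flow (u, v); I_t is then chosen so that equation three holds.
grad = np.array([2.0, 1.0])          # (I_x, I_y)
uv = np.array([0.5, 1.0])            # a flow satisfying the constraint
It = -grad @ uv                      # forces grad . uv + I_t = 0

# A vector perpendicular to the gradient: grad . perp = 0
perp = np.array([-grad[1], grad[0]])

# Adding any multiple of perp keeps the constraint satisfied, so the
# perpendicular component of the motion is unobservable.
residuals = [grad @ (uv + s * perp) + It for s in (0.0, 1.0, -3.7)]
```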
So just keep that in mind. The second time around, focus on this portion of the image, the vertical strip, and again think about which way things seem to be moving. It should seem that things are moving up. And for the final one, focus on this ellipsoidal region; I think you'll see that it seems to be moving diagonally. Now, if you get rid of the background, you see that there's actually just a bunch of lines that are all moving diagonally, including that portion there too. We're looking through an aperture, right, through this rectangular screen. When the background goes away, it looks like a whole bunch of lines all moving in the same diagonal direction, viewed through this rectangular aperture. So our human minds are somehow resolving this ambiguity: we think the motion at the top portion of the L is to the left, the vertical portion is moving up, and the ellipsoidal region is moving diagonally. We're making some assumptions to resolve the ambiguity. Question? Oh yes, maybe there was a fourth one that I missed: keep an eye on the barber pole. Which way do you see things moving? Probably up, right? But the actual motion is just the pole spinning; nothing is actually moving up. Viewed through this aperture, with the top and bottom portions cut off, the spinning makes it look like the stripes are moving up, so that portion is similar to the vertical strip in the L-shape clip. Thanks for asking. Okay, questions on this?

All right. So basically, to resolve this ambiguity, which is fundamental given the assumptions we made, we need to make some additional assumption. Our minds are clearly making one, since we perceive a particular direction of motion. Question: yes, the ambiguity is in this vector (u', v'). From this one scalar equation we can't figure out exactly what u and v are, because we have two unknowns and just one equation, and the specific thing we cannot recover is any component of the motion perpendicular to the gradient: if (u, v) satisfies equation three, then (u + u', v + v') also satisfies equation three. So we're going to have to make some assumption that gives us an additional equation or equations. The specific algorithm we'll discuss, with the assumptions we'll make, is called the Lucas-Kanade optical flow algorithm. I'll describe the most basic version; there are more refined versions. The extra assumption we're going to make, which will help us get extra equations to resolve this ambiguity, is known as spatial coherence. Intuitively, the assumption is that neighbors move together: neighboring pixels have the same optical flow (u, v). Going back to this picture: if we look at this specific pixel in the center of the object, with apparent motion given by (u, v), we're assuming that neighboring pixels, say a pixel over here or over here, all have the same vector (u, v). This is reasonable to a certain degree: for rigid objects moving in the scene, if one particular point on the object has a particular (u, v), then a
neighboring point can reasonably be assumed to have the same flow. But it breaks down around the boundaries of objects: if you look at this point over here on the boundary of the object, and at a neighboring pixel just outside it, those are not going to have the same (u, v). So the assumption isn't perfect, but we'll see how far we can get with it. Any questions on the assumption?

All right, what this assumption allows us to do is get a bunch more equations. We're going to look at, say, a five-by-five window of pixels around a particular pixel (x, y), a 5x5 image patch, a 5x5 array of pixels. Question: does spatial coherence mean that all pixels are moving at the exact same velocity? If you take it totally seriously, yes: if a pixel here moves at a particular velocity, its neighbor moves at the same velocity, and that neighbor's neighbor too, then the assumption implies everything moves at the same velocity. What happens in practice is that we look at each pixel in isolation, compute a (u, v) for that pixel, and just assume its neighboring pixels move together with it. If we look at a different pixel, one on the boundary of some object, the assumption is simply not going to be satisfied, and our estimate of the optical flow at that boundary location will be incorrect. That's what it means practically: wherever the assumption fails at some image location, the optical flow vector we calculate there won't be correct, and I'll show you some examples. Does that answer the question? Okay.

So let's focus on a particular pixel (x, y) and look at the 5x5 patch of pixels around it. What we end up with is 25 different scalar equations, which we can write as one vector equation: for each of the pixels in this 5x5 window, we can apply equation three. Without the spatial-coherence assumption, each pixel would have its own corresponding (u, v); what we're assuming is that all the pixels in this 5x5 patch have the exact same (u, v). At some pixel location p1 there's a particular image gradient (I_x, I_y); at a different pixel location there's a different (I_x, I_y). If you multiply these out, you get 25 scalar equations, each one saying: the image derivative along x times u, plus the image derivative along y times v, equals minus the partial derivative along time. Question about the dimensions of the matrices: yes, the first matrix has 25 rows and two columns, (u, v) is 2x1, and the right-hand side is 25x1. So we're taking the scalar equation at every pixel, assuming u and v are the same at all the pixels, and concatenating all of those scalar equations into this one compact matrix equation. Questions?

"It seems like we're trying to solve for two variables, so why do we need 25 equations?" Right, we don't really need 25; this is an overdetermined system. We just have two unknowns, and we end up with 25 equations. One way to think about it is that we're trying to get a robust solution, something that satisfies all of these equations as accurately as possible. Let me define some notation: call the first matrix A (the 25x2 matrix), call the vector of unknowns (u, v), and call the right-hand side b, so the system reads A times (u, v) equals b.
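Stacking the 25 per-pixel constraints into the matrix equation can be sketched as follows; `build_system` is a hypothetical helper name, border handling is omitted for simplicity, and `half=2` gives the 5x5 window described in the lecture:

```python
import numpy as np

def build_system(Ix, Iy, It, x, y, half=2):
    """Stack the per-pixel constraints from a (2*half+1)^2 window
    centered at pixel (x, y) into the overdetermined system
    A @ (u, v) = b.  Ix, Iy, It are the gradient images."""
    win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)  # 25 x 2
    b = -It[win].ravel()                                      # 25
    return A, b
```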
So we have this overdetermined system of equations, and we can find the least-squares solution: find the vector (u, v) that minimizes this objective. We saw this back in assignment one, when you were trying to figure out the thrust coefficient for the Crazyflie drones. You can solve it numerically, with SciPy for instance, but there's also an analytical solution. The least-squares solution is (u, v) equals (A transpose A) inverse, multiplied by A transpose, multiplied by b. Let's call this equation four. By "least-squares solution" I mean this vector minimizes the objective: it tries to satisfy all of the equations as much as possible, where "as much as possible" is quantified by the norm of the difference between A times (u, v) and b. And to come back to the earlier question: we didn't strictly need so many equations, but the rough intuition is that if each individual equation carries a little bit of error, then having a bunch of equations that we satisfy approximately can make us more robust to those little pieces of noise. "What is the subscript on the norm?" Oh, the L2 norm, just the usual notion of distance. Good. Okay.

Now let's think about when this solution exists and when it doesn't. To compute the least-squares solution, we need the matrix A transpose A to be invertible, so that the inverse in equation four exists.
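Equation four can be sketched in code. Rather than forming the inverse of A transpose A explicitly, one would normally call a least-squares solver, which computes the same minimizer more stably; `lucas_kanade_pixel` is a made-up name for this sketch:

```python
import numpy as np

def lucas_kanade_pixel(A, b):
    """Least-squares flow for one pixel: the (u, v) minimizing
    ||A @ uv - b||_2, i.e. equation four, (A^T A)^{-1} A^T b."""
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv
```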
A, again, was just the 25x2 matrix over there, so if you take A transpose multiplied by A, you get a 2x2 matrix: (2x25) times (25x2) gives 2x2. You can do the calculation offline if you want, but it ends up being: the (1,1) entry is the sum of I_x squared, the off-diagonal entries are the sum of I_x times I_y, and the (2,2) entry is the sum of I_y squared, where each summation runs over the 25 pixels in the 5x5 image patch. So each element of the 2x2 matrix is a sum over the patch: for the first one you compute I_x at every pixel, square it, sum, and so on. For this 2x2 matrix to be invertible, we can look at its determinant: it's invertible exactly when the determinant is nonzero, so the problematic cases are where the determinant equals zero. Analytically, the determinant is (sum of I_x squared) times (sum of I_y squared), minus (sum of I_x I_y) squared. So when is that zero? What are the problematic cases? Good: one case is when I_x equals I_y everywhere. Intuitively, what does that correspond to? What does it mean visually for I_x to equal I_y? Right, a 45-degree angle: you have an edge at a 45-degree angle, so the image gradient points in that diagonal direction. That's one problematic case. Are there other, simpler ones? "It doesn't move... sorry, there's no texture." Yes, no texture: everything is zero, I_x = I_y = 0 for all pixels in the 5x5 patch, in which case every term is zero and so is the determinant. And one more: either I_x equals zero everywhere, or I_y equals zero everywhere; either of those works. "Does that mean it's only moving along...?" No, it's not about the motion; it's about the image gradient. It means we have an edge that's completely vertical or completely horizontal. In the vertical case, the gradient along the y direction is zero as you look along the object, and similarly, for a horizontally oriented edge, nothing is changing in the x direction. Basically, these are the cases where, if you move a little bit in some image direction, the intensity isn't changing much. Say the object is horizontal and moving horizontally, and you focus on just this little portion of the image: you're not going to be able to detect that motion just by looking at that portion, because everything locally looks the same from image to image. These are the problematic scenarios. Questions on this?

"But intuitively...?" Let's see, I don't have an example on the slide, but here's the intuition I have. Say this is an object, think of it as a horizontal strip with the same texture everywhere along it, and it's moving in this direction. If you zoom in on just this location over here, you're not going to see any motion. And likewise for a thin strip oriented vertically: if it moves in the vertical direction, you won't see that motion either.
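The degenerate cases just discussed can be detected programmatically before trying to invert A transpose A. A sketch, assuming the 25x2 matrix A from before; thresholding the smaller eigenvalue (the `eps` value here is an arbitrary illustrative choice) is the same idea behind Shi-Tomasi corner detection:

```python
import numpy as np

def flow_is_reliable(A, eps=1e-6):
    """Return True if A^T A, the 2x2 matrix of summed products
    sum(Ix^2), sum(Ix*Iy), sum(Iy^2) over the patch, is
    well-conditioned enough to solve for the flow."""
    M = A.T @ A
    lam_min = np.linalg.eigvalsh(M)[0]   # smaller eigenvalue
    return bool(lam_min > eps)
```

A textureless patch (all gradients zero) and a pure 45-degree edge (I_x equal to I_y at every pixel) both make the smaller eigenvalue zero, matching the determinant analysis above.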
Now, if you change the orientation of the camera, I don't think it changes anything fundamentally: there's still some direction along which the object could be moving where you don't pick up the apparent motion. "What if you made the image patch larger?" Good: if instead of a 5x5 patch you look at, say, a 10x10 patch, something that goes beyond the boundary of the object, then if you look at that determinant, it won't necessarily be the case that every I_x or I_y is zero, because along the boundary of the object there's going to be some gradient in the I_x or I_y direction. Good. Okay.

So when does Lucas-Kanade, this algorithm we've described, have trouble? We discussed these situations, but just to reiterate: if there's some portion of the image with very low texture or no texture, that's a hard case; another is an edge where I_x equals I_y; and the others are where one of the gradient directions is zero everywhere. Here's a video demo. They're actually using some tricks to improve things here. You would expect that along the boundaries of objects things break down, that the estimate you get there isn't exactly right, but I think they're doing some filtering: not just looking at each pixel individually, but doing this computation for multiple pixels and smoothing. You can kind of see that here; it's a bit blurry, so the edges between, say, a car and its surroundings aren't completely sharp. I think they're doing some blurring, some smoothing, to mitigate the problems at the edges, at the boundaries of objects.

Just zooming out: we made three assumptions. The first was color constancy, or intensity constancy; that assumption breaks if you have drastic changes in lighting, for instance, or very reflective objects, where the object can look very different even after a small amount of motion. The second assumption was that the motion is small compared to the frame rate. And the third was spatial coherence, which basically says there are rigid bodies in the world, so if you look at one pixel, its neighboring pixels have the same optical flow. If any of those assumptions breaks, we're not going to get good estimates.

As part of this assignment, you'll install the cameras we gave you on the drone. It should be pretty quick; Nate posted a video of putting it together, and it took about two minutes. You'll collect a video; you're free to choose your environment and move the drone around, you can do it in our lab space. Then you'll use this Lucas-Kanade algorithm to calculate optical flow. I'd encourage you to try to break it. First, try not to break it and make sure it's working, but once you're confident it works, try to break it: maybe point
it at some scene with very little texture and see what you get, or try some of the other failure cases we discussed. This has implications for drones. One question to think about: how should the carpet look, if you're working with drones? How should you make your carpet look to increase the accuracy of optical flow measurements? Stripes, or some kind of texture, right. You'll often see in drone videos that people buy these really brightly colored, almost play-mat-type surfaces. We've found that our green, grassy-looking mats have enough texture that optical flow works pretty well, but if your ground is very textureless, then the drone's state estimator goes completely wacky. We've had that issue in our other lab space out at Forrestal, where the amount of texture is not that high; if you fly drones there, we have to put down little tape markers to get good state estimates. All right, any other questions? Go ahead.

"Could you segment the image first?" Perfect, that's actually one of the improvements you can make, and it makes a big difference: you segment the image into different spatial regions, and then make the spatial-coherence assumption only within each segment. That way you don't have the issue of the assumption breaking at boundaries, because you're doing it separately for each segment. And nowadays, because of GPUs, which we'll talk a bit about in the next lecture, a lot of these computations are parallelizable, so you can speed things up quite a bit. We won't ask you to do that for this assignment; you'll see that the code you write will be relatively slow, but you can speed it up, because everything is so parallelizable, and even image segmentation nowadays runs at real-time rates on GPUs. So it's not a big issue.

"Could you analyze the image in one coordinate frame and then analyze the same image again in a rotated frame?" Oh, I see, do it separately for a bunch of different angles. Yeah, I think that might work; I don't see a problem with it. It adds a bit of overhead, but it should be fine. Or alternatively, find directions where there is a gradient, choose your coordinate system to match that, and do the computation in those coordinates; I think that works as well. Good.

"Could you compute the flow only in certain areas, like sampling where you want more detail?" Yes, that's another variant: look at where there's a lot of texture, some kind of key points or interest points, and just do the computation there. Whether that makes sense depends on exactly what you're trying to do. For a drone, if things are far away and you know they're far away, it doesn't really matter what the optical flow there looks like; what matters is when you're heading directly toward an obstacle. Then you really care about figuring out what the optical flow looks like in that region, which can potentially let you estimate time to collision, as you'll see in this assignment. Yes, definitely, all good ideas. Good.

And yes, about learning: that's where we're going starting next lecture. I'll show you some datasets, including ones specifically for learning optical flow, and that approach basically gets rid of some of the assumptions we've made. This is a pretty model-based approach: we sat down, made some assumptions, did some math, and got an algorithm. It works pretty well, but it has flaws. We'll see how to relax some of these assumptions with learning and lots of data, starting next lecture. It seems like nowadays it's basically just better if you have lots and lots of data; if you don't have a ton of data, then baking in some of this extra knowledge can make a big difference. All right, I'll see you next week.
Introduction_to_Robotics_Princeton
Lecture_10_Princeton_Introduction_to_Robotics_Planning_with_dynamics_constraints.txt
All right, let's go ahead and get started. Today is going to be the last lecture in our mini-series on motion planning. Just to remind you of where we left off: in the previous lecture we started discussing motion planning with dynamics constraints. In our first pass at the RRT algorithm, we didn't take into account the constraints your robotic system might have on how it can move, and that's what we started to fix in the previous lecture. More specifically, we discussed a pretty broad class of systems known as differentially flat systems. We looked at the planar quadrotor as an example of a differentially flat system, and I mentioned a couple of other systems as well; the full 3D quadrotor dynamics also satisfy this property of differential flatness. The general definition of differential flatness might look a bit abstract and intimidating, so I just want to highlight the main idea, the main intuition, behind flatness.

Basically, differential flatness allows us to specify any trajectory, with the caveat that it must be sufficiently smooth, in the flat output space, whatever the flat output space happens to be for your system. For the planar quadrotor, this was the (x, y) position of the center of mass; for the full 3D quadrotor, I mentioned that the flat outputs are x, y, z, and yaw. In general, the dimension of your flat output space is the same as the number of control inputs your system has. So if your system is differentially flat, it means we can specify any sufficiently smooth trajectory in the flat output space, and then do two things: first, recover a control input trajectory that makes the flat outputs do exactly what you specified, and second, recover the full state trajectory. Again, for the planar quadrotor: if you come to me with a sufficiently smooth trajectory for the center of mass, I can give you a control input trajectory that makes the center of mass do what you specified, and moreover, I can tell you exactly what the full state is going to do. Note that we don't have direct control of the full state. I'm not saying you can come to me with any trajectory for the full state of the system and I'll give you a control input sequence that makes the full state follow it. I'm just saying that if you come to me with a center-of-mass trajectory, I can get the system to follow that center-of-mass trajectory; the other states might do something wacky and weird, and we don't have direct control over them, but we can directly control the center of mass. That's basically what differential flatness gives us.

And while this class is relatively broad, not all systems are differentially flat, or at least not all systems are known to be differentially flat. So the plan for today is to discuss motion planning with dynamics constraints for systems that are not known to be differentially flat, or let's say systems that are not necessarily differentially flat. That's going to be maybe half of the lecture. The second thing we're going to do today, to wrap up our discussion of motion planning, is relax the assumption we've made that we can follow any trajectory exactly, without any errors. Of course, in practice we're not going to be able to do that: there's going to be some uncertainty in the dynamics, and external disturbances like wind gusts that push the drone away from whatever trajectory it's trying to follow. So we're going to discuss some techniques for feedback control for trajectory tracking. This is slightly different from our first discussion of feedback control, which was targeted at making some equilibrium state, like hovering, asymptotically stable. What we're going to try to do here is not to drive the system
is X Y Theta for a full 3d quarter or it's XYZ little bit here so not including the kind of velocity terms and at every iteration you sample some random configuration from your configuration space you look at the closest point in the existing tree that you're growing iteratively and then you try to extend the tree towards the uh the points that you've sampled and one variation that we discussed in the previous lecture uh is instead of growing the tree in the configuration space you grow the tree in the flat outward space and in that case differential flatness gives you a way of doing that extension operation kind of exactly while opening the Dynamics of your system but yeah today we're going to focus on systems that are not necessarily differentially flat so the question is how can we modify something like the rot algorithm to take into account some non-trivial Dynamic constraints so this is our general form of Dynamics right we can always take some second order differential equations that you get from f equals Ma and write it in this first order form so hopefully everyone is comfortable with this form now it's like Cloud equals f of x and u x as the state so these are the configuration variables and the kind of derivative terms and they use the the control input um so the main idea that that we're gonna try to pursue is to instead of constructing the rrt in the configuration space we're going to construct the RT in the state space of the system uh so the the space that X lives in uh so here's one idea so we're gonna I guess propose uh some uh algorithms we're going to see whether or not they work well and then we're going to try to improve uh our algorithm so this is kind of idea number one so okay so here's here's idea number one um so on each iteration of the the IRT algorithm uh what we're going to do is randomly sample uh a state from the existing tree so you initialize the tree with the starting State we're going to call that XA so the first iteration you 
just sample that state; in general you sample some random state from the existing tree. This is different from the RRT algorithm, right — in the standard RRT algorithm you sample some random configuration from your configuration space and then you find the closest point in the tree; here we're saying we're just going to randomly sample some point from the existing tree. We're then going to randomly select a control input, which we're going to call u_rand, and we're basically just going to apply that random control input for some small amount of time, which we call δt. What this results in is: you start from this random state x_rand, and then you just integrate your dynamics forwards — you see where the system ends up if you apply this random control input for a small period of time — and you call the state that you end up in, after applying that control input for a time δt, x_s. Then you check whether this state x_s is in collision or not with any obstacles. If it is in collision, you throw away this iteration — you revert back to the tree you had before. If it is not in collision, then you add it to the tree, and if you're close enough to the goal you just terminate the algorithm. So yeah, this has a different structure from the RRT algorithm: we're randomly sampling points in the tree, randomly selecting control inputs, and growing the tree that way. One nice feature of this algorithm is that the trajectory you get out at the end of the process is guaranteed to be dynamically feasible, essentially by construction — we're choosing control inputs and then ending up at the states you would end up in if you applied those control inputs, and growing the tree that way. So if you just apply these control inputs, then
you're guaranteed to go to the state x_s from the state x_rand. I guess, questions on this algorithm, just the structure of the algorithm? Okay. Does someone have intuition for whether or not this is going to work well? Maybe we can start with a concrete question: is this going to find a trajectory, if a trajectory exists from start to finish in the state space? Good — yeah, good question. So this algorithm actually allows you to specify constraints on the control input. For example, for a quadrotor there might be a bound on the thrust that you can produce: typically propellers only spin in one direction, so you can't produce negative thrust, so zero is a lower bound on the thrust, and then there's some upper bound depending on how quickly the propellers can spin. You can modify this algorithm to take those into account — basically, you just sample the control input from the allowable set of control inputs. Questions? Yep. Yes — yeah, that's exactly right. It turns out, the good news is, one can show that this algorithm is probabilistically complete, in the sense that if there is a way to get from the start state to the goal state, or close to the goal state, this algorithm will find it — the probability that the algorithm finds it converges to one as the number of iterations goes to infinity. But it doesn't actually work well in practice: exactly as you were saying, the number of iterations that you need to find the path can be very, very large. This is what you get when you run this algorithm for some number of iterations — it's basically just a jumble around the starting location. The starting location here is around (0, 0), and it's kind of just doing a random walk — Brownian motion, if you're familiar with that.
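The naive procedure (idea #1) can be sketched in a few lines. This is an illustrative toy, not code from the lecture: I'm assuming simple unicycle-like car dynamics, simple control bounds, and a trivial collision check, and all names here are my own.

```python
import numpy as np

# A minimal sketch of idea #1, assuming toy unicycle-like dynamics:
# state x = (px, py, theta), control u = (v, omega). The dynamics,
# bounds, and names are illustrative assumptions, not from the lecture.
rng = np.random.default_rng(0)

def propagate(x, u, dt, steps=10):
    """Integrate xdot = f(x, u) forward for time dt with simple Euler steps."""
    x = np.array(x, dtype=float)
    v, omega = u
    for _ in range(steps):
        x += (dt / steps) * np.array([v * np.cos(x[2]), v * np.sin(x[2]), omega])
    return x

def in_collision(x):
    return False  # placeholder obstacle check

def naive_kinodynamic_rrt(x_start, x_goal, iters=2000, dt=0.1, tol=0.5):
    tree = [np.array(x_start, dtype=float)]
    for _ in range(iters):
        x_rand = tree[rng.integers(len(tree))]         # random node FROM the tree
        u_rand = rng.uniform([0.0, -1.0], [1.0, 1.0])  # random allowable control
        x_s = propagate(x_rand, u_rand, dt)            # integrate dynamics forward
        if in_collision(x_s):
            continue                                   # discard this iteration
        tree.append(x_s)
        if np.linalg.norm(x_s[:2] - np.asarray(x_goal, dtype=float)[:2]) < tol:
            return tree, True                          # close enough to the goal
    return tree, False
```

Running this reproduces exactly the behavior described: every edge is dynamically feasible by construction, but the tree random-walks around the start rather than exploring towards the goal.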
It is exploring, but very, very slowly, so you're just going to get a jumble that keeps growing slowly over time. Eventually that jumble is going to grow to encompass something that's close to the goal state, but that's going to be very slow. All right, questions on this? In some sense this shows the weakness of probabilistic completeness, right — one could say, oh, my algorithm is probabilistically complete, wow, that's amazing, but it turns out you can come up with pretty silly algorithms that are also probabilistically complete and don't work well in practice. So probabilistic completeness is a nice property for an algorithm to have — an algorithm should at least be probabilistically complete — but just because you have probabilistic completeness doesn't mean it's going to work particularly well in practice. Okay, so let's try to modify the RRT again to fix this problem. We need something that promotes exploration, similar to the RRT. The key step in the RRT algorithm that promotes exploration is the fact that we're randomly sampling, essentially, some direction to explore in from the boundary of the tree: we're randomly sampling some configuration, finding the closest point in the tree, and then expanding the tree. That's growing the horizon of the tree — the boundary of the tree, if you like — in directions that have not been explored so far. So the idea with this second iteration of the RRT that takes into account dynamics constraints is going to be to keep the structure of the RRT algorithm essentially the same; the only thing we're going to change is how we do the extension operation. And that's going to allow us to explore while still satisfying the dynamics constraints that your robotic system has. Okay,
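The plan just stated — keep the RRT loop the same and change only the extension operation — can be written as a skeleton where the extension step is pluggable. This is my own illustrative framing; the function names and signatures are assumptions, not from the lecture.

```python
import numpy as np

def rrt(x_start, sample_state, nearest, extend, in_collision, done, iters=1000):
    """Generic RRT loop; the dynamics-aware variants differ only in `extend`."""
    tree = [np.asarray(x_start, dtype=float)]
    for _ in range(iters):
        x_rand = sample_state()            # random state in the state space
        x_near = nearest(tree, x_rand)     # closest existing node in the tree
        x_new = extend(x_near, x_rand)     # dynamics-aware extension step
        if x_new is None or in_collision(x_new):
            continue                       # discard this iteration
        tree.append(x_new)
        if done(x_new):
            return tree, True
    return tree, False

def nearest_euclidean(tree, x_rand):
    # Euclidean nearest neighbor; as the lecture discusses, this metric can
    # be a poor choice for systems with nontrivial dynamics.
    dists = [np.linalg.norm(x - x_rand) for x in tree]
    return tree[int(np.argmin(dists))]
```

The variants that follow (sampling many control inputs, or solving a boundary value problem) slot in as different implementations of `extend`.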
so here's the second version of our algorithm. Again, we're going to initialize the tree with just the starting state x_start. At each iteration we're going to randomly sample a state, as in the usual RRT — in the usual RRT we do the sampling in the configuration space, here we're sampling in the state space — but this is not from the existing tree, right: we're just randomly sampling some state that's not necessarily part of the tree. Then we find the nearest vertex, the nearest state in the existing tree, and then we're going to somehow have to grow the tree from x_near towards x_rand, and that's the tricky part. If we assumed that your robot could execute any trajectory, then we could just connect x_rand to x_near with a line segment, but because your robotic system has dynamics constraints, you can't just follow a line segment in state space — as we saw with the car example, for instance, or the planar quadrotor, which cannot just translate sideways. So here's one idea: we can sample a number of control inputs and basically see which one gets you closest to x_rand. Ideally we want to go in the direction of x_rand exactly; maybe we're not going to be able to do exactly that, so a proxy for moving towards x_rand exactly is to sample, let's say, 100 control inputs — some fixed number of control inputs — and see which one gets you closest to x_rand, let's say just in terms of Euclidean distance. So you start from x_near, you apply some control input, you see where it gets you, and you try a lot of different control inputs, each of which you apply for some small amount of time δt. And then you continue as usual with the RRT algorithm: you see where you end up, you check whether that state is in collision; if it is in collision then you throw it away, if it's not in collision then you keep it and you grow the tree, like
that. Yes — I guess, questions on this version of the algorithm? Go ahead. "You could end up doing so much..." — yeah, so there's a trade-off here. There's no analytical optimal solution; it's something you have to figure out by trial and error for your specific problem. If you sample too few control inputs, the likelihood that you're going to make progress towards x_rand is low; if you sample too many control inputs, then, like you said, each iteration is going to be computationally expensive, and that might slow things down. So intuitively there's going to be some sweet spot in the middle, but I don't know of a principled way of finding it — it's trial and error for your particular problem. Questions? Yep. Yes — good, so in this version I'm just saying uniformly random, but yeah, you can make this more clever by biasing the control input sampling in a specific way. I'll say a little bit more about that in a couple of slides, but if you know how to do that for your system, then you should. Yes — yeah, we are still considering obstacles. Maybe let me just sketch out one iteration of this. Let's say these are obstacles; you can think of these obstacles as living in the state space of the system — you don't have to construct these explicitly in the state space, but just for the purpose of drawing pictures, imagine that these are obstacles in state space. So we have some existing tree, let's say it looks like this. What we're doing is sampling some random state x_rand, finding the closest point on the tree, which is this point, and then sampling some number of control inputs — so let's say this is one control input, this is another one, maybe this is another one, and this is one — and if you apply the control inputs for some small amount of time δt, then your state is
going to evolve in a particular way. So it's not going to be a line segment, right — the state is going to obey whatever differential equations, ẋ = f(x, u), describe your dynamics — and so you're going to have a little trajectory that goes from this state to some new state. Then what I'm saying is, you pick the control input that allows you to make the most progress towards the x_rand state, and you grow the tree. The step where the obstacles come in is when you extend the tree in these different directions: you can do collision checking, so you can check whether this state is in collision or not — if it's not in collision then you keep it, if it is in collision then you can just throw it away. There are intermediate things you could do as well: if the state that makes the most progress towards the random state you sampled is in collision, then maybe you throw that one away, but then you look at the next control input that makes the most progress towards x_rand; if that one's not in collision then you keep it, and if it is in collision you can keep checking — you've done all this work sampling different control inputs, so you can check each of them and find the one that's closest to x_rand. Question? Yes — yep, yes, yeah, good. That's going to be, I guess, idea number three: instead of just randomly sampling control inputs and seeing which one gets you close to x_rand, you can be a little more clever and optimize a control input — I'll get to that in a couple of slides. Other questions about this algorithm? Yeah, so that depends on the dynamics of the system. If you just have some arbitrary dynamics, ẋ = f(x, u), the trajectory that you end up following — whether it's a line
segment or not — just depends on the exact form of the dynamics, the exact form of f. For a planar quadrotor, imagine that it's in some tilted configuration: if you apply a constant control input for some short duration of time, it's going to go like this, right — so it's not going to be a line segment, even in terms of the center of mass. The exact trajectory you follow when you apply some control input for a short duration of time just depends on the dynamics that describe your system. We're relying here — I guess I didn't mention this explicitly — on being able to integrate the dynamics, and in general, if f is nonlinear, we cannot do that analytically, but you can use numerical techniques: for example ode45 in MATLAB, or scipy.integrate, which is the Python version of that, to get these local trajectories that result from applying control inputs for the duration δt. Okay, other questions? Okay, all right. So this algorithm, I think, can work reasonably well — it's definitely an improvement on idea number one — especially for systems that have a relatively small number of control inputs, a low-dimensional control input space. What happens if your control input space is high-dimensional is that, if you just randomly sample a control input in a high-dimensional space, the probability that one of those samples is going to result in progress towards the random state that you sampled is going to be low, right — so you're not going to make much progress if your control input space is high-dimensional. Question? Yeah, so if you have some subspace of the high-dimensional control space — if you can find a subspace that allows you to make progress in many different directions in your state space — then you should use that, but it might not exist, right: for a humanoid robot, it's not clear whether there's some low-dimensional subspace of the control
input space that's going to make the humanoid robot do interesting things, like exploring in state space. Yeah, okay. Bug or feature of this algorithm: the algorithm is sensitive to how we define "closest", right. What I was saying here is that you find the closest point in terms of the Euclidean distance, the standard notion of distance, but that may not be the best choice. So here's an example, with the car system that I sketched last time. Here's one pair of states: one state is where the robot, let's say, has some fixed velocity in the forward direction, and the other state is where the center of mass is just translated, and it's moving with the same fixed velocity. Let's consider another state which is over here — again the velocity is fixed, just the position is changing. I'll call this state one, this state two, and this state three. Actually, let me make it a bit more extreme, just to highlight the point. Okay, so these are our three states: the same fixed velocity for each one, just different locations. In terms of Euclidean distance — the standard notion of distance in the state space — which of these pairs is the closest? Let's take one as the reference point: is two closest to one, or is three closest to one, in terms of Euclidean distance? It wasn't meant to be a trick question — yeah, if you're just looking at the distance in the state space, then two is closer, because the velocity components are the same for all three, it's just the location that's different, and two is closer in terms of location. But intuitively, in terms of the dynamics of the system, do you think three is
closer to one, or is two closer to one? Maybe let me rephrase the question a bit: is it easier to get to one from three, or is it easier to get to one from two? So, in a short time, it's easier to get to one from three, and the reason is you can just go straight, right: in a short amount of time, basically just by moving in the direction you're already moving in, you can go from three to one, whereas going from two to one might require more effort — because you cannot move sideways, you have to do some potentially complicated maneuver to get from two to one. Does that make sense? Questions? Yeah, go ahead. "So if you can move forward and backward, and you're sampling randomly, if you ever wanted to move sideways, most of the samples..." — yeah, exactly, and so that's an argument for not using random sampling, or an argument for not using Euclidean distance as your notion of distance. There are ways to modify the notion of distance to capture the dynamics of your system — something that says three is closer to one than two is to one. I won't go into the details of how you define these, but the main point is that Euclidean distance is not necessarily the correct choice of distance when we're growing the tree. Another example, just to build more intuition: if you have a pendulum — let's say this is an inverted pendulum, and the velocity here is in this direction. So here's one pair of states: the pendulum is at this angle or at this angle, both moving in the same direction — it's slightly tricky to draw, let me just extend the pendulum a bit — so the angle is the same but the velocity is very different, the velocity is in the other direction. If you're moving in one direction — it's the same idea as with the car — if you're moving in one direction,
then even if you're slightly further away in terms of the angle, two states that are moving in the same direction might be considered closer, even if they're not the closest in terms of Euclidean distance. So Euclidean distance is not necessarily a good metric in the state space for defining how close two states are — we saw that for the pendulum and the car — and, I won't go into this, but there are other metrics of closeness that one can derive based on the dynamics. You can modify this algorithm in a pretty straightforward way: instead of finding the closest point in terms of the Euclidean distance, you find the closest point in terms of some other distance that better captures the dynamics of your system, and basically this can improve the efficiency with which a path is found — the number of iterations that you need for the RRT algorithm can be made smaller than if you were just using the Euclidean distance. Okay, here's idea number three — I think someone already foreshadowed this. Again, we're going to keep the RRT algorithm structure the same but just change the extension operation, and we're going to do this by solving what's known as a boundary value problem. The structure, again, is the same: we sample some x_rand, we find the closest point in the tree, x_near, and then we're trying to grow the tree in the direction of x_rand. What we can do is connect x_near to x_rand with a line segment and then try to get towards some point on the line segment, which I'm calling x_s here. You might not be able to traverse that line segment exactly, right, because of the dynamics of the system, but maybe there's some short sequence of control inputs you can apply that's going to take you from x_near to x_s. This is called a boundary value problem — the boundaries are fixed, the
starting and ending configurations are fixed, and you want to find some control inputs that drive your system from x_near to x_s. This also goes by the name of trajectory optimization: you can think of it as an optimization problem — I want to find a short sequence of control inputs that satisfies the constraints that my starting state is x_near and my ending state is x_s, maybe while minimizing the control effort needed to make my system do that. You can tackle this — again, I'm not going to go into the details here, I just want to make you aware that these techniques exist — using techniques from nonlinear optimization, and this is a pretty active area of research: doing trajectory optimization for systems that have non-trivial dynamics. One question you might ask is, well, if we can just solve this boundary value problem, why do we need the RRT at all — why not just solve a boundary value problem from your starting state to the end state? If these techniques are so good, there's no point even doing the RRT; you just solve one big trajectory optimization problem. Sometimes you can do that — especially over relatively short periods of time you can, in a single shot, one optimization problem, find a trajectory that goes from the starting state to the goal state and avoids obstacles — but if you try to do this over a longer time horizon, these optimization algorithms get pretty computationally intensive, or don't necessarily find good solutions. They're particularly good over short time horizons, so you can combine the structure of the RRT algorithm with this trajectory optimization step, which is responsible just for the local extension of the tree, and that tends to work pretty well in practice. All right, questions on this? Okay, so let me switch now back to the blackboard: basically, how do we deal with uncertainty or
disturbances in the dynamics of your system? So far, in all our discussions about motion planning, we've made this assumption that we can exactly follow some trajectory that the motion planner gives us, but of course this is not really true: in practice this assumption is violated for a lot of different reasons. The first one is initial-condition uncertainty: you might have some uncertainty in the initial state of your system. If you have a drone, maybe you don't know exactly where it's going to start off, so you plan the trajectory assuming a particular starting state — you made a plan that goes from A to B — but in practice your drone starts off at a slightly different initial state. That's one reason why you might not be able to exactly follow the trajectory that you planned. Or you might have some uncertainty in the dynamics: for example, the physical parameters that you've identified for your drone might not exactly capture the real system, or there might be external disturbances like wind gusts — even if you start off following the trajectory exactly, maybe there's a wind gust that comes in and blows you away from the trajectory you were trying to follow. So this is where feedback control comes in. (Is that better than last time? Yeah? Okay, good.) So let's say we have again some dynamics, ẋ = f(x, u), as usual in our standard first-order general form, and suppose we have some trajectory that we're trying to follow — for example, found using one of the variants of the RRT algorithm that we described today, one that takes into account the dynamics of your system. Pictorially, this is a trajectory in state space; I'm going to call the trajectory x₀ as a function of time, where time goes from zero to
capital T, and you have some control input sequence as well — so basically, if you start off at the initial state and you apply this control input sequence u₀(t) for t from zero to capital T, then you follow this trajectory x₀(t). The subscript 0 here refers to the fact that this is our nominal, reference trajectory; the parentheses are time, so this is x₀ at time zero, this is x₀ at time capital T, and some point over here is x₀ at some arbitrary time t. Now let's say at some time t the true state of the system is not exactly where you want it to be: if everything is going perfectly, your robot is going to be at x₀(t), but in practice maybe you've deviated a little bit and ended up at some other state, which is this x(t). So the question is, how can we get back to the trajectory that we're trying to track, the one that came from a planning algorithm? Question? Yeah — good, so you could also just re-plan entirely: you could say, okay, I'm at this state, I want to get to some goal, I'm just going to re-solve the planning problem. There are pros and cons. If you haven't deviated that much from the trajectory you're trying to follow, it might still make sense to keep trying to follow that trajectory. You can re-plan; the challenge with re-planning is that it often takes a fair bit of time, so doing it online, on board your system as it's flying, can be computationally intensive — but that's definitely something you can do, and if it's computationally feasible it makes sense to do it. I'll show you some videos later where that sort of thing happens; for now let's just say we're committed to a particular trajectory and we're going to try to get back to it. Good question. Okay, so what we're going to do is apply
some feedback control, and we're going to choose a particular form of the feedback controller. We'll apply a control input of the following form: u, the control input at time t, is going to be the nominal control input that came from our motion planning algorithm — that's this term over here — plus some matrix (this is the gain matrix again) times x(t) − x₀(t). This should look very familiar: it's basically the same form as PD control, basically the same form as LQR when we were looking at stabilization, and the only difference here is that everything is time-varying, right — we're not trying to get to some particular fixed state, we're trying to follow a trajectory, so the state we want to be at is different at any given time. Question? Yes — the nominal u, the first term here, is whatever you originally calculated that makes you follow the trajectory, assuming you're exactly on it and there are no disturbances. If your state at time t is equal to the state you want to be at — if you're following the trajectory exactly — then this second term is zero and you continue applying the nominal control input. If this term is not zero, if there's some deviation from the nominal trajectory, then you add in this correction term. And of course what we're going to think about is how to find what this K matrix is. Questions? Yes — so u₀ is defined for every point in time in zero to capital T. It might have some structure: with one of the versions of the algorithm I described today, you might end up with control inputs that are piecewise constant, but it's defined for every point in time, and the same with the gain. Question? Yes — yeah, even if it switches, you can still calculate the control input, like query the controller, for any point in time. Okay, all right. So, I guess, how
do we find this K(t)? We're going to use a technique that's a variant of the LQR technique that we introduced a few lectures ago: this is going to be called time-varying LQR. Okay, so again, given the setup from before — the dynamics are ẋ = f(x, u), some nominal trajectory x₀(t), nominal control input u₀(t), for t in zero to capital T — we're going to define a new variable, similar to what we had done in our original LQR discussion. This variable, x̃(t), is going to be the deviation of the actual state from the desired state: we define it to be x(t), the state your robot is actually in at time t, minus the state you want to be in at time t, the nominal state x₀(t). We can then differentiate this with respect to time: x̃-dot is ẋ(t) minus ẋ₀(t), and because x₀ is changing with time, this second term is not necessarily zero — it's some non-zero quantity. Now ẋ(t) is exactly f(x(t), u(t)), and I'm just going to rewrite this: by definition, x(t) is x̃ plus x₀, and the same thing with the control input — I guess I didn't define it over there, so let me define it here: ũ(t) is u(t)
minus u₀(t), so u(t) is ũ plus u₀ — and then we still have this minus ẋ₀ term. Now, f is nonlinear, right — we're assuming you have some nonlinear dynamics, like the planar quadrotor or the 3D quadrotor — so we can linearize these dynamics again by doing a first-order Taylor series expansion. I can write this, or approximate this, using linearization as some matrix A(t) times (x(t) − x₀(t)) plus some matrix B(t) times (u(t) − u₀(t)). The only difference here between this and what we've seen before with LQR is that the matrices that define the linearization are time-varying, because the dynamics on x̃ are time-varying — because x₀, the reference trajectory itself, is time-varying. So I can write this more compactly: x − x₀ is x̃, this is ũ, and this is x̃-dot at time t. So yeah, essentially the same as our previous discussion of LQR, just with everything here being time-varying, because we're trying to track a trajectory. Similar to our original LQR discussion, we can define a cost function — a performance objective that we try to optimize using feedback control — and again this is going to be essentially identical to what we had with LQR, just with things being time-varying. I'm going to define the cost function to be the integral from 0 to capital T — that's the time horizon on which the trajectory is defined — of x̃(t)ᵀ Q x̃(t) + ũ(t)ᵀ R ũ(t), dt. This is identical to our previous LQR discussion: the first term is the penalty for the state deviating from the reference state, the second is the penalty for the control input deviating from the reference control input, and the Q and R matrices are user-defined — they tell you exactly how to penalize deviations in different states. In principle these can also be chosen to be
time-varying, so I could write Q as a function of time and R as a function of time if I wanted to — maybe as time gets closer to capital T, I put a higher penalty on deviation in state, because I really want to be close to the final state. In practice you often just choose these to be constant over the entire horizon. Question? Yep — yeah, very good, so we're actually not going to do that. In our original LQR discussion we said that LQR gives you two things: it optimizes this performance objective, and it gives you asymptotic stability for the linear system. We're going to give up on the asymptotic stability part here, partly because asymptotic stability here isn't well defined, right — our whole world ends at capital T; we're not even defining the trajectory we want to be on after capital T. So we're not going to ask for asymptotic stability, or any other kind of stability; we're just going to say this is our objective. And I think this has fairly clear intuition: it's saying don't deviate too much in state and don't deviate too much in control input, and we're just going to try to optimize that. Question? Yeah, so the reason for the linearization is that we're going to be able to actually solve this optimization problem — no, not in general: if you just have the dynamics ẋ = f(x, u) and you try to find a feedback controller that optimizes this objective, that's not something one can do, at least analytically. For a linear system, or a linearized system, we can calculate a feedback controller that is going to optimize this objective, and it's almost identical to what we had done with LQR. So let me just write down the statement: for the linearized dynamics — the time-varying linear dynamics — we can calculate this gain matrix, which I wrote over here as K(t), and that will optimize — really minimize, I guess —
the cost function from any initial condition. So yeah, the way to think about this is: if you choose some particular values for this gain matrix, that fixes your feedback controller, and you can apply that feedback controller from any state. Like, wherever you start off, you can apply this feedback controller and it's going to do something, right? If you just make up random numbers, it might not even get you anywhere close to the trajectory you want to be at; if you choose a good feedback controller, it will roughly get you back to the desired states. What I'm claiming is that I'm going to describe a procedure for calculating a particular K of t that's going to optimize this objective from any initial condition that you start off in. So for any initial condition, in principle you can run the feedback controller, calculate the states and control inputs that result from applying the feedback controller, and that quantity, the cost, is just a scalar, right? This is a number, given an initial condition, and the process that I'll describe will find you a gain matrix that optimizes that from any initial condition. So yeah, it's basically exactly analogous to our original LQR discussion. Um, it's going to be again two steps, so let me write down step one first. We're going to solve this Riccati equation, which we had seen when we discussed LQR initially; it's going to be a slightly different version of the Riccati equation. I'll write it down. Um, so yeah, sorry, these two terms are also on the right-hand side. So everything here is known: Q of t and R of t, those are the matrices that define the cost function over there; A of t and B of t, those are the matrices that define your time-varying linear dynamics that you get from the linearization process over there. Um, yeah, I guess the distinction between this equation and the previous version of the Riccati equation we had discussed when discussing LQR, one is that everything is time varying
right, the A and B matrices, potentially the Q and R matrices as well. The other big distinction is the term over here: if you maybe go back a few lectures and look at this equation, this was a zero, so that was an algebraic equation; this is a differential equation in the variable S of t. So this is a matrix differential equation, which sounds sort of complicated, but actually it's not that complicated. So S of t is a matrix, an n-by-n matrix, for any point in time, and you can think of each element of S of t on the left-hand side, or, yeah, S dot (minus S dot, I guess) is on the left-hand side; everything on the right-hand side, if you look at the dimensions, is also n by n. So Q of t is n by n, and if you multiply out these matrices you'll see that they're n by n. So for each element on the left-hand side and the right-hand side you have a differential equation, so you have a system of n squared (n by n) differential equations. And this is not analytically solvable, so we're not going to be able to write down some closed-form solution for this, but we can integrate this, or solve this, numerically, for example using ode45 in MATLAB, if you use that; there's also scipy.integrate in Python. We need some boundary conditions here to solve differential equations, and yeah, typically what you do here is: given some final value of S of t, so S at time capital T, you fix it to something that you choose, for example the identity matrix, and then you solve this differential equation. This gives you S at every point in time, so it gives you a matrix at every point in time. And then the second step, to actually get the feedback controller (I guess this stuff is relatively straightforward): K of t, if we've got the signs correct, is R inverse B transpose times S of t, and that's the optimal gain matrix. All right, so let me just pause for a second and just make sure the process is clear
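The two-step procedure (integrate the Riccati differential equation backwards from a chosen S(T), then form the gain matrix) can be sketched numerically. This is only an illustrative sketch: the time-varying A(t) and B(t) below are invented stand-in matrices, not any particular system from the lecture, and scipy.integrate is used as the numerical integrator, as the lecture suggests.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, m = 2, 1
Q = np.eye(n)            # state-deviation penalty
R = np.eye(m)            # control-deviation penalty
T = 5.0                  # time horizon of the reference trajectory

def A(t):
    # hypothetical time-varying df/dx along the reference trajectory
    return np.array([[0.0, 1.0], [-1.0 - 0.1 * t, 0.0]])

def B(t):
    # hypothetical df/du along the reference trajectory
    return np.array([[0.0], [1.0]])

def riccati_rhs(t, s_flat):
    # Riccati ODE: -S_dot = Q - S B R^{-1} B^T S + S A + A^T S
    S = s_flat.reshape(n, n)
    minus_Sdot = (Q - S @ B(t) @ np.linalg.inv(R) @ B(t).T @ S
                  + S @ A(t) + A(t).T @ S)
    return (-minus_Sdot).ravel()

# Step 1: boundary condition at the FINAL time (here the identity),
# then integrate backwards from t = T down to t = 0.
sol = solve_ivp(riccati_rhs, [T, 0.0], np.eye(n).ravel(), dense_output=True)

def K(t):
    # Step 2: K(t) = R^{-1} B(t)^T S(t)
    S = sol.sol(t).reshape(n, n)
    return np.linalg.inv(R) @ B(t).T @ S
```

Because `dense_output=True`, `sol.sol(t)` interpolates S at any time in [0, T], so the gain K(t) is available everywhere along the trajectory, not just at the integrator's internal time steps.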
and what I'm claiming is here. So the process is this two-step thing: you solve this differential equation, which looks a little bit ugly, but you're not solving it analytically, you just call ode45 or some other numerical integration technique; you get S at every time by solving this differential equation, and then for the gain matrix you just multiply these matrix expressions. Um, for solving the differential equation you need some boundary condition, so either at time zero or at time capital T. Yeah, typically you set the boundary condition at the final time, and then you integrate this equation sort of backwards, because it's minus S dot of t, and what that will give you is S at every time that satisfies this differential equation. Does that make sense? Yeah, okay. All right, other questions? [Inaudible student question.] Yeah, good, so this is something that one would specify; typically you just choose it to be the identity, or you choose it to be Q or something like that; almost any reasonable choice works in practice. Often it's just the identity matrix. Good, other questions? Okay. Uh, yeah, so the semantic meaning is, okay, I guess I didn't discuss this here, but if you remember from our LQR discussion, this S at time t has a physical meaning, and specifically x transpose S x is the cost-to-go. So let me write it here: if I give you some state, like x tilde at time t, you take the transpose, multiply by S, multiply that by x tilde again (this is an n-by-n matrix, this is n by 1, and if you take the transpose it's 1 by n), so you get just a scalar. It turns out that this scalar corresponds to, actually let me write it like, yeah, okay, this scalar corresponds to the cost, which is this, that you would incur if you applied the feedback controller from this state. Yeah, so the S at the final time
is basically, it's sort of like a terminal cost. It's saying, at the final time step, what's the cost? It's something that you define, like what's the cost function for the future; there is no future in a sense, right, because, yeah, we're saying you don't have to follow the trajectory after that. So you essentially just specify some value for the final S at capital T. Yeah, all right, great questions. Okay, other questions? Go ahead. Um, so random: I think if you make it very small, maybe, or very large, then it might change the solution a bit, but yeah, like I said, typically if you choose it to be the identity, or if you choose it to just be the Q matrix, that works pretty well, and it's not going to be dramatically different, at least in my experience. Yeah, okay. So I guess the last thing I want to do is show some of this stuff working in practice, so let me switch back. Yeah, I just want to show a few videos. This is a very biased selection of videos; this is all from stuff that I have worked on in the past. This is an underactuated double pendulum, also called the acrobot. The interesting thing about the system is that it has two joints, a shoulder joint and an elbow joint, and one actuator: there's a single motor at the elbow joint and no actuation at the shoulder joint. And the goal here is to do what's being shown in the video: using the single motor, you swing up the pendulum and you try to balance it at the top. The reason it's called the acrobot is because it sort of mimics a human acrobat on a high bar, or whatever those are called: you don't have much control authority at the wrist, right, you can't apply much torque, but at the hip you can apply a lot of torque. This is also actually related to a model of walking known as the compass-gait walker. A compass-gait walker is like a drawing
compass: if you think about the pendulum kind of in this configuration, there's no torque, or not much torque, that you can apply. So in this kind of configuration there's not much torque you can apply at the ground, but there's a lot of torque you can apply at the hip, so that's another kind of neat connection. So this trajectory was found using some motion planning or trajectory optimization technique: a nominal trajectory that goes from the downward configuration to the upward configuration. If you just apply that, this is a sort of chaotic system that's going to wildly not follow the trajectory you specified, so we were using exactly this technique that I described, time-varying LQR, to make the system follow the desired trajectory. I think there was a question, okay, all right, what was the question? Yeah. Here's another example. This was with a fixed-wing airplane that is trying to get through obstacles that are less than a wingspan away from each other, and this is exactly the same kind of technique. So we're assuming here that we know where the obstacles are, and we plan a trajectory that makes the airplane avoid the obstacles if it were followed exactly. Of course you're not going to be exactly following it, so again we have a time-varying LQR controller that corrects for deviations from the desired trajectory. And then we crash into our net, because the room wasn't that large. And yeah, I guess here's a final example. So this goes back to, sorry, let me just pause it here, yeah, this is me in grad school. So this goes back to a point that you made, which is: when do you re-plan versus when do you follow some particular trajectory? This video is going to show an example of planning, or re-planning, actually it's just planning here. Let me just play it for a bit, and I think this is explained. So the interesting thing in this setup is
that the robot doesn't know exactly where the obstacles are until it leaves the launcher. So it comes off the launcher, and then there's some system that says, okay, here are some obstacles, and then it has to basically instantaneously plan to avoid the obstacles. So that clip over here is showing the setup: the airplane doesn't know where the obstacles are, comes off the launcher, and then suddenly it's told, here are a bunch of obstacles, and then it needs to plan to avoid them. So what we were doing here was calculating a whole bunch of different trajectories with corresponding control inputs and corresponding feedback controllers, so you can think of this as a kind of library of trajectories with associated feedback controllers. And there was an additional step here, which was calculating these blue tube-like regions, funnel-like regions. I won't have time to go into that, but what we can guarantee here is that the robot is not going to leave these tubes if you apply your feedback controller. So what happens at runtime: when the robot is told, here are a bunch of obstacles, it basically searches through this library of trajectories with associated tubes and tries to find one that's collision free, so one that doesn't intersect the obstacles, and if it finds one, then it executes that. And these are just different clips showing the different trajectories that are found, with these associated tubes that the system is guaranteed to remain in if it applies the feedback controller. So yeah, the basic architecture here is identical to what I described: you plan out a bunch of trajectories, and then you have associated feedback controllers, actually exactly using time-varying LQR, for the system. And then, yeah, I was bored one day, and I was at MIT, so I made the obstacles spell out MIT, which I regret now. But yeah, I guess, questions on this? This is just showing that this stuff really works in practice. Yeah, yes, yeah,
good question. Yeah, so the funnels were pre-computed; at runtime you just search through them. All right, I'll see you next time.
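The whole trajectory-tracking recipe from this lecture can also be sketched end to end on a toy problem. Everything concrete here is an assumption for illustration: the scalar dynamics x_dot = x^2 + u, the reference x0(t) = sin(t), and the cost weights are all made up; only the structure (linearize about the reference, solve the Riccati ODE backwards, apply u = u0(t) - K(t)(x - x0(t))) follows the lecture.

```python
import numpy as np
from scipy.integrate import solve_ivp

T, q, r, b = 4.0, 10.0, 1.0, 1.0   # horizon, cost weights, df/du

x0 = np.sin                                   # reference state trajectory
u0 = lambda t: np.cos(t) - np.sin(t) ** 2     # feedforward: x0_dot - x0^2
A = lambda t: 2.0 * np.sin(t)                 # df/dx along the reference

# Step 1: scalar Riccati ODE  -s_dot = q - s b r^{-1} b s + 2 A(t) s,
# integrated backwards from the boundary condition s(T) = 1.
rhs = lambda t, s: [-(q - s[0] * b / r * b * s[0] + 2.0 * A(t) * s[0])]
ric = solve_ivp(rhs, [T, 0.0], [1.0], dense_output=True)

# Step 2: gain K(t) = r^{-1} b s(t)
K = lambda t: b / r * float(ric.sol(t)[0])

# Closed loop: start off the reference (x(0) = 0.8 vs x0(0) = 0) and track.
f = lambda t, x: [x[0] ** 2 + u0(t) - K(t) * (x[0] - x0(t))]
traj = solve_ivp(f, [0.0, T], [0.8], dense_output=True)

print(abs(traj.sol(T)[0] - x0(T)))  # tracking error at the final time
```

The initial deviation of 0.8 shrinks under the feedback, which is the point of the whole construction: the feedforward u0 follows the nominal plan, and the time-varying gain corrects deviations from it.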
Ted_Ed_Egyptian_History
A_day_in_the_life_of_an_ancient_Egyptian_doctor_Elizabeth_Cox.txt
It’s another sweltering morning in Memphis, Egypt. As the sunlight brightens the Nile, Peseshet checks her supplies. Honey, garlic, cumin, acacia leaves, cedar oil. She’s well stocked with the essentials she needs to treat her patients. Peseshet is a swnw, or a doctor. In order to become one, she had to train as a scribe and study the medical papyri stored at the Per Ankh, the House of Life. Now, she teaches her own students there. Before teaching, Peseshet has a patient to see. One of the workers at the temple construction site has injured his arm. When Peseshet arrives, the laborer’s arm is clearly broken, and worse, the fracture is a sed, with multiple bone fragments. Peseshet binds and immobilizes the injury. Her next stop is the House of Life. On her way, a woman intercepts Peseshet in the street. The woman’s son has been stung by a scorpion. Peseshet has seen many similar stings and knows exactly what to do. She must say an incantation to cast the poison out. She begins to recite the spell, invoking Serqet, patron of physicians and goddess of venomous creatures. Peseshet recites the spell as if she is Serqet. This commanding approach has the greatest chance at success. After she utters the last line, she tries to cut the poison out with a knife for good measure. Peseshet packs up to leave, but the woman has another question. She wants to find out if she is pregnant. Peseshet explains her fail-safe pregnancy test: plant two seeds: one barley, one emmer. Then, urinate on the seeds every day. If the plants grow, she’s pregnant. A barley seedling predicts a baby boy, while emmer foretells a girl. Peseshet also recommends a prayer to Hathor, goddess of fertility. When Peseshet finally arrives at the House of Life, she runs into the doctor-priest Isesi. She greets Isesi politely, but she thinks priests are very full of themselves. 
She doesn’t envy Isesi’s role as neru pehut, which directly translates to herdsman of the anus to the royal family, or, guardian of the royal anus. Inside, the House of Life is bustling as usual with scribes, priests, doctors, and students. Papyri containing all kinds of records, not just medical information, are stored here. Peseshet’s son Akhethetep is hard at work copying documents as part of his training to become a scribe. He’s a particularly promising student, but he was admitted to study because Peseshet is a scribe, as was her father before her. Without family in the profession, it’s very difficult for boys, and impossible for girls, to pursue this education. Peseshet oversees all the female swnws and swnws-in-training in Memphis. The men have their own overseer, as the male doctors won’t answer to a woman. Today, Peseshet teaches anatomy. She quizzes her students on the metu, the body’s vessels that transport blood, air, urine, and even bad spirits. Peseshet is preparing to leave when a pale, thin woman accosts her at the door and begs to be examined. The woman has a huge, sore lump under her arm. Peseshet probes the growth and finds it cool to the touch and hard like an unripe hemat fruit. She has read about ailments like this, but never seen one. For this tumor there is no treatment, medicine or spell. All the texts give the same advice: do nothing. After delivering the bad news, Peseshet goes outside. She lingers on the steps of the House of Life, admiring the city at dusk. In spite of all her hard work, there will always be patients she can’t help, like the woman with the tumor. They linger with her, but Peseshet has no time to dwell. In a few short weeks, the Nile’s annual flooding will begin, bringing life to the soil for the next year’s harvest and a whole new crop of patients.
Ted_Ed_Egyptian_History
The_Egyptian_myth_of_the_death_of_Osiris_Alex_Gendler.txt
It was a feast like Egypt had never seen before. The warrior god Set and his wife, the goddess Nephtys, decorated an extravagant hall for the occasion, with a beautiful wooden chest as the centerpiece. They invited all the most important gods, dozens of lesser deities, and foreign monarchs. But no one caused as big a stir as Set and Nephtys’s older brother Osiris, the god who ruled all of Egypt and had brought prosperity to everyone. Set announced a game— whoever could fit perfectly in the chest could have it as a gift. One by one, the guests clambered in, but no one fit. Finally, it was Osiris’s turn. As he lay down, everyone could see it was a perfect fit— another win for the god who could do no wrong. Then Set slammed the lid down with Osiris still inside, sealed it shut, and tossed it into the Nile. The chest was a coffin. Set had constructed it specifically to trap his brother and planned the party to lure him into it. Set had long been jealous of his brother’s successful reign, and hoped to replace him as the ruler of all Egypt. The Nile bore the coffin out to sea and it drifted for many days before washing ashore near Byblos, where a great cedar grew around it. The essence of the god within gave the tree a divine aura, and when the king of Byblos noticed it, he ordered the tree cut down and brought to his palace. Unbeknownst to him, the coffin containing Egypt’s most powerful god was still inside. Set’s victory seemed complete, but he hadn’t counted on his sisters. Set’s wife Nephtys was also his sister, while their other sister, the goddess Isis, was married to their brother Osiris. Isis was determined to find Osiris, and enlisted Nephtys’s help behind Set’s back. The two sisters took the shape of falcons and travelled far and wide. Some children who had seen the coffin float by pointed them to the palace of Byblos. Isis adopted a new disguise and approached the palace. 
The queen was so charmed by the disguised goddess that she entrusted her with nursing the baby prince. Isis decided to make the child immortal by bathing him in flame. When the horrified queen came upon this scene, Isis revealed herself and demanded the tree. When she cut the coffin from the trunk and opened it, Osiris was dead inside. Weeping, she carried his body back to Egypt and hid it in a swamp, while she set off in search of a means of resurrecting him. But while she was gone, Set found the body and cut it into many pieces, scattering them throughout Egypt. Isis had lost Osiris for the second time, but she did not give up. She searched all over the land, traveling in a boat of papyrus. One by one, she tracked down the parts of her husband’s dismembered body in every province of Egypt, holding a funeral for each piece. At long last, she had recovered every piece but one— his penis, which a fish in the Nile had eaten. Working with what she had, Isis reconstructed and revived her husband. But without his penis, Osiris was incomplete. He could not remain among the living, could not return to his old position as ruler of Egypt. Instead, he would have to rule over Duat, the realm of the dead. Before he went, though, he and Isis conceived a son to bear Osiris’s legacy— and one day, avenge him.
Ted_Ed_Egyptian_History
The_Egyptian_myth_of_Isis_and_the_seven_scorpions_Alex_Gendler.txt
A woman in rags emerged from the swamp flanked by seven giant scorpions. Carrying a baby, she headed for the nearest village to beg for food. She approached a magnificent mansion, but the mistress of the house took one look at her grimy clothes and unusual companions and slammed the door in her face. So she continued down the road until she came to a cottage. The woman there took pity on the stranger and offered her what she could: a simple meal and a bed of straw. Her guest was no ordinary beggar. She was Isis, the most powerful goddess in Egypt. Isis was in hiding from her brother Set, who murdered her husband and wanted to murder her infant son, Horus. Set was also a powerful god, and he was looking for them. So to keep her cover, Isis had to be very discreet— she couldn’t risk using her powers. But she was not without aid. Serket, goddess of venomous creatures, had sent seven of her fiercest servants to guard Isis and her son. As Isis and Horus settled into their humble accommodation, the scorpions fumed at how the wealthy woman had offended their divine mistress. They all combined their venom and gave it to one of the seven, Tefen. In the dead of night, Tefen crept over to the mansion. As he crawled under the door, he saw the owner’s young son sleeping peacefully and gave him a mighty sting. Isis and her hostess were soon awakened by loud wailing. As they peered out of the doorway of the cottage, they saw a mother running through the street, weeping as she cradled her son. When Isis recognized the woman who had turned her away, she understood what her scorpions had done. Isis took the boy in her arms and began to recite a powerful spell: "O poison of Tefen, come out of him and fall upon the ground! Poison of Befen, advance not, penetrate no farther, come out of him, and fall upon the ground! For I am Isis, the great Enchantress, the Speaker of spells. Fall down, O poison of Mestet! Hasten not, poison of Mestetef! Rise not, poison of Petet and Thetet! 
Approach not, poison of Matet!" With each name she invoked, that scorpion’s poison was neutralized. The child stirred, and his mother wept with gratitude and lamented her earlier callousness, offering all her wealth to Isis in repentance. The woman who had taken Isis in watched in awe— she had had no idea who she’d brought under her roof. And from that day on, the people learned to make a poultice to treat scorpion bites, speaking magical incantations just as the goddess had.
Ted_Ed_Egyptian_History
미라를_만드는_법ㅣ렌_블로치_Len_Bloch.txt
Death and taxes are famously inevitable, but what about decomposition? As anyone who's seen a mummy knows, ancient Egyptians went to a lot of trouble to evade decomposition. So, how successful were they? Living cells constantly renew themselves. Specialized enzymes decompose old structures, and the raw materials are used to build new ones. But what happens when someone dies? Their dead cells are no longer able to renew themselves, but the enzymes keep breaking everything down. So anyone looking to preserve a body needed to get ahead of those enzymes before the tissues began to rot. Neurons die quickly, so brains were a lost cause to Ancient Egyptian mummifiers, which is why, according to Greek historian Herodotus, they started the process by hammering a spike into the skull, mashing up the brain, flushing it out the nose and pouring tree resins into the skull to prevent further decomposition. Brains may decay first, but decaying guts are much worse. The liver, stomach and intestines contain digestive enzymes and bacteria, which, upon death, start eating the corpse from the inside. So the priests removed the lungs and abdominal organs first. It was difficult to remove the lungs without damaging the heart, but because the heart was believed to be the seat of the soul, they treated it with special care. They placed the visceral organs in jars filled with a naturally occurring salt called natron. Like any salt, natron can prevent decay by killing bacteria and preventing the body's natural digestive enzymes from working. But natron isn't just any salt. It's mainly a mixture of two alkaline salts, soda ash and baking soda. Alkaline salts are especially deadly to bacteria. And they can turn fatty membranes into a hard, soapy substance, thereby maintaining the corpse's structure. After dealing with the internal organs, the priest stuffed the body cavity with sacks of more natron and washed it clean to disinfect the skin. 
Then, the corpse was set in a bed of still more natron for about 35 days to preserve its outer flesh. By the time of its removal, the alkaline salts had sucked the fluid from the body and formed hard brown clumps. The corpse wasn't putrid, but it didn't exactly smell good, either. So, priests poured tree resin over the body to seal it, massaged it with a waxy mixture that included cedar oil, and then wrapped it in linen. Finally, they placed the mummy in a series of nested coffins and sometimes even a stone sarcophagus. So how successful were the ancient Egyptians at evading decay? On one hand, mummies are definitely not intact human bodies. Their brains have been mashed up and flushed out, their organs have been removed and salted like salami, and about half of their remaining body mass has been drained away. Still, what remains is amazingly well-preserved. Even after thousands of years, scientists can perform autopsies on mummies to determine their causes of death, and possibly even isolate DNA samples. This has given us new information. For example, it seems that air pollution was a serious problem in ancient Egypt, probably because of indoor fires used to bake bread. Cardiovascular disease was also common, as was tuberculosis. So ancient Egyptians were somewhat successful at evading decay. Still, like death, taxes are inevitable. When some mummies were transported, they were taxed as salted fish.
Ted_Ed_Egyptian_History
How_did_they_build_the_Great_Pyramid_of_Giza_Soraya_Field_Fiorio.txt
As soon as Pharaoh Khufu ascended the throne circa 2575 BCE, work on his eternal resting place began. The structure’s architect, Hemiunu, determined he would need 20 years to finish the royal tomb. But what he could not predict was that this monument would remain the world’s tallest manmade structure for over 3,800 years. To construct the Great Pyramid, Hemiunu would need to dig a 6-and-a-half-kilometer canal, quarry enormous amounts of limestone and granite, and use kilometers of rope to pull stones into place. Today, there are still vigorous debates about the exact methods the Egyptians employed. But we do know that first Hemiunu needed a construction site. The Egyptians spoke of death as going west like the setting sun, and the Nile’s west bank had a plateau of bedrock that could support the pyramid better than shifting sand. In a brilliant timesaving move, masons carved the plateau itself to look like the stones used for the rest of the pyramid. With this level foundation in place, construction could begin. The project called for a staggering 25,000 workers, but fortunately, Hemiunu had an established labor supply. Egyptians were required to perform manual labor for the government throughout the year, and citizens from across the country came to contribute. Workers performed a wide range of tasks, from crafting tools and clothes to administrative work to back-breaking manual labor. But contrary to popular belief, these workers were not enslaved people. In fact, these citizens were housed and fed with rations better than the average Egyptian could afford. To complete the project in 20 years, one block of stone would need to be quarried, transported, and pushed into place every 3 minutes, 365 days a year. Workers averaged 10-hour days, hauling limestone from two different quarries. One was close to the site, but its fossil-lined yellow stone was deemed suitable only for the pyramid’s interior. 
Stones for the outside were hauled from roughly 13 kilometers away, using 9-meter long sleds made from giant cedar trunks. When mined from the ground, limestone is soft and splits easily into straight lines. But after air exposure it hardens, requiring wooden mallets and copper chisels to shape. The pyramid used over 2 million stones, each weighing up to 80 tons. And there was no room for error in how they were shaped. Even the smallest inaccuracy at the bottom of the pyramid could result in a catastrophic failure at the top. Researchers know where the materials used to build the pyramids came from and how they were transported, but the actual construction process remains mysterious. Most experts agree that limestone ramps were used to move the stones into place, but there are many theories on the number of ramps and their locations. And the pyramid’s exterior is just half the story. Since death could come for the pharaoh at any time, Hemiunu always needed an accessible burial chamber at the ready, so three separate burial chambers were built during construction. The last of these, known as the King’s Chamber, is a spacious granite room with a soaring ceiling, located at the heart of the pyramid. It lay on top of an 8.5-meter high passageway called the Grand Gallery, which may have been used as an ancient freight elevator to move granite up the pyramid’s interior. Granite, much stronger than limestone but extremely difficult to shape, was used for all the pyramid’s support beams; workers used dolerite rocks as hammers to slowly quarry the stone. To ensure the granite beams would be ready when he needed them, Hemiunu dispatched 500 workers in the project’s first year so that the material would be ready 12 years later. Five stories of granite sit atop the King’s Chamber, preventing the pyramid from collapsing in on itself. Once complete, the entire structure was encased with white limestone, polished with sand and stone until it gleamed.
Finally, a capstone was placed on top. Covered with electrum and glimmering like gold, this peak shined like a second sun over all of Egypt. This video was made possible with support from Marriott Hotels. With over 590 hotels and resorts across the globe, Marriott Hotels celebrates the curiosity that propels us to travel. Check out some of the exciting ways TED-Ed and Marriott are working together and book your next journey at Marriott Hotels.
Ted_Ed_Egyptian_History
The_Egyptian_Book_of_the_Dead_A_guidebook_for_the_underworld_Tejal_Gala.txt
Ani stands before a large golden scale where the jackal-headed god Anubis is weighing his heart against a pure ostrich feather. Ani was a real person, a scribe from the Egyptian city of Thebes who lived in the 13th century BCE. And depicted here is a scene from his Book of the Dead, a 78-foot papyrus scroll designed to help him attain immortality. Such funerary texts were originally written only for Pharaohs, but with time, the Egyptians came to believe regular people could also reach the afterlife if they succeeded in the passage. Ani's epic journey begins with his death. His body is mummified by a team of priests who remove every organ except the heart, the seat of emotion, memory, and intelligence. It's then stuffed with a salt called natron and wrapped in resin-soaked linen. In addition, the wrappings are woven with charms for protection and topped with a heart scarab amulet that will prove important later on. The goal of the two-month process is to preserve Ani's body as an ideal form with which his spirit can eventually reunite. But first, that spirit must pass through the duat, or underworld. This is a realm of vast caverns, lakes of fire, and magical gates, all guarded by fearsome beasts - snakes, crocodiles, and half-human monstrosities with names like "he who dances in blood." To make things worse, Apep, the serpent god of destruction, lurks in the shadows waiting to swallow Ani's soul. Fortunately, Ani is prepared with the magic contained within his book of the dead. Like other Egyptians who could afford it, Ani customized his scroll to include the particular spells, prayers, and codes he thought his spirit might need. Equipped with this arsenal, our hero traverses the obstacles, repels the monsters' attacks, and stealthily avoids Apep to reach the Hall of Ma'at, goddess of truth and justice. Here, Ani faces his final challenge. He is judged by 42 assessor gods who must be convinced that he has lived a righteous life. 
Ani approaches each one, addressing them by name, and declaring a sin he has not committed. Among these negative confessions, or declarations of innocence, he proclaims that he has not made anyone cry, is not an eavesdropper, and has not polluted the water. But did Ani really live such a perfect life? Not quite, but that's where the heart scarab amulet comes in. It's inscribed with the words, "Do not stand as a witness against me," precisely so Ani's heart doesn't betray him by recalling the time he listened to his neighbors fight or washed his feet in the Nile. Now, it's Ani's moment of truth, the weighing of the heart. If his heart is heavier than the feather, weighed down by Ani's wrongdoings, it'll be devoured by the monstrous Ammit, part crocodile, part leopard, part hippopotamus, and Ani will cease to exist forever. But Ani is in luck. His heart is judged pure. Ra, the sun god, takes him to Osiris, god of the underworld, who gives him final approval to enter the afterlife. In the endless and lush field of reeds, Ani meets his deceased parents. Here, there is no sadness, pain, or anger, but there is work to be done. Like everyone else, Ani must cultivate a plot of land, which he does with the help of a Shabti doll that had been placed in his tomb. Today, the Papyrus of Ani resides in the British Museum, where it has been since 1888. Only Ani, if anyone, knows what really happened after his death. But thanks to his Book of the Dead, we can imagine him happily tending his crops for all eternity.
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2019_Lecture_14_Transformers_and_SelfAttention.txt
Okay. So I'm delighted to introduce, um, our first lot of invited speakers. And so we're gonna have two invited speakers, um, today. So starting off, um, we're going to have Ashish Vaswani who's gonna be talking about self attention for generative models and in particular, um, he'll introduce some of the work on transformers that he is well-known for along with his colleagues. Um, and then as a sort of, um, special addition, we're also going to have Anna Huang talking about some applications of this work. There are actually at least a couple of people in the class who are actually interested in music applications. So this will be your one chance in the course to see music applications of deep learning. Okay, um, so I'll hand it over to Ashish. Thanks, Chris and, uh, thanks, Evie. Uh, Anna is actually here to make the class less dull. So [LAUGHTER] she's the highlight on this one. So, uh, hi everyone. Um, excited to be here. This is a very large class. Uh, first invited speaker, no pressure, so hopefully this will all go well. Uh, so yes, so the talk is going to be about, uh, self attention. Um, and the purpose is not going to be just to talk about a particular model, but, as empiricists (well, I'm an empiricist and I consume machine learning to apply it to various tasks), the starting point always is to ask this question, you know, what's the structure in my dataset, or what are the symmetries in my dataset, and is there a model that exists that has the inductive biases to model these properties that exist in my dataset. So hopefully, over the course of this, uh, lecture, Anna and I will convince you that, uh, self attention indeed has the modeling abilities and inductive biases that could potentially be useful for the problems that you care about.
Um, so, um, this talk is going to be about learning representations, primarily of, uh, variable length data; we have images too, but, uh, most of it is going to be variable length data. And, uh, all of us care about this problem because deep learning is all about representation learning, and building the right tools for learning representations is, sort of, an important factor in achieving empirical success. Um, now, uh, the models of choice, the primary workhorse perhaps even now, or up to this point, had been recurrent neural networks. Um, how many people here are familiar with RNNs? [LAUGHTER] Okay. So definitely up to this point, the primary workhorse has been recurrent neural networks, and some gated variants that explicitly add multiplicative interactions like LSTMs, which also have mechanisms that allow for better gradient transfer. And some recent variants like gated, uh, recurrent units that are a simplification; they dominate this recurrent landscape. Um, and typically how did recurrent neural networks, uh, learn or, um, produce representations? They consume a string or a sentence, um, or even an image, sequentially, and, uh, at each position, at each timestep, they produce a continuous representation that's a summarization of everything that they've actually crunched through. Um, now, in the realm of large data, uh, having parallel models is quite beneficial. In fact, I was actually reading Oliver Selfridge. Uh, he was a professor at MIT and, uh, he wrote the precursor to deep nets, it's called Pandemonium. I would recommend everybody to read it.
And he has this fascinating note that, you know, if you give me more parallel computation, I'll just add more data and make it slower, so you can consume more data. Um, and recurrence, uh, recurrence sort of just by construction, um, limits parallelization, because you have to wait for a particular time point to produce a representation. Um, but if there's any questions, please raise your hands, I'll hopefully look around and, uh, be able to attend to your question. Um, and again, because we're actually producing these representations, we're sort of summarizing; you know, if you want to pass information, if you want to pass co-reference information, then we kind of have to shove all of this inside this fixed size vector, so it could potentially be difficult to model. And, uh, while they have been successful in language, the architecture doesn't have a very clear, explicit way to model hierarchy, which is something that's very important in language. Um, now, there has been excellent work, a precursor to self attention, that actually surmounted some of these difficulties, and that's basically convolutional sequence models, where you have these limited receptive field convolutions that consume the sentence not sequentially but in depth. And they produce representations of your variable length sequences. Um, and, uh, they're trivial to parallelize because you can apply these convolutions simultaneously at every position. Each layer is trivial to parallelize; the serial dependencies are only in the number of layers. Um, you can get these local dependencies efficiently because a single application of a convolution can consume all the information inside its local receptive field.
Um, now if you want these really long distance interactions, while you don't have to pass through a linear number of steps, because these receptive fields are local you might still need something like linear in depth, or logarithmic if you're doing something like dilated convolutions. So the number of layers that are needed is still a function of the length of your string. Uh, but they're a great development and they actually pushed a lot of research; WaveNet, for example, is a classic success story of convolutional sequence models, and ByteNet as well. Um, now, so far attention has been like one of the most important components, the sort of content-based, you know, memory retrieval mechanism. And it's content-based because you have your decoder that attends to all this content, that's your encoder, and then just sort of decides what information to absorb based on how similar this content is to every position in the memory. So this has been a very critical mechanism in, uh, in neural machine translation. So now the question that we asked was, like, why not just use attention for representations? And, uh, here's what a rough framework of this representation mechanism would look like, just sort of repeating what attention is essentially. Now imagine you want to re-represent the word "representing", you want to construct its new representation. And first, uh, you compare your content, and in the beginning it could just be a word embedding. You compare your content with all your words, with all the embeddings, and based on these compatibilities or comparisons, you produce, uh, a weighted combination of your entire neighborhood, and based on that weighted combination you summarize all that information.
So it's like you're re-expressing yourself in terms of a weighted combination of your entire neighborhood. That's what attention does, and you can add feed-forward layers to basically sort of compute new features for you. Um, now, the first part is going to be about how some of the properties of self attention actually help us in text generation, like, what inductive biases are actually useful, and we empirically showed that indeed they move the needle in text generation. And this is going to be about machine translation, but there was other work also that we'll talk about later. So [NOISE] now with this attention mechanism we get a constant path length: any position can interact with every position simultaneously. Um, hopefully if the number of positions is not too many. Uh, attention just by virtue of, like, its construction, you have a softmax, you have these gating and multiplicative interactions. And again, I'm not gonna be able to explain why, but it's interesting: you've seen these models, like even PixelCNN, uh, when it was actually modeling images, they explicitly had to add these multiplicative interactions inside the model to basically beat RNNs, and attention just by construction gets this, because you're multiplying the attention probabilities with your activations. It's trivial to parallelize, why? Because you can just do attention with matmuls, especially the variant that we use in our paper, uh, in our work. And, uh, so now the question is: convolutional sequence models have been very successful in generative tasks for text. Can we actually achieve the same with, uh, attention as our primary workhorse for representation learning?
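This weighted-combination view can be written out in a few lines. The toy embeddings, sizes, and names below are my own illustration, not code from the lecture:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a vector of scores
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
E = rng.normal(size=(4, 8))    # stand-in embeddings: 4-word sentence, dimension 8

# Re-represent word 2: compare its content with every position, turn the
# scores into a convex combination, and summarize the whole neighborhood.
scores = E @ E[2]              # compatibility of word 2 with every position
weights = softmax(scores)      # mixing proportions, sum to 1
new_repr = weights @ E         # word 2 re-expressed via its neighborhood
```

A feed-forward layer applied to `new_repr` would then compute new features on top of this summary, as described above.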
Um, so just to sort of add some context: up to the transformer there had been a lot of great work on using self attention, primarily for classification. There was work on self attention within the confines of, like, recurrent neural networks. Um, perhaps the closest to us is the memory networks, uh, by Weston and Sukhbaatar, where they actually had a version of recurrent attention, but empirically they didn't show it to work on conditional modeling, like, uh, translation, and their mechanism was using a fixed query at every step. So it leaves something to be desired. There was still this question: is it actually going to work, um, on large scale machine translation systems or large-scale text generation systems? So this is sort of the culmination of our self attention work, and we put it together in the transformer model. And, uh, so what does this look like? We're going to use attention primarily for computing representations of your input. Imagine you're doing English to German translation. So you have your words, and notice that, uh, attention is permutation invariant: you just change the order of your positions, you change the order of your words, and, uh, it's not going to affect the actual output. So in order to maintain order we add position representations. And, uh, there's two kinds that we tried in the paper: these fantastic sinusoids that Noam Shazeer invented, and we also used learned representations, which are very plain vanilla; both of them work equally well. Um, and, uh, so first, the encoder looks as follows, right?
So we have a self attention layer that just recomputes the representation, uh, for every position simultaneously using attention, then we have a feed-forward layer. And we also have residual, residual connections and I'll, I'll sort of give you a glimpse of what these residual connections might be bringing that is between every, every layer, and the input we have a skip connection that just adds the activations. Uh, and then this tuple of, uh, self attention and feed-forward layer just essentially repeats. Now, on the decoder side, uh, we've- we, we have a sort of standard encoder decoder architecture. On the decoder side, we mimic a language model using self attention, and the way to mimic a language model using self attention is to impose causality by just masking out the positions that you can look at. So basically, uh, the first position it's- it can't look forward, it's illegal to look forward. It can look at itself because we actually shift the input. Um, so it's not copying, uh. It's kind of surprising that parti- with these models, it's very easy to copy at one point, when early on it was even harder to ge- you know, do copying with recurrent models. But now, at least, you can copy really well, which is a positive sign, I think overall. Um, but, uh, so now on the decoder side, uh, we have, uh, we have this causal self attention layer followed by encoder-decoder attention, where we actually attend to the, uh, last layer of the encoder and a feed-forward layer, and this tripled, repeats a mul- a few times, and at the end we have the standard cross-entropy loss. Um, and, um, so, um, sort of, staring at the- at, at our parti- at the particular variant of the self- of the attention mechanis- mechanism that we use, we went for both- we went for simplicity and speed. So, um, so how do you actually compute attention? So imagine you want to re-represent the position e2. 
And, uh, we're going to first linearly transform it into, uh, a query, and then we're gonna linearly transform every position in your neighborhood, or let's say every position at the input because this is the encoder side, into, uh, a key. And these linear transformations can actually be thought of as features, and I'll talk more about it later on. So it's basically a bilinear form: you're projecting these vectors into a space where just a dot product is a good proxy for similarity. Okay? So now you have your logits, so you just do a softmax to compute a convex combination. And now based on this convex combination, you're going to then re-express e2 in terms of this convex combination of the vectors of all these positions. And before doing the convex combination, we again do a linear transformation to produce values. And then we do a second linear transformation just to mix this information and pass it through a feedforward layer. Um, and all of this can be expressed basically in two matrix multiplications, and the square root factor is just to make sure that these dot products don't blow up. It's just a scaling factor. And, uh, why is this mechanism attractive? Well, it's just really fast: you can do this very quickly on a GPU, simultaneously for all positions, with just two matmuls and a softmax. Um, on the decoder side it's exactly the same, except we impose causality by just adding minus 1e9 to the logits. So you just get zero probabilities on those positions. We impose causality by adding these, uh, highly negative values to the attention logits. Um, is, is everything- [LAUGHTER] I thought that was a question. So, um, [LAUGHTER] okay so attention is really, uh, attention is cheap.
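The variant just described (queries, keys, values via linear transformations, the square-root scaling, and the large-negative-logit causal mask) fits in a short sketch. Matrix names and toy sizes are my own assumptions:

```python
import numpy as np

def attention(X, Wq, Wk, Wv, causal=False):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # linear transforms: queries, keys, values
    logits = Q @ K.T / np.sqrt(K.shape[-1])   # scale so dot products don't blow up
    if causal:
        n = len(logits)
        # add a huge negative number above the diagonal: illegal to look forward
        logits = logits + np.triu(np.ones((n, n)), k=1) * -1e9
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)     # row-wise softmax over positions
    return w @ V, w                           # convex combination of the values

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 16))                  # 5 positions, dimension 16
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
out, w = attention(X, Wq, Wk, Wv, causal=True)
```

The masked logits come out of the softmax as zero probabilities, so each position only mixes values from itself and earlier positions; the whole layer really is just two matmuls plus a softmax.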
So because this variant of attention just involves two matrix multiplications, it's quadratic in the length of your sequence. And now what's the computational profile of RNNs or convolutions? They're quadratic in the dimension, because basically you can just think of a convolution as flattening your input and applying a linear transformation on top of it, right? So when does this actually become very attractive? This becomes very attractive when your dimension is, uh, much larger than your length, which is the case for machine translation. Now, we will talk about cases when this is not true, and we have to make other model developments. Um, but for short sequences, or sequences where your dimension dominates length, attention has a very favorable computation profile. And as you can see, it's about four times faster than an RNN, and faster than a convolutional model where you have a kernel, like a filter width of, uh, three. So there's still one problem. In language, typically, we want to know, like, who did what to whom, right? So now, imagine you applied a convolutional filter. Because you actually have different linear transformations based on relative distances, one linear transformation can learn this concept of who and pick out different information from the embedding of the word I. And the red linear transformation can pick out different information from kicked, and the blue linear transformation can pick out different information from ball. Now, when you have a single attention layer, this is difficult, because it's just a convex combination where you have the same linear transformation everywhere.
All that's available to you is just a- is just mixing proportions. So you can't pick out different pieces of information from different places. Well, what if we had one attention layer for who? So you can think of an attention layer as something like a feature detector almost, like, because a particular- it, it might try to- it might- because it carries with it a linear transformation, so it's projecting them in a space that- which starts caring maybe about syntax, or it's projecting in this space which starts caring about who or what. Uh, then we can have another attention layer for or attention head for what, did what, and other- another attention head for, for, for whom- to whom. And all of this can actually be done in parallel, and that's actually- and that's exactly what we do. And for efficiency, instead of actually having these dimensions operating in a large space, we just- we just reduce the dimensionality of all these heads and we operate these attention layers in parallel, sort of bridging the gap. Now, here's a, uh, perhaps, well, here's a little quiz. I mean, can you actually- is there a combination of heads or is there a configuration in which you can, actually, exactly simulate a convolution probably with more parameters? I think there should be a simple way to show that if you had mo- more heads or heads are a function of positions, you could probably just simulate a convolution, but- although with a lot of parameters. Uh, so it can- in, in, in the limit, it can actually simulate a convolution. Uh, and it also- we can al- we can continue to enjoy the benefits of parallelism, but we did increase the number of softmaxes because each head then carries with it a softmax. But the amount of FLOPS didn't change because we- instead of actually having these heads operating in very large dimensions, they're operating in very small dimensions. 
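The heads-in-parallel idea can be sketched by splitting the model dimension into small per-head subspaces, attending in each, then concatenating and mixing with one final linear transformation. The single-projection-matrix layout below is a simplification I'm assuming for illustration:

```python
import numpy as np

def softmax_rows(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head(X, Wq, Wk, Wv, Wo, n_heads):
    n, d = X.shape
    dh = d // n_heads                          # each head lives in a small subspace
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    outs = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)        # this head's slice of dimensions
        w = softmax_rows(Q[:, s] @ K[:, s].T / np.sqrt(dh))
        outs.append(w @ V[:, s])               # each head mixes its own subspace
    return np.concatenate(outs, axis=-1) @ Wo  # concatenate heads, then mix

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 32))                   # 6 positions, dimension 32
Wq, Wk, Wv, Wo = (rng.normal(size=(32, 32)) for _ in range(4))
Y = multi_head(X, Wq, Wk, Wv, Wo, n_heads=4)
```

Note how the total work roughly matches a single full-width attention layer: there are more softmaxes, but each head's matmuls run over d/h dimensions instead of d.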
Um, so, uh, when we applied this on machine translation, um, we were able to dramatically outperform, uh, previous results on English-German and English-French translation. So we had a pretty standard setup: 32,000-word vocabularies, WordPiece encodings; WMT 2014, uh, was our test set, 2013 was the dev set. And, uh, some of these results were much stronger than even our previous ensemble models. And on English-French also, we had some very favorable results, and we achieved state of the art. Now, stepping back a bit, uh, I'm not claiming that we arrived at an architecture that has better expressivity than an LSTM. I mean, there are theorems that say that LSTMs can model any function. Um, perhaps all we did was just build an architecture that was good for SGD, because stochastic gradient descent could just train this architecture really well, because the gradient dynamics of attention are very simple; attention is just a linear combination. And, uh, I think that's actually favorable. But, well, I'd also like to point out that, you know, we do explicitly model all pairwise connections, and that has the advantage of modeling very clear relationships directly between any two words. Um, and, like, hopefully we'll be able to also show that there are other inductive biases; that it's not just building more architectures that are good for SGD. So frameworks: a lot of our work was initially pushed out in tensor2tensor. Maybe that might change in the future with the arrival of JAX. There's also a framework from Amazon called Sockeye.
There's also Fairseq, uh, the se- the convolutional sequence-to-sequence toolkit from Facebook that the, they prob- I'm actually not sure if it has a transformer implementation, but they have some really good sequence-to-sequence models as well. Um, okay. So the importance of residuals. So, uh, we have these resil- residual connections, uh, between, um, so we have these residual connections that go from here to- here to here, here to here, like between every pair of layers, and it's interesting. So we, um, we- so what we do is we just add the position informations at the input to the model. And, uh, we don't infuse- we don't infuse or we don't inject position information at every layer. So when, uh, we severed these residual connections and we loo- stared at these, uh, stared at these attention distributions, this is the center or, sort of, the middle map is this attention distribution. You actually- basically, it- it's been unable to pick this diagonal. It should have a very strong diagonal focus. And so what has happened was these residuals were carrying this position information to every layer. And because these subsequent layers had no notion of position, they were fi- finding it hard to actually attend. This is the encoder-decoder attention which typically ends up being diagonal. Now, so then we, uh, we said okay. So then we actually continued with- continued to sever the residuals, but we added position information back in at every layer. We injected position information back in. And we didn't recover the accuracy, but we did get some of this, sort of, diagonal focus back in. So the residuals are doing more, but they're certainly, definitely moving this position information to the model there. They're pumping this position information through the model. Um, okay. 
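For reference, the sinusoidal position representations that get added at the input (the ones the residual connections then carry up through the layers) follow the formula from the "Attention Is All You Need" paper; the helper name and toy sizes here are my own:

```python
import numpy as np

def sinusoid_positions(n_pos, d_model):
    pos = np.arange(n_pos)[:, None]                   # (n, 1) position indices
    i = np.arange(d_model // 2)[None, :]              # (1, d/2) dimension pairs
    angle = pos / np.power(10000.0, 2.0 * i / d_model)
    pe = np.empty((n_pos, d_model))
    pe[:, 0::2] = np.sin(angle)                       # even dimensions: sine
    pe[:, 1::2] = np.cos(angle)                       # odd dimensions: cosine
    return pe

pe = sinusoid_positions(50, 16)
# These are added to the word embeddings at the input only; position
# information is not injected again at every layer.
```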
So now we saw that, you know, being able to, sort of, model both long- and short-distance relationships with attention is beneficial for text generation. Um, what kind of inductive biases, or what kind of phenomena, appear in images? Something that we constantly see in images and music is this notion of repeating structure that's very similar to itself. You have these motifs that repeat at different scales. So, for example, here's a beautiful example of self-similarity where you have this Van Gogh painting where this texture, or these little objects, just repeat. These different pieces of the image are very similar to each other, but they might have different scales. Uh, again in music, here's a motif that repeats, uh, that could have, like, various spans of time in between. So, um, we attempted after this to ask this question: can self-attention help us in modeling other objects like images? The path we took was, sort of, standard auto-regressive or probabilistic image modeling, not GANs. Because, well, one, it was very easy: we had a language model almost, so this is just like language modeling on images. Uh, and also training at maximum likelihood allows you to, sort of, measure how well you're doing on, uh, your held-out set. Uh, and it also gives you diversity, so you hopefully are covering all possible, uh, different kinds of images. And up to this point, there had been good work on using recurrent models like PixelRNN and PixelCNN that were actually getting some very good compression rates.
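Those compression rates are usually reported as bits per dimension: the average negative log2-likelihood per pixel under the autoregressive chain-rule factorization p(image) = prod_i p(x_i | x_<i). A toy sanity check with an assumed uniform stand-in model, just to show the bookkeeping:

```python
import numpy as np

def bits_per_dim(pixel_log_probs):
    # natural-log p(x_i | x_<i) for each pixel, averaged and converted to bits
    return -np.mean(pixel_log_probs) / np.log(2.0)

# A uniform model over 256 intensities assigns log(1/256) to every pixel of a
# flattened 32x32x3 image, which works out to 8 bits/dim; real models do far better.
logp = np.full(32 * 32 * 3, np.log(1.0 / 256.0))
bpd = bits_per_dim(logp)
```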
And, um, again here, originally the argument was that, well, you know, in images, because you want symmetry, because if you have a face, you want one ear to sort of match with the other, if you had a large receptive field, which you could potentially get with attention at a lower computational cost, then it should be quite beneficial for images, and you wouldn't need many layers like you do in convolutions to actually get dependencies between these far away pixels. So it seemed like self-attention was already a good computational mechanism, right? But it was actually interesting to see how it even naturally modeled self-similarity, and people have used self-similarity in image generation; like, you know, uh, there's this really cool work by Efros where they actually see, okay, in the training set, what are those patches that are really similar to me? And based on the patches that are really similar to me, I'm going to fill in the information. So it's actually doing image generation. Uh, there is this really classic work called non-local means where they do image denoising, where they want to denoise this patch P. And they say, I'm going to compute some function of content-based similarity between all other patches in my image, and based on the similarity I'm going to pull information, exploiting this fact that images are very self-similar. And, uh, this has also been sort of, uh, applied in some recent work. Now if you just took this encoder self-attention mechanism and just replaced these word embeddings with patches, that's kind of exactly what it's doing.
It's computing this notion of content-based similarity between these elements, and then based on this content-based similarity, it constructs a convex combination that essentially brings these things together. So it was very pleasant to see that, oh, this is a differentiable way of doing non-local means. And, uh, we took the transformer architecture and replaced words with pixels. Uh, there were some architecture adjustments to do, but this was basically very similar to the original work, and here the position representations, instead of being, you know, one-dimensional, because we are not dealing with sequences, are two-dimensional position representations. Um, okay. So I pointed out before, attention has a very favorable computational profile if your dimension dominates length, which is absolutely untrue for images. Uh, because even for 32 by 32 images, when you flatten them, you get 3,072 positions, uh, for your standard CIFAR image. Um, so, simple solution: uh, convolutions basically look at local windows, and you get translational equivariance. We said, "Okay. Let's adopt the same strategy." And also there's a lot of spatial locality in images. Uh, but now, we will still have a better computational profile: if your receptive field is still smaller than your dimension, you can actually still do much more long distance computation than a standard convolution, because you're quadratic in length. So as long as we didn't increase our length beyond the dimension, we still had a favorable computational profile. And so the way we did it was, uh, we essentially had, uh, two kinds of rasterizations.
So we had a one-dimensional rasterization where you had a sort of single query block, uh, which was then attending into a larger memory block, uh, in this rasterized fashion along the rows. Um, then we tried another form of rasterization, following standard two-dimensional locality, where we actually produced the image in, uh, blocks, and within each block we had a rasterization scheme. Um, again, the image transformer layer was very similar. We had two-dimensional position representations along with a very similar attention mechanism. Um, and we tried both super-resolution and unconditional and conditional image generation. Uh, this is Niki Parmar, I, and a few other authors from Brain, um, and we presented it at ICML. And, uh, we were able to achieve better perplexity than existing models. So PixelSNAIL is actually another model that mixed both convolutions and self-attention, and they outperformed us on bits per dimension. We were measuring perplexity because these are probabilistic models. It's basically a language model of images, and the factorization of your language model just depends on how you rasterize. In the one-D rasterization, we went first rows and then columns. In the two-D rasterization, we went blockwise and inside each block we rasterized. On ImageNet, we achieved better perplexities. And, uh, so yeah, I mean, are we at GAN level? I think probabilistic auto-regressive image generation, uh, by this point had not reached GANs. At ICLR 2019, there's a paper by Nal that actually uses self-attention and gets very, very good quality images. But what we observed was, we were getting structured objects fairly well. Like, can people recognize what the second row is? Cars.
[OVERLAPPING] I heard- I said- most- almost everyone said cars. I'm not going to ask who said something else, but yes, they're cars. yeah. And, uh, so the- and the last row is another vehicles like, uh, so essentially when structured jo- structured objects were easy to capture. Um, like frogs and sort of, you know, objects that were camouflaged just turned into this mush. Um, and- but on super resolution, now super-resolution is interesting because there's a lot of conditioning information, right? And, uh, when you have a lot of conditioning information, the, the sort of possible- you break- you, you actually lock quite a few of the modes. So there's only a few options you can have at the output. And super- our super resolution results are much better. We were able to get better facial orientation and structure than previous work. And these are samples at different temperatures and, uh, and, uh, and we wou- when we quantify this with actual human evaluators, we- like we flash an image and said, is this real, is this false? And we were able to, uh, we were able to fool humans like four times better than previous results in super resolution. Again, these are not- these results like I, I guess the, the latest GAN result from Nvidia makes us look like a joke. But, I mean this is, I mean, we're starting later than GAN. So hopefully we'll catch up. But, but the point here is that this is an interesting inductive bias for images, so very natural inductive bias for images. Um, and, uh, and, and there is hope to apply it- for applying in classification and other such tasks also. Um, so one interesting thing, just to sort of both out of curiosity and asking how good is maximum or like does maximum likelihood. Well, one, does the model actually capture some interesting structure in the role? Second, do you get diversity? Well, maximum likelihood should get diversity, by, by virtue, by virtue of what it does. Uh, so then we just- we did image completion. 
And why image completion? Because as soon as you lock down half the image to the ground truth, you're actually shaving off a lot of the possible modes, so you have a much easier time sampling. So the first column is what we supply to the model, and the rightmost column is the gold image; we were able to generate different samples in between. But what was really interesting is the third row, this horse. There's this sort of glimpse or a suggestion of a pull, but the model hallucinated a human in some of these images, which is interesting: the data teaches it to capture at least some structure about the world. The dog is just cute, and I guess it also shows that there was this entire object, this chair, that the model just completely refused to imagine. So there's a lot of difficulty. And I guess Anna is going to talk about another way to exploit self-similarity. Thank you. [APPLAUSE] So thank you, Ashish, for the introduction. So there's a lot of self-similarity in images. There's also a lot of self-similarity in music, so we can imagine the transformer being a good model for it. We're going to show how we can add more to the self-attention, to think more about relational information and how that could help music generation. [NOISE] So first, I want to clarify what is the raw representation that we're working with right now. Analogous to language, you can think about there being text, and somebody reading out that text adds their own intonations to it, and then you have sound waves coming out of that speech. For music, there's a very similar line of generation: the composer has an idea, writes down the score, then a performer performs it, and then you get sound.
So what we're going to focus on today is, you can think of it as the score, but it's actually a performance, in that it's a symbolic representation where MIDI pianos were used and professional and amateur musicians were performing on the pianos. So we have the recorded information of their playing. In particular, modeling music as a sequential process, at each time step what is being output is: okay, turn this note on, advance the clock by this much, and then turn this note off. And there is also dynamics information: when you turn the note on, you first say how loud it's going to be. Traditionally, modeling music as a kind of language, we've been using recurrent neural networks. And, as Ashish introduced, there is a lot of compression that needs to happen: a long sequence has to be embedded into a fixed-length vector, and that becomes hard when, in music, you have repetition coming at a distance. So I'm first going to show you samples from the RNNs, from a transformer, and then from a music transformer that has the relative attention, and let you hear the differences; then I'll go into what modifications we needed to make on top of the transformer model. So here, this task is kind of like the image completion task: we give it an initial motif and then we ask the model to do continuations. So this is the motif that we fed. [MUSIC] How many people recognize that? Awesome. Okay. [LAUGHTER] Yeah, so this is a fragment from a Chopin Etude. And we're going to ask the RNN to do a continuation. [NOISE] [MUSIC] So here, in the beginning, it was trying to repeat it, but very quickly it wandered off into its own different ideas.
So that's one challenge, because it's not able to directly look back to what happened in the past; it can just look at a blurry version, and that blurry version becomes more and more blurry. So this is what the transformer does. One detail is that these models are trained on half the length that you're hearing, so we're asking the model to generalize beyond the length it's trained on. And you can see for this transformer, it deteriorates beyond that, but it can hold the motif pretty consistent. [MUSIC] Okay. You get the idea. [LAUGHTER] So initially, it was able to do this repetition really well; it was able to copy it very well. But beyond the length it was trained on, it kind of didn't know how to cope with longer contexts. And the last one you'll hear is from the music transformer, which adds that relational information. You can just see visually how it's very consistent, repeating these larger arcs. [MUSIC] Yeah. So that was the music transformer. And so in music, the self-similarity that we talked about: we see the motif here, and there we primed the model with a motif, but this is actually an unconditioned sample from the model. So there was no priming; the model had to create its own motif and then do continuations from there. And here, if we look at it and analyze it a bit, you see a lot of repetition, with gaps in between. And if you look at the self-attention structure, we actually do see the model looking at the relevant parts, even if they were not immediately preceding. So here, what I shaded out is where the motif occurs. And you can see the different colors; there are different attention heads, and they're focusing among those grayed-out sections.
[NOISE] So I'll play the sample, and we also have a visualization that shows you, as the music is being played, what notes it was attending to as it was predicting each note. This was generated from scratch. The self-attention is from note to note, or event to event, so it's quite low level. When you look at it, it's a little bit overwhelming; it has multiple heads and a lot of things moving. But there are these structural moments where you see cleaner sections that it's attending to. [MUSIC] Okay. So, how did we do that? Starting from the regular attention mechanism, we know it's a weighted average of the past history. And the nice thing is, however far back something is, we have direct access to it. So if we know there are motifs that occurred early on in the piece, we're still able, based on the fact that things are similar, to retrieve them. But all of the past also becomes kind of a bag of words; there is no structure of which came before or after. So there are the positional sinusoids that Ashish talked about: basically, each position indexes into sinusoids that are moving at different speeds, and so close-by positions have a very similar cross-section into those multiple sinusoids. In contrast, for convolutions, you have this fixed filter that's moving around that captures the relative distance, like one before, two before. And these are, in some ways, a rigid structure that allows you to bring in the distance information very explicitly. You can imagine relative attention, with the multiple heads at play, to be some combination of these.
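The positional sinusoids just mentioned can be sketched in a few lines; the base of 10000 is the commonly used choice and an assumption here:

```python
import math

# Sketch of sinusoidal position representations: each position indexes into
# sinusoids moving at different speeds, so nearby positions get similar
# vectors ("a similar cross-section into those multiple sinusoids").

def positional_encoding(position, d_model):
    pe = []
    for i in range(0, d_model, 2):
        freq = 1.0 / (10000 ** (i / d_model))  # this sinusoid's speed
        pe.append(math.sin(position * freq))
        pe.append(math.cos(position * freq))
    return pe[:d_model]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

p0, p1, p50 = (positional_encoding(p, 16) for p in (0, 1, 50))
# close-by positions cut a more similar cross-section through the sinusoids
assert dot(p0, p1) > dot(p0, p50)
```

The assertion at the end checks the property the lecture describes: position 1 looks more like position 0 than position 50 does.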
So, on one hand, you can access the history very directly. On the other hand, you also know how you relate to this history, capturing, for example, translational invariance. We think one of the reasons why, in the beginning, the primed samples you heard from the music transformer were able to generate beyond the length it was trained on in a very coherent way, is that it's able to rely on this translational invariance to carry the relational information forward. So if we take a closer look at how this works: in the regular transformer, you compare all the queries and keys, so you get this square matrix. You can think of it as a self-similarity matrix. What relative attention does is add an additional term that thinks about, whenever you're comparing two things, how far apart are you? And also, based on the content: do I care about things that are two steps away or three steps away, or maybe about things that recur at a periodic distance? That information then influences the similarity between positions. In particular, this extra term is based on the distance: you gather the embeddings that are relevant to the query-key distances and add them to the logits. In translation, this has shown a lot of improvement, for example in English-to-German translation. But in translation, the sequences are usually quite short; it's only sentence-to-sentence, maybe 50 or 100 words. But the music samples that you've heard are in the range of 2,000 time-steps. So it's like 2,000 tokens need to be able to fit in memory.
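A minimal sketch of the relative-attention logit just described, with the extra term based on a clipped query-key distance. Function and variable names are illustrative, and this is the straightforward formulation, which looks up a distance embedding for every query-key pair:

```python
import math

# Single-head relative attention logits, pure Python, written for clarity:
#   logit[i][j] = ( q_i . k_j  +  q_i . E[clip(j - i)] ) / sqrt(d)
# rel_embeddings has one row per clipped distance, from -max_dist to +max_dist.

def attention_logits(queries, keys, rel_embeddings, max_dist):
    n, d = len(queries), len(queries[0])
    logits = []
    for i in range(n):
        row = []
        for j in range(n):
            dist = max(-max_dist, min(max_dist, j - i))  # clipped relative distance
            e = rel_embeddings[dist + max_dist]          # embedding for that distance
            content = sum(queries[i][k] * keys[j][k] for k in range(d))
            relative = sum(queries[i][k] * e[k] for k in range(d))
            row.append((content + relative) / math.sqrt(d))
        logits.append(row)
    return logits
```

Because the extra term depends only on j - i, shifting both positions by the same amount leaves it unchanged, which is the translational property the talk keeps coming back to.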
So this was a problem, uh, because the original formulation relied on building this 3D tensor that's, uh, that's very large in memory. Um, and and why this is the case? It's because for every pair, uh, you look up what the, what the re- so you can compute what the relative distance is, and then you look up an embedding that corresponds to that distance. So, um, for like this there's a length by length, like L by L, uh, matrix. You need like, uh, to collect embeddings for each of the positions and that's, uh, depth D. So that gives us the 3D. What we realized is, you can actually just directly multiply the queries and the embedding distances. [NOISE] And they, uh, come out kind of in a different order, because now you have the queries ordered by a relative distance, but you need the queries ordered by keys, uh, which is kind of a absolute by absolute, uh, configuration. So what we could do is just, uh, do a series of skewing, uh, to to put it into the right, uh, configuration. And this is, uh, yeah. Just a, just a quick contrast to, to show, um, the difference in memory requirements. So, er, a lot of the times the challenge is in, uh, being able to scale, uh, you know, being able to be more memory efficient so that [NOISE] you can model longer sequences. So with that, uh, this is, um, I can play you one more example if we have time. But if we don't have time, we can, go ahead. We'll see more of that. Okay. [LAUGHTER] So this is, this is, uh, maybe a one, uh, about a one-minute sample and I- I hope you like it. Thanks. [MUSIC] Thank you for listening. [APPLAUSE]. [LAUGHTER] Thanks, Anna. Um, um, great. Um, so to sort to, um, so relative attention has been a powerful mechanism for, um, a very powerful mechanism for music. It's also helped in machine translation. 
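The memory saving described above can be illustrated with a toy re-indexing. The actual paper does the skew with padding and reshapes on tensors; this plain-Python version only shows the correspondence between the (query, distance) and (query, key) layouts:

```python
# Instead of materializing an L x L x D tensor of per-pair distance
# embeddings, multiply the queries (L x D) by the distance-embedding matrix
# ((2L-1) x D) once, getting an L x (2L-1) matrix indexed by
# (query, relative distance), then "skew" it into (query, key) order.

def rel_logits(queries, rel_emb):
    """queries: L x D; rel_emb: (2L-1) x D, row r = embedding for distance r-(L-1)."""
    L, D = len(queries), len(queries[0])
    # one matmul: q_by_dist[i][r] = q_i . rel_emb[r]
    q_by_dist = [[sum(q[k] * rel_emb[r][k] for k in range(D))
                  for r in range(2 * L - 1)] for q in queries]
    # skew: entry (i, j) corresponds to distance j - i
    return [[q_by_dist[i][j - i + L - 1] for j in range(L)] for i in range(L)]
```

The intermediate is L x (2L-1) numbers rather than L x L x D embedding entries, which is what lets sequences of a couple of thousand tokens fit in memory.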
One really interesting consequence of relative attention in images is the following. Convolutions achieve translational equivariance: if you have, let's say, this red dot, this feature that you're computing at the red dot, it doesn't depend on where the image of the dog is within the larger image. It just doesn't depend on its absolute location; it's going to produce the same activation. So convolutions have this nice translational equivariance. Now, with relative positions or relative attention, you get exactly the same effect: once you remove this notion of absolute position that you are injecting into the model, your attention computation depends only on relative distances. Niki and I and a couple of others, including Anna, have actually been working on this for images, and it seems to show better results. This can achieve translational equivariance, which is a great property for images. So it seems like this might be an interesting direction to pursue if you want to push self-attention in images for self-supervised learning. I guess on self-supervised learning: the generative modeling work that I talked about before, just having probabilistic models of images in itself, well, I guess the best model of an image is that I go to Google search, pick up an image, and just give it to you. But generative models of images are useful because you may want to do something like self-supervised learning, where you pre-train a model on a lot of unlabeled data and then transfer it.
So hopefully, this is gonna help and this is gonna be a part of that machinery. Um, another interesting, uh, another indus-interesting structure that relative attention allows you to model, is, uh, is, is kind of a graph. So imagine you have this, uh, you have this similarity graph where these red edges are, are this notion of companies, and the blue edge is a notion of a fruit, uh, and um, an apple takes these two forms. And, uh, and you could just imagine relative attention just modeling this- just being able to model, or being able to- you, you, yourself being able to impose these different notions of similarity uh, between, uh, between, uh, different elements. Uh, so if you have like, if you have graph problems, um, then relative self-attention might be a good fit for you. Um, there's also, there's also a simi- quite a position paper by Battaglia et al from Deep Mind that talks about relative attention and how it can be used, um, within graphs. So while we're on graphs, I just wanted to- perhaps might be interesting to connect, um, uh, of- some, uh, excellent work that was done on, uh, on graphs called Message Passing Neural Networks. And it's quite funny, so if you look at, if you look at the message passing function, um, what it's saying is you're actually just passing messages between pairs of nodes. So you can just think of self attention as imposing a fully connect- it's like a bipe- a full, a complete bipartite graph, and, uh, you're, you're passing messages between, you're passing messages between nodes. Now message passing, message passing neural networks did exactly that. They were passing messages between nodes as well. And how are they different? 
Well, mathematically, they were only different in that message passing was forcing the messages to be between pairs of nodes; but because of the softmax function, where you get interaction among all the nodes, self-attention is like a message passing mechanism where the interactions are between all nodes. So they're not too far apart mathematically. And the message passing paper introduces an interesting concept called multiple towers, similar to the multi-head attention that Noam invented: you run k copies of these message passing neural networks in parallel. So there's a lot of similarity; this connects to work that existed before, but these connections came in later. We have a graph library where we connected both of these strands, message passing and self-attention, and we put it out in tensor2tensor. So to summarize, the properties that self-attention has helped us model: there's this constant path length between any two positions, which has been shown to be quite useful in sequence modeling. There's the advantage of unbounded memory, not having to pack information into a fixed amount of space; in our case, the memory essentially grows with the sequence. It helps you computationally: it's trivial to parallelize, so you can crunch a lot of data, which is useful if you have large datasets. And we found that it can model self-similarity, which seems to be a very natural phenomenon if you're dealing with images or music.
Also, relative attention gives you this added dimension of being able to model expressive timing in music, as well as this translational equivariance, and it extends naturally to graphs. So everything I talked about so far was about parallel training. There's now a very active area of research using self-attention models for less autoregressive generation. Notice that at generation time, the decoder mask was causal; we couldn't look into the future. So when we're generating, we're still generating sequentially, left to right, on the target side. And why is generation hard? Well, because your outputs are multi-modal. If you want to translate English to German, there are multiple ways, and the second word that you translate will depend on the first word. For example, if the first word that you predict was "danke," then that's going to change the second word that you predict. And if you just predicted them independently, you can imagine you'd get all sorts of permutations of these, which would be incorrect. The way we actually break modes, or make decisions, is just sequential generation: once we commit to a word, that makes a decision, and that nails down the next word we're going to predict. So there's been some work on this; it's an active research area. You can categorize some of these papers, like the non-autoregressive transformer, the third paper on fast decoding, and the fourth paper, Towards a Better Understanding of Vector Quantized Autoencoders, into a group where they're actually doing the decision making in a latent space that is being learned using word alignments, fertilities, or autoencoders.
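The sequential mode-breaking described a moment ago, where committing to "danke" nails down the next prediction, can be sketched with a toy greedy decoder; the bigram "model" here is made up purely for illustration:

```python
# A toy greedy decoder: commit to one token at a time, so each later choice
# is conditioned on the earlier ones. `model` is any function mapping a
# prefix to next-token scores (an assumption for illustration).

def greedy_decode(model, bos, eos, max_len=20):
    prefix = [bos]
    while len(prefix) < max_len:
        scores = model(prefix)             # scores over possible next tokens
        nxt = max(scores, key=scores.get)  # commit: this breaks the ambiguity
        prefix.append(nxt)
        if nxt == eos:
            break
    return prefix

# made-up bigram "model": the committed word determines what comes next
table = {"<s>": {"danke": 0.6, "vielen": 0.4},
         "danke": {"schön": 0.9, "</s>": 0.1},
         "schön": {"</s>": 1.0},
         "vielen": {"dank": 1.0},
         "dank": {"</s>": 1.0}}
out = greedy_decode(lambda p: table[p[-1]], "<s>", "</s>")
# -> ['<s>', 'danke', 'schön', '</s>']
```

Predicting all positions independently instead would happily mix the "danke schön" and "vielen dank" modes, which is exactly the failure the latent-variable approaches below are designed to avoid.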
So you do the decision making in latent space, and once you've made the decisions there, you assume that all your outputs are conditionally independent given those decisions. That's how they speed things up. The second paper does iterative refinement. There is also a blockwise parallel decoding paper by Mitchell Stern, Noam Shazeer, and Jakob Uszkoreit, where they essentially decode using a faster model and rescore using the more expensive model; that's how it speeds things up. [NOISE] Self-attention has also been beneficial in transfer learning: GPT from OpenAI and BERT are two classic examples. There's been some work on scaling this up, like Adafactor, an efficient optimizer; there's a recent paper by Rohan Anil and Yoram Singer. There's also Mesh-TensorFlow, with which they've been able to train models several orders of magnitude larger than the original ones. When you're working in this large data regime, you probably want to memorize a lot of things inside your parameters, so you want to train a larger model; Mesh-TensorFlow can let you do that. There has also been a lot of interesting work like Universal Transformers. Recurrent neural networks can actually count very nicely; there are these cute papers by Schmidhuber where he shows that the cell mechanism just learns a nice counter, so you can learn a-to-the-n b-to-the-n with an LSTM. Universal Transformers bring back that recurrence, in depth, inside the transformer.
Uh, there is a really cool Wikipedia paper, um, simultaneously with the image transformer paper that also uses local attention. Transformer-XL paper that sort of combines recurrence with Self-Attention, so they do Self-Attention in chunks, but they sort of summarize history by using recurrence, it's kinda cute. It's been used in speech but I don't know if there's been some fairly big success stories of Self-Attention in speech. Uh, again, similar issues where you have very large, uh, um as positions to, uh, to do Self-Attention over. So yeah, um, self supervision is a- if it works it would be, it would be, it would be very beneficial. We wouldn't need large label datasets, understanding transfer, transfers is becoming very succe- becoming- is becoming a reality in NLP with BERT and some of these other models. So understanding how these, what's actually happening is a- is an interesting area of ongoing research for me and a couple. And a few of my collaborators and uh, multitask learning and surmounting this, this quadratic problem with Self-Attention is an interesting area of research that I- that I'd like to pursue. Thank you.
AI_LLM_Stanford_CS229
How_to_Prepare_for_a_Product_Manager_Interview.txt
Hey, I'm Priyanka. And today I'm going to give you some tips on how to prepare for your product manager interview. As with all interviews, it's important to prepare in advance. Depending on the company, industry, and the level that you're applying for, you will be asked to demonstrate your knowledge and experience in one or more of the following subjects. Product strategy-- besides showing that you can successfully deliver a product from design to development to launch, it is important to also show the interviewers that you can think big and create a winning strategy for your product that lays the path for how you will achieve this vision for your product. There are a lot of people that can go build a feature, but there are a limited number of people that can really think, what is the next generation product? Or how can I really disrupt this market? That sort of strategic long-term mindset is very valued by employers. As always, practice product strategy questions, structure your thought process, and also how you will present this answer to the interviewer. Product design-- this is one of my favorite topics. When answering interview questions about product design, don't make the mistake of rushing to describe a whole list of features. Instead, start by thinking about your user and demonstrating that you understand that the design process always starts with a strong understanding of the user's context and their needs and wants. Ask what the success criteria would be for this product. Once you have assessed this, you can prioritize the most important user problems and come up with a list of product features or solutions for them. To do well in the interview and go to that next round, you are typically expected to present at least a few must have features, some features that solve the user's need in a unique way. So bring your best creative skills to this question, and you will do well. Execution-- let's think about product execution questions. 
Prepare to answer these types of questions by thinking, how did you overcome disagreements or challenges or hurdles in taking the products from concept to finish? Your hiring manager will want to know how you assess the success of products once they launch. Be prepared to dig into both broad criteria, like free of defects or meets customer needs, but also some specific metrics that you would use to evaluate the success of a product, like customer adoption, retention, or customer satisfaction scores. Prioritization-- a great PM needs to know how to say no to many different ideas in order to say yes to the few really good ones. Show your interviewers that you can prioritize and navigate tradeoffs by giving solid examples from your own experience. Was there a time when you chose to focus on customer retention over market expansion, for example, and why? Demonstrate how you use data to make sound prioritization decisions. You can choose from many different prioritization methods. For example, one method that I like to use is impact versus effort, where you can organize your products and features to see which ones drive the most impact or value for your customer compared to the effort it's going to take. And remember, no matter which prioritization method you use, you must tie it back to the product strategy, goals, and success criteria. Market estimation-- you will sometimes be asked to estimate the size of a market in product management interviews because product managers in real life often need to know what is the total addressable market. To prepare for this, I recommend first creating a cheat sheet that covers key numbers such as global population or population of the world with internet and so on. Next, decide whether you will use a top down or a bottom up approach to solving these questions. Think about the constraints. They could be on the supply side or the demand side or both. 
Make sure to share with your interviewer your approach, your thought process, and the formula that you plan to use for estimation. Before you share your final answer with the interviewer, don't forget to do a reality check on your answer and revise it if needed. Remember, product managers are not expected to be perfect, but we do need to be able to catch our mistakes and fix them. And that's a skill you can demonstrate to the interviewer. Communication and consensus building-- product management is a team sport, and your success depends on many other teams and people. Share with the interviewer how you work with them on decisions such as a product launch date, what should go on the roadmap, or what should be included in the next release. If you want to give some structure to these questions, you can use the STAR method-- Situation, Task, Action, Result-- when you answer any of these behavioral PM interview questions. I suggest you keep your answers structured and concise. There you go. Those are my tips. I hope this will help you feel more confident and prepared going into your product management interview. [MUSIC PLAYING]
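As a concrete illustration of the top-down estimation walked through above, here is a hypothetical calculation; every number is a made-up cheat-sheet placeholder, not real market data:

```python
# Hypothetical top-down market-size estimate. All inputs below are assumed
# placeholders from an imagined "cheat sheet".

world_population     = 8_000_000_000
internet_penetration = 0.65   # fraction of the world online (assumed)
target_segment_share = 0.10   # fraction of internet users in our segment (assumed)
adoption_rate        = 0.05   # fraction of the segment we might reach (assumed)
revenue_per_user     = 20     # dollars per user per year (assumed)

addressable_users = world_population * internet_penetration * target_segment_share
annual_market = addressable_users * adoption_rate * revenue_per_user
# reality check: is a market of this size plausible? Revise the inputs if not.
print(f"${annual_market / 1e9:.1f}B per year")
```

The structure, a chain of population-level numbers narrowed by successive fractions, is the part worth narrating in the interview; the specific values are whatever your cheat sheet and the constraints of the question support.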
Stanford_CS229M_Lecture_5_Rademacher_complexity_empirical_Rademacher_complexity.txt
So I guess, yeah, sorry for the slight delay; I couldn't find water somehow. Anyway, OK, let's get started. So last time we talked about concentration inequalities, which was preparation for what we need today or maybe the next lecture. And today, we are going to go back to uniform convergence. So recall that our goal was to prove a uniform convergence result, and we have proved some results already. First, we have shown that the excess risk is bounded by a uniform convergence quantity; we basically care about something like the sup of the differences. Concretely, the excess risk L(h-hat) minus L(h-star) is bounded by L-hat(h-star) minus L(h-star), plus the sup over h in capital H of L(h) minus L-hat(h). And we have used this to get certain uniform convergence results. For example, we have shown that for a finite hypothesis class, L(h-hat) minus L(h-star) is bounded by roughly the square root of log of the size of H over n, ignoring other log factors; technically we showed the uniform convergence statement, with a sup over h in capital H, but it can be turned into this excess risk bound. And also, for a hypothesis class parameterized by p parameters, we got something like: the sup over theta in capital Theta of L(theta) minus L-hat(theta) is bounded by O-tilde of the square root of p over n. This is what we did two lectures ago. And you can think of these quantities, log of the size of H and p, as, in some sense, complexity measures of the hypothesis class. And this is generally the type of result that we're going to get.
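For reference, the bounds recalled here can be written out as follows (with $\hat{h}$ the empirical risk minimizer and $h^*$ the best hypothesis in the class; $\lesssim$ hides constants and log factors):

```latex
% Excess risk decomposition:
L(\hat{h}) - L(h^*)
  \;\le\; \big(\hat{L}(h^*) - L(h^*)\big)
        + \sup_{h \in \mathcal{H}} \big(L(h) - \hat{L}(h)\big)
% Finite hypothesis class:
L(\hat{h}) - L(h^*) \;\lesssim\; \sqrt{\frac{\ln |\mathcal{H}|}{n}}
% Class parameterized by p parameters:
\sup_{\theta \in \Theta} \big|L(\theta) - \hat{L}(\theta)\big|
  \;\le\; \tilde{O}\!\left(\sqrt{\frac{p}{n}}\right)
```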
So we're going to have something that decreases as n goes to infinity. And also, there is another constant, which is, there's another factor, which is the hypothesis class, right? So basically, eventually you can say that if n is bigger than the hypothesis class, then you can get non-trivial error bound. So the problem with these two bound is the following. So the limitation I think you can talk about different limitations from different perspectives. But I think the basic limitation of this p parameter bound here is that it requires n to be much bigger than p so that this bound is small. So and this is not necessarily feasible in many cases. And also, this is not really what happens in reality, right? So in reality, in many cases, n is smaller than p is quite often. It's not always the case, but it's pretty often. It's more often in a modern situation where you have a so-called overparameterized neural network. By overparameterized I guess I would define that more carefully in the later course, in the later lectures. But basically, in the modern setting when you have a deep neural network, your ImageNet has a million examples. But your parameters could be something like 10 million, or maybe 100 million, sometimes could be billions. So of course, this is not necessarily always the case. Where sometimes you still have n is bigger than p depending on the situation. But generally, people found that it's useful to make your network, your p very large. So definitely it's not the case that you want n to be much, much bigger than p. That's definitely not true. So and the reason is sometimes why this is not capturing what happens in reality is that this is not precise enough, right? So not precise enough in the sense that your complexity measure is, in some sense, to worst case. Your complex measure is measuring the complexity of all possible parameters with all possible models with p parameters. 
But you are not specializing enough to some special kind of models among all the models with p parameters. For example, in the more classical language, you cannot distinguish a sparse parameter from a dense parameter. You cannot distinguish, say, a parameter class where theta has an L1 norm bound from a hypothesis class where theta has an L2 norm bound. In either of these cases, the parameter count p will show up in your bound, but not the control on the norm of the parameters. So that's why we are looking for something more precise, which does not depend on p but on some more accurate characterization of the complexity. So today and over the next few lectures, our goal is to prove something like: L(theta-hat) minus L-hat(theta-hat) is less than something like a complexity of theta over n. But this complexity measure could be more fine-grained than just the single number p. And this complexity measure could possibly also depend on the distribution P, the distribution of your data. So maybe for some distribution P your complexity is smaller; for some other distribution P your complexity is higher. We are trying to capture the intrinsic difficulty of the learning problem. But of course, this is somewhat subjective, because it depends a little bit on what you believe is happening in real life. If you believe that the true parameter is sparse, then you probably should have a complexity measure that captures the L1 norm of the parameters. If you believe that the ground truth parameters have other properties, then you probably should use a different complexity measure. So that's the general goal.
Also, a practical way of thinking about this is that the right-hand side motivates your regularization. The practical implication is that you can use this complexity of theta as a regularizer. If you just optimize your model, you're going to find some parameter theta, and especially if you don't have enough data, you may have multiple global minima in the search space. But if you know that a certain complexity measure makes the bound better, then you can actively find models with small complexity: you take the complexity measure on the right-hand side, multiply it by lambda, and add it to your training loss. So you get a regularized loss, and you are more likely to find a small-complexity solution, which generalizes better. I guess that's the basic idea. So what we're going to do is the following. A week ago we were talking about uniform convergence — the tool where you want to prove that L(h) minus L hat(h) is small for all possible h. In the first part of the lecture, we're going to bound the expectation of this, as a somewhat weaker goal. And in the second part of the lecture, if we have time, we're going to bound it with high probability, without the expectation in front. Where does the expectation come from? The randomness comes from the training data: L hat depends on the training data, and the training data are drawn at random. So what's inside the expectation is a random variable that depends on the randomness of the training data, and you take the expectation of this random variable. That's the goal: we're going to upper bound this by some other quantities that we think are more intrinsic and more convenient to use.
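The regularization idea just described can be sketched in a few lines of code. This is a minimal hypothetical illustration, not anything from the lecture: it assumes a linear model with squared loss and picks the L1 norm as the complexity measure `comp(theta)` — an appropriate choice only if we believe the ground truth is sparse.

```python
import numpy as np

def regularized_loss(theta, X, y, lam):
    # empirical training loss L_hat(theta) for a linear model
    train_loss = np.mean((X @ theta - y) ** 2)
    # comp(theta): here the L1 norm, a hypothetical choice of complexity measure
    complexity = np.sum(np.abs(theta))
    # adding lam * comp(theta) biases optimization toward low-complexity minima
    return train_loss + lam * complexity
```

With `lam = 0` this reduces to plain empirical risk minimization; increasing `lam` trades training fit for smaller complexity, which the bound suggests should generalize better.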
So I guess I need to start with some definitions. So this is a definition called Rademacher complexity, which is the main object we're going to focus on in this lecture. So definition is, let f be a family of real valued functions. So far, in this definition f is just an abstract family of functions. And we're going to define a complexity for this family of functions f. And then, we're going to say what functions of f we are going to care about. We care about actually the functions of the losses, the family of the losses. But for now, f is just the abstract family of functions. And you're going to define a complex measure for this abstract family of functions. So let's say this family of functions that maps some input space, let's call it Z to real number. And let P be a distribution over this input space Z. Then the average, the so-called-- often you don't really necessarily have to specify average Rademacher complexity. But technically, it's the average Rademacher complexity of f is defined as the following. So this is R and then sub f, where n indicates how many examples you have, how many training examples, how many empirical examples you have. Rnf is defined to be you first draw some examples. You can think of this as training examples, iid from the distribution p. And then you draw some so-called Rademacher random variables. Recall that Rademacher random variables are just binary plus 1 minus 1 uniform. You draw sigma 1 up to sigma n, iid uniformly from minus 1, 1. And then you look at this quantity. You look at the sup over this function class capital F. And you look at the quantity the average of sigma i f Zi and from 1 to n. So this sounds like kind of a pretty complicated definition. But let me try to interpret a little bit. So in some sense, maybe first of all, only think about this quantity right here. Just think about what's inside this sup. So this is the correlation. 1 over n is just a normalization, which is not important. 
This is the correlation between the outputs of f — the outputs f(Z1) up to f(Zn) — and the random variables sigma 1 up to sigma n. Of course, for a single fixed f, this correlation should typically be very close to 0, because f shouldn't correlate with random variables. But there is a sup, right? You first draw sigma 1 up to sigma n, and then you take a sup over f. So basically, this whole thing is the maximal correlation between the outputs of f and sigma 1 up to sigma n, after you have drawn the sigma i's. You first draw the sigma i's, and then you try to find a function that correlates with them — you try to find an f that outputs something that looks like the random signs you have drawn. So in some sense, having high complexity means that for most binary patterns — a binary pattern just means a draw of sigma 1 up to sigma n — there exists an f in this function class such that its outputs on Z1 up to Zn are similar to, or correlate with, the random pattern. For any random pattern you draw, you can find, post hoc, a function f in this family such that the outputs on Z1 up to Zn look like the random pattern you have drawn. So in some sense, this measures how diverse the outputs of this family of functions f can be. If this family of functions can map your Z1 up to Zn into any possible pattern, then this Rademacher complexity will be the largest. For example, if every binary pattern can be output by some function in the family f on Z1 up to Zn, then you get the maximum Rademacher complexity, intuitively. Any questions so far? Is this necessarily a non-increasing function of n? The question is, is this necessarily a non-increasing function of n? I think it should be.
But it shouldn't be-- but I don't think it's trivial to see why it is non-increasing. At least off the top of my head, I don't see a super simple argument. But I think you can prove it without too much effort. I think roughly speaking, how do you prove it is that you can-- because you take the sup, right, so you somehow can-- I think you can prove it by switching the sup with the expectation for one-- for example, the last example. And then you've got the roughly speaking the definition of the n minus 1 version of the Rademacher complexity. But maybe I may not do it on the fly, just in case I missed something, I got stuck. So but roughly speaking, I think that should work. Any other questions? By the way, I never got any questions from Zoom. So you should feel free to speak up. Just unmute yourself and ask questions. Sometimes I'm not even sure whether the Zoom is working. Should f be mapped from minus 1 to 1? Otherwise it would have [INAUDIBLE]?? That's a great question. So f is not required to be mapped to plus 1 minus 1. And it's true that this can be unbounded, right? So this is actually sensitive to the scale of f. If you scale f by a factor of 2, then you're going to have 2 times the Rademacher complexity. And this is actually somewhat useful in certain cases, which we probably will talk about later. There is a question? Cool. So now, let's see why we all care about this. So why do we care about this Rademacher complexity? The reason is that you know the following. So you know that-- let me write down what is the theorem. Suppose you did this hypothetical experiment. You draw n examples from distribution P. And then you look at this quantity, the error rate of f of Zi, i from 1 to n, minus the expectation of fZ. This is the quantity we have dealt with in the last lecture, the concentration how much you deviate from a min. But you take a sup here because you sometimes care about the maximum possible deviation post-hoc after you draw the examples. 
If you look at this quantity, then this quantity is bounded by 2 times the Rademacher complexity of F. To appreciate what the theorem is really doing, it's probably time to say exactly what kind of f we care about. So far, f is an abstract thing; now let's instantiate it. Suppose you take the capital F to be the family of functions that map Z — which we take to be a pair (x, y), the input and output — to the loss: the loss of (x, y) on the hypothesis h, for any h in the hypothesis class. So basically, this is the family of losses. Every model is a function, right? And given a model h, you get a loss function defined by that model. So this is the composition of the model function with the loss function — the two-argument loss function, the little l. Together, each member maps a data point to the loss on that data point, and by varying which model you care about, you get a family of losses. So in some sense, it's just a slight extension of the family of models, but for losses. Suppose you take F to be this. Then you can see that the left-hand side is exactly what we were trying to bound, right? Just because f(Zi) is the loss of (xi, yi) at h. So 1 over n times the sum of f(Zi) is just 1 over n times the sum of l((xi, yi), h), which is the empirical loss of the hypothesis h, right? And the expectation of f(Z) is the expectation of the loss, where x and y are drawn from the distribution P — this becomes the population loss. So the left-hand side of this theorem is really just the sup over h of L hat(h) minus L(h), something like this — and you take the expectation over the randomness of the data.
So that's the weaker version of uniform convergence that we outline in the beginning of the lecture. So and you can bound this by the Rademacher complexity of this function class f, the Rademacher complexity of this family of losses. So basically, the theorem is saying that the generalization error is less than the Rademacher complexity of f. I think technical expectation of the generalization. And there was a question here? Is there a sup of the absolute value of the difference? No, there is no absolute value here. Okay. Yes. There is no absolute value? There is no absolute value. That's a great question. So there is no absolute value. And it becomes a little bit trickier if you add your absolute value. I think, if you add absolute value, first of all, you need a different proof-- a slightly different proof. And second, you're going to have a different constant. Instead of 2, you can get probably 4. And the cleanest way to do it is that you don't do absolute value in this theorem. You do the absolute value in the outer layer. Actually, you don't even need absolute value actually technically. Because eventually, you only cover one side of the bound when you do the generalization error. So technically, you don't even need absolute value anywhere. OK? So and if you really think about this R and f in this context, right, so for this particular f, what does it mean? It really means that how well the family of losses, so the losses of data, n data can correlate with random pattern. So this is still sounds a little bit not super intuitive. We can further-- for simplified case, we can further simplify this a little bit. So suppose you have a binary classification. So suppose, let's say, y is between plus 1 and minus 1. An L is 0, 1 loss. So L of xyh is equals to the indicator of h of x is not equal to y. If they are not equal, you have loss 1. Otherwise you have loss 0. And in this case, we can further interpret this a little bit more. So what you can do is the following. 
First of all, we write this indicator in the following form: 1/2 times (1 minus y times h(x)). Here I'm assuming h(x) is also in plus 1, minus 1. By the way, what I'm doing here is to instantiate this in a special case so that you can interpret the Rademacher complexity in a more intuitive way — and this whole thing is also useful by itself. So when h(x) is plus 1 or minus 1, and y is plus 1 or minus 1, then the indicator that they are different can be written this way. Because if y and h(x) are different, then y times h(x) is minus 1, and then the whole thing is 1. And if y and h(x) are the same, then y times h(x) is 1, and then the quantity is 0. You can just verify it. The reason we do this is that it makes the loss linear in y and h(x). Now let's look at the Rademacher complexity. R_n(F) is the expectation of the sup — and let's plug in the loss. In the definition I had two expectations; now I merge them into one, so the randomness comes from both the data and the Rademacher pattern. You get the sup over h in H, and you plug in the formula: 1 over n times the sum of 1/2 times (1 minus yi h(xi)) times sigma i. Now let's do a very simple rearrangement: this splits into minus 1 over 2n times the sum of yi h(xi) sigma i, plus 1 over n times the sum of 1/2 sigma i. The second quantity is inside the sup, but it's a constant that doesn't depend on h, so you can pull it outside the sup — the sum of the sigma i is just a constant with respect to h. Then, because you can switch the expectation with the sum, you get the expectation of the sup of the first term plus the expectation of 1 over 2n times the sum of sigma i. And this second term becomes 0, because the expectation of a Rademacher variable is 0. So we're only left with the first quantity.
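The sign identity used in this step — rewriting the 0-1 loss as something linear in y·h(x) — has only four cases, so it can be checked exhaustively; a quick sketch:

```python
# Exhaustive check of the identity 1[h(x) != y] = (1 - y * h(x)) / 2
# for labels and predictions taking values in {-1, +1}.
for y in (-1, 1):
    for h in (-1, 1):
        assert float(h != y) == (1 - y * h) / 2
print("identity holds for all four sign combinations")
```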
And if you look at the first quantity, you realize: sigma i is a random variable, and minus yi sigma i has the same distribution as sigma i, no matter what value yi takes. For yi equal to 1 or yi equal to minus 1, they have exactly the same distribution — multiplying by yi just randomly flips the sign. So that means you can replace minus yi sigma i by sigma i itself, and still not change the expectation. The easiest way to check this — I saw some confusion — is to define sigma i prime to be minus yi sigma i. Then you get the sup of 1 over n times the sum of h(xi) sigma i prime. But sigma i prime's distribution is still plus 1, minus 1, uniform and independent, right? So sigma i prime has the same distribution as sigma i, and you can just write the same expression with sigma i. OK? So what have we achieved here? This seems to be a strictly simpler quantity than before. Why? This is basically the Rademacher complexity of the hypothesis class H, not of the family of losses, right? Before we were talking about the Rademacher complexity of the family of losses, and now we are talking about exactly the Rademacher complexity of the hypothesis class H. So basically, this is saying that for binary — I think I'm missing something. I'm missing a 1/2 here. Where did the 1/2 go? Yeah, I lost the 1/2. Sorry — I have the 1/2 in the notes; I just forgot to copy it. So basically, it's 1/2 times the Rademacher complexity. So what we achieved is that the Rademacher complexity of F, in this special case of binary classification and 0-1 loss, is equal to 1/2 times the Rademacher complexity of the hypothesis class. That's a slightly simpler way of thinking about this.
Because what's this? This is basically saying that how well h can memorize the random label. You can think of sigma 1 up to sigma n as random label. And R and h is big when you can-- there exist an h in the capital H such that h of xi is equals to sigma i. This is the best situation, right? This has the strongest correlation. So basically, if you can memorize all the random label with some hypothesis, hypothesis class, that means your Rademacher complexity is the biggest. And that gives you the worst generalization bound. And vice versa, if you cannot memorize, then you get better generalization bound. Right. OK. I have a question [INAUDIBLE]. So [INAUDIBLE] yi [INAUDIBLE]. But [INAUDIBLE]? I see. That's a good question. Let me repeat the question. So the question is, sigma prime is equal to y times-- actually, there's a minus there. So but it doesn't matter. So sigma prime is minus y times sigma i. But yi itself is random variable. So can we still claim that sigma prime has the same distribution as sigma i? So that's, indeed, that's a good question. So technically, I think what you should do is the following. So if you are really careful about this. So there are two randomness, right? So one is from the x and one is from a sigma. So you first condition the randomness of xi and say that-- so in the first expectation, so basically-- how do I say this? So you can write this as the following. So the conditional xi, yi. And then you look at the randomness of sigma i, right? So now, after you condition xi, then this is absolutely clear, right? So for any choice of yi, sigma i and sigma prime has the same distribution, conditioned on any choice of deterministic choice y. So then, so you do it inside. And then y is gone in your formula. So then you don't have to care about the outside. Make sense? Cool. So sounds good. And let me-- so the take-home-away here is that the Rademacher complexity of f is similar to the Rademacher complexity of the model. 
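Both points — the definition as maximal correlation with random signs, and the memorization intuition — can be checked numerically on a toy finite class. This is a hypothetical sketch, not from the lecture: a finite hypothesis class is represented by the matrix of its outputs on the sample, and the expectation over sigma is estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher(outputs, num_draws=2000):
    """Monte Carlo estimate of E_sigma [ sup_h (1/n) sum_i sigma_i h(z_i) ].

    `outputs` is a (num_hypotheses, n) array: row k holds h_k(z_1..z_n),
    i.e. a finite class represented by its outputs on the n sample points.
    """
    num_h, n = outputs.shape
    total = 0.0
    for _ in range(num_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
        total += np.max(outputs @ sigma) / n     # sup over the class
    return total / num_draws

n = 4
# A class rich enough to realize every sign pattern on the n points -- i.e.
# it can memorize any random labeling -- attains the maximum value 1.
all_patterns = np.array([[1.0 if (k >> i) & 1 else -1.0 for i in range(n)]
                         for k in range(2 ** n)])
# A single fixed hypothesis cannot chase the random signs after the fact.
single = np.ones((1, n))
print(empirical_rademacher(all_patterns))  # exactly 1.0
print(empirical_rademacher(single))        # close to 0
```

The rich class hits 1.0 on every draw because, post hoc, some hypothesis always equals the drawn sign pattern — the worst case for generalization, matching the discussion above.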
And the Rademacher complexity of the model is basically saying how well we can memorize random label, right? But there is a small caveat here, which is that this relationship is not always true. This relationship is true, exactly true for binary classifications 0, 1 loss. But it's not even true for, for example, some other loss function. So I think the intuition largely is still correct. But you cannot take this literally or rigorously, like religiously for every situation. And in some cases, actually, there could be a confusion. Because there could be cases where these two are mismatched, especially if your loss function can do something different. For example, loss function could change the binary number to real number, or the loss function has other kind of properties. So for example, the loss function is often nonlinear, so suppose you take exponential loss. So and actually, they are in some sense, in the past, they were this-- in some extreme cases, some papers actually misinterpreted this, in some sense. I guess I'm just giving you a warning in some sense. Don't always apply this every time without even thinking about it. The intuition is roughly true, but it's not exactly true at all time. I guess, there will be a place where I'm going to mention this again later in the lecture, in some of the later lectures. Can I ask one question? And by the way, just to-- and what we will do next is that we are going to prove the theorem. And just a small overview for what we will do next lecture. So in this lecture, we are going to deal with this abstract measure, the Rademacher complexity, right? And you may wonder-- probably some of you are wondering why Rademacher complexity is something measurable, something that is useful. So we don't answer that today. We are going to answer that in the next few lectures. So today we are just introducing this Rademacher complexity and say, this is bounding doing it from convergence. 
And this Rademacher complexity is something intuitive, I hope you find that, right? So it's talking about how well you can memorize labels. So it's something at least makes sense. And in the next few lectures, we are going to instantiate this to more concrete models where you can bound the Rademacher complexity by something more concrete in the next two lectures. I got some-- oh. Did somebody ask a question? I didn't hear. Yeah, I think a couple of people chimed in. You answered my question in the meantime. But someone else might have a question still. Yeah. Sorry. I forgot to open my-- have volume on. Yeah. Please ask questions if you-- yeah. Now it's working. All right. Thank you. Oh, actually, there is a question. What is the connection between the Rademacher complexity and a degree of freedom? I think, I assume by the degree of freedom you mean the number of parameters, right? So I guess that's kind of like what we motivated in the beginning. So using this Rademacher complexity, we will be able to prove more precise bound than on the number of parameters. So probably so far, you haven't seen that. I don't expect you to see that. But in the next lecture, we're going to see you can prove better bounds that depends on something more fine-grained than the number of parameters. I hope that answers the question. But please feel free to unmute yourself and ask any follow-ups. I have a, I guess, conceptual question. How do you generally think about the distinction between the hypothesis, the family of hypotheses, versus the family of losses over them? To me-- because they have the same cardinality, right? They seem like a direct mapping between 1/2. How do you distinguish, I guess, in your mind between those two? How do you think about them? Yeah, that's a great question. So in my mind, they are very similar, except that except that I think this will be a little more explicit in the next lecture or maybe two lectures later. 
So except that — when you talk about the models, the models often output a real number. For example, if you think about logistic regression, the model outputs the logits, which could be anywhere on the real line. You turn that into a probability and then use that probability to compute the loss. And the loss is, first of all, nonnegative, and often reasonably small — say between 0 and 1. For logistic loss it's not literally between 0 and 1, but I think the most interesting regime is where it's somewhat small, between 0 and 1. And if you care about classification loss, then it's literally between 0 and 1. So the loss function has a scale, in some sense — something on the order of 1 — but your model could sometimes be outputting bigger numbers. So there is a conversion there, which will be more explicit in future lectures. But beyond that, I typically don't distinguish them very much. Gotcha. Yeah. I found it interesting that in the example, at least for binary classification, the complexity of the loss family was half the complexity of the model family. Is it common that the complexity goes down when you compose with the loss function? I think it's common that they are related. We will see that in many cases they can be related, but I wouldn't read too much into that constant of 1/2, because the 1/2 depends on how you define your labels. For example, if your labels are 0 and 1, I think you wouldn't see the 1/2. So there are some small artifacts in the constant; it doesn't really matter that much. Sure. OK. Cool. OK. Let's continue. So we're going to prove this theorem. The proof is called the symmetrization technique, and this is a technique that can be used in many other cases — not necessarily in this course, but in other areas of probability, let's say.
So the symmetrization technique — I think it probably comes from probability theory in the first place anyway. The technique is the following. Let's write down what we care about: the sup — I mean, I'll drop the expectation for now, just so it's a little cleaner; we will take the expectation in a bit. This is not symmetric, in some sense, because you have this subtraction here, and the two terms don't look the same — that's what I mean by not symmetric. There's a way to make them more symmetric. What you do is: for now, let's fix Z1 up to Zn. Then we let Z1 prime up to Zn prime be a different draw, another iid draw from the distribution P — you draw an iid sequence of copies of Z1 up to Zn. Then you can rewrite the second term, the expectation, using the Zi primes, just because by definition all the Zi primes have the same distribution P. So the expectation of f is really the same as the expectation of 1 over n times the sum of f(Zi prime): each of those terms has expectation E[f], and you average them, so you get E[f]. And you can see this already makes it a little more symmetric — at least on the surface it looks more symmetric, because now both sides are sums of things. Of course, it's still a little different, because there is an expectation in front of the second sum but not the first. So one thing you can do is put the expectation in front of the whole expression, which doesn't really do anything, because for now the Zi are constants and only the Zi prime are random — in some sense, you're just putting some constants inside an expectation. And now, what happens is that you can switch the expectation with the sup. Maybe a question first. If you [INAUDIBLE] the sum between [INAUDIBLE]... Oh, sure. Yes, sorry. It's this one?
Yeah. Cool. Thanks. So now we'll make it more symmetric: we'll switch the expectation with the sup. I'm claiming that if you switch them, you get an inequality — the expectation over Z1 prime up to Zn prime of the sup. Why is this true? This is just a very generic inequality, which claims that you can switch sup and expectation and get an inequality. Generically, the claim is: suppose you have a function g that takes two variables, and you take the expectation over the second variable v, and then the sup over the first variable u. Then you can bound this by first taking the sup and then taking the expectation. (When we use it in the derivation, we go from the right-hand side to the left-hand side.) Why is this true? You can have an intermediate step: the sup over u of the expectation over v of g(u, v) is at most the sup over u of the expectation over v of the sup over u prime of g(u prime, v). This inequality is very simple — it's just because the inner term is bounded termwise by the sup. And once you take the inner sup, the whole thing doesn't depend on u anymore, right? So the outer sup over u can just be dropped: it equals the expectation over v of the sup over u prime of g(u prime, v). And that is equal to the term we want, just by renaming the variable u prime back to u — that's nothing, which is why it's an equality. So in general, it's probably useful to know this as a fact: you can switch the sup with the expectation and get an inequality. Sometimes I don't remember which direction the inequality goes — that's why you want to somewhat know how to prove it, so that in case you get confused about the direction, you can still recover it. OK. Cool. So that's how this works.
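The generic swap just argued can be written out compactly; a sketch of the chain, with u' a renamed copy of u:

```latex
% Switching sup and expectation can only increase the value:
\sup_{u}\, \mathbb{E}_{v}\big[\, g(u, v) \,\big]
  \;\le\; \sup_{u}\, \mathbb{E}_{v}\Big[\, \sup_{u'} g(u', v) \,\Big]
  \;=\; \mathbb{E}_{v}\Big[\, \sup_{u'} g(u', v) \,\Big]
% The inequality is termwise (g(u,v) <= sup_{u'} g(u',v) for every v),
% and the outer sup over u is then vacuous, since the inner quantity
% no longer depends on u.
```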
And now, let's take the expectation over Z again — we were conditioning on Z, but now suppose we take the expectation over Z. Then you see that this is very symmetric. What we got is: the expectation over Z1 up to Zn is bounded by — now you have two expectations here, one over the Zi's and one over the Zi primes — and then the sup; let's put it into a single sum, by the way: f(Zi) minus f(Zi prime). OK. So now it becomes a little more symmetric. And I'll do one more thing to make it even more symmetric. This difference is not just mean zero — actually, its distribution is symmetric, in the following sense. f(Zi) minus f(Zi prime) has the same distribution as f(Zi prime) minus f(Zi), because these two things are just renamings of each other, in some sense. So they have the same distribution. Or in other words, this has the same distribution as sigma i times (f(Zi) minus f(Zi prime)) for any sigma i that is plus or minus 1: if sigma i is plus 1, it's the same thing; if it's minus 1, it just flips the order. So that means you can introduce this random variable sigma i for free, without changing anything. If you introduce this Rademacher random variable and take the expectation over the sigma i's, multiplying sigma i into f(Zi) minus f(Zi prime), this is still an equality. Actually, even for any fixed choice of the sigma i's, this is an equality. Technically, the first step is that this is an equality for every fixed sigma i; and then you say that even if you take another expectation over random sigma i's, it's still true. And then you can order the expectations however you want.
And I'm going to reorder them, just because it's a little convenient. So now, what I'm going to do is break this into two sums. I have the expectation — here over all the randomness; this is just a simplification of notation — and I'm claiming that this is less than the sup of the first term plus the sup of the second term. What we are doing here is essentially the same as the swap of the expectation and the sup, except here we only have two terms: it's a swap of sum and sup. So here we are doing something like: the sup over f of u(f) plus v(f) is less than the sup over f of u(f) plus the sup over f of v(f). You can prove this almost the same way as we did with the expectation — you just need one step in the middle. I will leave this as an exercise for you. So now you have probably seen that we are getting closer and closer to the definition of the Rademacher complexity. The only thing is that we have two terms, while the Rademacher complexity has only one. Now we swap the expectation with the sum: you get the expectation of the sup of 1 over n times the sum of sigma i f(Zi), plus the expectation of the sup of 1 over n times the sum of minus sigma i f(Zi prime), where the randomness is Z1 up to Zn, Z1 prime up to Zn prime, and sigma 1 up to sigma n. The first term is exactly the Rademacher complexity. And I'm going to claim that the second term is also exactly the Rademacher complexity. There, my randomness is Z1 prime up to Zn prime and sigma 1 up to sigma n; but because minus sigma i f(Zi prime) has the same distribution as sigma i f(Zi) — minus sigma i has the same distribution as sigma i, and Zi prime has the same distribution as Zi — the second term is equal to the first one. So basically, the whole thing is just 2 times the Rademacher complexity of F. Any questions?
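Collecting the steps just proved, the whole symmetrization argument reads as one chain:

```latex
\mathbb{E}_{z}\Big[\sup_{f \in \mathcal{F}} \Big(\tfrac{1}{n}\textstyle\sum_{i} f(z_i) - \mathbb{E}[f(z)]\Big)\Big]
  = \mathbb{E}_{z}\Big[\sup_{f} \, \mathbb{E}_{z'}\Big[\tfrac{1}{n}\textstyle\sum_{i} \big(f(z_i) - f(z_i')\big)\Big]\Big] \\
  \le \mathbb{E}_{z, z'}\Big[\sup_{f} \, \tfrac{1}{n}\textstyle\sum_{i} \big(f(z_i) - f(z_i')\big)\Big]
  = \mathbb{E}_{z, z', \sigma}\Big[\sup_{f} \, \tfrac{1}{n}\textstyle\sum_{i} \sigma_i \big(f(z_i) - f(z_i')\big)\Big] \\
  \le 2\, \mathbb{E}_{z, \sigma}\Big[\sup_{f} \, \tfrac{1}{n}\textstyle\sum_{i} \sigma_i f(z_i)\Big]
  = 2\, R_n(\mathcal{F})
```

The first inequality is the sup/expectation swap, the middle equality introduces the Rademacher variables for free, and the last inequality splits the sum into two sups, each equal in distribution to R_n(F).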
So it kind of feels like we just did algebra for a bunch of lines? Is that [INAUDIBLE]? That's a great question, and that's exactly what I'm going to remark on. The question was — let me phrase it slightly differently — what have we really done here? Did we do anything powerful, or did we just do a bunch of algebra? The left-hand side has a sup; the right-hand side still has a sup. So I'm going to claim that we did do something useful. The left-hand side, the sup, is what we care about: the difference between the empirical mean and the population mean. On the right-hand side, roughly speaking, the most important thing is this sigma. So what have we achieved? One: we removed the E[f] term, right? So we got rid of the E[f]. It's probably not super clear at first sight why you should appreciate this fact, but I can say that this E[f] is somewhat annoying because you don't have good control over it. When you look at the right-hand side, this quantity doesn't depend on E[f]: if you shift f by a constant, it doesn't change. Actually, I'm going to claim that the right-hand side — the Rademacher complexity — is translation invariant. I'll prove this claim at some point; let me see whether I planned to do it in today's lecture — I think I didn't. But this is the claim. So in some sense, you remove the part that is sensitive to translation — you remove the E[f] — which is useful in many cases. And second, you introduce more randomness, sigma 1 up to sigma n. Why is introducing this randomness useful?
It's probably still unclear right now. But eventually, what we can do is-- so currently what we really have is the expectation of this, and the randomness is Z1 up to Zn and sigma 1 up to sigma n. So we will use the additional randomness, and this will allow us to drop the randomness from Z1 up to Zn. This will be something we'll see, I guess, probably in the next lecture. So eventually, you don't have to take the expectation over Z1 up to Zn. You can claim with high probability. So on the right-hand side, you don't need to take the expectation over Z1 up to Zn; the only randomness comes from the sigma i's. I guess you probably don't see exactly what I mean yet. But if eventually you only care about the randomness of sigma 1 up to sigma n, that seems to be a benefit, because this is much simpler randomness. Sigma 1 up to sigma n have a very simple distribution-- they are just Rademacher random variables. So they are much less complicated than the distribution of Z1 up to Zn, which is something you don't know. You just assume there's a distribution P, but you don't really know any other properties about it. So I think that's the second benefit. But of course, the limitation is that we still have the sup, which is still a problem. But you probably shouldn't expect that you can remove the sup at this level. When you have an abstract family of functions, you probably shouldn't expect you can remove the sup completely. It should be at the next level, where you have a concrete hypothesis class, that you remove the sup. Cool. Let me just drink something. So the next part is another useful property, or useful thing to know, about Rademacher complexity, which is that the Rademacher complexity can depend on the distribution P.
It still can depend on the distribution P, even though our goal is to use the new randomness and deal with the simpler randomness. Why is this the case? This is just because in the definition of Rademacher complexity, you do have to draw some Z1 up to Zn from the distribution P. So here is an extreme example where you can see this: suppose P is a point mass. Let's say Z is always equal to Z0 almost surely. So however you draw it, you always just draw a single point. And in this case, actually, you can have a good Rademacher complexity bound for any bounded function. So suppose F is the family of functions f such that f(Z0) is bounded by 1 in absolute value. So we just have a bounded family of functions-- you don't even have any parametric form. Still, you can prove that the Rademacher complexity of this family is small. So look at the expectation over sigma of the sup. Because f(Zi) is always the same, this is literally equal to the expectation of the sup over f of f(Z0) times 1 over n times the sum of sigma i. And f(Z0) is bounded between minus 1 and 1. So that means that this is less than or equal to-- if you just bound f(Z0) by 1-- the expectation of the absolute value of 1 over n times the sum of sigma i. And then you use Cauchy-Schwarz: the expectation of this random variable is smaller than the expectation of the square of the random variable, to the power 1/2. We're going to see this kind of derivation several times. So you get, inside the power 1/2, 1 over n squared times the expectation of the sum of sigma i sigma j over i not equal to j, plus the sum of sigma i squared.
I'm just expanding it. So you get 1 over n squared. The expectation of sigma i sigma j for i not equal to j is just 0. So you get the sum over i from 1 to n of the expectation of sigma i squared, to the power 1/2. Each of these is 1; you take the sum, you get n. So you get n over n squared, which is 1 over n, to the power 1/2-- that's 1 over square root of n. So in some sense, this is kind of interesting, right? For a very, very large family of functions, without even a parametric form, you can still have a good Rademacher complexity. And the reason is that the distribution is so simple. So in some sense, this is an indicator that the Rademacher complexity can capture something about the distribution. If a distribution is extremely simple, then Rademacher complexity can capture that and tell you that it's very easy to generalize. So basically, any f on a very simple distribution should be considered as very simple. Even though this family of f-- you have basically no assumptions on f, in some sense. There is no parametric form; it's a very large family of functions. But with respect to a simple distribution, it should be considered as simple. And this is what Rademacher complexity can tell you. So that's saying that Rademacher complexity can take into account the distribution P. But how much it can take into account the distribution P, that's a question mark. In many of the other analyses, you don't have this property: you don't really use too much about the distribution P in many of the concrete bounds for Rademacher complexity. But in principle, it can capture something about P. I have 15 minutes. I think there is time for me to do the next part. Let me see. OK. So the next part, if there are no questions-- [INAUDIBLE] What if you know that [INAUDIBLE]?? So your question is whether, for example, when the features, the coordinates of x, have correlations, or maybe have independence-- yeah, independence is probably more like a simplistic thing, right?
So can you get a better bound from Rademacher complexity? To answer this question, we need to zoom in to concrete settings. For linear models, I guess you will see that you would get a bound-- at least if you compare the extreme cases. In one case all the coordinates are correlated, and in the other case-- actually, it's unclear. Because if all the coordinates are correlated, you probably should have a better bound. For example, in the very extreme case where all the coordinates are the same, you effectively have a one-dimensional problem, so you should have a better bound. So it does depend on the particular situation, I think. So yeah. It's interesting. It's not clear that independence really means simpler. Independence could mean that it's more complicated, just because with an independent input distribution you have a diverse set of data, and it might be harder to generalize in some cases. For example, here in this one-point-mass case, you have a very narrow family of data, which means you can generalize easily because you can just memorize that Z is always Z0. So independence might make it harder. So in the next 10 to 15 minutes, let me try to define this so-called empirical Rademacher complexity. And the goal here is to remove that expectation in front of the sup. So currently, the average version has two expectations. One is over the randomness of Z1 up to Zn. And there's another expectation over the randomness that we created, over sigma 1 up to sigma n. And you have this sup. And we are going to claim that this is basically similar to this without the expectation, with high probability. And the probability is over the randomness of Z1 up to Zn. So you still have to draw Z1 up to Zn. But for most of the choices of Z1 up to Zn, these two things are similar. So this is the random variable that depends on Z1 up to Zn.
And this one is just a constant-- well, this number is not a random number; I probably shouldn't call it a constant, it's a deterministic number. So one of them depends on Z1 up to Zn, and I'm claiming that the random one is concentrating around the deterministic one with high probability. And if we can do this, then that's what I alluded to before. So now, let's define this to be the empirical Rademacher complexity. Let me have a notation for that-- I think in the notes there is a formal definition, but here, just for the sake of time, let's define this to be R_S of F, where S is the set Z1 up to Zn. And this is called the empirical Rademacher complexity. And you can see that the original Rademacher complexity, the average Rademacher complexity, is the expectation of the empirical Rademacher complexity, where you take the expectation over the set S-- just because these two things only differ by a single expectation. And so, if you can do this, then you have a high-probability bound. You don't have to average over Z1 up to Zn. And also, you can do the same thing for the left-hand side, for the uniform convergence part. So recall that before, we only proved that the expectation over Z1 up to Zn of the sup of this minus Ef is less than the Rademacher complexity. For this one, we will also show that it is approximately equal to its expectation with high probability. I guess I should say: the latter one is a random variable that depends on Z1 up to Zn, and it is approximately equal to its expectation with high probability. So if you have both of those, then you basically remove the expectations from your equation and you get a high-probability bound. Does that make sense? Any questions? So basically, eventually we're going to prove this. So let me state the formal theorem.
We can prove that, assuming all the f's are bounded, with probability at least 1 minus delta-- over the randomness of Z1 up to Zn-- the sup of this, without the expectation, is less than 2 times the empirical Rademacher complexity plus an additional term, which is the square root of ln of 2 over delta, over 2n. So you pay an additional small term, which is on the order of 1 over square root of n times something logarithmic in the probability delta. But basically, by paying this, you get a high-probability bound instead of an average version. The proof is actually relatively straightforward-- it's basically just applying [INAUDIBLE] inequality. But maybe let me do that in the next lecture; I think it takes probably 10 minutes. Let me start with a remark. So I guess, typically, this square root of ln 2 over delta, over 2n, is typically much smaller than the Rademacher complexity-- either the empirical one or the population one. And the reason is that those two things will be something like the square root of something over n, and this something depends on the complexity of F. It's something that is not negligible. But here, you have the square root of a logarithmic term over n. So that's pretty much the smallest thing you can think of, right? A logarithmic term is kind of like a constant. Your complexity of F wouldn't be logarithmic in anything; it should be something bigger than that. So that's why typically this additional term is negligible. So that's why you can basically think of it as: you didn't lose anything by doing the empirical version. And it's interesting that what you lose here, at least at this level, doesn't depend on the complexity of F. So this term depends on the complexity of F, but what you lose between the empirical and population versions doesn't depend on the complexity of F.
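For reference, a standard textbook form of this theorem is given below (for example, as stated in Mohri, Rostamizadeh, and Talwalkar's Foundations of Machine Learning, where the proof goes through McDiarmid's bounded-differences inequality). The exact constants here are from that standard statement and may differ slightly from what was written on the board.

```latex
% For f : \mathcal{Z} \to [0,1] and S = (Z_1,\dots,Z_n) drawn i.i.d. from P,
% with probability at least 1 - \delta over the draw of S:
\sup_{f\in F}\Bigl(\mathbb{E}[f(Z)] - \tfrac{1}{n}\textstyle\sum_{i=1}^n f(Z_i)\Bigr)
\;\le\; 2\,R_n(F) + \sqrt{\tfrac{\ln(1/\delta)}{2n}}, \\
% and with the empirical Rademacher complexity in place of the average one:
\sup_{f\in F}\Bigl(\mathbb{E}[f(Z)] - \tfrac{1}{n}\textstyle\sum_{i=1}^n f(Z_i)\Bigr)
\;\le\; 2\,\hat{R}_S(F) + 3\sqrt{\tfrac{\ln(2/\delta)}{2n}}.
```

In both cases the extra term is of order $\sqrt{\ln(1/\delta)/n}$, which matches the remark in the lecture that it is typically negligible compared to the Rademacher complexity itself.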
And maybe, I think this is a perfect time for the second remark, remark two. So R_n of F and R_S of F are both translation invariant. What does that mean? That means: suppose you have F prime, which is a translation of F-- meaning F prime is the family of functions f prime with f prime of Z equal to f of Z plus some universal constant c0. So for every function in capital F, you've got a function in capital F prime which is just the translation: you just add c0 to it. Then they have the same empirical Rademacher complexity. In some sense, we have seen this derivation somewhere in the involved derivations before, but let me just make it more explicit. So the Rademacher complexity of F prime is: you look at the expectation over sigma, and you take the sup of 1 over n times the sum of sigma i f prime of Zi. You plug in the definition, so it's the sum of sigma i times f of Zi plus c0. Now, you can pull out the part with c0-- I think we have seen the same technique before-- because c0 is not a function of little f. So you get the sup term plus 1 over n times the sum of sigma i times c0. And then you can swap the expectation with the sum, so you get the expectation over sigma of the sup, plus the expectation of 1 over n times the sum of sigma i times c0. And this second part becomes 0 because sigma i is a Rademacher random variable with mean zero. So this is R_S of F. So in some sense, this is a property of the Rademacher complexity which is somewhat interesting. You don't care about translation. But you do care about the scale: if you scale everything by 1/2 or by 2, then you would change the Rademacher complexity. But it wouldn't change when you shift things. So it's about the relative differences between functions in F, not about the absolute size of F. For example, if the functions in F always take values in some shifted interval, that's not very different from taking values between 0 and 1. OK. I think this is a natural stopping point for today. Any questions?
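The translation-invariance claim can be checked exactly on a toy example. For a tiny function family and a small sample we can enumerate all 2^n sign vectors and compute the empirical Rademacher complexity exactly; the sample points, the two-function family, and the shift value below are made-up choices for illustration only.

```python
from itertools import product

def emp_rademacher(functions, zs):
    """Exact empirical Rademacher complexity R_S(F): the average, over all
    2^n sign vectors sigma, of sup_f (1/n) * sum_i sigma_i * f(z_i)."""
    n = len(zs)
    total = 0.0
    for sigma in product([-1, 1], repeat=n):
        total += max(sum(s * f(z) for s, z in zip(sigma, zs)) / n
                     for f in functions)
    return total / 2 ** n

zs = [0.1, 0.5, 0.9]                        # made-up sample S
family = [lambda z: z, lambda z: 1.0 - z]   # made-up two-function family F
c = 5.0                                     # arbitrary translation constant

# Translated family F': every f shifted by the same constant c.
shifted = [lambda z, f=f: f(z) + c for f in family]

r_orig = emp_rademacher(family, zs)
r_shifted = emp_rademacher(shifted, zs)
# r_orig and r_shifted agree: the shift contributes c * mean(sigma) inside
# the sup, which is the same for every f and averages to zero over sigma.
```

This mirrors the board derivation: the constant term pulls out of the sup, and its expectation over the Rademacher signs vanishes.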
[INAUDIBLE] First of all, it's not always the case that-- right, that's a good question. So I claimed this vaguely without any justification: why should the Rademacher complexity be like this, like 1 over square root of n? I should say, it's not actually exactly true. For most of the cases-- actually, for all the cases we are going to see in the lectures-- it's 1 over square root of n. But in some cases, the dependency on n could be a little bit different. So yeah, sorry, I was not quite clear. And I'm not sure whether that question is still in the homework. There used to be a homework question where you have a different dependency on n. I think I probably removed that question for this year, just because it's not that relevant to the overall goal. But there could be other dependencies. The reason it's mostly 1 over square root of n is that even if you look at a single example and don't take the sup-- you do the wrong thing, you say, I fix my function and then I draw my data-- and you look at how different the empirical mean is from the population mean, that's always 1 over square root of n, without any doubt. So that's why you can never do better than 1 over square root of n. But you can be worse than 1 over square root of n. I'm not sure whether that makes sense, right? So even if you look at the concentration at a single f-- you fix the function f, you draw the random variables Z1 up to Zn-- you still have some fluctuation on the order of 1 over square root of n. So you cannot beat that, but it could be worse than that. [INAUDIBLE] From the definition of the Rademacher complexity, I think you can still see that to some extent. Because if you look at the sum of sigma i f(Zi)-- maybe let's just look here-- this is still a sum of n terms.
And so, even if you don't take the sup, this term would be something on the order of 1 over square root of n, just because of the concentration. You have a sum of n terms, each of them on the order of 1, so the sum of the n terms is on the order of square root of n. And then you divide by n, and you get 1 over square root of n. We can talk more offline maybe. Yeah, sounds good. I guess I will see you on Monday-- on Wednesday.
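The point-mass bound derived earlier in the lecture, and the 1/sqrt(n) fluctuation just discussed, can be checked exactly for a small sample size. Since the sum of n Rademacher signs equals 2k - n with k binomially distributed, E|(1/n) * sum sigma_i| has a closed form; n = 10 below is an arbitrary choice for illustration.

```python
from math import comb, sqrt

n = 10  # arbitrary small sample size

# E | (1/n) * sum_i sigma_i |, computed exactly: sum_i sigma_i = 2k - n
# where k ~ Binomial(n, 1/2), so we sum over the 2^n equally likely outcomes.
exact = sum(comb(n, k) * abs(2 * k - n) for k in range(n + 1)) / (2 ** n * n)

# The Cauchy-Schwarz bound from the lecture: E|X| <= sqrt(E[X^2]) = 1/sqrt(n),
# using E[sigma_i sigma_j] = 0 for i != j and E[sigma_i^2] = 1.
bound = 1 / sqrt(n)

# For n = 10: exact = 0.24609375, while the bound is about 0.316,
# so the exact value respects the 1/sqrt(n) bound with room to spare.
```

This is the same quantity that upper-bounds the Rademacher complexity of any family bounded by 1 on a point-mass distribution.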
AI_LLM_Stanford_CS229
Stanford_CS330_I_Advanced_MetaLearning_TopicsTask_Construction_l_2022_I_Lecture_9.txt
Cool. So now that we're at the start of week five of the quarter, I figured I'd give a little bit of a roadmap. So far we've seen multi-task learning and transfer learning basics. Then we covered some of the core meta-learning algorithms. And last week we covered core unsupervised pre-training algorithms. Really, at this point we'll start to dive into slightly more advanced topics. In particular, this week we're going to focus on two different advanced meta-learning topics. Today, we're going to be talking about task construction, some challenges that can come up, and also some opportunities for constructing tasks in different ways. On Wednesday, we'll be talking about large-scale meta-optimization and ways that you can do things at a larger scale, basically. And then next week, we'll be talking about some Bayesian meta-learning ideas, including actually a bit of a crash course on variational inference, which I think should be useful either as a refresher for some of you or, for other people, as something that's really useful to know about if you aren't familiar with it already. Awesome. So today we're going to be talking about task construction. And really the question of the day will be: how should we define tasks for meta-learning in order to get good performance? This is going to cover both a problem that can arise in meta-learning, which is often referred to as memorization and which happens when tasks are constructed in a particular way, and also not just a problem but an interesting opportunity: trying to construct tasks from unlabeled data and augment the set of tasks that you have available. The first part of the lecture will be covered in homework four. Homework four is optional. But if you do choose to do that, then a lot of the stuff that we'll talk about in the first half of the lecture is quite relevant for that.
And so the goals for the end of the lecture are to understand when memorization can happen in meta-learning, and also to understand techniques for constructing tasks automatically. Awesome. So first let's recap a little bit of what we've covered so far, and in particular, let me try to clarify some of the terminology. This is the terminology that we've been using in class through all the lectures, where essentially we have a set of meta-training tasks and a set of meta-testing tasks. Everything in the green boxes on the left is a training data set, or a support set, per task. And everything in the red box is what we use to measure generalization; this is often referred to as the test set for that task, or the query set for that task. And one thing that we noted was that we're actually using these test sets on the top during the meta-training process. So we're training on the test set. But we're still evaluating on something that is completely new, which is these new held-out meta-test tasks. Now, one clarification and one point of confusion that came up: sometimes in the homeworks, meta-training and meta-testing are referred to as train and test. This is really confusing and ambiguous, and it's something that we'll try to fix in future iterations of the course, and to somewhat try to make less ambiguous in the homework. The train sets will refer to support and query. That differentiates them from train and test, but it's admittedly still somewhat confusing; hopefully this clarification helps resolve some of the confusion. Awesome. And then beyond the setup, we've talked about Black-box meta-learning, where the key idea is to parameterize a learner as a neural network. This network is the inner loop of the meta-learning process: it learns from a few examples. Sometimes this learning process is also called in-context learning.
And you'll see that terminology used a lot in the NLP literature, and also used in homework three. Then, when we actually go about training this neural network to learn from these examples, that's the meta-learning process, or the outer loop. The benefit of Black-box meta-learning is that it's very expressive: the inner loop can represent a wide range of learning procedures. But the downside is that it can be a challenging optimization problem, which you probably saw in homework one. We've also covered optimization-based meta-learning, which embeds gradient descent inside the meta-learning process. This incorporates the structure of optimization that we know and love, but it requires second-order optimization. And so in homework two, you may have found that second-order optimization can make it slower, a little bit harder to deal with. And then lastly, we also talked about non-parametric meta-learning, where you create an embedding of all of your examples and then do some form of nearest neighbors to compare the test example with your training examples, and output the label of the example that you think you're closest to. Great. As you are hopefully seeing in homework two, this can be easier to optimize and computationally faster, but it is largely restricted to classification problems. And then lastly, in terms of the task construction process, what we've seen in the first two homeworks is an N-way classification problem. But the same ideas can be applied to having different tasks correspond to different regression problems, or to different robotics tasks. In all of these settings, you need to construct a set of tasks using a significant amount of labeled data. Great. So now let's talk a little bit about the main topic of this lecture, which is memorization and unsupervised meta-learning. And I'd like to start this off with a bit of a thought exercise for you.
In particular, this is one picture of a Black-box meta-learner that takes as input a training data set and a test input, and gives you a predicted label for that test input. The thought exercise is this: you train this on a set of tasks, each task indexed by i. And one thing you could consider doing is passing the task index, a one-hot identifier of the task index, into your Black-box meta-learner. So first: when you pass in both the training data set and the task identifier, what would happen during meta-training? [INAUDIBLE] So the response was: it might learn to associate the task identifier and the training data set and memorize it. What is it? Just-- so like remember the [INAUDIBLE]. Yeah. So the examples that show up in the training data set are pretty complicated, whereas this one-hot vector is not very complicated. And so what it might do is learn to rely on the task identifier rather than using the training data. And during meta-training, this probably isn't a problem-- it might just learn to rely on this task identifier instead. Basically, the task identifier and the training data are redundant: they both encode information about the task. Then my next question is: if it does start to rely on this task identifier, what would happen at meta-test time if you pass in a new training data set and a task identifier for a new task? This would be a one-hot vector that's different from any of the one-hot vectors it saw during meta-training. Yeah? Performance would probably be really terrible, because it would try to rely on the task identifier again. But since it's a one-hot representation, it has no correspondence with any of the ones it's seen before. Yeah. So the answer was: the performance would probably be pretty terrible, because the one-hot vector for the new task is different from any of the one-hot vectors it saw before.
And because it's relying on that, it wouldn't be able to actually perform the new task. In sum, it won't generalize to the new task. OK. So that's an initial thought exercise. Now, a second thought exercise: what if, instead of giving it a one-hot vector, we gave it a paragraph that describes what it should do-- like how to do the task, or what the task is? Does anyone have thoughts on what would happen during meta-training, and perhaps also during meta-testing? [INAUDIBLE] Yeah. So the paragraph is going to be a lot more complicated than a one-hot vector. And what the network decides to use will depend on how complicated the paragraph is versus how complicated it is to discern the task from the training data. So it depends on basically whether it's easier for the neural network to use the description or to use the data. And similarly, in terms of generalization to new tasks, in some ways it depends on what it decides to use during meta-training, and also on the structure of that paragraph. It may be that it can actually generalize to the new task using the paragraph description, if it has learned sufficiently general representations of natural language text. But it could also be that it sees a new word in the paragraph for the new task that it hasn't seen before, and it can't interpret what the goal of the task is. OK. So those are two initial thought exercises, to start getting you thinking a little bit. Now, the key problem that arises in both of these thought exercises is that the model can minimize the meta-training loss without actually looking at the support set-- without actually looking at the training data for the task. And this means that it won't necessarily learn a learning procedure that relies on that training data. So if we go back to how we've been constructing tasks for few-shot classification problems, typically we'll do it something like this.
For task one, we sample some images. We assign labels in an arbitrary or random way, but we make sure that the labels are consistent between training and test-- between support and query. And likewise for the second task, we'll again sample a new set of images and arbitrarily assign labels to them. And we'll do this for each of the tasks. One thing you can note here is that when we do this across different tasks, the piano class, for example, has a label of 4 for the first task and a label of 3 for the third task. And so this means that when we randomly assign class labels to image classes for each task, the tasks are mutually exclusive. What I mean by that is: there doesn't exist a single neural network that can classify the images in task 1 and the images in task 3, because you have a fundamentally different label being assigned to the same image class. And as a result of the fact that the tasks are mutually exclusive, in order to actually minimize the meta-training loss, the model has to actually use the training data. It has to look at the assignment of labels to images in the training data in order to accurately make predictions on the test sets, or the query sets. Cool. Everyone following? OK. Now, the third thought exercise: what if we assigned labels in a way that was consistent across tasks? So in particular, say that we assigned the labels like this. What I mean by consistent is that the image class of piano, or the image class of this breed of dog, is always going to get the same label across all of the tasks. And so this means that the tasks are not mutually exclusive, and a single function could solve all of the tasks. So then my question to you is: what would happen in this case? Yeah? [INAUDIBLE] Yeah. So just like the previous thought exercises, you would still memorize.
And you wouldn't actually learn how to learn, because you can just learn a single function that classifies, rather than relying on the support data. Yeah? Wouldn't it also be the case for your prior examples, you would have seen in the support set? Would that resolve the problem? You're asking: is it also a problem that, for the query images, you would have seen them in the support set? Yeah, so for different tasks, would that [INAUDIBLE] then for task 3 it is in the support set. Got it. So the question is: in some cases the same image is in the query set for some task and in the support set for other tasks, and also this class is appearing multiple times. That's actually not a problem. And I would guess that for most of you, the way that you implemented homework one, you may have actually had images appear sometimes in the support set and sometimes in the query set. You can get a little bit more out of your data if you allow an image to appear in either the support set or the query set. Cool. So in this case, the network can just learn to classify the inputs irrespective of the training set. In particular, what can happen is that when you train a Black-box meta-learner, it can basically learn to ignore this input and just classify based off of x-test, the query example. And likewise, even an optimization-based meta-learner can learn to basically ignore this and primarily rely on just the initial network in order to make predictions. It's a little bit more difficult in this case, because it actually has to learn how to ignore the gradient descent step. But it can also figure out how to do that. Yeah. [INAUDIBLE] Yeah. That's a great question. So the question was: one thing you may have noted here is that even though the labeling is consistent, there are multiple meanings for a given label.
So like 4 means both a carousel and a piano, for example. And so this does mean that the network needs to have enough capacity to be able to lump those concepts together. And we'll actually start to see some experiments, and this may also start to hint towards some solutions to the problem. Cool. So it can ignore things. And if you actually run meta-training on this exact example that I gave you and then evaluate it on examples from new classes, its ability to learn from a new data set suffers: the accuracy drops to around 7.8% or 50% in these two different problems. And this is for the MAML algorithm-- something that has to use a gradient descent step, so it's actually running fine-tuning at test time. It's just not giving you a very accurate model on these new classes. If you did evaluate it on the classes it saw during training, it would be able to give you good accuracy, because those are classes that it saw. Cool. So then there's this question: if we don't shuffle the labels, this is a big problem-- our performance really plummets. But is this actually a problem in practice? For image classification, we can just shuffle the labels. It's something that we can do, and it's also very easy to do. So it's not really a problem for image classification problems. Like I mentioned, it's also not a problem if you see the same image classes as what you saw during meta-training. But it's actually going to be a problem in some cases, if you want to be able to adapt with data for new tasks, and if you aren't looking at problems exactly like the few-shot classification problems that we've seen in the homeworks. So let's look at a few examples where this may actually be a problem. The first example is a robotics task. Say we want different tasks to correspond to manipulating objects in different ways. So one task might be to close the drawer.
Another task might be to pick up the hammer and hit a nail. Another task might be to stack these two blocks. This seems like a very natural task distribution for a meta-learning system. And maybe the new task, the meta-test task, is to close the box. Now, this is a case where memorization can be a problem, in a couple of different scenarios. One is that if you're doing this from image observations, the system can just look at the initial image and see what's in it. If it sees that there is an open drawer in the initial image, then the task is to close the drawer. Or if it sees that there is a hammer in front of a nail, then the task must be to hammer the nail. And so that's one scenario where it doesn't actually have to meta-learn in order to minimize the meta-training loss. Another scenario: maybe we aren't in an image-based setting-- maybe it just gets some waypoints, or maybe the initial image is somewhat ambiguous-- and we want to give it a language description of what the task is, like "close the drawer" or "hammer the nail." In those cases, if you also give it a task description, which should be helpful since we're giving it more information, that will also make it much harder to generalize to a new task. So this is one example. One more example: let's consider a pose prediction task. Say you want to predict the orientation of an object in an image. So maybe in one meta-training task you're predicting the orientation of a couch; in another, you're predicting the orientation of-- I think this is a very old monitor, or a television. And then at meta-test time, you want to be able to predict the orientation of a chair. This is also a case where you can look at the image and you don't actually need the meta-training data in order to be able to predict the orientation. But if you give it a new object, it may be ambiguous what the canonical orientation of that object is.
And because of that ambiguity, you do want it to actually look at the support set, the training set for that test task, in order to actually figure out what the canonical orientation is and how to predict the orientation of the object. Cool. So this is another example where memorization can occur. And in this case, it would memorize the canonical orientations of the objects seen during meta-training. And it wouldn't be able to figure out what the canonical orientations of the new objects are, because it didn't actually learn to pay attention to the support set. Cool. So do people agree that this is a problem? Cool. OK. More or less. Now, there's the question of whether we can actually do something about this. And to think about whether we can do something about it, let's try to formalize the problem a little bit more. In particular, we saw that if the tasks are mutually exclusive, then a single function cannot solve all of the tasks. And this could be due to label shuffling. It could be due to hiding information. And in this setting, the meta-learning process will not memorize. And you should be able to generalize to new tasks. On the flip side, if the tasks are not mutually exclusive, then a single function can solve all the tasks. And it can just ignore the training data. And in this latter case, memorization can occur. And the reason why memorization can occur is that there are actually multiple solutions to the meta-learning problem. There are multiple ways to minimize the meta-training loss function. In particular, if we view the meta-learning process as taking as input the training data set and the test input, and relying on our meta parameters theta, one solution to the pose example would be to memorize the canonical pose information in your meta parameters and ignore the support set.
And then another solution, which is the one that we would often want if we want to generalize, is to carry no information about canonical poses in theta and instead try to acquire it from the task training set. And so here are two solutions. And in general, there's actually going to be an entire spectrum of solutions based on how the information flows. So you could store some information in theta and acquire some information from the support set. And likewise, basically there's a spectrum of solutions in between these two extremes. Although in practice, you'll probably get one of these two extremes. Yeah. [INAUDIBLE] has some objects which are not in the [INAUDIBLE] hyperparameterization of [INAUDIBLE] Yeah. So the question is, if we have a meta validation set or a set of validation tasks, we should be able to tell if our model is generalizing to new tasks. And as a result, maybe we can tune the hyper-parameters in some way in order to actually encourage it to generalize better. And in general, absolutely, you'll see that it won't generalize to validation tasks. The tricky part is how do you tune the hyper-parameters in order to actually get it to generalize versus not generalize. And so you can diagnose this problem with held-out validation tasks. But you can't necessarily solve it. You can maybe try to tweak the learning rate in some way to try to get it to do the right thing. But in general, it's tricky to do it purely based on the learning rate or purely based on the size of the architecture. [INAUDIBLE] So the question was, if you pass this into a hyper-parameter optimizer, would it be able to fix this problem? And it really depends on the hyper-parameters that you expose to the model. In particular, we're going to need some form of regularization to encourage this to not happen. I guess the other thing that I'll mention is that, in general, I see this solution happen more than this solution.
And I think that that's because acquiring information from D-train can be an imperfect process. You actually have to acquire it from a more complex object. And as a result, oftentimes you can actually get a slightly lower meta-training error with this solution than with this solution. Cool. So if we view this kind of from the standpoint of how the information is flowing, we might be able to think about trying to control that information flow. In particular, we can think of the meta-training process as something like this. And there's basically three objects that you can use to get a prediction. There is information in theta, information in D-train, and information in x-test. And we're primarily going to focus on the first two: information in theta and information in D-train. And when we think about how D-train and theta affect the corresponding prediction, we can try to think about basically encouraging it to use the information in D-train more so than the information from theta. Now, one thing that would be nice to do is if we could try to maximize the mutual information between the support set and the prediction. If we could maximize that mutual information, I think that would give us exactly what we want, because then we'd be encouraging it to use information from D-train. And this wouldn't in any way be competing with optimizing the meta-training loss. But unfortunately, actually estimating and optimizing mutual information is pretty difficult. And so instead what we're going to try to do is minimize the information coming from theta. And that's something that's easier to do. Of course, if we only minimize the information coming from theta, then we won't be able to do things well. Yeah? Can you clarify what you mean by maximizing mutual information between [INAUDIBLE]? Yeah.
So the question is, can I clarify what I mean by maximizing the mutual information between the support set and the query set prediction. I mean that in the mathematical sense. So if you view these as two different random variables, then you can define the mutual information between those two variables, like A and B. There's a number of different ways to define it. One way to define it is in terms of the entropies of the two variables. Another way to define it is the KL divergence between the joint distribution and the product of the marginals. And basically, mutual information is going to be telling you how interrelated these two random variables are. If they're completely independent, then they have zero mutual information. And by actually trying to maximize that mutual information, it means that we want y to actually change when D-train changes. And so if we're able to maximize that quantity, that means that when we change D-train, y will actually change. And that means that it will actually rely on D-train rather than completely ignoring it. Yeah. [INAUDIBLE] fine tune with the training data set? Sorry. Can you-- I missed like one word in the first part, is theta the-- [INAUDIBLE] which has the general information about shared tasks and then we're trying to fine-tune with the training data set that we have. Yeah. So you can think of theta as the shared information. Although in this case, I'm actually just referring to it as the meta parameters, the parameters that you're optimizing during the meta-training process. But yeah, when you think of it as a random variable, sometimes it can be helpful to think of it also in terms of the shared structure. But here really I'm referring to it as the meta-training parameters. Cool.
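To make that definition concrete, here is a small illustrative sketch (my own code, not from the lecture) that computes I(A; B) for discrete random variables as the KL divergence between the joint distribution and the product of the marginals:

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) = KL( p(a,b) || p(a) p(b) ), in nats.

    `joint` is a 2-D array whose entries are the joint probabilities p(a, b).
    """
    joint = np.asarray(joint, dtype=float)
    p_a = joint.sum(axis=1, keepdims=True)   # marginal p(a)
    p_b = joint.sum(axis=0, keepdims=True)   # marginal p(b)
    mask = joint > 0                          # convention: 0 * log 0 = 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / (p_a * p_b)[mask])))

# Independent variables: the joint factorizes, so I(A;B) = 0.
independent = np.outer([0.5, 0.5], [0.3, 0.7])
print(mutual_information(independent))  # 0.0

# Perfectly dependent variables: I(A;B) = H(A) = log 2.
dependent = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(dependent))  # ~0.693
```

In the lecture's setting, A would play the role of D-train and B the role of the query prediction y: maximizing this quantity forces y to change when D-train changes.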
So it'd be awesome if we could maximize the mutual information, 'cause then we'd be telling it to rely on D-train, to basically make different predictions for different D-train. Unfortunately, it's difficult to optimize that quantity. And so instead what we can try to do is to minimize the information coming from theta. And of course, we can't only minimize the information coming from theta, because if we did that, then we wouldn't store any information in theta at all. And then we wouldn't be able to do very well. We'd basically just be learning from scratch in that case. And so instead what we're going to do is try to minimize the information in theta and also minimize the meta-training loss. And we'll need to balance those two terms such that we're getting a meta-training loss that we're happy with while also trying to minimize the information in theta as much as we can. Now, the way that we actually go about doing this relies on some knowledge of what's called an information bottleneck. And we'll actually be covering information bottlenecks on Monday next week. So I don't want to go too much into it right now. But the intuition behind what we'll be doing is we're essentially just going to be adding noise to theta. And when you add noise to theta, you're removing information from what's in theta. You're increasing the entropy of that random variable. And so by trying to minimize the meta-training loss under a noisy version of theta, you're basically telling it to be able to do meta-training with less information in theta as compared to previously. And so the specific equation for what this will look like is: you have your meta-training loss function, which I've just written out as L here, and the way that you write out a typical information bottleneck is the KL divergence between your distribution over theta.
And some prior distribution, which is just a standard Gaussian distribution. And the reason why this corresponds to adding noise is that Q is going to be a Gaussian distribution with a mean of theta and a diagonal variance. And so a sample from that Gaussian distribution corresponds to adding noise to the mean of the Gaussian distribution. And hence why you can think about it as adding noise to the variable. And so essentially what this is going to do is place precedence on using information from the support set over using information from theta, because using information from theta is now going to be harder: you have to use information from theta after noise has been added to theta. And you resample the noise every time you use the meta parameters theta. Yeah? [INAUDIBLE] to calculate, actually calculate the [INAUDIBLE] which predicts the task identifier from theta and then minimize it, so [INAUDIBLE] so for example you can actually take the labels from like, [INAUDIBLE] you know which option they're coming from, right? So you can have a loss that tries to disentangle the information? Yeah. So it's an interesting idea. So here we're just kind of blindly trying to add noise and reduce information from theta. Maybe we can try to do it in a slightly more targeted way, by trying to explicitly say you shouldn't be able to predict the memorized information from theta. You were suggesting that you shouldn't be able to predict the task identifier, which wouldn't quite work. The tricky part is that, in the pose example, the memorized information basically corresponds to the canonical orientation of each of the meta-training objects, for all the objects. And I don't see any way to try to get at that information in a targeted way. But we can also discuss it. And if any good ideas come out of it, then it could be interesting to explore. Yeah?
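As an aside, the objective just described, the meta-training loss evaluated under a noisy theta plus a KL penalty to a standard normal prior, can be sketched numerically. This is my own illustrative code under those assumptions, not the paper's implementation; all names are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(mu, log_sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over all meta parameters."""
    return 0.5 * np.sum(np.exp(2 * log_sigma) + mu**2 - 1.0 - 2 * log_sigma)

def sample_noisy_theta(mu, log_sigma):
    """Reparameterized sample theta = mu + sigma * eps; resampled on every use."""
    return mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)

def regularized_objective(meta_train_loss, mu, log_sigma, beta=1e-3):
    """Meta-training loss under noisy theta, plus the KL 'information' penalty."""
    theta = sample_noisy_theta(mu, log_sigma)
    return meta_train_loss(theta) + beta * kl_to_standard_normal(mu, log_sigma)

# Sanity check: with mu = 0 and sigma = 1, the KL term vanishes.
print(kl_to_standard_normal(np.zeros(5), np.zeros(5)))  # 0.0
```

The coefficient beta is the balance mentioned earlier between a meta-training loss you are happy with and squeezing information out of theta.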
That's your support set from the meta parameters? And then try to predict the task identifier for that, and that's what restricts your stored information about the tasks in the meta parameters? So are you saying to erase this arrow and have an arrow from here to here? Or are you saying to add an arrow from D-train to theta? And then-- Just trying to offer one possible interpretation of what my colleague said: it would be to take your support data point, get it through the meta parameters, and then [INAUDIBLE] learn a neural network that will try to predict the task identifier; that would restrict your stored information about tasks in the meta parameters. Yeah. So you could try to have something on top of the activations that basically makes it such that it can't predict the task identifier from them. The tricky part is you don't want to restrict information from D-train in any way; you only want to restrict it from theta and not from D-train. But maybe we can also discuss it a little bit more after class. Cool. Now, this regularizer that adds noise to weights is actually used in other parts of deep learning. It's often referred to as Bayes by backprop, where you're basically imposing a Gaussian distribution on the weights of a neural network. And once you define this regularizer, which is basically just adding noise to the weights, you can combine it with your favorite meta-learning algorithm. And it's applicable to any of the algorithms that we've talked about in this class. Cool. So how well does this work? So I mentioned that this regularizer has been used in standard deep learning. It hasn't actually been that successful there. Usually you can get a few percentage points of improvement. But if we look at meta-learning settings, it actually tends to have a more drastic effect. So for example, if you take the Omniglot data set which you've been working with in your homework.
And you remove the label shuffling, such that the labels are consistent across tasks. We'll refer to this as non-mutually-exclusive Omniglot. The original MAML algorithm in this case will get very low performance. There's another work that proposed a form of regularization that may actually be somewhat similar to what was suggested. But it also doesn't work very well. Whereas if you do this approach of adding noise to the weights, you're actually able to recover a lot of the performance in comparison to using label shuffling. And similarly, on the pose prediction task, which is more interesting than the Omniglot tasks because it's harder to just shuffle the labels, if you look at the mean squared error between the predicted orientation and the correct orientation, we see that we can get a much lower mean squared error when incorporating this form of meta regularization. And you see an improvement both with the MAML algorithm as well as with an algorithm called conditional neural processes, which is a form of black-box meta-learner. Yeah? If we-- [INAUDIBLE] weights of the network. How would this compare to essentially keeping the earlier layers of the network but re-initializing the last layer to something completely random? So then you really take the last layer out of the training of the final prediction task, but it's still learning the features. So it's like the coordinates in that sense? So the question is, how would-- so this is adding noise to all the weights. So how would this compare to meta-training the earlier layers as normal but then always re-initializing the last layer, and you'd also do that during meta-training, and then fully train the last layer in the inner loop. Yeah. So I think that something like that would-- off the top of my head, I think that something like that would work.
In general, things like prototypical networks actually often don't suffer from this memorization problem, because you're forcing it to do nearest neighbors. It has to use that learning algorithm. And it's much more difficult to memorize how to map from inputs to labels when you literally have to do these nearest-neighbor comparisons. And so given the relation to things like prototypical networks, I would expect that to work, especially for classification. I might have to think about it a little bit more for regression. But I suspect that something like that might work, because you're forcing it to learn part of the network from scratch in the inner loop. But I may also need to think about it a bit more. Yeah. Yeah? What's the intuition, I guess, behind adding noise versus something like dropout? I guess a similar thing? Yeah. So the question is, what's the difference between adding noise versus something like dropout? So in this paper we actually looked at adding noise to the weights and also adding noise to the activations. And something like dropout is going to be a lot more similar to adding noise to the activations. And we found that adding noise to the activations actually worked in some cases but didn't work well in all cases. And so the reason why there's a W here is that it refers to adding noise to the weights, versus adding noise to the activations. And you can see the activations results in the paper. Part of the challenge with adding noise to the activations, or doing something like dropout, is-- I guess it depends on which layer you add noise to. But essentially, say you have a neural network that is making predictions about a corresponding input. So then say you decide to, I don't know, try to add noise somewhere. Let's say you add noise right here. You're basically just trying to have less information here.
But what the network can do, especially for classification tasks-- the amount of information in a label is very small. If you're doing 10-way classification, then it's just log 10. And so if you're trying to minimize information in the activations, it can actually minimize information just by basically stuffing the label here. And that's very, very little information that it needs to store. And that's actually less information than what it needs to memorize. And so I don't know if that was the most clear explanation. But basically, the place where it's memorizing is really in the weights of this neural network and not in the activations of this neural network. The short answer is it works a lot better on the weights than the activations. Cool. And then, yeah, I guess we also have dropout in these comparisons. But you can also just try applying things like weight decay or a version of Bayes by backprop. And we see that the regularization we've talked about works much better. Cool. Now, one thing that I'd like to briefly mention for people who are more theoretically inclined and want to dig a little bit deeper into this form of regularizer: you could ask, we've seen that in practice it really improves generalization a lot, but does it actually do so in a more provable sense? And there's a way to basically derive a generalization bound for this approach. And it looks rather complicated. Something like this. But basically what this amounts to is we're trying to bound the generalization error, or basically how well this meta-training algorithm will generalize to new meta-test tasks. So that's the left-hand side. The right-hand side is a bound on the error. And so this corresponds to the error on the meta-training set plus the generalization gap. And the term in the generalization gap is exactly the meta regularization that we're applying when we add noise to the weights.
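For readers without the slide, the bound being described has the general flavor of a McAllester-style PAC-Bayes bound. The following is only a schematic illustration of where the KL term lands, not the paper's exact statement; here P is a prior and Q the (noisy) posterior over meta parameters, n is the number of meta-training tasks, and 1 - delta the confidence level:

```latex
\underbrace{\mathcal{L}_{\mathrm{test}}(Q)}_{\text{error on new tasks}}
\;\le\;
\underbrace{\hat{\mathcal{L}}_{\mathrm{meta\text{-}train}}(Q)}_{\text{meta-training error}}
\;+\;
\underbrace{\sqrt{\frac{D_{\mathrm{KL}}(Q \,\|\, P) + \log(n/\delta)}{2(n-1)}}}_{\text{generalization gap}}
```

The point is simply that the KL divergence D_KL(Q || P), which is the meta-regularizer added when noise is injected into the weights, appears inside the generalization gap, so shrinking it tightens the bound.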
So really the gist of this is that you can actually theoretically show that you can bound the generalization error, or at least a bound on the generalization error, by using this form of regularization. Cool. So to summarize the memorization problem: we talked about this form of overfitting in meta-learning where you're memorizing a mapping from x to y. You're memorizing the x-to-y function rather than actually using the training data and learning to learn. And this is analogous to standard overfitting in supervised learning, where instead of memorizing functions, you're memorizing individual training data points. And similarly, you can think of meta regularization as being analogous to standard regularization in supervised learning. Although in supervised learning you're regularizing the hypothesis class, you can also think of this as regularizing the hypothesis class of the meta-learner. And this is a particular interpretation of controlling information flow. And you can view the specific meta regularization that we talked about as trying to minimize the description length of the meta parameters. And so these are also somewhat analogous. Although, like I briefly mentioned before, we actually see a much more drastic improvement from applying this regularization in some meta-learning problems in comparison to standard regularization in deep learning. Any questions on memorization? Yeah? You mean like description length? That's a great question. Do I want to go into that? I guess I put it on the slide. So description length is-- maybe I shouldn't have put it on the slide. I think I'd rather not go into it now, but I'm happy to talk about it in office hours. And if you're interested in learning more, you can read up on the MDL principle, which is the minimum description length principle, and Kolmogorov complexity.
And I guess the gist of it is it's thinking about the length of the program that it takes in order to express a function. The length of the minimum program. Yeah? Isn't that essentially just the amount of information that's stored? Yeah. Basically it's a measure of information. Cool. OK. So we've talked about a problem that comes up if you construct tasks in a certain way. And for those of you that are doing projects along the lines of few-shot meta-learning, I'd encourage you to ask yourself whether this sort of issue might come up in your project. And if so, you can maybe consider revising your project, or think about the regularization techniques that we've talked about, or possibly develop a new regularization technique. Cool. And then in the second part of this lecture, I'd like to talk a little bit about another form of task construction, which is unsupervised task construction. Earlier in this lecture, we revisited a few different examples: image classification, imitation learning, and land cover classification. And these are all scenarios where we assume that we have labeled data from lots of different tasks. But there might be scenarios where we only have unlabeled data. Or maybe we have a mix of unlabeled data and labeled data. Last week, we talked about a class of approaches that can handle the setting where we only have unlabeled data during pre-training. And we did this with different unsupervised pre-training algorithms, like contrastive learning and masked autoencoders. We pre-trained our representation and then fine-tuned something on top of that representation.
And today, we're going to talk about algorithms that actually end up looking very similar to those previous algorithms, except instead of just running fine-tuning on top of the representation, we're actually going to be doing explicit meta-learning, and doing the same kind of few-shot learning that we've seen in algorithms like black-box meta-learning and non-parametric methods and so forth. And the general recipe for this algorithm is going to be: we are given an unlabeled data set, then we propose tasks from that data set, and then run meta-learning on the tasks that we proposed. And this is going to look a lot like the recipe that we saw for contrastive learning as well, where we were basically constructing tasks using different augmentation functions. Of course, the key step here is the middle step. Unlabeled data is just a given, and running meta-learning is what we've already talked about over the past few weeks. So the big question is, how do we propose tasks? And yeah, our goal will be to try to automatically construct tasks from unlabeled data. Now, I'm curious if anyone has thoughts: if you were to try to set out some desiderata for a set of tasks, do you have thoughts on what you would generally want that task set to look like? I feel like it depends on the data that we're dealing with. But for the case of images, for example, I think it will be similar to contrastive, as in similar instances should be like the same class. And then I would like to have a diverse set of labels for my classes. I don't know. Yeah. So it's going to depend a bit on the data, definitely. And it's also going to depend a bit on the tasks. So if at meta-test time you expect to be doing a 10-way classification task, then the tasks you construct during this process will be different than if you think you're going to be doing 1,000-way classification.
And so you want to construct things that look similar to what you are going to see at meta-test time. You can also imagine doing something similar to contrastive learning. Yeah? It's kind of how you do the [INAUDIBLE]. So like they should be prominent enough in the data set to individually be classified as a cluster, and the difference between those clusters [INAUDIBLE] is really prominent. Did you look at the next slide? [INAUDIBLE] Yeah. So to repeat-- maybe to summarize: maybe we want the tasks to be something like what you get from k-means, to be somewhat distinct from each other, while also being somewhat diverse, I guess. So yeah, what I put on the slides is I think that we really want two key criteria. One is that we want the tasks to be pretty diverse. And the reason why we want this is that we want them to be able to cover the test tasks that we are likely to see at meta-test time. And of course, if you know information about what you're going to see at meta-test time, like this 10-way versus 1,000-way distinction, you can incorporate that and actually make it less diverse, to make it more similar to that. And then the second thing you want is for the tasks to have some structure. So if you propose completely random tasks, then your meta-learner won't actually be able to learn those tasks with a few examples. And this maybe gets at the k-means comment a little bit, which is that if you cluster data, intuitively what that's doing is finding the structure underlying the data. And we can try to leverage that structure during the meta-learning process. Yeah. [INAUDIBLE] Is there a way to formally define covering the test tasks? It's easy to define if you can represent the task distribution in a parametric manner. If there is some parameter underlying the task distribution, then you can ask whether this is in the support of the distribution.
If the tasks are not possible to describe in a parametric manner, if they're more non-parametric, like one task is to pick up a shoe and another task is to push a chair-- maybe you can parameterize the shape of the chair and the shape of the shoe, but in general, that's something that's much harder to characterize. Cool. So we'll talk about how we might try to cover diverse and structured tasks, both for image data and text data. This is really the main overview slide. And then on the next few slides, we'll basically just walk through a few examples for constructing tasks that look like this, and also how these algorithms are used in practice. So the first example will actually be using something a lot like k-means clustering. And in particular, we're going to be doing this with images. And if you just have image observations, it's hard to run clustering on raw pixels. And so what we'll first do is run unsupervised learning in order to get a lower-dimensional embedding space. And once we have this lower-dimensional embedding space, it's going to be much easier to construct tasks. In particular, if we want to construct a classification task, then we can find different clusters in this embedding space and define a task as discriminating between examples from those clusters. So for example, one three-way classification task would be to classify between purple examples, green examples, and orange examples. And another task might be to classify between the red dashed cluster and the yellow dashed cluster. And by running clustering to get these tasks, we're going to get parts of the space that have a little bit more structure. If, in contrast, we just randomly sliced hyperplanes, like decision boundaries, through the space, we may end up getting much less structure, because you may get two examples that are very close together but that actually have different labels.
And so in particular, to propose a task, once we have this embedding space, we first sample clusters. If we want to do a two-way classification task, we sample two clusters, like the red cluster and the blue cluster. We sample two images from those two clusters, the blue image and the red image. And that will give us a support set for that task. And then to get a query set, we sample two additional examples from each of those clusters and get a corresponding query set for that task. And then once we go and try to sample another task, we can simply sample another set of clusters, like the purple cluster and the green cluster. These are two-way classification tasks. If we wanted to do an N-way classification task, then we could sample N clusters and discriminate between examples from those N clusters. And you could note that these tasks don't correspond exactly to categorizing typical image categories. But they still have a lot of structure to them. For example, the first task corresponds to classifying whether something is a circular shape versus a pair of objects. And the second is maybe classifying wine bottles, or bottles of beverages, versus something else that has a particular kind of shape. Cool. And then once we have defined these tasks, we just run meta-learning on those tasks. So this is one way to get tasks that are fairly diverse and fairly structured. And the result of this process should be a representation, or a meta-learning model, that is particularly well suited for downstream few-shot classification tasks. And it should actually be better than the initial representation that we used in step one. And the reason for that is that we're explicitly going to be optimizing for few-shot generalization, and for few-shot N-way classification, depending on whatever form of N-way classification we set up in step two.
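The cluster-then-sample procedure just described can be sketched as follows. This is my own illustrative code, assuming embeddings are already produced by some unsupervised method; the tiny k-means and all names are stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(embeddings, k, iters=20):
    """A minimal Lloyd's k-means over a pre-trained embedding space."""
    centers = embeddings[rng.choice(len(embeddings), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(embeddings[:, None, :] - centers[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = embeddings[assign == j]
            if len(members) > 0:                 # keep empty clusters' centers as-is
                centers[j] = members.mean(axis=0)
    return assign

def sample_task(assign, n_way=2, k_shot=1, k_query=2):
    """Sample an N-way task: pick N clusters, then disjoint support/query examples."""
    sizes = np.bincount(assign, minlength=assign.max() + 1)
    eligible = np.flatnonzero(sizes >= k_shot + k_query)  # clusters big enough
    clusters = rng.choice(eligible, size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(clusters):
        idx = rng.permutation(np.flatnonzero(assign == c))
        support += [(int(i), label) for i in idx[:k_shot]]
        query += [(int(i), label) for i in idx[k_shot:k_shot + k_query]]
    return support, query

# Toy embeddings: three well-separated groups of 20 points each.
emb = np.concatenate([rng.normal(0.0, 0.1, (20, 2)),
                      rng.normal(5.0, 0.1, (20, 2)),
                      rng.normal(-5.0, 0.1, (20, 2))])
assign = kmeans(emb, k=3)
support, query = sample_task(assign, n_way=2, k_shot=1, k_query=2)
print(len(support), len(query))
```

Each sampled (index, label) pair would then be looked up in the raw image data set, and the resulting tasks fed to MAML, prototypical networks, or any other meta-learner.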
And so there's a few different ways to actually instantiate the different steps here. For the first step, we could use some unsupervised learning method. This paper is a couple of years old, so it uses unsupervised learning methods that are a couple of years old, but you can also plug in something more recent. The method is called clustering to automatically construct tasks for unsupervised meta-learning, or CACTUs. And then you can combine this with a meta-learning algorithm that operates on those tasks, like MAML or prototypical networks. And then we can look at some of the results. So we looked at using this approach to construct tasks in combination with both MAML and prototypical networks. And so if you combine CACTUs with MAML, this is the result that you get. And we evaluate on miniImageNet, where if you give full labels you get an accuracy of 62%. We can first evaluate a few different unsupervised baselines, which just correspond to doing k-nearest neighbors, logistic regression, or an MLP on top of the pre-trained representation. And you get a few-shot accuracy that's pretty low, around 30%. You can also do something that uses clustering. And in contrast, if you actually run this approach to construct the tasks and do meta-learning on top of those tasks, you get an accuracy that's much higher, around 51%. And if you use a representation that's a bit better, the DeepCluster representation, you actually do even better than that. Yeah. [INAUDIBLE] Oh, are you specifically referring to this method right here? Yeah. [INAUDIBLE] Yeah, so basically all four of these methods correspond to different ways of using the unsupervised representation on its own, and basically using different classifiers or something on top of that. Yeah. So are there labels at test time, meta-test time? Oh, awesome question. Yeah, are there labels at meta-test time? Yes. So in this case, there are 25 labeled examples at meta-test time.
So the meta-training side is completely unlabeled. And then at meta-test time you run it the same exact way as normal labeled meta-training, where you are given a very small training data set — in this case 25 examples, five examples of each of five different classes. [INAUDIBLE] at meta-training time? Yeah. So basically at meta-train time we are creating our own labels and meta-training on that, and then at meta-test time we're using real labels. Yeah? Can you view this process as: first you have the unsupervised projection into a low-dimensional space, and then each task is learning a distortion of that space that moves some images closer together and other images farther away? And if so, can you characterize the kinds of distortions that can be learned via this process — like, is there a family of allowable transformations? Yeah. So you can definitely view this as distorting the space in different ways, and then learning to classify under those different distortions. You're asking, are there ways to characterize the distortions that could be represented? Yeah. My question is, it seems like the adaptability depends a lot on the initial unsupervised clustering. Like, you really just transferred a lot of the burden. And if it produces clusters that don't correspond to all of your meta-test task requirements, then it seems like it's limited by how much you can distort or change the space after it's done constructing-- Yeah, absolutely. So it's going to rely a lot on the initial clustering. And I should clarify that in this method you cluster more than once. And, if I remember correctly, before clustering each time, we applied some random scaling to the embedding, and that was one way to distort it. You could also imagine fancier ways to distort the space as well.
You could also have multiple pre-trained representations, and presumably they'll pick up on different things, or different high-level features, and that may give you even more diverse tasks. And so this really connects to the diversity of tasks that you get. In general, it seems like it's actually able to learn pretty diverse tasks that cover things pretty well, especially given that the gap to supervised here is only about 9%. But there's probably also a lot of room for improvement. And these are results with MAML on miniImageNet 5-way 5-shot, but you see a similar trend with different embedding methods, with different data sets, and also with prototypical networks. Prototypical networks did underperform in some cases, but in general they're both able to do pretty well. Cool. So that was one way to construct tasks. And coming back to some of what we saw with contrastive learning: you can also construct tasks in a very similar way to what we saw there. For image data in particular, we know that an image label won't change if you apply a certain transformation — if you drop out some of the pixels, or translate the image a little bit, or reflect the image. And so if you take Omniglot and apply a transform like this, or if you flip a miniImageNet image, the underlying categorization of the image won't change. And so we can use this to construct tasks, in the same exact way as what we saw in contrastive learning: we first randomly sample some images and assign a label to each of those images. And then for each of those images, we also augment them in some way and assign the same label to the augmented versions. And then the first image set becomes our support set and the second becomes our query set.
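A minimal sketch of this augmentation-based construction. The `augment` function is an assumed stand-in for whatever domain-appropriate transformation you choose (the toy vector "images" and noise augmentation below are made up for illustration):

```python
import random

def augmentation_task(images, augment, n_way=2, rng=None):
    """Build one N-way task: each sampled image founds its own class,
    and augmented copies of it share that class label."""
    rng = rng or random.Random()
    picked = rng.sample(range(len(images)), n_way)
    # Original images form the support set, one per class.
    support = [(images[i], label) for label, i in enumerate(picked)]
    # Augmented versions of the same images form the query set.
    query = [(augment(images[i], rng), label) for label, i in enumerate(picked)]
    return support, query

# Toy "images" (feature vectors) and a toy augmentation: additive noise.
imgs = [[float(i)] * 4 for i in range(8)]
noise = lambda x, rng: [v + rng.uniform(-0.1, 0.1) for v in x]
support, query = augmentation_task(imgs, noise, n_way=3, rng=random.Random(1))
```

As the lecture notes, you could just as well augment the support images too, instead of using the originals.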
And you could also augment in a different way for your support set as well, if you didn't want to use the original images. And in practice, you can use whatever you know about the image domain. So for Omniglot you would use something like translation and pixel dropout. You wouldn't want to flip the image, 'cause that may actually flip the category label of that character. Whereas for miniImageNet you would probably do things like flipping, and there's also AutoAugment here, which corresponds to a number of different transformations. And if you use this approach for task construction, it gets really strong Omniglot performance. In particular, the performance is in the 80s to 90s on Omniglot in many cases, and in the five-shot setting this actually gets pretty close to a fully supervised baseline on Omniglot. And this makes sense, because we know a lot about the structure of Omniglot images — we have a lot of domain knowledge about that. Whereas for miniImageNet it ends up doing somewhat similarly to the clustering approach that we talked about before, and in some cases it slightly underperforms that approach. And that also makes sense: we have a little bit less domain knowledge about the kinds of transformations that will affect natural images. Cool. So those were a couple of examples of how to construct tasks when you have unlabeled images. You can also apply the same recipe to text, with a different task construction technique. One option, which we've seen before, is to formulate a language modeling problem, where the support set is a sequence of characters and the query set is the following sequence of characters. And we've seen this example where the inner loop of the meta-learning process goes through each of these examples: you might have these examples as your support set and these examples as your query set.
And you have different tasks like translating between languages, math problems, or spelling correction. So this is a way to construct meta-training tasks with purely unlabeled text. But we might not want to use this option in all cases. For example, it may be harder to combine with optimization-based meta-learning, and it's also not necessarily the best suited for simple binary classification tasks. The second option, which is basically the other form of generative pre-training we talked about, is to use masked language modeling. And the tasks here will just correspond to masking different words. Concretely, the way that you can do this is, for a given task, sample a set of N unique words and assign each of those words a unique ID. So for example, you could sample the word Democratic and the word capital, and assign an ID of 1 to Democratic and an ID of 2 to capital. And then you can sample sentences containing one of those two words, and mask out the word. And then the task is to classify which word was masked out. And so this is going to give you an N-way classification problem if you sample N words, 'cause you'll basically be asked: of these N words, which one corresponds to the masked-out word? And this allows you to construct text classification problems without actually having access to any labeled text data. So concretely, for the Democratic and capital example, we can have a support set that looks like this, where M corresponds to the masked-out word, the sentence corresponds to the input, and the class corresponds to the label. And we have two examples from class 1 and two examples from class 2. And then the query will correspond to a new sentence that also has a word masked out, and you need to predict whether the label is 1 or 2 — or basically, whether the word is Democratic or capital. Cool.
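A toy sketch of this masked-word task construction. The `[M]` mask token and whitespace tokenization are simplifications I'm assuming for illustration; the actual method operates over a real corpus with a proper vocabulary:

```python
import random

def smlmt_task(sentences, n_way=2, k_per_class=2, rng=None):
    """Construct one masked-word classification task: sample N target words,
    mask them out, and label each sentence by which word was removed."""
    rng = rng or random.Random()
    # Index sentences by the words they contain.
    word_to_sents = {}
    for s in sentences:
        for w in set(s.split()):
            word_to_sents.setdefault(w, []).append(s)
    # Only words with enough sentences can supply k examples per class.
    eligible = [w for w, ss in word_to_sents.items() if len(ss) >= k_per_class]
    words = rng.sample(eligible, n_way)
    examples = []
    for label, w in enumerate(words):
        for s in rng.sample(word_to_sents[w], k_per_class):
            masked = " ".join("[M]" if tok == w else tok for tok in s.split())
            examples.append((masked, label))
    return words, examples

sents = ["the capital city is large",
         "the capital gains tax rose",
         "a democratic vote was held",
         "the democratic party met"]
words, ex = smlmt_task(sents, n_way=2, k_per_class=2, rng=random.Random(0))
```

Each `(masked sentence, label)` pair plays the role of one row of the support or query set described above.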
So you can apply this kind of approach either in a fully unsupervised way, or — and really with the other approaches too — you can combine it with some supervised tasks if you have some available. And so here's what the results look like if you either apply it in a fully unsupervised way, or, for everything on the right-hand side of the table, in a more semi-supervised or supervised way. BERT corresponds to what we talked about last week with standard self-supervised learning. SMLMT is the proposed unsupervised approach. Multitask-BERT corresponds to multi-task learning and fine-tuning on the supervised tasks. There's also an optimization-based meta-learner that's applied only to the supervised tasks, as well as a hybrid method that does meta-learning on both the proposed tasks and the supervised tasks — so that's more of a semi-supervised meta-learning approach. And what we see is that incorporating these tasks generated from the unlabeled data does significantly better than only using the labeled data, both for multi-task learning and also for meta-learning. And we also see that for each of these different text classification problems — in some cases — this is able to outperform BERT-based pre-training. Cool. That's already a lot of results, but if you want more, there are more in the paper that's linked. Cool. So in the second part of this lecture, we talked about this form of unsupervised pre-training. It ends up looking a lot like the unsupervised pre-training that we talked about last week, where we saw masked language modeling, autoregressive language modeling, and generating tasks from augmentations. It has this recipe of: given an unlabeled data set, propose tasks with these different techniques.
And then run meta-learning on it. And really, the key difference between these methods and the ones that we saw last week is that we're actually going to be explicitly running meta-learning, and using the meta-learner to perform few-shot learning, rather than just doing fine-tuning on top of some pre-trained representation. And we also covered a few different ways to propose tasks that have been suggested in prior work, which include classifying between augmented images and different image instances, and classifying a masked word. Cool. Any questions on-- Yeah? [INAUDIBLE] if you have a large number of [INAUDIBLE] is there a way to do few shot, would it happen in the sense of unsupervised pretraining [INAUDIBLE] Given enough test task data — or given enough data on the meta-test task? Yeah. So the question is: if you have enough meta-test data, would the unsupervised pre-training methods that fine-tune their representation start to outperform what we've talked about? And yeah, the answer is yes. So these approaches are really going to shine if you're in a few-shot regime at meta-test time, because they're explicitly optimizing for few-shot learning. And the settings in which unsupervised pre-training is going to do well are settings where you have a bit more data for your target task, because then you don't need to explicitly optimize for few-shot learning — you can simply fine-tune from a reasonable starting point. [INAUDIBLE] is that actually in the few-shot setting, we don't have a masked word, right? So would it kind of have a sentence without a mask, and [INAUDIBLE] Yeah, this is a great question. It's a good observation: all of these tasks have this M token, this masked token, in the sentences, and at test time you often won't have masked tokens.
And so I think that's part of the reason why this is particularly well suited as a semi-supervised approach, more so than a purely unsupervised approach. If you really go through these numbers carefully — it would be nice if there were a bar plot, but if you go through them quickly or more thoroughly — you'll see that the semi-supervised approach is more performant. And I suspect that this is because there's a bit of a distribution shift between the unlabeled tasks and the labeled tasks. [INAUDIBLE] Yeah. So there is one task where unsupervised is actually better than all of the approaches that use labeled data. I'm not sure exactly why that's the case; I'd have to look at the paper a little more closely. But yeah, it's an interesting finding. Awesome. Cool. So that's it for today.
Stanford CS229M — Lecture 3: Finite hypothesis class; discretizing an infinite hypothesis space
OK, now let's talk about math. So last time, where we ended was, we were talking about uniform convergence. We said that our goal for the next few lectures will be so-called uniform convergence, which means that you want to prove that, with high probability, if you take the sup — a sup really just means maximum for this course — over the hypothesis class of the difference between the empirical risk and the population risk, this difference is small. So this is the general idea. And we said that this is different from showing that, for every fixed h, with high probability, |L̂(h) − L(h)| is small. These two statements are of a different nature, because the order of the quantifiers is different. One requires that, with high probability, the event holds that for the entire hypothesis class the population risk is close to the empirical risk. The other one says that you only look at one single h, and you look at the probability that its population risk is close to its empirical risk, and you want to show that this event happens with high probability. So in some sense, the difference is kind of like a union bound, which I'm going to talk about more when we get to proving this kind of statement. In this lecture, we are going to talk about two cases for H. Certainly, this statement depends on H — you cannot hope to prove things like this for every possible capital H. It does depend on the family of hypotheses you think about, and the bound actually depends on that family. So the first part is going to be about the finite hypothesis class, where H is assumed to be finite. And the next part is going to be the infinite case — the infinite hypothesis class.
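Writing the two statements out side by side, in the lecture's notation, makes the quantifier order explicit:

```latex
% Uniform convergence: one high-probability event covers all h at once.
\Pr\!\Big[\ \sup_{h \in \mathcal{H}} \big|\hat{L}(h) - L(h)\big| \le \epsilon\ \Big] \ \ge\ 1 - \delta

% Pointwise concentration: h is fixed before the sample is drawn.
\forall h \in \mathcal{H}: \quad
\Pr\!\Big[\ \big|\hat{L}(h) - L(h)\big| \le \epsilon\ \Big] \ \ge\ 1 - \delta
```

In the first statement the probability is over a single event that controls every hypothesis simultaneously; in the second, the failure event can differ for each h.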
And for an infinite hypothesis class, there are many different ways to achieve this kind of bound. And today, we're going to talk about a relatively brute-force way to do it — in some sense, a reduction to the finite hypothesis class. Essentially, no matter what you do, you are doing a reduction to the finite hypothesis class, but how you reduce to the finite case does matter. So today, we're going to talk about the brute-force reduction, which does show some of the intuition. OK. And so that's a brief overview of what we're going to do in this lecture. So let me just start with the finite hypothesis class. And here is the theorem we're going to prove. There are some conditions. The condition is, as we did last time, we assume the loss is between 0 and 1 for every (x, y) and every hypothesis. This is true for the binary 0-1 loss. It's not true for every possible loss, but if you have other losses, you do small fixes to make these proofs still work. So this is not very essential; it's mostly for convenience. And what we're going to prove is the following statement. For every delta between 0 and one half — this is not very important either; delta is a small number — with probability at least 1 − δ, we have that for every h, |L̂(h) − L(h)| is bounded by √((ln|H| + ln(2/δ)) / (2n)). And recall that the reason why we care about this uniform convergence is that it's useful for bounding the excess risk, right? We have shown that if you have this kind of uniform convergence, then you can prove that your excess risk is bounded. So using what we discussed last time, as a corollary we also get a bound on L(ĥ) − L(h*), where ĥ is the ERM solution.
So L(ĥ) − L(h*) — you pay a factor of 2 in that derivation, so you multiply by 2 — is less than 2·√((ln|H| + ln(2/δ)) / (2n)). OK, cool. So this is the theorem we're going to prove. Before we prove it, you can see that the right-hand side of the bound depends on the size of the hypothesis class: if you have a bigger hypothesis class, then your bound is worse, so it's harder to prove uniform convergence when you have a larger hypothesis class. And if you try to interpret this bound on the excess risk, we can see that you need n to be bigger than the log of the size of H for the right-hand side to become meaningful — you want the excess risk to be something smaller than 1, at a minimum. So you need n to be at least larger than log of the size of the hypothesis class. That's why you need enough samples to make these bounds meaningful. And, of course, as n goes to infinity, you have a better and better bound. I'm going to have more discussion after we prove the theorem. OK? So now let's try to prove the theorem. The outline of the proof is that first, you prove this for an individual h — the simple version, basically as we discussed last time. And second, we take a union bound over all h. OK. So let's do the first step. Recall that last time, we did this already somewhat informally; here I'm doing it a little more formally. Last time we used the Hoeffding inequality to get something like |L̂(θ) − L(θ)| is on the order of 1/√n. And today, I'm going to give a slightly more careful derivation to get all the dependencies exactly, up to constants.
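To get a feel for the numbers, here is a small helper (my own illustration, not from the lecture) that evaluates the theorem's bound and its ERM corollary — note how the bound is vacuous until n comfortably exceeds ln|H|:

```python
import math

def finite_class_bound(H_size, n, delta):
    """Uniform convergence for a finite class: with prob >= 1 - delta,
    |L_hat(h) - L(h)| <= sqrt((ln|H| + ln(2/delta)) / (2n)) for all h."""
    return math.sqrt((math.log(H_size) + math.log(2.0 / delta)) / (2.0 * n))

def excess_risk_bound(H_size, n, delta):
    """Corollary for the ERM: pay an extra factor of 2 on the deviation."""
    return 2.0 * finite_class_bound(H_size, n, delta)

# With n much smaller than ln|H| the bound exceeds 1 (vacuous);
# with n much larger it becomes meaningful.
vacuous = excess_risk_bound(H_size=2 ** 20, n=10, delta=0.05)
meaningful = excess_risk_bound(H_size=2 ** 20, n=10_000, delta=0.05)
```

Here ln(2^20) ≈ 13.9, so n = 10 samples give a bound above 1, while n = 10,000 gives an excess-risk bound of a few percent.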
And by the way, θ and h are the same thing; h is just what you use when you talk about a finite hypothesis class, where you don't necessarily have a parameter — you may just list all the hypotheses. When you parameterize it, you have the parameter θ. But for this purpose, they are not different at all. So let's apply the Hoeffding inequality from last lecture, where aᵢ = 0 and bᵢ = 1, since the loss is bounded between 0 and 1. We get that for every h in H — suppose this h is fixed and then you draw your sample — we look at the probability that |L̂(h) − L(h)| ≤ ε. And here, what's random? The randomness comes from the data set, which goes into L̂. If you use the Hoeffding inequality, you get that this probability is at least 1 − 2·exp(−2n²ε² / Σᵢ(bᵢ − aᵢ)²). And because bᵢ = 1 and aᵢ = 0, the sum Σᵢ(bᵢ − aᵢ)² is n, so you get 1 − 2·exp(−2nε²). In other words, if you look at the other side of the bound — the chance that they differ by more than ε — that chance is at most 2·exp(−2nε²). Actually, in many cases the Hoeffding inequality is stated this way, instead of the way that I showed before. They are exactly the same; it's just the complementary event: if you have a lower bound for some event, then you have an upper bound for the complementary event. And now, for every h you have this: a failure event that happens with small probability. And now, let's recall the so-called union bound.
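The 2·exp(−2nε²) tail can be checked numerically for a single fixed hypothesis. The simulation below (my own sketch; Bernoulli losses are an assumption, just one distribution satisfying the 0–1 boundedness condition) estimates the deviation probability by Monte Carlo and compares it to Hoeffding's bound:

```python
import math
import random

def deviation_prob(n, eps, trials=2000, p=0.3, rng=None):
    """Monte Carlo estimate of Pr[|empirical risk - population risk| >= eps]
    for one fixed hypothesis whose per-sample 0-1 loss is Bernoulli(p)."""
    rng = rng or random.Random(0)
    bad = 0
    for _ in range(trials):
        emp = sum(rng.random() < p for _ in range(n)) / n  # empirical risk
        if abs(emp - p) >= eps:
            bad += 1
    return bad / trials

n, eps = 200, 0.1
empirical = deviation_prob(n, eps)
hoeffding = 2 * math.exp(-2 * n * eps ** 2)  # the 2*exp(-2*n*eps^2) tail bound
```

The bound is not tight (Hoeffding makes no assumption beyond boundedness), but the empirical failure rate always sits below it.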
And the union bound says that if you look at the union of a bunch of events — let's say k events — the probability of the union is at most the sum of the probabilities of each event. And here, suppose E_h corresponds to the event that |L̂(h) − L(h)| differs by more than ε. Then the probability of the union of the E_h's — which is basically the union of these failure events, meaning that there exists h such that |L̂(h) − L(h)| > ε — is at most the sum of the probabilities of each of the events. OK. And now, plug in what we prepared — let's call the Hoeffding bound above equation (1). If you plug in (1), you get the sum over all h of 2·exp(−2nε²): each of these events is small, and you multiply by the total number of possible events, which is the size of H. So basically, we have 2·|H|·exp(−2nε²). OK? And you can see that this is basically what we wanted to have, because now we have a bound on the probability that there exists an h where the risks differ. The complement is just: this probability equals 1 minus the probability that for every h, the flipped event is true. By the way, I'm not distinguishing strict and non-strict inequalities in most of this course — technically, I probably should write "less than ε" instead of "less than or equal to ε", but for this course I'm not super careful about this, because it doesn't really matter much. In many cases, the probability that the deviation is exactly equal to ε is actually 0, so technically both are even correct. But anyway, this is not super important for this course. Because of this, you can see that this is what we care about: for every h, L̂(h) is close to L(h).
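The same Monte Carlo check works for the union-bounded quantity. The sketch below (my own illustration, with each hypothesis's losses modeled as an independent Bernoulli — an assumption, not the general case) estimates the probability that *any* of |H| hypotheses deviates by ε and compares it to 2·|H|·exp(−2nε²):

```python
import math
import random

def sup_deviation_prob(H, n, eps, trials=400, rng=None):
    """Monte Carlo estimate of Pr[exists h: |L_hat(h) - L(h)| >= eps]
    over a finite class of H hypotheses with Bernoulli loss rates."""
    rng = rng or random.Random(0)
    risks = [rng.uniform(0.1, 0.9) for _ in range(H)]  # population risks L(h)
    bad = 0
    for _ in range(trials):
        for p in risks:
            emp = sum(rng.random() < p for _ in range(n)) / n
            if abs(emp - p) >= eps:
                bad += 1
                break  # one deviating hypothesis fails the whole trial
    return bad / trials

H, n, eps = 20, 100, 0.15
empirical = sup_deviation_prob(H, n, eps)
union_bound = min(1.0, 2 * H * math.exp(-2 * n * eps ** 2))  # 2|H|e^{-2n eps^2}
```

As the lecture notes, the union bound is pessimistic when the failure events overlap, so the gap between `empirical` and `union_bound` can be large; but the inequality always holds.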
And we are almost there. The only thing we need is to upper bound this failure probability so that we can lower bound the success probability. OK? So now, let's choose ε accordingly: you want the success probability to be at least 1 − δ, so you want the failure probability to be at most δ. Basically, we just need to choose ε so that this probability becomes δ: choose ε such that 2·|H|·exp(−2nε²) = δ. This involves solving the equation, which is not too hard. If you solve it, you get exactly what I had before: ε = √((ln|H| + ln(2/δ)) / (2n)). So basically, if you take ε to be this, then you know that the probability that there exists h with |L̂(h) − L(h)| > ε is less than δ. And then if you flip the event, you get the desired theorem. Any questions so far? OK, so let me make a few remarks to interpret what we have done and compare it with what we did in the first lecture. If you compare with the asymptotic results, you're going to see this: for the asymptotic results, what you got is that the excess risk L(ĥ) − L(h*) is bounded by something like c/n + o(1/n). And recall that this c can depend on the dimension of the problem — it can hide any other dependencies on the problem. And what we have now is that the excess risk is bounded — here, you don't hide anything except a constant — by O(√(ln|H| / n)), and of course you also have something like O(√(ln(1/δ) / n)).
So this term is supposed to be relatively small, because δ enters only logarithmically: you can take δ to be something like n^(−10), and you still only get √(log n) over √n, which is almost negligible compared to the first term. So let's say we ignore this term for the comparison, and compare the two bounds. The first thing is that we have a worse dependency on n: before, at least in the leading term, the dependency on n was 1/n, and now it's 1/√n, so it goes to 0 slower. This could be improved in certain cases, which we probably won't do in general — in one of the homework questions, you're going to be asked to improve this to some extent. In some cases, you can improve this to 1/n, depending on the situation. But generally, you get a relatively worse dependency on n compared to the asymptotics. One of the reasons this happens is — partly — that we didn't assume twice differentiability of the loss function. Here, the only assumption we have on the loss function is that it's between 0 and 1, so it even works for the 0-1 classification loss. But before, we did assume that the loss is continuous and differentiable, and I think we also assumed it's twice differentiable. That does play a fundamental role here: when we don't have twice differentiability and we don't have other assumptions, it's essentially impossible to get 1/n rates in many cases. But what I'm describing here is all about the downside of our new bound. The pros we actually already mentioned: the main pro is that now, we don't have hidden dependencies on anything.
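Spelled out, the choice δ = n^(−10) makes the second term:

```latex
\delta = n^{-10}
\quad\Longrightarrow\quad
\sqrt{\frac{\ln(1/\delta)}{n}} \;=\; \sqrt{\frac{10 \ln n}{n}} ,
```

which is lower order than the leading √(ln|H| / n) term whenever ln|H| is much larger than ln n, so ignoring it in the comparison is harmless.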
So recall that last time, when we motivated non-asymptotic bounds, we said that this little-o term could hide a lot of things. It could hide, for example, something like dimension to the 50th power — that's my extreme example — over n². So p⁵⁰/n² would be counted as o(1/n). That doesn't make a lot of sense, because if the dimension is too high, then this requires n to be very big for the bound to be small. This was the issue we mentioned last time about asymptotics, and now we have fixed that issue. That's the main benefit we gain: we don't have any hidden dependencies. And also, we can see explicitly how the bound depends on the complexity, in some sense, of the hypothesis class — you can think of ln|H| as a complexity measure of the hypothesis class. If you have been through CS229, we talked about how you can overfit if you have too complex a function class but not enough data. And this is, in some sense, a mathematical characterization of that: if your function class is so complex that ln|H| is too big, and you don't have enough data compared to ln|H|, then you may have a worse bound. On the other hand, if ln|H| is small and your n is bigger than ln|H|, then you have a better bound, which can be meaningful. Any questions so far? How does the [INAUDIBLE]? We are doing-- yes-- or no. This is about the differentiability of the loss function. Depending on how you think about it — by differentiability, I really mean this function that takes in ŷ and y and outputs a scalar. It takes the prediction and the real label and outputs a scalar. So the question is whether this function is differentiable with respect to y and ŷ. And we didn't assume that this function is differentiable here.
But implicitly, you were assuming that this loss function is differentiable with respect to y and ŷ in the previous asymptotic analysis, because there, we actually assumed the whole loss function, composed with the model, is differentiable. [INAUDIBLE] So-- I didn't hear very-- [INAUDIBLE] practical implementation of floating point numbers-- did you use the same bound? For practical implementations where you have floating points? Yeah. [INAUDIBLE] So let me rephrase the question a little bit, also for the people on the Zoom meeting. The proposal is that, for example, if you have a practical model with p parameters, then when you really implement this on a computer, it's not continuous: you can think of each parameter as described by, say, 32 bits. Then you can count the total possible number of different models and apply this bound. So yes, that's a good idea. And what will that give you? Suppose you have p parameters, each with 32 bits. That means that for every parameter, you have 2³² choices, and you raise that to the power p. So ln|H| would be something like 32·p·ln 2 — that is, O(p), a constant times p. So basically, you get a bound that depends on the number of parameters. This is reasonable in some cases and not very reasonable in others, but it's definitely a bound that makes sense. In some later parts of the course, we are going to see how to get bounds that don't depend on the number of parameters. But if you are fine with a bound that depends on the number of parameters, then this is indeed a good bound. And this is a natural question that leads me to the second part of today's lecture.
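The quantization argument in numbers (my own illustration of the computation just described): with b bits per parameter, |H| = (2^b)^p, so ln|H| = p·b·ln 2, and the finite-class bound applies directly.

```python
import math

def quantized_param_bound(p, n, delta, bits=32):
    """Treat each of p parameters as a b-bit value: |H| = (2**bits)**p,
    so ln|H| = p * bits * ln 2 = O(p), and the finite-class bound applies."""
    ln_H = p * bits * math.log(2.0)
    return math.sqrt((ln_H + math.log(2.0 / delta)) / (2.0 * n))

# The bound is non-vacuous roughly when n >> p * bits.
loose = quantized_param_bound(p=1000, n=5_000, delta=0.05)
tight = quantized_param_bound(p=1000, n=5_000_000, delta=0.05)
```

For p = 1000 parameters at 32 bits, ln|H| ≈ 22,000, so n = 5,000 gives a vacuous bound while n = 5,000,000 gives one of a few percent.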
So this proposal has a small con, a small kind of problem, which is that you have to resort to the practical implementation: in practice, I cannot really implement real numbers — I have to discretize in some way. And sometimes, you put an additional restriction on yourself, saying: if I can only use floating points, then what bound can I have? What I'm going to discuss next is that you don't even need this. You can say that even for all possible continuous models — p parameters, each parameter really a real number, as if you had an almighty computer with infinite precision — your bound would still look something like this. You still have an O(p)-type bound. So once we have the infinite-hypothesis-class proof, you don't need this practical way of proving it; you have a more general, stronger way to prove it, and that's what we're looking for. Cool. So maybe let's start to do that. Let's talk about the infinite hypothesis class. And as I suggested a little bit before, we are going to have a bound that looks like √(p/n), where p is the number of parameters. Today, we're going to do this by so-called brute-force discretization. This is at least how I name this technique — because this technique is brute force, I guess there's no real name for it. And what you can do is the following. Maybe let me state the theorem that I'm going to prove first, and then I can tell you the intuition and how to prove it. So this is the theorem — OK, I guess I'm still setting up. Suppose H is parameterized by θ in p dimensions. So mathematically, you write H as a family of h_θ.
Each h_θ is a parameterized model, where θ is in some set Θ, a subset of R^p. So capital Θ is the set of parameters you are going to choose from — in some sense this is for convenience, and it doesn't really matter much. Suppose you only select models from this set, where the norm of the parameter is bounded by B. Our dependency on B will be only logarithmic, so this is not really a real restriction: you can choose B to be pretty big, just because the dependency on B is very relaxed — it's logarithmic in B. So this is our setup. And also recall that we're sometimes going to use these notations interchangeably: ℓ(x, y, θ) is really just the loss of the model θ on the data point (x, y) — it compares h_θ(x) and y, and you get the loss. These are just the same thing; we are abusing notation a little bit. And also recall that we have L(θ) and L̂(θ), all as we defined before. So here's the theorem. We still have to assume that the loss is between 0 and 1 — this is probably assumed throughout most of this course — for every x, y, and θ. And here is an additional assumption: suppose this loss function is κ-Lipschitz in θ, for every x and y. What does this really mean? It means that |ℓ(x, y, θ) − ℓ(x, y, θ′)| ≤ κ·‖θ − θ′‖: if you change your model from θ to θ′, then your loss changes by at most the constant κ times the norm of θ − θ′. And our dependency on κ will also be logarithmic, so in some sense this is also not assuming much, because if your loss is somewhat continuous, then it's going to be Lipschitz to some extent.
Probably the Lipschitz constant is not very good, but it would be something reasonable. And if you take the logarithm of it, the bound is not very sensitive to the Lipschitz constant. And then with this, you get, with probability at least 1 minus, I guess, O of e to the minus p-- so actually, you have an even lower failure probability; the failure probability is e to the minus p. So with such a small failure probability, you get that for every theta, the uniform convergence gap is less than some big O of the square root of p over n, times the max of 1 and the log of kappa B n. So eventually, the dependency on kappa and B is logarithmic-- that's what I promised. And the main thing is really p over n, so you get the parameter dependency and the sample dependency. And you still have the square root here, so this is still worse than the asymptotic bound if you compare with the leading term of the asymptotic bound. But as we said, you don't have the second-order term of the asymptotic bound. So how do we prove this? So actually, the proof is very similar to what was suggested in the question. You are doing this quantization, and then you deal with the discretization error separately. So what you do is the following. So let me start with a sketch in some sense. So the rough sketch is the following. You define E theta to be the event that-- this failure event-- L hat theta minus L theta is larger than epsilon. And epsilon is going to be something TBD. But epsilon would be very similar to this bound, because you care about whether these two are this much different. But anyway, it's some number-- this is kind of like a placeholder. So you care about this kind of event. And we know that this will be a small-probability event, as we have shown for the finite case. So this E theta is a small-probability event.
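To keep the target in view, here is the setup and the claim just stated, written out in LaTeX. This is my transcription of the spoken statement; the exact placement of constants inside the big-O is my best guess from the lecture.

```latex
% Setup: H = \{ h_\theta : \theta \in \Theta \},\ \Theta \subseteq \{\theta \in \mathbb{R}^p : \|\theta\|_2 \le B\}
% Assumptions: 0 \le \ell(x, y, \theta) \le 1 for all x, y, \theta, and \ell is \kappa-Lipschitz in \theta:
%   |\ell(x, y, \theta) - \ell(x, y, \theta')| \le \kappa \,\|\theta - \theta'\|_2
% Conclusion: with probability at least 1 - O(e^{-p}),
\forall\, \theta \in \Theta:\quad
\bigl|\widehat{L}(\theta) - L(\theta)\bigr|
\;\le\; O\!\left(\sqrt{\frac{p \,\max\!\bigl(1,\ \ln(\kappa B n)\bigr)}{n}}\right).
```

Note that B and kappa appear only inside the logarithm, which is the sense in which the boundedness and Lipschitz assumptions are "not really restrictions."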
And before-- what we did before is we said that the probability of the union of these E theta is less than the sum of the probabilities of the E theta. But now, because you have an infinite number of theta, this sum is infinite-- each one is a small-probability event, but you take the sum over all possible events and you get infinity. Each of these will be some epsilon, and you take the sum of an infinite number of things, so you get infinity. That's why it doesn't work-- you cannot use exactly the same argument as before. But the reason why this can be fixed is because this union bound is very pessimistic. So if you think about the union bound-- I'm not sure how you learned union bounds in previous lectures, but what I learned about the union bound is the following. For example, this is the full probability space, and each event takes some part of the space. Maybe this is E1 and this is E2. And the union bound is tight when all of these events-- I call them failure events-- all of them are disjoint. So suppose this is the case. Then the probability of the union of these events will be the sum of the probabilities of each of the events. But here, it's not clear whether these events are disjoint. And actually, they may have a lot of overlap. So you have one theta-- so E theta. And if you change your theta to a nearby theta, you probably have something like this, which is E theta prime. And they have all of this overlap. And then your union bound starts to be very loose. So that also kind of motivates our way to fix it. The way that we fix it is the following. We don't take the union bound over all possible events. We select a subset of events, and take the union bound over them. And then we say the other events will be close to some of this subset of prototypical events.
So basically, the rough idea is that you select some prototypical events-- or maybe I should just say typical events, or exemplar events; I forgot how to spell exemplar. And this subset of events is a smaller set than what you finally care about. And then you use the union bound on the subset. And then you say that the other events are similar to the exemplars. So then you cover all the events-- that's the rough idea. So let's see how we exactly do this. Any questions so far? So to exactly do this, we need to introduce something called a discretized epsilon cover. This is actually also a useful tool for other cases as well. So let me first define this epsilon cover-- it's kind of like a language to describe what I called prototypical events, or prototypical parameters or models. So an epsilon cover-- sometimes it's called an epsilon net-- of a set S, with respect to a metric rho (when you really define it, you have to specify a metric), is a set C, which is also a subset of S. Technically, we don't have to require C to be a subset of S, but in almost all cases, it is. And here, S corresponds to the family of all models you care about. The requirement is that for every x in S, there exists kind of a neighbor in C which is close to x. So if you draw this, you have a set of models-- of parameters-- called S. And the epsilon cover is a subset of S. So you select some points-- these are all in C. And then you say that the set C needs to satisfy the following to be an epsilon cover.
So what it has to satisfy is that for every point x you pick in S, there exists somewhat of a neighbor in C-- let's call this x prime-- such that x prime is close to x. (I guess the cross and the x on the board look the same-- anyway, you see what I mean; the purple cross is just indicating a point.) So you have a point x here, and you can always find some other point x prime in C such that x prime is close to x. So that's basically saying that all of these points in C are prototypical points, because every point in S can find a neighbor in C. Does that make sense? And equivalently, you can also write this in the following way, which in some sense explains why this is called an epsilon cover. So equivalently, you can write this as: S is covered by the union of the balls around all the points of C. Let me write this down and explain. So first of all, this thing is the ball centered at x with radius epsilon under the distance metric rho. So basically, this is saying the following-- this is the equivalent definition of an epsilon cover. If you look at all the balls of radius epsilon around all of these purple points-- in some sense, each point covers the entire ball around it, because for every point in the ball, you can use the center as the neighbor. So basically, every point covers some part of the space. And the requirement is that if you look at all the balls around all the centers, they cover the entire S. That means every point in S is covered by some ball, and that means every point in S has a neighbor in C. Any questions? In some sense, we will insist that-- we will need to find a very small cover C, which is finite.
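The two equivalent forms of the definition just drawn on the board can be written compactly. This is my LaTeX transcription of the board work:

```latex
% C \subseteq S is an \epsilon-cover (\epsilon-net) of S with respect to a metric \rho iff
\forall\, x \in S,\ \exists\, x' \in C:\ \rho(x, x') \le \epsilon.
% Equivalently, S is covered by the \epsilon-balls around the points of C:
S \;\subseteq\; \bigcup_{x' \in C} B(x', \epsilon, \rho),
\qquad \text{where } B(x', \epsilon, \rho) = \{\, x : \rho(x, x') \le \epsilon \,\}.
```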
And also, hopefully, we want the size of C to be small. By definition, there's nothing about whether it's finite or not, but we will construct an epsilon cover that is finite. So so far, this is only a definition saying that C is an epsilon cover of S. And we will try to make C small. And this is exactly what we're going to do next. So how do we construct a set C that is finite and also covers the entire set? So what is S? For us, S is the set of all parameters-- the set of parameters theta with l2 norm less than B. You construct a subset of parameters that can cover all the parameters. So here is the lemma that says you can do this. You can have a finite C, and actually, you can have a reasonable bound on how many points in C there are. So let's define this to be-- I guess, for this lemma, I call this Theta. So Theta is defined as above. So for every epsilon between 0 and B-- for every radius-- there exists an epsilon cover of this Theta, in the l2 norm, with at most 3B over epsilon to the power p elements. So this is a cover, and the size of this cover is bounded by (3B over epsilon) to the power of p. So we're going to prove a weaker version-- we're going to have a homework question which guides you to prove exactly this version. For now, in the lecture, we're going to prove a weaker version which is somewhat easier, and which actually suffices for our purpose. You don't really need the stronger version to prove the final theorem, just because the weaker version is only weaker by a little bit. The homework will guide you towards the stronger version, which also introduces some techniques which are useful. So here is the weaker version. The weaker version is pretty much like how you would discretize on a computer-- you just do a trivial discretization using some grid.
So what you do is you just take C to be a trivial grid, in some sense. What does that mean? It really means that you have this ball, and you take some arbitrary coordinate system-- just the natural coordinate system-- and you discretize your space like this. And then you take all these grid points as your C, and that's it. And then the question is just a matter of counting, and of how fine-grained your grid needs to be. So formally, C is taken to be all the points x in R^p such that each coordinate x_i is a multiple of the grid size-- k times epsilon over the square root of p, for some integer k where the absolute value of k is at most B times the square root of p over epsilon. So epsilon over the square root of p is my grid size, and k is the integer multiplied with it. Why I have this constraint on k is because at some point, you don't need more points-- if your k is too big, you're already outside the ball, and there is no point in going further. And if you do the calculation, this is the right cutoff. And so now, we have to do two things. One is we have to see how large C is, and the second is we have to prove that this is an epsilon cover. Let's do the second thing first. Why is this an epsilon cover? Because if you look at any point x in S, you just round it to the nearest grid point. So you do some rounding-- let's call the result x prime. Let me not write out exactly what the rounding is-- it just means you take any reasonable nearest vertex in this grid and round to it. You just do the trivial rounding; let's say we round down to the smaller number. It doesn't really matter that much. So if you round, what you get is that x_i minus x_i prime is less than epsilon over the square root of p.
Because for every dimension, when you round, you create at most epsilon over the square root of p error-- epsilon over the square root of p is your grid size. And that means that the distance between x and x prime in the l2 sense-- and I should mention that rho here is the l2 norm; the metric we are using is the l2 norm-- so if you look at the l2 norm of the difference of these two things, it's the square root of the sum of (x_i minus x_i prime) squared, for i from 1 to p. And then you bound each coordinate. You get the square root of p times epsilon squared over p, which is epsilon. That's actually why I chose the grid size to be epsilon over the square root of p-- just because I want to make it come out to epsilon right there. So this proves that it's an epsilon cover, right? And also, we can count how large C is. So C is what? C is something to the power p, because for every coordinate, you have a bunch of choices for k. And how many choices for k are there? The absolute value of k is less than B times the square root of p over epsilon. So basically, you get B square root of p over epsilon-- and because k can be positive or negative, you multiply by 2, and it can also be 0, so you add 1. So that's the number of choices per coordinate, and C is that to the power p. And one comment is that eventually, only log C matters, as you'll see. So log C will be p times the log of (2B square root of p over epsilon, plus 1). And that's why this weaker version is not super different from the stronger version-- the stronger version was (3B over epsilon) to the power of p, and its log becomes p log (3B over epsilon). If we compare the stronger version with the weaker version, the only thing that's different is the square root of p inside the log. So that's why, eventually, it doesn't change the bounds too much. Cool. So this is our proof for the weaker version of the lemma. And now, let's use this lemma and the epsilon cover to prove the final bound.
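As a quick numerical sanity check of the grid construction just described (a sketch I wrote, not from the lecture): rounding each coordinate to the nearest multiple of epsilon over the square root of p keeps the l2 rounding error at most epsilon, and the per-coordinate count matches the (2B sqrt(p)/epsilon + 1)^p bound.

```python
import math
import random

def round_to_grid(x, eps, p):
    """Round each coordinate to the nearest multiple of eps/sqrt(p),
    the grid spacing used in the lecture's trivial-grid cover."""
    step = eps / math.sqrt(p)
    return [step * round(xi / step) for xi in x]

random.seed(0)
p, B, eps = 5, 10.0, 0.1

# Every point in the radius-B ball rounds to a grid point within l2 distance eps
for _ in range(1000):
    direction = [random.gauss(0, 1) for _ in range(p)]
    norm = math.sqrt(sum(d * d for d in direction))
    radius = B * random.random()
    x = [d * radius / norm for d in direction]
    x_prime = round_to_grid(x, eps, p)
    err = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_prime)))
    # per-coordinate error <= step/2, so the l2 error is at most eps/2 <= eps
    assert err <= eps

# Per-coordinate choices for k: |k| <= B*sqrt(p)/eps gives 2*floor(B*sqrt(p)/eps) + 1
# values, so |C| <= (2*B*sqrt(p)/eps + 1)^p and log|C| <= p*log(2*B*sqrt(p)/eps + 1)
choices = 2 * math.floor(B * math.sqrt(p) / eps) + 1
log_C = p * math.log(choices)
assert choices <= 2 * B * math.sqrt(p) / eps + 1
```

Rounding to the nearest grid point (rather than rounding down, as in the lecture) actually gives error at most epsilon over 2, which is why the assertion holds with room to spare.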
So as we planned, what we do is: first, apply the finite hypothesis analysis to C-- let's say this is step one-- and then extend step one to the whole set S. So now, the first step should be trivial, because we already proved it. For every fixed theta in C, you have, by Hoeffding's inequality-- exactly the same thing as we have done before-- that the probability of the bad event is at most 2 times the exponential of minus 2n epsilon tilde squared. I'm calling the threshold epsilon tilde, because this epsilon tilde will be tuned later to make the bounds fit. And then you take a union bound. You get the probability that there exists a theta in C such that this fails-- and how small is it? You multiply the size of C with this exponential of minus 2n epsilon tilde squared. So these two steps are exactly as we did before. And if you flip this, you get that the good event happens with high probability-- I'm just flipping it. So now, we have to do the second step. How do we extend this to everything in S? Here we are basically using Lipschitzness. And you can see that this is not really anything super clever-- it's kind of like a slightly refined brute force. So just for some quick preparation: because l of x, y, theta is kappa-Lipschitz in theta, this implies that L theta and L hat theta are both kappa-Lipschitz. Why? This is just because if you average two kappa-Lipschitz functions, they are still kappa-Lipschitz. So if f is kappa-Lipschitz and g is kappa-Lipschitz, then f plus g over 2 is also kappa-Lipschitz. And you can prove this by a simple triangle inequality. And you can do this for multiple functions, not only two functions.
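A tiny simulation of the Hoeffding step above for one fixed theta (my sketch; uniform draws on [0, 1] stand in for the bounded losses l(x, y, theta)): the empirical fraction of trials where the sample mean deviates by more than epsilon tilde should sit below the bound 2 exp(-2 n epsilon-tilde squared).

```python
import math
import random

random.seed(1)
n, eps_tilde, trials = 200, 0.15, 2000
# Hoeffding bound on the failure probability for a single fixed theta
hoeffding = 2 * math.exp(-2 * n * eps_tilde ** 2)

failures = 0
for _ in range(trials):
    # n i.i.d. losses in [0, 1] with true mean 0.5
    losses = [random.random() for _ in range(n)]
    if abs(sum(losses) / n - 0.5) > eps_tilde:
        failures += 1

# Empirical failure rate respects the Hoeffding bound (slack for sampling noise)
assert failures / trials <= hoeffding + 0.01

# The union bound over a finite cover C just multiplies this by |C|
card_C = 10 ** 6
union_bound = card_C * hoeffding
```

This is only an illustration of the concentration step; the cardinality `card_C` is a made-up number standing in for the cover size from the lemma.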
You can do it for n functions. So suppose we have this. Now, we also know that-- we already know that for every theta in C, L hat theta is close to L theta; this is supposed to be conditional on that event. So with very high probability, this happens, and suppose it happens. Conditioned on this event, we want to prove that the same thing happens when we replace C by the full set. And so this means that for every theta in-- I guess I call it capital Theta, not S-- capital Theta, the ball, you can find some theta 0 in C such that the l2 norm of theta minus theta 0 is less than epsilon. This is by the definition of an epsilon cover: C is an epsilon cover of capital Theta; that's why you have this. And then this implies that L theta minus L theta 0 is less than kappa times epsilon-- this is by Lipschitzness. So in some sense, you just use theta 0 as a reference point. And you also know that L hat theta minus L hat theta 0 is less than kappa times epsilon-- this is also by Lipschitzness. So now, with these tools, what you eventually want is to bound the difference between L hat theta and L theta. And we have seen this kind of triangle-inequality manipulation already. You care about the difference between L hat and L, but you use theta 0-- some reference point-- to bridge them. So you do this decomposition. You say that this is L hat theta minus L hat theta 0, plus L hat theta 0 minus L theta 0, plus L theta 0 minus L theta. And now, two of these terms are about differences between theta and theta 0. So the first quantity is less than kappa times epsilon, and the last quantity is also less than kappa times epsilon. And the middle quantity is less than epsilon tilde-- this is because theta 0 is in C.
So we have already proved that for every theta in C, L hat theta is close to L theta. That's why we get these three inequalities. In total, if you take absolute values, you can use the triangle inequality to bound the absolute value of the sum by the sum of the absolute values of each of them. You get 2 kappa times epsilon, plus epsilon tilde-- recall that I used a different epsilon, epsilon tilde, for the concentration, just so that I can tune it eventually. And now is the time to set epsilon to be epsilon tilde over 2 kappa-- or equivalently, epsilon tilde is epsilon times 2 kappa-- so that you balance these two error terms, and you get that this is less than 2 epsilon tilde. So now, let's go back to here, because there is something about the cover size we have to deal with-- we have to plug in the right cover size. And what is the cover size? The log cover size-- log C-- is the log of (3B over epsilon) to the power of p. And I have already set epsilon to be epsilon tilde over 2 kappa, so I plug that in and get p times the log of 6B kappa over epsilon tilde. And you can see that kappa is inside the log-- that's why the bound is not very sensitive to the choice of kappa. And epsilon tilde is also in the log, which is also nice. And now, we have to take care of this failure probability. We basically want to say that this is equal to something like delta-- we want to bound the failure probability, 2C times the exponential of minus 2n epsilon tilde squared. We'll show that this is small. Actually, in this case, I'm hoping to show that this is the exponential of minus p. So how do we show this? Of course, it depends on what epsilon tilde is.
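The triangle-inequality decomposition just described, together with the balancing choice of epsilon, can be written out as follows (my LaTeX transcription of the board work):

```latex
\bigl|\widehat{L}(\theta) - L(\theta)\bigr|
\;\le\;
\underbrace{\bigl|\widehat{L}(\theta) - \widehat{L}(\theta_0)\bigr|}_{\le\, \kappa\epsilon
\ \text{(Lipschitz)}}
+ \underbrace{\bigl|\widehat{L}(\theta_0) - L(\theta_0)\bigr|}_{\le\, \tilde{\epsilon}
\ (\theta_0 \in C)}
+ \underbrace{\bigl|L(\theta_0) - L(\theta)\bigr|}_{\le\, \kappa\epsilon
\ \text{(Lipschitz)}}
\;\le\; \tilde{\epsilon} + 2\kappa\epsilon
\;=\; 2\tilde{\epsilon}
\quad\text{for } \epsilon = \tilde{\epsilon}/(2\kappa).
```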
So you need to choose the right epsilon tilde such that this is true, and that's basically your final bound. And just to get some intuition-- you're going to see that the exact calculation is a little bit complicated, but here is a heuristic, which is not exactly correct, but it's approximately correct. So suppose, optimistically, that log C is equal to p, instead of p times the log of 6B kappa over epsilon tilde-- suppose you just have p, without the log term. Then this becomes a very simple calculation. Basically, if you take the log of the desired inequality-- let me see-- you get log 2, which is not super important, plus log C, minus 2n epsilon tilde squared. And suppose log C is equal to p. Then you've got p minus 2n epsilon tilde squared. And if you take epsilon tilde to be the square root of p over n, then this is equal to p minus 2p, which is equal to minus p. Which means that 2C times the exponential of minus 2n epsilon tilde squared-- if you take the exponential back-- is less than the exponential of minus p, up to a constant. So this is fundamentally how it works. But we did make this incorrect assumption that log C is equal to p. It's not very far off, though-- it's only off by a log factor. So if you want to fix this, technically, you need to deal with the log factor. It wouldn't change much, but it would introduce a little bit of complication. So I did have the calculation here. I'm just going to basically write it down, but I don't really expect that you follow all of it. It took me one hour to even figure out all the constants and so on-- it's not super important. I think the intuition is already there. But let me just quickly write this, just to show what you do formally. So suppose we only have this bound on log C.
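The heuristic calculation just done (pretending log C equals p and taking epsilon tilde to be the square root of p over n) can be checked mechanically. A sketch:

```python
import math

p, n = 50, 10_000
eps_tilde = math.sqrt(p / n)   # the heuristic choice from the lecture
log_C = p                       # optimistic: pretend log|C| = p, no log factor

# log of the failure bound 2 * |C| * exp(-2 * n * eps_tilde^2)
log_fail = math.log(2) + log_C - 2 * n * eps_tilde ** 2

# 2 * n * eps_tilde^2 = 2p exactly, so log_fail = log 2 + p - 2p = log 2 - p
assert abs(log_fail - (math.log(2) - p)) < 1e-9

# hence the failure probability is 2 e^{-p}, i.e. O(e^{-p})
assert math.exp(log_fail) <= 2 * math.exp(-p) + 1e-30
```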
This is p times the log of 6B kappa over epsilon tilde. And then let's take epsilon tilde to be the square root of some constant C0 times p over n, times the max of the log of kappa B n and 1. This epsilon tilde is actually the final bound-- that's why you're going to see the same thing in your final bound. And C0 is a sufficiently large constant, which we will choose a bit later. And you plug all of this in-- again, you look at the log of the inequality we care about, and you plug in this choice of epsilon tilde. You get p times the log of 6B kappa over epsilon tilde, minus 2n epsilon tilde squared. And you somehow know that if you ignore this log, it already works-- it's just that if you have the log, you still have to deal with it. I'm not even sure whether I really have to write down all of this, but just in case some of you want the hard calculation: I decompose the first term into a p log 6B kappa piece and a piece involving the square root of C0 p, and you subtract C0 p times the log of kappa B n. The way that I always think about this calculation is that you check what happens if you don't have the log. If you don't have the log, then the negative term is a large constant times p, and the positive terms are on the order of p, so that's why it works out. So eventually, if you take C0 to be something like 32 or 36, I think you can show that the negative term dominates the first one, and the other one is inactive when p is large. And then you get that this is less than minus p. The exact calculation-- there is some more detailed calculation in the notes, but it doesn't matter that much. So that's what we do.
So then, basically, this is saying that if you take the exponential, after this inequality you get that 2C times the exponential of minus 2n epsilon tilde squared is less than 2 times the exponential of minus p. So this is our failure probability. So basically, with probability larger than 1 minus O of e to the minus p, we'll have that L hat theta minus L theta is less than 2 epsilon tilde, which is the thing that we wanted-- let me not just copy it again. Cool. So that's the proof. And this proof is a little messy, and that is probably one of the reasons why, if you open up a classical machine learning book, they typically don't show you this proof. It's just a little messy. But the reason why I always try to show this proof is that I feel like it's very intuitive, and it demonstrates what's really going on. And also, this kind of thing is actually useful for many recent works on neural networks, if you look at the technical, low-level details. So the fancy Rademacher complexity machinery that we are going to talk about next is very nice, but sometimes it doesn't apply, and you have to go back to the most brute-force way to think about it. So maybe just a few quick comments about this proof. If you really think about it, this is really saying that the generalization error is less than, up to constant factors, the square root of log C over n, plus epsilon times kappa-- sorry, this is not k, this is kappa. So the first part is from the finite hypothesis case, and the second is the discretization error. And in some sense, you are just trading off these two. And what I mean by trading off these two is really: what epsilon do you choose? The first term depends on epsilon, but it depends on epsilon in a very weak way-- logarithmically.
So that makes it very easy to trade off, because you can pick epsilon to be quite small so that the kappa epsilon term becomes small, while the first term-- which, technically, depends on the log of 1 over epsilon-- barely grows. So the smaller the epsilon, the better the second term, but the worse the first term. But the first term increases very slowly as epsilon goes to 0. That's why you can pretty much ignore the second term in some sense: you take epsilon to be very small so that the second term becomes negligible, and even for those small epsilon, the first term is still reasonably bounded. That's why you can make this trade-off really nicely. But in some other cases, as we'll see later, when we do the discretization, the first term won't be as nice as this. It won't be the log of 1 over epsilon-- it will be something that goes to infinity at a faster rate as epsilon goes to 0. Sometimes, in those later cases, the first term will be something like 1 over epsilon squared. Then the trade-off becomes a little more tricky, and you have to be more careful about it. And finally, just to give a somewhat bird's-eye view: log of the size of H-- or p, in this case-- you can think of these as complexity measures. I guess I've mentioned this as well. So these are complexity measures of the hypothesis class. And the general phenomenon is always that a bigger H means a worse bound, which means you need more samples to learn. And in some sense, over the next one or two weeks, we are talking about a more accurate-- I guess accurate may not be the right word-- a more fine-grained complexity measure. So what is the right complexity measure? There is no really decisive answer to what's the right complexity measure. In some sense, it's up to the theorem prover.
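The epsilon trade-off described above can be made concrete (my sketch, with made-up numbers): the bound behaves roughly like the square root of p log(3B/epsilon) over n, plus kappa epsilon, and because the first term grows only logarithmically as epsilon shrinks, you can drive the discretization term kappa epsilon to near zero almost for free.

```python
import math

# Hypothetical problem sizes, chosen only for illustration
p, n, B, kappa = 100, 100_000, 100.0, 1e6

def bound(eps):
    # concentration term (log|C| ~ p * log(3B/eps)) + discretization term
    return math.sqrt(p * math.log(3 * B / eps) / n) + kappa * eps

# Shrinking eps by four orders of magnitude slashes the kappa*eps term...
big, small = bound(1e-4), bound(1e-8)
assert small < big

# ...while the logarithmic concentration term barely moves
conc_big = math.sqrt(p * math.log(3 * B / 1e-4) / n)
conc_small = math.sqrt(p * math.log(3 * B / 1e-8) / n)
assert conc_small < 1.5 * conc_big

# and at the small eps, the discretization error is negligible
assert kappa * 1e-8 < 0.1 * conc_small
```

If the first term instead scaled like 1 over epsilon squared, as in the later cases mentioned in the lecture, this one-sided shrinking would no longer work and the two terms would have to be balanced carefully.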
But we're going to have a more fine-grained and, in some sense, more fundamental complexity measure in the next two lectures, which is called Rademacher complexity. And you can use that to derive many of these bounds in more principled ways. And in general, I think one of the important questions-- especially in somewhat classical statistical machine learning-- is to find out what's the right complexity measure for your hypothesis class. We're going to discuss what it really means to be right or wrong-- there's no unique answer, but this is the central question. You need a complexity measure that really captures the fundamental complexity of the class. For example, if you have an infinite class, you shouldn't use log of the size of H-- that's not really the fundamental complexity measure for an infinite hypothesis class. You should probably use dimensionality. And later in the course, we are going to see that you can use the norm of your parameters as the complexity measure. It does depend on the specific cases, and sometimes it depends on the data. So this is what we'll discuss in the next few weeks. I think this is a natural place to stop. Yeah. I think that's all for today.
AI_LLM_Stanford_CS229
The_End_of_Finetuning_with_Jeremy_Howard_of_Fastai.txt
Everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners. And I'm joined by my co-host, Swyx, founder of Smol AI. Hey, and today we have in the remote studio Jeremy Howard, all the way from Australia. Good morning. The remote studio, also known as my house. Good morning. Nice to see you both. Nice to see you, too. And I'm actually very used to seeing you in your mask, as a message to people. But today we're mostly audio. Thank you for doing the very important public service of COVID awareness. I want to say it was a pleasure. It was all very annoying and frustrating and tedious, but somebody had to do it. Somebody had to do it, and especially somebody with your profile really drives home the message. So we tend to really introduce people formally and then ask people to fill in the blanks on the personal side. Something I did not know about you was that you graduated with a BA in Philosophy from the University of Melbourne. I assumed you had a PhD. No, I mean, I barely got through my BA because I was working 80 to 100 hour weeks at McKinsey & Company from 19 years old onwards. So, yeah, I actually didn't attend any lectures in second and third year university. Well, I guess you didn't need it, or you're very self-driven and self-motivated. I just took two weeks off before each exam period when I was working at McKinsey. And then, I mean, I can't believe I got away with this in hindsight. I would go to all my professors and say, I was meant to be in your class this semester and I didn't quite turn up. Were there any assignments I was meant to have done, or whatever? I can't believe all of them let me. They'd basically always say, okay, well, if you can have this written by tomorrow, I'll accept it. So yeah, a stressful way to get through university. But yeah, well, it shows, I guess.
I mean, back to the opportunities-- one thing I should definitely mention: funnily, in as much as-- you know, in philosophy, the things I found interesting and focused on in the little bit of time I did spend on it were ethics and cognitive science. And it's kind of really amazing that it's now come back around, and those are actually genuinely useful things to know about, which I never thought would happen. A lot of relevant conversations there. So you were a consultant for a while, and then in the magical month of June 1999, you founded both Optimal Decisions and FastMail, which I also used. So thank you for that. Yeah, because I had read the statistic that like 90% or something of small businesses fail, so I thought if I started two businesses, I'd have a higher chance. In hindsight, I was thinking of it as some kind of stochastic thing I didn't have control over, which is a bit odd, but anyway. And then you were president and chief scientist at Kaggle, which obviously is the sort of competition platform of machine learning, and then Enlitic, where you were working on using deep learning to improve medical diagnostics and clinical decisions. Yeah, it was actually the first company to use deep learning in medicine, and it kind of founded the field, and even now that's still at a pretty early phase. And I actually heard you on your new podcast with Tanishq, where you went very, very deep into the kind of work that he's doing-- such a young prodigy. I mean, maybe he's too old to be called a prodigy now-- an ex-prodigy. Anyway, just to round out the bio: you have a lot more other credentials, obviously, but most recently, you started fast.ai, which is still, I guess, your primary identity, with Rachel Thomas. My wife, yes. Thank you. Yeah. Doing a lot of public service there with getting people involved in AI.
And I can't imagine a better way to describe it than the fast.ai idea: you teach people to go from nothing to, say, Stable Diffusion in seven weeks or something, and that's amazing. Yeah, it's funny — when we started that, what was it, 2016 or something, the idea that deep learning was something you could make more accessible was generally considered stupid. Everybody knew that deep learning was a thing where you got a math or computer science PhD, you joined one of the five labs that could give you the appropriate skills, and then, yeah, coming out of one of those labs you might be able to write some papers. So the idea that normal people could use this technology to do good work was considered kind of ridiculous when we started, and we weren't sure if it was possible either. But we kind of felt like we had to give it a go, because the alternative — we were pretty sure deep learning was on its way to becoming one of the most important technologies in human history, and if the only people who could use it were a handful of computer science PhDs, that seemed (a) like a big waste and (b) kind of dangerous. Yeah. And one thing from your bio: on Kaggle, you were also the top-ranked participant in both 2010 and 2011. Sometimes you see founders running companies who are not really in touch with the problem, but you are clearly building something you know a lot about, which is awesome. And talking about deep learning, you created and published ULMFiT, which was kind of the predecessor to multitask learning and a lot of the groundwork that then went into Transformers. I read back through the paper, and you trained this AWD-LSTM model — I did the math, and it was like 24 to 33 million parameters, depending on which training dataset you used.
Today, that's kind of, like, not even small — it's super small. What were some of the contrarian takes you had at the time? And maybe set the stage a little for the audience: what was the state of the art, so to speak, and what were people working towards? Yeah, the whole thing was a contrarian take, you know. So we started fast.ai, my wife Rachel and I, and we were trying to think, okay, how do we make deep learning more accessible? We started thinking about it in 2015, and then in 2016 we started doing something about it. Why is it inaccessible? Okay, well, (a) no one knows how to do it other than a small number of people. And when we asked that small number of people, how do you actually get good results, they would say, there's this box of tricks that aren't published, so you have to join one of the labs and learn the tricks. So: a bunch of unpublished tricks; not much software around — thankfully there was Theano and its wrappers, particularly Lasagne — not much in the way of datasets; and it's very hard to get started in terms of compute, like, how do you even get that set up? So everything was kind of inaccessible. And as we started looking into it, we had a key insight, which was: you know what, most of the compute and data for image recognition, for example — we don't need to do it. There's this thing which nobody knows about, nobody talks about, called transfer learning, where you take somebody else's model, where they've already figured out how to detect edges and gradients and corners and text and whatever else, and then you fine-tune it to do the thing you want to do. And we thought, that's the key step to becoming more accessible in terms of compute and data requirements.
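The transfer-learning move described here — keep a frozen, already-trained body and train only a small head on your own data — can be sketched in a toy way. Everything below (the "pretrained" feature extractor, the data, the perceptron head) is invented for illustration; a real setup would load e.g. an ImageNet-pretrained network and fine-tune its final layers instead.

```python
# Toy sketch of transfer learning: reuse a frozen "pretrained" body and
# train only a small head on your own data.

def pretrained_features(x):
    # Stand-in for the frozen pretrained body: maps a raw input (a list of
    # numbers) to a few features. In real life, this is the expensive part
    # somebody else already trained.
    return [sum(x), max(x) - min(x), x[0] * x[-1]]

def train_head(examples, epochs=50, lr=0.1):
    # "Fine-tune" only the head: a tiny linear classifier over the frozen
    # features, trained with simple perceptron updates. Labels are +1/-1.
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in examples:
            f = pretrained_features(x)
            if y * (sum(wi * fi for wi, fi in zip(w, f)) + b) <= 0:
                w = [wi + lr * y * fi for wi, fi in zip(w, f)]  # nudge head only
                b += lr * y
    return w, b

def predict(w, b, x):
    f = pretrained_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else -1

# Tiny labelled dataset: "big" inputs are class +1, "small" ones are class -1.
data = [([5, 6, 7], 1), ([8, 7, 9], 1), ([0, 1, 0], -1), ([1, 0, 1], -1)]
w, b = train_head(data)
```

The point is just the shape of the recipe: all the learned capacity lives in the frozen extractor, and only a handful of head parameters need your data and compute.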
So when we started fast.ai, we focused from day one on transfer learning. Lesson one, in fact, was transfer learning — literally lesson one — which was something not normally even mentioned anywhere. I mean, there wasn't much in the way of courses; the courses out there were basically PhD programs that had happened to record their lectures, and they would rarely mention it at all. We wanted to show how to do four things: work with vision, work with tables of data, work with recommendation systems and collaborative filtering, and work with text, because we felt those four modalities covered a lot of the stuff that's useful in real life. And no one was doing anything much useful with text. Everybody was talking about word2vec — you know, king minus man plus woman and blah blah blah — which is a cool experiment, but nobody was doing anything useful with it. And NLP was all lemmatization and stop words and topic models and bigrams and SVMs, and it was really academic and not practical. But, to be honest, I'd been thinking about this crazy idea for nearly 30 years, since I'd done cognitive science at university, where we talked a lot about Searle's Chinese Room experiment: this idea of, what if there was somebody who could learn all of the symbolic manipulations required to answer questions in Chinese, but they didn't speak Chinese? They're inside a room with no other way to talk to the outside world than taking in slips of paper with Chinese written on them; they follow their rules, and then they pass back slips of paper with Chinese on them. And this room, with the person in it, is actually fantastically good at answering any question you give it written in Chinese. Do they understand Chinese?
Is this system something that's intelligently working with Chinese? Ever since that time, the most thoughtful and compelling philosophical response, to me, has been: yes. You know, intuitively it feels like no, but that's just because we can't imagine such a large system. If it looks like a duck and quacks like a duck, it's a duck — to all intents and purposes. And so this is basically an analysis of the limits of text. I kind of felt like, yeah, if something could ingest enough text, and could use the patterns it saw to then generate text in response to text, it could appear to be intelligent — whether that means it is intelligent is a different discussion, and not one I find very interesting. And then, when I came across neural nets when I was about 20, I learned about the universal approximation theorem and stuff, and I started thinking, I wonder if a neural net could ever get big enough, and take in enough data, to be a Chinese Room experiment. So with that background, and this kind of interest in transfer learning — I'd been thinking about this thing for 30 years — I thought, I wonder if we're there yet? Because we have a lot of text: I can literally download Wikipedia, which is a lot of text. And I thought, how would something learn to answer questions, or respond to text? Well, what if we used a language model? Language models were already a thing — not a popular or well-known thing, but they existed: this idea that you could train a model to fill in the gaps. So, actually, in those days it wasn't fill in the gaps, it was finish a string.
And in fact, Andrej Karpathy did his fantastic char-RNN demonstration of this at a similar time, where he showed you can have it ingest Shakespeare and generate something that looks a bit like Shakespeare. And I thought, okay, so if I do this at a much bigger scale, using all of Wikipedia — what would it need to be able to do to finish a sentence in Wikipedia effectively, to do it quite accurately, quite often? I thought, geez, it would actually have to know a lot about the world. It would have to know that there is a world, and that there are objects, and that objects relate to each other through time and cause each other to react in various ways, and that causes precede effects, and that there are animals and there are people, and that people can be in certain positions during certain time frames. And only with all that together can you finish a sentence like "this was signed into law in 2016 by US President X" — it would have to fill in the name. So that's why I tried to create what in those days was considered a big language model, trained on the entirety of Wikipedia, which was, you know, unheard of. And my interest was not in just having a language model; my interest was in what latent capabilities such a system would have to acquire, to allow it to finish those kinds of sentences — because I was pretty sure, based on our work with transfer learning, that I could then suck out those latent capabilities by transfer learning, by fine-tuning it on a task dataset or whatever. So we generated this three-step system. Step one was: train the language model on a big corpus. Step two was: fine-tune the language model on a more curated corpus. And step three was: further fine-tune that model on a task. And of course, that's the recipe everybody still uses today, right? That's right, yes. And the first time I tried it, within hours I had a new state-of-the-art academic result on IMDB.
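The three-step recipe can be caricatured with a deliberately tiny toy, using character-bigram counts as the "language model". The corpora, the count-overweighting trick, and the likelihood-based classifier below are all invented stand-ins — the real ULMFiT used an AWD-LSTM and gradient-based fine-tuning, not counts — but the shape of the pipeline is the same.

```python
import math
from collections import Counter

def bigram_counts(text):
    return Counter(zip(text, text[1:]))

def pretrain(general_corpus):
    # Step 1: train the "LM" on a big general corpus.
    return bigram_counts(general_corpus)

def finetune_lm(lm, domain_corpus, weight=5):
    # Step 2: fine-tune on a more specific corpus (here: overweight its counts).
    tuned = Counter(lm)
    for bg, n in bigram_counts(domain_corpus).items():
        tuned[bg] += weight * n
    return tuned

def avg_logprob(lm, text):
    # Smoothed average log-frequency of the text's bigrams under the LM.
    total = sum(lm.values()) + 1
    pairs = list(zip(text, text[1:]))
    return sum(math.log((lm[bg] + 1) / total) for bg in pairs) / max(len(pairs), 1)

def classify(lms_by_label, text):
    # Step 3: the task "head" — pick the label whose fine-tuned LM likes the text most.
    return max(lms_by_label, key=lambda lbl: avg_logprob(lms_by_label[lbl], text))

base = pretrain("the cat sat on the mat and the dog ran in the park " * 3)
lms = {
    "pets": finetune_lm(base, "cat dog cat dog purring barking cat dog"),
    "weather": finetune_lm(base, "rain sun rain cloud storm rain sun rain"),
}
```

Both fine-tuned models share the same pretrained base counts; only the small domain-specific additions differ, which is the whole economy of the approach.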
And I was like, holy shit, it does work. So, you asked to what degree this was pushing against the established wisdom: in every way. Like, the reason it took me so long to try it was because I asked all my friends in NLP whether this could work, and everybody said no, it definitely won't work. It wasn't "maybe" — everybody was like, it definitely won't work, NLP is much more complicated than vision, language is a vastly more complicated domain, and you've got problems like the grounding problem — we know from philosophy and theory of mind that it's actually impossible for it to work. So don't waste your time, Jeremy. Had people not tried because it was too complicated to actually get the data and set up the training, or were people just lazy? So, the two people I thought at that time were the strongest at language models were Stephen Merity and Alec Radford. At the time I didn't know Alec, but after I'd released ULMFiT and he had released GPT, I organized a chat for both of us with Cade Metz of The New York Times, and Alec answered this question for Cade. Cade asked him how it had come about, and he said: well, I was pretty sure that pretraining on a general large corpus wouldn't work, so I hadn't tried it. And then I read ULMFiT, and it turned out it did work, so I did it bigger and it worked even better. And similarly with Stephen — I asked Stephen Merity, like, why don't we just take your language model, train it on all of Wikipedia, and fine-tune it? And he was kind of like, ah, I don't think that's going to really fly. And about two years before, I'd done a very popular keynote at a conference where everybody in NLP was in the audience.
I recognized half the faces, you know. And I told them all this: I'm sure transfer learning is the key; I'm sure ImageNet-style pretraining is going to be an NLP thing as well. And everybody was interested, and people asked me questions afterwards, but nobody followed up, because everybody "knew" that it didn't work. We were even scooped a little bit by Andrew Dai and Quoc Le at Google. They had already — I didn't even realize this, which is a bit embarrassing — they had already done a large language model and fine-tuned it. But again, they didn't create a general-purpose large language model on a general-purpose corpus; they only ever tested it on domain-specific corpora. I haven't actually spoken to Quoc about it, but I assume the reason was the same: it probably just didn't occur to them that the general approach could work. So maybe it is that kind of 30 years of mulling over Searle's Chinese Room experiment that convinced me it probably would work. I don't know. Interesting. I just dug up Alec's announcement tweet from 2018: "Inspired by CoVe, ELMo, and ULMFiT, we show that a single transformer language model can be fine-tuned to a wide variety of tasks." It's interesting because today people think of OpenAI as the leader, the research lab pushing the field forward. What was it like at the time, going back five years? People think of them as an overnight success, but obviously it took a while. Yeah, no, absolutely. And it's interesting that it mentioned ELMo, because in some ways that was kind of diametrically opposed to ULMFiT. There was a lot of activity around the time ULMFiT was released. There was CoVe before it — Bryan McCann, I think at Salesforce, had come out with this neat model that did kind of multitask learning.
But again, they didn't create a general fine-tuned language model. Then there was ELMo, which I think was actually quite a few months after the first ULMFiT example. So yeah, there was a bit of that stuff going on. And the problem was, particularly after GPT came out, everybody wanted to focus on zero-shot and few-shot learning. Everybody hated fine-tuning, everybody hated transfer learning — I literally did tours trying to get people to start doing transfer learning, and nobody was interested, particularly after GPT showed such good results with zero-shot and few-shot learning. So I actually feel like we went backwards for years. And, to be honest, I'm a bit sad about this now, but I got so disappointed and dissuaded — it felt like these much bigger labs (fast.ai had only ever been just me and Rachel) were getting all of this attention for an approach I thought was the wrong way to do it; I was convinced it was the wrong way to do it. So for years, people were really focused on getting better zero-shot and few-shot results. And it wasn't until this key idea came along: let's take the ULMFiT approach, but for step two, rather than fine-tuning on a domain corpus, let's fine-tune on an instruction corpus; and then in step three, rather than fine-tuning on a reasonably specific task like classification, let's fine-tune with RLHF. That was really key, you know. So I was kind of out of the NLP field for a few years there, because it just felt like pushing uphill against this vast tide which I was convinced was not the right direction. But who's going to listen to me, you know? Because, as you said, I don't have a PhD, I'm not at a university — or at least I wasn't then.
I don't have a big set of computers to fine-tune huge transformer models on. So yeah, it's definitely been difficult. It's always been hard, you know. I've always been somebody who does not want to build stuff on lots of big computers, because most people don't have lots of big computers, and I hate creating stuff that most people can't use. And also, stuff that's created on lots of big computers has always been much more media-friendly. It might seem like a recent thing, but throughout my 30 years in data science, the attention has always been on the big-iron results. When I first started, everybody was talking about data warehouses and it was all about Teradata, and it'd be like, this big bank has this huge room full of computers, and they have terabytes of data available at the press of a button. That's always what people want to talk about, what people want to write about. And then, of course, that's where students coming out of their PhDs want to go work, because that's what they've read about. And to me it's a huge distraction, because, like I say, most people don't have unlimited compute, and I want to help most people, not the small subset of the most well-off people. Yeah, that's awesome. And it's great to hear — you do such a great job educating that a lot of the time you're not telling your own story, so I love this conversation. And the other thing, before we jump into fast.ai: a lot of people I know, when they run across a new architecture, they're like, I've got to start a company and raise a bunch of money — and instead you were like, I want everybody to have access to this. Why was that the case for you? Was it because you'd already had successful ventures like FastMail, and you were more interested in this? What was the reasoning? That's a really good question.
So I guess the answer is: yes, that's the reason why. When I was a teenager, I thought it would be really cool to have my own company. I didn't know the word startup, I didn't know the word entrepreneur, I didn't know the word VC — I didn't really know what any of those things were until after we started Kaggle, to be honest. Even though I had started two of what we now call startups, I just thought of them as small businesses, you know, as companies. So those two companies were FastMail and Optimal Decisions. FastMail was the first kind of synchronized email provider for non-businesses — so something where you can get the same email at home on your laptop, at work, on your phone, whatever. And Optimal Decisions invented a new approach to insurance pricing: profit-optimized insurance pricing. So I sold both of those companies, you know, after ten years, and at that point I had achieved the thing that, as a teenager, I'd wanted to do. It took a lot longer than it should have, because I spent way longer in management consulting than I should have — I got caught up in that stupid rat race. But eventually I got there, and I remember my mum saying to me, you must be so proud — because she remembered my teenage dream — she's like, you've done it. And I reflected on it, and I was like, I'm not proud at all, you know? People quite like FastMail — it's quite nice to have synchronized email — but it probably would have happened anyway. And I'm certainly not proud that I've helped some insurance companies suck more money out of their customers. So I hadn't really helped the world very much; maybe in the insurance case I'd made it a little bit worse. I don't know.
So, yeah, I was determined not to waste more years of my life working hard to do things which I could not be reasonably sure would have a lot of value. So, you know, I took some time off; I wasn't sure if I'd ever work again, actually. I didn't particularly want to — it felt like such a disappointment — and I didn't need to. I had enough money. I wasn't super rich, but I had enough money that I didn't need to work. And I certainly recognized that, amongst the other people I knew who had enough money that they didn't need to work, they all worked ridiculously hard, you know, constantly put themselves in strangely stressful situations. And I thought, I don't want to be one of those idiots who's tied to buying a bigger plane than the next guy or whatever. Kaggle came along, and I mainly did that because it was fun and interesting to hang out with interesting people. But with fast.ai in particular, Rachel and I had a very explicit, long series of conversations over a long period of time about, well, how can we be the most helpful to society as a whole, and particularly to those people who maybe need more help? And we definitely saw the world going in a potentially pretty dystopian direction if the world's most powerful technology was controlled by a small group of elites, so we thought, yeah, we should focus on trying to help that not happen. Sadly, it looks like it still is likely to happen, but I feel like we've helped make it a little bit less likely. You've definitely done that — you've shown that it's possible. And I think your courses and the research you publish — you know, just the other day you published a finding on LLM fine-tuning that I think people are still talking about quite a lot.
I think that's the origin story of a lot of people who are going to be, you know, little Jeremy Howards. So in your mission, you don't have to do everything by yourself, is what I'm saying. Oh, definitely. That was a big takeaway from Enlitic. At Enlitic it definitely felt like we had to do everything ourselves, and I kind of wanted to solve medicine. And I was like, yeah, okay, solving medicine is actually quite difficult and I can't do it on my own — and there's a lot of other things I'd like to solve, and I can't do those either. So that was definitely the other piece: can we create an army of passionate domain experts who can each change their little part of the world? And that's definitely happened. I find nowadays, at least half the time, probably quite a bit more, when I get in contact with somebody who's done really interesting work in some domain, they say, yeah, I got my start with fast.ai. And I also know, from talking to folks at places like Amazon and Adobe, where there are lots of alumni — they say, my God, I got here and half of the people are fast.ai alumni. So it's fantastic. Yeah — actually, Andrej Karpathy grabbed me when I saw him at NeurIPS a few years ago, and he's like, I have to thank you for the fast.ai courses: when people come to Tesla and need to know more about deep learning, we always send them to your courses. And the OpenAI Scholars program was doing the same thing. So yeah, it's had a surprising impact. You know, the course is just one of like three things we do, and it's only ever been at most two people — either me and Rachel, or just me; nowadays it's just me. So I think it shows you don't necessarily need a huge amount of money and a huge team of people to make an impact.
Yeah. So, just to reintroduce fast.ai for people who may not have dived into it much: there are the courses that you do; there's the library, which is very well loved — I kind of think of it as a nicer layer on top of PyTorch that people should start with by default, and you use it as the basis for a lot of your courses; and then you have nbdev — I don't know, is that the third one? So the three areas are research, software, and courses. Yeah — so, in software, fastai is the main thing, but nbdev is not far behind. And then there's also things like fastcore and ghapi — I mean, dozens of open source projects that I've created. Some of them have been pretty popular, and some of them are still a little bit hidden; actually, some of them I should try to do a better job of telling people about. Like, for example, for working with EC2 I created a fastec2 library, which I think is way more convenient and nicer to use than anything else out there, and it's literally got dynamic autocomplete that works both on the command line and in notebooks — for instance names and everything like that. Just little things like that. When I work with some domain, I try to make it as enjoyable as possible for me to work with. So with ghapi, for example: I think the GitHub API is incredibly powerful, but I didn't find it nice to work with, because I didn't particularly like the libraries that were out there. So ghapi, like fastec2, has autocomplete, both at the command line and in a notebook or whatever.
Like, literally the entire GitHub API — the whole thing is, I guess, less than a hundred K of code, because as far as I know it's the only one that grabs it directly from the official OpenAPI spec that GitHub publishes. And if you're in ghapi and you just autocomplete an API method and hit enter, it prints out its brief docs and gives you a link to the actual documentation page. You can write GitHub Actions in it now, in Python, which is just so much easier than writing them in TypeScript and stuff. So, you know, just little things like that. I think that's an approach I wish more developers took — to publish some of their work along the way. You describe the third arm of fast.ai as research, and it's not something I see discussed as often, but obviously you do do research. How do you run your research? What are your research interests? Yeah, so research is what I spend the vast majority of my time on, and the artifacts that come out of it are largely software and courses. To me, the main artifact shouldn't be papers, because papers are things read by a small, exclusive group of people. To me, the main artifacts should be, like, something teaching people, here's how to use this insight, and here's software you can use that has it built in. So I think I've only ever done three first-author papers in my life, and they weren't even ones I particularly wanted to do. One was ULMFiT, where Sebastian Ruder reached out to me after seeing it in the course and said, you have to publish this as a paper. And he said, I'll write it — I want to write it because then I can put it in my PhD, and that would be great. And I was like, okay, well, if it helps you get your PhD, that's great.
Then one was the masks paper, which just had to exist and nobody else was writing it. And the third was the fastai library paper, which, again, somebody reached out and said, please write this, we'll waive the fee for the journal and help you get it through publishing and everything. So yeah, other than that, I've never written a first-author paper. So the research is like — well, for example, DAWNBench was a competition Stanford ran a few years ago. It was kind of the first big competition about who can train neural nets the fastest, rather than the most accurate — specifically, who can train ImageNet the fastest. And this was one of these things that was created by necessity. Google had just released their TPUs, and I heard from my friends at Google that they had put together this big team to smash DAWNBench, so that they could prove to people that you had to use Google Cloud and TPUs, and show how good TPUs were. And we kind of thought, shit, this would be a disaster if they do that, because then everybody is going to be like, oh, it's not accessible — to actually be good at deep learning you have to be Google and you have to use special silicon. We only found out about this ten days before the competition finished, but we basically got together an emergency bunch of our students, and Rachel and I, and sat for the next ten days and just tried to crunch through, using all of the best ideas that had come from our research — particularly progressive resizing: basically, train mainly on small images, and train on non-square images, stuff like that. And so yeah, we ended up winning, thank God.
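The progressive-resizing trick mentioned here — spend most epochs on small images and only the last few at full size — can be sketched as a schedule. The sizes and split fractions below are made up for illustration; the actual DAWNBench run was hand-tuned, and each epoch's size would be fed to the dataloader's resize transform.

```python
# Minimal sketch of a progressive-resizing schedule: which image side length
# to train at for each epoch, weighted toward small (cheap) sizes.

def progressive_size_schedule(n_epochs, sizes=(128, 224, 288), fractions=(0.6, 0.3, 0.1)):
    schedule = []
    for size, frac in zip(sizes, fractions):
        # Each size gets a share of the epochs, but at least one epoch.
        schedule += [size] * max(1, round(n_epochs * frac))
    return schedule[:n_epochs]

# e.g. a 10-epoch run: 6 epochs at 128px, 3 at 224px, 1 at 288px
sched = progressive_size_schedule(10)
```

Most of the compute goes to the cheap small-image epochs, which is exactly the "do more with less" point.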
And so, you know, we turned it around, from being like, shit, this is going to show that you have to be Google and use TPUs, to being like, oh my God, even the little guy can do deep learning. So that's an example of the kind of research artifacts we produce. And all of my research is about how do we do more with less: how do we get better results with less data, with less compute, with less complexity, with less education, stuff like that. ULMFiT is obviously a good example of that. And most recently you published "Can LLMs learn from a single example?" Maybe you could tell the story behind that — it ties back into this theme of learning with very few resources. Yeah. So my friend Johno Whitaker had been playing around with this fun Kaggle competition, which is actually still running as we speak: can you create a model which can answer multiple-choice questions about anything that's in Wikipedia? And the thing that makes it interesting is that your model has to run on Kaggle within nine hours, and Kaggle is very, very limited: you've only got 14 gig of RAM, only two CPUs, and a small, very old GPU. So this is cool, you know — if you can do well at this, it's a good example of doing more with less. So Johno and I were playing around with fine-tuning — transfer learning, of course — pre-trained language models, and we always plot our losses as we go. Here's another thing we built, actually: Sylvain Gugger, when he worked with us, created a thing called fastprogress, which is kind of like tqdm but, we think, a lot better. So we look at our fastprogress curves, and they kind of go down, down, down, a little bit, a little bit, a little bit.
And then suddenly — clunk — the loss would drop, and then down, down, down a little bit, and then clunk, drop again. We were like, what the hell? These clunks were occurring at the end of each epoch. Now, normally in deep learning, when I've seen this before, it's always been a bug. It's always turned out that, like, we accidentally forgot to turn on eval mode during the validation set, so it was actually learning then; or we were accidentally carrying moving-average statistics across epochs without resetting the moving average, or whatever. And we were using the Hugging Face trainer for this. So, you know, I did not give my friends at Hugging Face the benefit of the doubt — I thought, they've screwed something up, we'll use the fastai trainer instead. So we switched over to Learner, and we still saw clunks. And that shouldn't really happen, because semantically speaking, an epoch isn't a thing. Nothing happens — nothing is meant to happen — when you go from ending one epoch to starting the next one. So there shouldn't be a clunk. So I asked around in the open source community, like, what's going on here? And everybody was just like, that's just what these training curves look like, they all look like that, don't worry about it. I'm like, are you using a trainer? Yes — well, then there must be some bug with the trainer; are we accidentally sorting the data somehow? And somebody else is like, no, we've got our own trainer and we see it as well. And they're just like, don't worry about it, it's just something we see, it's just normal. I can't do that. I can't just be like, here's something that in the previous 30 years of neural networks nobody ever saw, and now suddenly we see it — oh well, no need to worry about it. I just have to know why.
Can I clarify — everyone you were talking to, did they all see it on the same dataset, or different datasets? Different datasets, different trainers. They were just like, no, this is just what it looks like when you fine-tune language models, don't worry about it. You know, I'd never seen it before. Well, I hadn't been — as I say, I kept working on language models for a couple of years after ULMFiT, and then I kind of moved on to other things, partly out of frustration. So I hadn't been fine-tuning, you know — I mean, Llama's only been out for a few months, right? But I wasn't one of those people who jumped straight into it. So I was relatively new to this kind of Llama fine-tuning, whereas these guys had been, you know, doing it since day one — a few months ago, but still quite a bit of time. So yeah, they were just like, no, this is just what we see, don't worry about it. But I've got this kind of brain where I have to know why things are. And so I asked people, like, why do you think it's happening? And I'd be like, it's pretty obviously because it's memorized the dataset. It's just, like, copied it — and it's only seen it once. Like, look at this: the loss has dropped by 0.3, which means it basically knows the answer now. It's not just learning from it — it's memorized the dataset. So yeah, look, Johno and I did not discover this, and Johno and I did not come up with the hypothesis, you know. I guess we'd just been around long enough to recognize that this isn't how it's meant to work. And so we went back into it like, okay, let's just run some experiments — because nobody had actually published anything about this. Well, that's not quite true: people had published things, but nobody had actually stepped back and said, like, what the hell?
You know, how can this be possible? Is it possible? Is that what's happening? And so, yeah, we created a bunch of experiments where we basically predicted ahead of time: okay, if this hypothesis is correct — that it's memorizing the training set — then we ought to see blah under these conditions, but not under those conditions. And we ran a bunch of experiments, and all of them supported the hypothesis that it was memorizing the dataset in a single pass. And it's a pretty big dataset, you know. Which in hindsight isn't totally surprising, because the ULMFiT theory, remember, was that pre-training is kind of creating all these latent capabilities to make it easier for the model to predict the next token. So if it's got all this latent capability, it ought to also be really good at compressing new tokens, because it can immediately recognize: oh, that's just a version of this. So it's not so crazy, you know? But it does require us to rethink everything, because nobody knows — okay, so how do we fine-tune these things? Like, maybe it's fine. Maybe it's fine that it's memorized the data in one go, and you do a second go and, okay, the validation loss is terrible because it's now really overconfident. That's fine — which is why we keep telling people: don't track validation loss, track validation accuracy, because at least that will still be useful. That's another thing that's got lost since ULMFiT — nobody tracks accuracy of language models anymore. But, you know, it'll still keep learning, and it does keep improving. But is it worse? Like, now that it's kind of memorized the data, it's probably getting a less strong signal. I don't know. So I still don't know how to fine-tune language models properly, and I haven't found anybody who feels like they do. Nobody really knows what this memorization thing is.
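The epoch-boundary "clunks" described above are easy to characterize numerically. Here's a toy sketch — not fastai or Hugging Face code, and the loss numbers are entirely synthetic — that flags epochs where the loss plunges between the last batch of one epoch and the first batch of the next:

```python
# Toy illustration of the "clunk" pattern: detect sudden loss drops at
# epoch boundaries. All loss values below are synthetic.

def epoch_boundary_drops(losses, batches_per_epoch, threshold=0.2):
    """Return epoch indices where the loss falls by more than `threshold`
    between the last batch of one epoch and the first batch of the next."""
    drops = []
    n_epochs = len(losses) // batches_per_epoch
    for e in range(1, n_epochs):
        before = losses[e * batches_per_epoch - 1]
        after = losses[e * batches_per_epoch]
        if before - after > threshold:
            drops.append(e)
    return drops

# Synthetic loss curve: gentle decay within each epoch, then a 0.3 plunge
# at each epoch boundary -- the signature of memorizing the data in one pass.
losses = []
level = 2.0
for epoch in range(3):
    for b in range(10):
        losses.append(level - 0.005 * b)
    level -= 0.3

print(epoch_boundary_drops(losses, batches_per_epoch=10))  # → [1, 2]
```

A smooth training curve would return an empty list here; the clunky one flags every epoch boundary.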
It's probably a feature in some ways — there are probably things you can do usefully with it. But yeah, I have a feeling it's messing up training dynamics as well. And does it come at the cost of catastrophic forgetting as well, right? Which is the other side of the coin. It does, to some extent. Like, we know it does — look at Code Llama, for example. So Code Llama was like a 500-billion-token fine-tuning of Llama 2 using code, and honestly they kind of blew it, because Code Llama is good at coding but it's bad at everything else — and it used to be good. Before they released it, lots of people in the open-source Discords were like, oh my God, we know this is coming, it's been announced — I hope they keep at least, like, 50% non-code data, because otherwise it's going to forget everything else. And they didn't — only 0.3% of their training was non-code data. So it did forget about everything else. So now it's good at code and it's bad at everything else. This catastrophic forgetting is fixable — somebody just has to spend their time training a model on a good mix of data. So, okay, here's the thing: even though I originally created the three-stage approach that everybody now does, my view is that it's actually wrong and we shouldn't use it. And that's because people are using it in a way different to why I created it. I created it thinking the task-specific models would be more specific — you know, like, this is a sentiment classifier, that's an example of a task. But the tasks now are like, you know, RLHF, which is basically "answer questions in a way that makes people feel happy about your answer". So that's a much more general task.
And so we see, for example, that RLHF also breaks models — like, you know, GPT-4 after RLHF. We know from the work that Microsoft did that the earlier, less-aligned version was better. And these are all examples of catastrophic forgetting. And so to me, the right way to fine-tune language models is to actually throw away the idea of fine-tuning. There's no such thing — there's only continued pre-training. And pre-training is something where, from the very start, you try to include all the kinds of data that you care about, all the kinds of problems that you care about: instructions, exercises, code, general-purpose document completion, whatever. And then as you train, you gradually curate it — you can make it higher and higher quality and more and more specific to the kinds of tasks you want it to do. But you never throw away any data. You always keep all of the data types there in reasonably high quantities. You know, maybe with a quality filter you stop training on low-quality data, because it's probably fine to forget how to write badly, maybe. So yeah, that's now my view: I think ULMFiT is the wrong approach, and that's why we're seeing a lot of these so-called alignment taxes, and this view that a model can't both code and do other things. I think it's actually because people are training them wrong. Well, I think you have a clear first-principles approach. I think most people are not like that — more like: they told me this thing works, and if I release a model this way, people appreciate it, I get promoted, and I kind of make more money. Yeah, and it's not just money — this is how citations work, most of all, you know. If you want to get cited, you need to write a paper that people in your field recognize as an advancement on things that we know are good. And so we've seen this happen again and again.
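Stepping back to the "never throw away any data type" idea above: a minimal sketch of what a continued-pre-training data mixture could look like. The source names and weights here are made up for illustration — the only point being made is that every source keeps a non-zero share of the batches, so nothing gets catastrophically forgotten even as you up-weight the tasks you care about:

```python
import random

# Hypothetical data sources and mixture weights. You can curate the
# proportions over time, but no source's weight ever goes to zero.
mixture = {
    "code": 0.30,
    "instructions": 0.25,
    "general_web": 0.25,
    "exercises": 0.10,
    "documents": 0.10,
}

def sample_batch_sources(mixture, batch_size, seed=0):
    """Draw a batch's worth of source labels according to the mixture."""
    rng = random.Random(seed)
    sources, weights = zip(*mixture.items())
    return rng.choices(sources, weights=weights, k=batch_size)

batch = sample_batch_sources(mixture, batch_size=1000)
shares = {s: batch.count(s) / len(batch) for s in mixture}
assert all(shares[s] > 0 for s in mixture)  # every source stays present
print(shares)
```

Contrast this with the Code Llama-style split mentioned above, where the non-code share was effectively zero and the general capabilities disappeared.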
So like I say — zero-shot and few-shot learning, everybody was writing about that. Or, you know, with image generation, everybody was just writing about GANs. And I was trying to say, no, GANs are not the right approach — and I showed, through research that we demonstrated in our course videos, that you can do better than GANs, much faster and with much less data. And nobody cared, because, again, if you want to get published, you write a GAN paper that slightly improves this little part of GANs in this tiny field here, and you'll get published. So yeah, it's not set up for real innovation. Again, it's really helpful for me: I have my own research lab with nobody telling me what to do, and we don't even publish, so it doesn't matter if I get citations — I just work on what I think actually matters. And actually, places like OpenAI — the researchers there can do that as well. But it's a shame; I wish there were more academic, open venues in which people could focus on genuine innovation. There's Twitter, which unironically has become a little bit of that forum. I wanted to follow up on one thing you mentioned, which is that you asked around on the open-source Discords. I don't know if it's too much to ask, but: which Discords are lively or useful right now? Something I definitely felt like I missed out on was the early days of EleutherAI. And, you know, like — what is the new Eleuther? And you actually shouted out the Alignment Lab Discord in your blog post, and that was the first time I even knew — I'd seen them on Twitter and never knew they had a Discord, never knew there were actually substantive discussions going on in there, and that you were an active member of it. Okay. Yeah.
And even then, if you do know about it, you go there and it looks totally dead. And that's because, unfortunately, nearly all of the conversation happens in private channels, you know. So how does someone get into that world? Because it's obviously very, very obstructive, right? You can always come to the fast.ai Discord, which, I'll be honest with you, is less bustling than some of the others, but it's not terrible. Although, to be fair, one of our most bustling channels is private too. I guess this is just the nature of quality discussion, right? Yeah. I guess what happened is: I didn't have any private discussions on Discord for years, but there were a lot of people who came in with, like, "I just had this amazing idea for AGI — if you just imagine the AI is a brain and..." — and, you know, you don't want to be dismissive or whatever, so it's like, well, that's an interesting comment, but maybe you should try training some models first to see if that matches your intuition. "But how could I possibly do that?" Well, we have a course — actually spend time learning, you know. Anyway. And then it's like: okay, I know the people who always have good answers here. And so I created a private channel and put them all in it. And I've got to admit, that's where I post more often, because there's much less, you know, flights-of-fancy stuff about how we could solve AGI, blah blah blah. So there is a bit of that. But having said that, I think the bar is pretty low.
Like, if you join a Discord, you can hit the participants or community or whatever button, see who's in it, see who the admins or moderators or people with the dev role are, and just DM one of them and say, like, "Hi, here's my GitHub, here are some blog posts I wrote, I'm interested in talking about this — can I join the private channels?" And I've never heard of anybody saying no. I will say, you know, it's all pretty open. So you can still do the EleutherAI Discord — one problem with the Eleuther Discord is it's been going on for so long that it's very inside baseball; it's quite hard to come in partway through. CarperAI, I think, is all open — that's Stability's; that's more accessible. There's also Nous Research, who make the Hermes models, and just recently opened up their datasets; they've got some private channels, but it's pretty open. You mentioned Alignment Lab — on that one, all the interesting stuff is in private channels, so just ask; if you know me, ask me, since I've got admin on there. There's also Skunkworks AI — that's a good Discord, and I think it's open. So yeah, they're all pretty good. I don't want you to leak any Discords that don't want publicity. No, I mean, honestly — we all want people. We just want people who, you know, muck in and do the stuff, rather than people who... like, it's fine to not know anything as well. But if you don't know anything and you want to tell everybody else what to do and how to do it, that's annoying. If you don't know anything and you want to be told, "here's a really small kind of task" — and as somebody who doesn't know anything it's going to take you a really long time to do it, but it would still be helpful — and then you go do it, that would be great.
The truth is, yeah — I don't know, maybe 5% of people come in with great enthusiasm, saying they want to learn and they'll do anything. And then somebody says, "okay, here's some work you can do," and almost nobody does that work. So if you're somebody who actually does the work and follows up, you will massively stand out — that's an extreme rarity — and everybody will then want to help you and give you more work. So yeah, just do the work and people will want to support you. Our Discord used to be referral-only for a long time, without a public invite, and then we opened it up. I remember, as a moderator: a lot of people just want to do, like, drive-by posting, you know? They don't want to help the community; they just want to have their question answered. The funny thing is, our forum community doesn't have any of that garbage. There's something specific about the low-latency thing — on Discord people expect an instant answer, whereas on a forum thread, which they know is there forever, people are a bit more thoughtful. But then the forums are less active than they used to be, because Discord has got more popular, you know. So it's all a bit of a compromise — running a healthy community is always a bit of a challenge. Now, we've got so many more things we want to dive into, but I don't want to keep you here for hours — this is not the Lex Fridman podcast, as we always like to say. One topic I would love to chat a bit about is Modular. You know, we've had Chris Lattner on the podcast, so we'll only spend a little time there. You recently did "A Hacker's Guide to Language Models", and you ran through everything from quantized models to smaller models, larger models and all of that — but obviously Modular...
So again, it's a new approach. What got you excited? I know you and Chris have been talking about this for, like, years, and a lot of the ideas you had... Yeah, absolutely. So I met Chris — I think it was at the first TensorFlow Dev Summit, and I'm not sure he'd even officially started his employment with Google at that point; certainly nothing had been announced. I had admired him from afar, with LLVM and Swift and whatever. And so I saw him walk into the courtyard at Google and I was like, oh shit, that's Chris Lattner — I wonder if he would lower his standards enough to talk to me? It was worth a try. So I screwed up my courage — nobody was talking to him, he looked a bit lost — and I wandered over. "You're Chris Lattner, right?" He's like, "Yeah. What are you doing here?" "I'm Jeremy Howard — I do some of this AI stuff." And he's like, "Yeah, well, I'm thinking about starting to do some AI stuff too. I think it's going to be cool." And so I spent the next half hour basically brain-dumping to him all the things I thought were broken. And he listened patiently, and I thought he probably wouldn't even remember or care or whatever. But when I caught up with him a few months later, he was like, "I've been thinking about everything you said in that conversation" — and he narrated back his response to every part of it, and the projects he was planning to do. I was like, oh, this dude follows up. Oh shit. Wow, okay. And he's like, "Yeah, so we're going to create this new thing called Swift for TensorFlow. It's going to be a compiler with auto-differentiation built in, and blah, blah, blah." And I said, "Wait — why would that help?"
He's like, "Okay — with a compiler, during the forward pass you don't have to worry about saving context, because it can all be optimized in the backward pass." I was like, oh my God — because I didn't really know much about compilers; I knew enough to understand the ideas, but it hadn't occurred to me that a compiler basically solves a lot of the problems we have as end users. I was like, wow, that's amazing. "Okay, but you do know, right, that nobody's going to use this unless it's, like, usable?" "Yeah, I know. So I'm thinking you should create, like, a fast.ai for this." "Okay, but I don't even know Swift." And he's like, "Well, why don't you start learning it? And if you have any questions, ask me." I was just like, holy shit — not only has Chris Lattner lowered his standards enough to talk to me, he's offering me personal guidance on a programming language that he made. So I was like, I'm not going to let him down. So I spent the next two months just nerding out on Swift, and just before Christmas I started writing down what I'd learned. I wrote a couple of blog posts on, like, okay, this is my attempt to do numeric programming in Swift, and these are all the challenges I had, and these are some of the issues I had with making things properly performant, and here are some libraries I wrote. And I sent it to Chris, kind of hoping he's not too disappointed with me, because that would be the worst thing. And I was also like, I hope he doesn't dislike the fact that I didn't love everything. And he's like, "Thanks for sending me that. Let's get on a call and talk about it." And we spoke, and he was like, "This is amazing. I can't believe you made this. This is exactly what Swift needs."
And so somebody set up a new Swift Evolution proposal — what do they call it, the equivalent of a PEP, a kind of RFC thing — of, like, let's look at how we can implement Jeremy's ideas in the language. And I was like, wow. And then we ended up literally teaching some lessons together about Swift for TensorFlow, and we built a fast.ai kind of equivalent with him and his team. So much fun. Then in the end, you know, Google didn't follow through — which is fair enough; asking everybody to switch to a new programming language is going to be tough. But it was very, very obvious at that time that TensorFlow 2 was going to be a failure, you know. And so this felt like — okay, what are you going to do? You can't focus on TensorFlow 2, because it's not working, it's never going to work, nobody at Google is using it internally. So in the end, Chris left, Swift for TensorFlow got archived, and there was no backup plan. So I kind of felt like Google was kind of screwed, you know. And Chris went and did something else. But we kept talking, and I was like, look, Chris, you've got to be your own boss, man. Because you've got the ideas — only you've got the ideas, you know. And if your ideas were implemented, we'd all be so much better off. Because Python's the best of a whole bunch of shit, you know — it's amazing, but it's awful compared to what it could be. Anyway, eventually, a few years later, he called me up and he's like, "Jeremy, I've taken your advice. I've started a company. We're going to create a new language, we're going to create new infrastructure — it's going to have all the stuff we've talked about." And I was like, wow. So that's what Modular is.
And so Mojo is building on all the stuff that Chris has figured out over — really from when he did his PhD thesis, which developed LLVM, onwards, you know: Swift, MLIR, the TensorFlow runtime engine, which is very good and has lasted. So yeah, I'm pumped about that. I mean, it's very speculative — creating a whole new language is tough. But Chris has done it before; he's created a whole C++ compiler, amongst other things, so it's looking pretty hopeful. I mean, I hope it works, because, you know, I did tell him to quit his job. In the meantime, I will say, Google now does have a backup plan: they have JAX. It was never a strategy — it was just a bunch of people who also recognized TensorFlow 2 as shit and decided to build something else. And for years my friends on that team were like, "don't tell anybody about us, we don't want to be anything but a research project." And now look — suddenly they're the great white hope for Google's future. And JAX is, you know, also not terrible, but it's still written in Python. Like, it would be cool if we had all the benefits of JAX, but in a language that was designed for those kinds of purposes. So, fingers crossed that Mojo turns out great. Yeah. Any other thoughts on where people should be spending their time? So that's more the language-and-framework level. Then you have, you know, GGML and some of these quantization-focused, model-level things. Then you've got the hardware people — that's a whole other bucket. What's some of the exciting stuff that you're excited about?
Yeah — well, you might be surprised to hear me say this, but I think fine-tuning and transfer learning are still a hugely underappreciated area. So today's zero-shot / few-shot learning equivalent is retrieval-augmented generation, you know — and, just like few-shot learning, it's a real thing, it's a useful thing, it's not a thing anybody would want to ignore. But why are people not spending at least as much effort on fine-tuning? Because, you know, RAG is such an inefficient hack, really. It's like: segment my data in some somewhat arbitrary way, embed it, ask questions about it, and hope that my embedding model embeds questions in the same embedding space as the paragraphs that answer them — which obviously it's not going to. Like, if I've got a whole bunch of arXiv paper embeddings and I ask, "what are all the ways in which we can make inference more efficient?", the only paragraphs it will find are — well, if there's a review paper that says "here's a list of ways to make inference more efficient." It won't have any of the specifics; it's not going to find "here's one way" in this paper and a different way in a different paper, you know. Whereas if you fine-tune a model, all of that information is getting directly incorporated into the weights of your model in a much more efficient and nuanced way — and then you can use RAG on top of that. So I think that's one area that's definitely underappreciated, and also the confluence of, like, okay, how do you combine RAG and fine-tuning. For example, something that I think a lot of people are uncertain about — and I don't expect you to know either — is whether or not you can fine-tune new information in. And I think that's one of the open questions.
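The retrieval failure mode described above — a broad question whose embedding only lands near survey-style paragraphs, never the specific ones — can be illustrated with a toy retriever. Bag-of-words counts stand in for a learned embedding model here, and the paragraphs are invented for the sketch:

```python
import math
from collections import Counter

def embed(text):
    """Stand-in for a learned embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    # b[w] defaults to 0 for missing words, so this is a plain dot product.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Two specific-technique paragraphs and one survey-style paragraph.
paragraphs = [
    "speculative decoding predicts several tokens with a draft model",
    "quantisation stores weights in four bits to shrink memory use",
    "a survey of ways to make llm inference more efficient",
]
question = "what are all the ways to make inference more efficient"

q = embed(question)
scores = [cosine(q, embed(p)) for p in paragraphs]
best = max(range(len(paragraphs)), key=lambda i: scores[i])
print(best)  # the survey-style paragraph wins; the specific ones barely score
```

The broad question only retrieves the paragraph phrased like the question itself, while the paragraphs containing the actual techniques score near zero — which is the gap fine-tuning (folding the facts into the weights) avoids.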
Right — like, of course you can, because there's no such thing as fine-tuning, there's only continued pre-training. So fine-tuning is pre-training — they're literally the same thing. The knowledge got in there in the first place through pre-training, so how could continuing to pre-train not put new knowledge in? It's the same thing. The problem is just that we're really bad at it, because everybody's doing it in dumb ways. And it's not just new knowledge, but new capabilities, you know. Like, in my Hacker's Guide to Language Models I show a simple example — it doesn't sound like much, but — taking a pre-trained base model and getting it to generate SQL. And it took 15 minutes to train on a single GPU. I think that might surprise people — that that capability is at your fingertips — because it was already there, latent in the base model. Really pushing the boundaries of what you can do with small models, I think, is a really interesting question. Like, what can you do with a — I mean, there isn't much in the way of good small models. A really underappreciated one is BTLM-3B, which is kind of a 7B-quality 3B model. There's not much in the 1-to-2B range, sadly. There are some code ones, and the fact that there are some really good code ones in that 1-to-2B range shows you that it's a great size for doing complex tasks well. Well, there was Phi-1 recently, which has been the subject of a little bit of discussion about whether they trained on the benchmark — and Phi-1.5 as well. So, that's not a good model yet. Why not? It's good at doing one very specific thing — Phi-1 in particular is good at creating very small Python snippets. The thing is —
Okay, so Phi-1.5 has never read Wikipedia, for example. So it doesn't know who Tom Cruise is, you know — it doesn't know who anybody is, it doesn't know about any movies, it doesn't really know anything about anything, because it's never read anything. It was trained on a nearly entirely synthetic dataset designed for it to learn reasoning. So it was a research project, and a really good one, and it definitely shows us a powerful direction in terms of what you can do with synthetic data. And wow, gosh — these tiny models can get pretty good reasoning skills, pretty good math skills, pretty good coding skills. But I don't know if it's a model you could necessarily build on. Some people have tried to do some fine-tuning of it, and again, the results are surprisingly good in some ways for a 1.5B model, but I'm not sure you'd find it useful for anything. That's the struggle of pitching small models, because small is great — you don't need a lot of resources to run them — but the performance evaluation isn't so easy. It's always just like, yeah, it works on some things, and we don't know why. Yeah — so that's why it's back to fine-tuning, I would say. Microsoft did create a phi-1.5-web, but they didn't release it, unfortunately. I would say a phi-1.5-web, with fine-tuning for your task, might solve a lot of the tasks people have in their day-to-day lives, particularly in an enterprise setting. I think there's a lot of repetitive processing that has to be done, and it's a useful thing for coders to know about — quite often you could replace thousands and thousands of lines of complex, buggy code with a fine-tune, you know. Got it. And Jeremy, before we let you go, one question that's on top of a lot of people's minds: you've done Practical Deep Learning for Coders in 2018, '19, '21, '22.
I feel like the more time goes by, the more the field gets concentrated. If you're somebody who's interested in deep learning today and you don't want to join OpenAI or Anthropic, what's the best use of their time? Should they focus on model development, should they focus on fine-tuning, math and all of that, should they just focus on making RAG not a hack and coming up with a better solution? What does Practical Deep Learning for Coders 2024 kind of look like? Yeah — I mean, good question. I'm trying to figure that out for myself, you know — what should I teach? Because I definitely feel like things have changed a bit. One of the ways things have changed is that coding is much more accessible now. So if you look at a lot of the folks in the kind of open-source LLM community, they're folks who really hadn't coded before a year ago, and they're using these models to help them build stuff they couldn't build before, which is just fantastic. So one thing I kind of think is: okay, well, we need a lot more material to help these people use this newfound skill they have, because they don't really know what they're doing — and they don't claim to, but they're doing it anyway, and I think that's fantastic, you know. So, like, are there things we could do to help people bridge this gap? Because previously, you know, I know folks who were doing menial jobs a year ago and are now training language models, thanks to the help of Codex and Copilot and whatever. So, yeah — what does it look like to really grab this opportunity? Maybe fast.ai's courses can be dramatically expanded now, to being like: let's make coding, AI-oriented coding, more accessible.
If so, our course should probably look very different, you know, and we'd have to throw away that prerequisite of "you have to have at least a year of full-time programming." What would happen if we got rid of that? So that's one thought that's in my head. As to what other people should do — honestly, I don't think anybody has any idea. Like, the more I look at it, I know I don't, you know — we don't really know how to do anything very well. Clearly OpenAI does, like, they seem to be quite good at some things. But talking to folks there, or folks who have recently left, it's clear there's a lot of stuff they haven't really figured out either, and they're kind of just using recipes that they've noticed work okay. So yeah: we don't really know how to train these models well, we don't know how to fine-tune them well, we don't know how to do RAG well, we don't know what they can do, we don't know what they can't do, we don't know how big a model you need to solve different kinds of problems, we don't know what kinds of problems they can't do, we don't know what good prompting strategies are for particular problems. You know, somebody sent me a message the other day saying they'd written a prompting strategy for GPT-4 — they've written, like, 6,000 lines of Python code — to help it play chess. And they said they'd had it play against other chess engines, including the best Stockfish engines, and it's got an Elo of around 3,400. Oh my God — that's close to the best chess engine in existence. And I think this is a good example: people were saying GPT-4 can't play chess. I was sure that was wrong — obviously it can play chess. But the difference is, with no prompting strategy it can't even make legal moves; with a good prompting strategy, it might be just about the best chess engine in the world, far better than any human player.
So yeah, I mean, we don't really know what the capabilities are yet, so I feel like it's all blue sky at this point. It feels like computer vision in 2013 to me. In 2013, in computer vision, we'd had AlexNet, we'd had ZF Net, it's around the time of Zeiler and Fergus, like, you know, it's probably before that, so we hadn't yet had this idea from Zeiler and Fergus of, like, this is actually what's going on inside the layers. So, you know, we don't actually know what's happening inside these transformers. We don't know how to create good training dynamics, and we don't really know anything much. And there's a reason for that, right? And the reason for that is language models suddenly got really useful. And so the kind of economically rational thing to do, and this is not a criticism, this is the truly rational thing to do, is like, okay, build that as fast as possible, you know, make something work, get it out there. And that's what, you know, OpenAI in particular did, and Anthropic kind of did. Yeah. But there's a whole lot of technical debt everywhere. You know, nobody's really figured this stuff out because everybody's been so busy building what we know works as quickly as possible. So yeah, I think there's a huge amount of opportunity. You know, I think we'll find things can be made to work a lot faster, with a lot less memory. I've got a whole bunch of ideas I want to try, you know. Every time I look at something closely, like, really closely, it's like, turns out this person actually had no idea what they were doing, yeah, which is fine. Like, none of us know what we're doing. We should experiment with that. We had Tri Dao on the podcast, who created FlashAttention, and I asked him, did nobody think of using SRAM before you? Were people just like, no? And he was like, yeah, people just hadn't. They didn't think of it.
They didn't try it, and they didn't come from, like, a systems background. And yeah, I mean, the thing about FlashAttention is, lots of people had absolutely thought of that, and so had I, right? But, I mean, the honest truth is, particularly before Triton, like, everybody knew that tiling is the right way to solve anything, and everybody knew that attention wasn't tiled, and that was stupid. But not everybody has the ability to, like, be like, well, I'm confident enough in CUDA to try to use that insight to write something better, you know? And this is where, like, I'm super excited about Mojo, and I always talk to Chris about FlashAttention as an example. You know, there are a thousand FlashAttentions out there for us to build, you just got to make it easy for us to build them. So, like, Triton definitely helps, but it's still not easy, you know? It still requires kind of really understanding the GPU architecture, writing it in that kind of very CUDA-ish way, right? So yeah, I think, you know, if Mojo or something equivalent can really work well, we're going to see a lot more FlashAttentions popping up. Awesome. Jeremy, before we wrap, we always do a quick lightning round. We're going to ask three simple questions. So the first one is around acceleration. You've been in this field a long time. What's something that's already here today in AI that you thought would take much longer? I don't think anything. So I've actually been slightly too bullish. So in my 2014 TEDx talk, I had a graph, and I said, like, this is the slope of human capabilities and this is the slope of AI capabilities, and I put a dot saying, "we are here," just before they crossed. And I looked back at the transcript the other day, and I said, in five years, I think, you know, we might have crossed that threshold at which computers will be better at most human tasks than most humans, or most average humans.
And so that might be almost true now for nonphysical tasks. So it took, you know, about twice as long as I thought it might. Yeah, no, I wouldn't say anything surprised me too much. It's still like, definitely like, I've got to admit, you know, I had a very visceral reaction using GPT-4 for the first time. Not because I found it surprising, but actually, like, hey, it's actually doing it. Like, it's something I was pretty sure would exist by about now, maybe a bit earlier, but actually using it definitely is different to just feeling like it's probably on its way, you know? And yeah, whatever GPT-5 looks like, I'm sure I'll have the same visceral reaction. You know, it's really amazing to watch this develop. We also have an exploration question. So what do you think is the most interesting unsolved question in AI? How do language models learn? You know, what are the training dynamics like? I want to see... there was a great paper about ResNets a few years ago that showed how they were able to plot a kind of projected three-dimensional loss surface for ConvNets, with and without skip connections. And, you know, you could very clearly see, without the skip connections it was bumpy, and with the skip connections it was super smooth. That's the kind of work we need. So there was actually an interesting blog post that came out just today from the PyTorch team, where some of them have created this little 3D matrix product visualization thing. Yeah, yeah. And they actually showed some really nice examples of, like, a GPT-2 attention layer, and showed an animation and said, like, if you look at this, we can actually see a bit about what it's doing, you know. So yeah. It reminds me of the Zeiler and Fergus, you know, ConvNet paper that was the first one to do these reverse convolutions to show what's actually being learned in each layer in a net. Yeah, we need a lot more of this.
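The skip-connection effect mentioned here, the thing that made the visualized loss surface so much smoother, can be illustrated with a toy residual block. This is a minimal NumPy sketch for intuition only; the function names and shapes are illustrative assumptions, not the architecture from the paper being discussed:

```python
import numpy as np

def relu(x):
    """Elementwise ReLU nonlinearity."""
    return np.maximum(x, 0.0)

def plain_block(x, w1, w2):
    # Two-layer block with no skip connection.
    return relu(w2 @ relu(w1 @ x))

def residual_block(x, w1, w2):
    # Same block plus the identity skip connection: the input is added
    # back onto the block's output, so even near-zero weights pass the
    # signal straight through.
    return relu(w2 @ relu(w1 @ x)) + x
```

One intuition: with the weights near zero, the residual block still behaves like the identity function, so gradients have an unobstructed path through the network, which is consistent with the smoother loss surface in the visualization.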
Like, what is going on inside these models? How do they actually learn? And then how do we use those insights to help them learn better? So I think that would be one. The other exploration I'd really like to see is a much more rigorous analysis of what kind of data they need, at what level, and when do they need it, and how often. So that kind of dataset mixing, curation, and so forth, right? In order to figure out, yeah, how much is code, how much is Wikipedia, you know, what kind of mixture you need for it to keep its capabilities, and what are the kind of underlying capabilities that it most needs to keep, such that if it loses those, it would lose all these other ones, and what data do you need to keep those. And, you know, other things we can do to change the loss function, to help it to not forget, to do things like that. Awesome. And yeah, before wrapping up, what's one message, one idea you want everyone to remember and think about? You know, I guess the main thing I want everybody to remember is that, you know, there's a lot of people in the world, and they have a lot of, you know, diverse experiences and capabilities, and they all matter. And now that we have this, you know, hugely powerful technology in our hands, we could think of it one of two ways. One would be, that's really scary. What would happen if all of these people in the world had access to this technology? Some of them might be bad people. Let's make sure they can't have it. Or one might be, wow, if all those people in the world had this tool, I bet a lot of them could really improve the lives of a lot of humanity. This has always been the case, you know, from the invention of writing, to the invention of the printing press, to the, you know, development of education. It's been a constant battle between people who think that distributed power is unsafe and it should be held on to by an elite few.
And people who think that humanity, on net, you know, is a marvelous species, particularly when part of a society and a civilization, and we should do everything we can to enable more of them to contribute. This is a really big conversation right now. And, you know, I want to see more and more people showing up and showing what, you know, what the great unwashed masses out there can actually achieve. You know, that actually, you know, regular people can do a lot of really valuable work, and actually help us be, you know, more safe, and also flourishing in our lives, and providing a future for our children to flourish in. You know, if we lock things down to the people that we think, you know, the elites that we think can be trusted to run it for us, yeah, I think all bets are off about where that leaves us as a society, you know? Yep. No, that's an important message. And yeah, that's why we've been promoting a lot of open source, and why the open source community is, I think... Continue letting the builders build and explore. That's always a good idea. Thank you so much for coming on, Jeremy. This was great. Thank you for having me. Bye.
Andrew Ng: Opportunities in AI (2023)
[MUSIC PLAYING] It is my pleasure to welcome Dr. Andrew Ng, tonight. Andrew is the managing general partner of AI Fund, founder of DeepLearning.AI and Landing AI, chairman and co-founder of Coursera, and an adjunct professor of Computer Science, here at Stanford. Previously, he had started and led the Google Brain team, which had helped Google adopt modern AI. And he was also director of the Stanford AI lab. About eight million people, 1 in 1,000 persons on the planet, have taken an AI class from him. And through, both, his education and his AI work, he has changed numerous lives. Please welcome Dr. Andrew Ng. [APPLAUSE] Thank you, Lisa. It's good to see everyone. So, what I want to do today is chat to you about some opportunities in AI. So I've been saying AI is a new electricity. One of the difficult things to understand about AI is that it is a general purpose technology, meaning that it's not useful only for one thing but it's useful for lots of different applications, kind of like electricity. If I were to ask you, what is electricity good for? It's not any one thing, it's a lot of things. So what I'd like to do is start off sharing with you how I view the technology landscape, and this will lead into the set of opportunities. So lot of hype, lot of excitement about AI. And I think, a good way to think about AI is as a collection of tools. So this includes, a technique called supervised learning, which is very good at recognizing things or labeling things, and generative AI, which is a relatively new, exciting development. If you're familiar with AI, you may have heard of other tools. But I'm going to talk less about these additional tools, and I'll focus today on what I think are, currently, the two most important tools, which are supervised learning and generative AI. So supervised learning is very good at labeling things or very good at computing input to outputs or A to B mappings, given an input A, give me an output. 
For example, given an email, we can use supervised learning to label it as spam or not spam. The most lucrative application of this that I've ever worked on is probably online advertising, where given an ad, we can label whether a user is likely to click on it, and therefore, show more relevant ads. For self-driving cars, given the sensor readings of a car, we can label it with where are the other cars. One project that my team, AI Fund, worked on was ship route optimization, where given a route the ship is taking or considering taking, we can label that with how much fuel we think this will consume, and use this to make ships more fuel efficient. Did a lot of work in automated visual inspection in factories. So you can take a picture of a smartphone that was just manufactured and label, is there a scratch or any other defect in it. Or if you want to build a restaurant review, reputation monitoring system, you can have a little piece of software that looks at online restaurant reviews and labels them as positive or negative sentiment. So one nice thing, one cool thing about supervised learning is that it's not useful for just one thing, it's useful for all of these different applications, and many more besides. Let me just walk through, concretely, the workflow of one example of a supervised learning, labeling things kind of project. If you want to build a system to label restaurant reviews, you then collect a few data points or collect a data set, where you say, the pastrami sandwich was great, that is positive. Servers are slow, that's negative. My favorite chicken curry, that's positive. And here, I've shown three data points, but if you are building this, you may get thousands of data points like this, or thousands of training examples, as we call them. And the workflow of a machine learning project, of an AI project is, you get labeled data, maybe thousands of data points. Then you have an AI engineering team train an AI model to learn from this data.
And then finally, you would find maybe a cloud service to run the trained AI model. And then you can feed it, best bubble tea I've ever had, and that's positive sentiment. And so, I think the last decade was maybe the decade of large scale supervised learning. What we found, starting about 10, 15 years ago, was if you were to train a small AI model, so train a small neural network or small deep learning algorithm, basically a small AI model, maybe not on a very powerful computer, then as you fed it more data, its performance would get better for a little bit, but then it would flatten out. It would plateau, and it would stop being able to use the data to get better and better. But if you were to train a very large AI model, lots of compute on maybe powerful GPUs, then as we scaled up the amount of data we gave the machine learning model, its performance would kind of keep on getting better and better. So this is why, when I started and led the Google Brain team, the primary mission that I directed the team to solve at the time was, let's just build really, really large neural networks that we then fed a lot of data to. And that recipe, fortunately, worked. And I think that idea of driving large compute and large-scale data has really helped us, has driven a lot of AI progress over the last decade. So if that was the last decade of AI, I think this decade is turning out to be also doing everything we had in supervised learning, but adding to it the exciting tool of generative AI. So many of you, maybe all of you, have played with ChatGPT and Bard, and so on. But just given a piece of text, which you call a prompt, like I love eating, if you run this multiple times, maybe you get bagels cream cheese, or my mother's meatloaf, or out with friends, and the AI system can generate output like that. Given the amounts of buzz and excitement about generative AI, I thought I'd take just half a slide to say a little bit about how this works.
So it turns out that generative AI, at least this type of text generation, at the core of it is using supervised learning, those input-to-output mappings, to repeatedly predict the next word. And so, if your system reads on the internet a sentence like, my favorite food is a bagel with cream cheese and lox, then this is translated into a few data points, where if it sees, my favorite food is a, in this case, it tries to guess that the right next word was bagel; or my favorite food is a bagel, try to guess the next word is with; and similarly, if it sees that, in this case, the right guess for the next word would have been cream. So by taking text that you find on the internet or other sources, and by using this input-output supervised learning to try to repeatedly predict the next word, if you train a very large AI system on hundreds of billions of words, or in the case of the largest models, now more than a trillion words, then you get a large language model like ChatGPT. And there are additional, other important technical details. I talked about predicting the next word. Technically, these systems predict the next subword, or part of a word, called a token, and then there are other techniques like RLHF for further tuning the AI output to be more helpful, honest, and harmless. But at the heart of it is this using supervised learning to repeatedly predict the next word. That's really what's enabling the exciting, really fantastic progress on large language models. So while many people have seen large language models as a fantastic consumer tool, you can go to a website like ChatGPT's website or Bard's or other large language models and use it as a fantastic tool, there's one other trend I think is still underappreciated, which is the power of large language models not just as a consumer tool but as a developer tool.
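The next-word training setup described above can be sketched in a few lines of Python. The function name is an illustrative assumption, and splitting on whitespace is a simplification; as the talk notes, real systems predict subword tokens rather than whole words:

```python
def next_word_pairs(sentence):
    """Turn one sentence into (context, next-word) training examples,
    the way the talk describes building data points from web text."""
    words = sentence.split()
    return [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]

# The example sentence from the talk becomes ten supervised examples:
pairs = next_word_pairs("My favorite food is a bagel with cream cheese and lox")
# e.g. pairs[4] is ("My favorite food is a", "bagel")
```

Each pair is an ordinary input-to-output example, which is the sense in which text generation reduces to supervised learning.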
So it turns out that there are applications that used to take me months to build, that a lot of people can now build much faster by using a large language model. So specifically, the workflow for supervised learning, building the restaurant review system, say, would be that you need to get a bunch of labeled data, and maybe that takes a month, we get a few thousand data points. And then have an AI team train, and tune, and really get optimized performance on your AI model. Maybe that'll take three months. Then find a cloud service to run it. Make sure it's running robustly. Make sure it's recognized, maybe that'll take another three months. So pretty realistic timeline for building a commercial grade machine learning system is like 6 to 12 months. And so teams I've led, we often took roughly 6 to 12 months to build and deploy these systems. And some of them turned out to be really valuable. But this is a realistic timeline for building and deploying a commercial grade AI system. In contrast, with prompt-based AI, where you write a prompt. This is what the workflow looks like. You can specify a prompt, that takes maybe minutes or hours. And then, you can deploy it to the cloud, and that takes maybe hours or days. So there are now certain AI applications that used to take me, literally, six months, maybe a year to build, that many teams around the world can now build in maybe a week. And I think this is already starting, but the best is still yet to come. This is starting to open up a flood of a lot more AI applications that can be built by a lot of people. So I think many people still underestimate the magnitude of the flood of custom AI applications that I think is going to come down the pipe. Now, I know you probably were not expecting me to write code in this presentation, but that's what I'm going to do. So it turns out, this is all the code that I need in order to write a sentiment classifier. So I'm going to-- some of you will know Python, I guess. 
Import some tools from OpenAI, and then add this prompt that says, classify the text below, delimited by three dashes, as having either a positive or negative sentiment. [INAUDIBLE], I had a fantastic time at Stanford GSB. Learnt a lot, and also made great new friends. All right. So that's my prompt. And then I'm just going to run it. And I've never run it before. So I really hope-- thank goodness, it got the right answer. [APPLAUSE] And this is literally all the code it takes to build a sentiment classifier. And so, today, developers around the world can take literally maybe 10 minutes to build a system like this. And that's a very exciting development. So one of the things I've been working on was trying to teach online classes about how to use prompting, not just as a consumer tool but as a developer tool. So just talking about the technology landscape, let me now share my thoughts on what are some of the AI opportunities I see. This shows what I think is the value of different AI technologies today, and I'll talk about three years from now. The vast majority of financial value from AI today is, I think, supervised learning, which for a single company like Google can be worth more than $100 billion US a year. And also, there are millions of developers building supervised learning applications. So it's already massively valuable, and also with tremendous momentum behind it, just because of the sheer effort in finding applications and building applications. And then, generative AI is the really exciting new entrant, which is much smaller right now. And then, there are the other tools that I'm including for completeness. If the size of these circles represents the value today, this is what I think it might grow to in three years. So supervised learning, already really massive, may double, say, in the next three years, from truly massive to even more massive.
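The live demo described above can be sketched as follows. Only the prompt format, with the input text delimited by three dashes, comes from the talk; the client object, method names, and model name are assumptions based on the OpenAI Python library, not shown on screen in the talk:

```python
def build_prompt(text):
    # Prompt wording follows the talk: classify text delimited by
    # three dashes as positive or negative sentiment.
    return (
        "Classify the text below, delimited by three dashes, "
        "as having either a positive or negative sentiment.\n"
        f"---{text}---"
    )

def classify_sentiment(client, text, model="gpt-3.5-turbo"):
    # 'client' is assumed to be an OpenAI client instance; the model
    # name here is an assumption, not one given in the talk.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(text)}],
    )
    return response.choices[0].message.content.strip()
```

The whole classifier is roughly this much code, which is the point being made: prompt-based development collapses a months-long supervised-learning workflow into minutes.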
And generative AI, which is much smaller today, I think will much more than double in the next three years, because of the amount of developer interest, the amount of venture capital investments, the number of large corporates exploring applications. And I also just want to point out, three years is a very short time horizon. If it continues to compound at anything near this rate, then in six years it will be even vastly larger. But this light shaded region in green or orange, that light shaded region is where the opportunity is for either new startups or for large companies, incumbents, to create and to enjoy value capture. But one thing I hope you take away from this slide is that all of these technologies are general purpose technologies. So in the case of supervised learning, a lot of the work that had to be done over the last decade, but is continuing for the next decade, is to identify and to execute on the concrete use cases. And that process is also kicking off for generative AI. So for this part of the presentation, I hope you take away from it that general purpose technologies are useful for many different tasks, and a lot of value remains to be created using supervised learning. And even though we're nowhere near finishing figuring out the exciting use cases of supervised learning, we have this other fantastic tool of generative AI, which further expands the set of things we can now do using AI. But one caveat, which is that there will be short term fads along the way. So I don't know if some of you might remember the app called Lensa. This is the app that would let you upload pictures of yourself, and then would render a cool picture of you as an astronaut or a scientist or something. And it was a good idea and people liked it. And its revenue just took off like crazy like that, through last December. And then it did that. And that's because Lensa was-- it was a good idea. People liked it.
But it was a relatively thin software layer on top of someone else's really powerful APIs. And so even though it was a useful product, it wasn't a defensible business. And when I think about apps like Lensa, I'm actually reminded of when Steve Jobs gave us the iPhone. Shortly after, someone wrote an app that I paid $1.99 for, to do this, to turn on the LED, to turn the phone into a flashlight. And that was also a good idea, to write an app to turn on the LED light, but it also wasn't defensible long term-- it also didn't create very long term value, because it was easily replicated, and underpriced, and eventually incorporated into iOS. But with the rise of iOS, with the rise of the iPhone, someone also figured out how to build things like Uber, and Airbnb, and Tinder. The very long term, very defensible businesses that created sustaining value. And I think, with the rise of generative AI, or the rise of new AI tools, I think, really, what excites me is the opportunity to create those really deep, really hard applications that hopefully can create very long term value. So the first trend I want to share is AI as a general purpose technology, and a lot of the work that lies ahead of us is to find the very diverse use cases and to build them. There's a second trend I want to share with you, which relates to why AI isn't more widely adopted yet. It feels like a bunch of us have been talking about AI for 15 years or something. But if you look at where the value of AI is today, a lot of it is still very concentrated in consumer software internet. Once you get outside tech or consumer software internet, there's some AI adoption, but it all feels very early. So why is that? It turns out, if you were to take all current and potential AI projects, and sort them in decreasing order of value, then to the left of this curve, at the head of this curve, are the multi-billion dollar projects like advertising or web search or, for e-commerce, product recommendations for a company like Amazon.
And it turns out that about 10, 15 years ago, my friends and I figured out a recipe for how to hire, say, 100 engineers to write one piece of software to serve more relevant ads, and apply that one piece of software to a billion users, and generate massive financial value. So that works. But once you go outside consumer software internet, hardly anyone has 100 million or a billion users that you can write and apply one piece of software to. So once you go to other industries, as we go from the head of this curve on the left over to the long tail, these are some of the projects I see, and I'm excited about. I was working with a pizza maker that was taking pictures of the pizza they were making, because they needed to do things like make sure that the cheese is spread evenly. So this is about a $5 million project. But that recipe of hiring a hundred engineers or dozens of engineers to work on a $5 million project, that doesn't make sense. Or there's another great example. Working with an agriculture company, we figured out that if we use cameras to find out how tall the wheat is, and wheat is often bent over because of wind or rain or something, and we can chop off the wheat at the right height, then that results in more food for the farmer to sell, and is also better for the environment. But this is another $5 million project, and that old recipe of hiring a large group of highly skilled engineers to work on this one project, that doesn't make sense. And similarly materials grading, cloth grading, sheet metal grading, many projects like this. So whereas to the left, in the head of this curve, there's a small number of, let's say, multi-billion dollar projects, and we know how to execute those delivering value, in other industries I'm seeing a very long tail of tens of thousands of, let's call them, $5 million projects that until now have been very difficult to execute on because of the high cost of customization.
The trend that I think is exciting is that the AI community has been building better tools that let us aggregate these use cases, and make it easy for the end user to do the customization. So specifically, I'm seeing a lot of exciting low code and no code tools that enable the user to customize the AI system. What this means is, instead of me needing to worry that much about pictures of pizza, we're starting to see tools that can enable the IT department of the pizza making factory to train an AI system on their own pictures of pizza to realize this $5 million worth of value. And by the way, the pictures of pizza don't exist on the internet, so Google and Bing don't have access to these pictures. We need tools that can be used by, really, the pizza factory themselves, to build, and deploy, and maintain their own custom AI system that works on their own pictures of pizza. And broadly, the technology for enabling this, some of it is prompting, text prompting, visual prompting, really large language models and similar tools like that, or a technology called data-centric AI, whereby instead of asking the pizza factory to write a lot of code, which is challenging, we can ask them to provide data, which turns out to be more feasible. And I think this second trend is important, because I think this is a key part of the recipe for taking the value of AI, which so far still feels very concentrated in the tech world and the consumer software internet world, and pushing this out to all industries, really to the rest of the economy, which, sometimes it's easy to forget, is much bigger than the tech world. So the two trends I shared: AI as a general purpose technology, lots of concrete use cases to be realized, as well as low code, no code, easy to use tools enabling AI to be deployed in more industries. How do we go after these opportunities?
So about five years ago, there was a puzzle I wanted to solve, which is, I felt that many valuable AI projects are now possible, and I was thinking, how do we get them done? And having led teams at Google and Baidu, in big tech companies, I had a hard time figuring out how I could operate a team in a big tech company to go after a very diverse set of opportunities in everything from maritime shipping to education to financial services to healthcare, and on and on. It's just very diverse use cases, very diverse go-to-markets, and very diverse customer bases and applications. And I felt that the most efficient way to do this would be if we could start a lot of different companies to pursue these very diverse opportunities. So that's why I ended up starting AI Fund, which is a venture studio that builds startups to pursue a diverse set of AI opportunities. And, of course, in addition to lots of startups, incumbent companies also have a lot of opportunities to integrate AI into existing businesses. In fact, one pattern I'm seeing for incumbent businesses is that distribution is often one of their significant advantages, and if they play their cards right, it can allow them to integrate AI into their products quite efficiently. But just to be concrete, where are the opportunities? So I think of this as-- this is what I think of as an AI stack. At the bottom level is the hardware, semiconductor layer. Fantastic opportunities there, but very capital intensive, very concentrated. So it needs a lot of resources, relatively few winners. So some people can and should play there. I personally don't like to play there myself. There's also the infrastructure layer. Also fantastic opportunities, but very capital intensive, very concentrated. So I tend not to play there myself, either. And then there's the developer tool layer. What I showed you just now was-- I was actually using OpenAI's API as a developer tool.
And then, I think the developer tool sector is hypercompetitive. Look at all the startups chasing OpenAI right now. But there will be some mega winners. And so I sometimes play here, but primarily when I think we have a meaningful technology advantage, because I think that earns you the right, or earns you a better shot, at being one of the mega winners. And then lastly, even though a lot of the media attention and the buzz is in the infrastructure and developer tooling layers, it turns out those layers can be successful only if the application layer is even more successful. And we saw this with the rise of SaaS as well. A lot of the buzz and excitement is on the technology, the tooling layer. Which is fine. Nothing wrong with that. But the only way for that to be successful is if the application layer is even more successful, so that, frankly, it can generate enough revenue to pay for the infrastructure and the tooling layers. So, actually, let me mention one example. Amorai-- I was actually just texting the CEO yesterday. But Amorai is a company that we built that uses AI for romantic relationship coaching. And just to point out, I'm an AI guy, and I feel like I know nothing really about romance. And if you don't believe me, you can ask my wife; she will confirm that I know nothing about romance. But when we went to build this, we wanted to get together with the former CEO of Tinder, Renate Nyborg. And with my team's expertise in AI, and her expertise in relationships, because she ran Tinder, she knows more about relationships than anyone I know, we were able to build something pretty unique using AI for kind of romantic relationship mentoring. And the interesting thing about applications like these is, when we look around, how many teams in the world are simultaneously expert in AI and in relationships?
And so at the application layer, I'm seeing a lot of exciting opportunities that seem to have a very large market, but where the competition sets is very light, relative to the magnitude of the opportunity. It's not that there are no competitors, but it's just much less intense compared to the developer tool or the infrastructure layers. And so, because I've spent a lot of time iterating on a process of building startups, what I'm going to do is just, very transparently, tell you the recipe we've developed for building startups. And so after many years of iteration and improvement, this is how we now build startups. My team's always had access to a lot of different ideas, internally generated, ideas from partners. And I want to walk through this with one example of something we did, which is a company Bearing AI, which uses AI to make ships more fuel efficient. So this idea came to me when, a few years ago, a large Japanese conglomerate called Mitsui, that is a major shareholder and operates major shipping lines, they came to me and they said, hey, Andrew, you should build a business to use AI to make ships more fuel efficient. And the specific idea was, think of it as a Google Maps for ships. We can suggest a ship or tell a ship how to steer, so that you still get to your destination on time, but using, it turns out, about 10% less fuel. And so what we now do is we spend about a month, validating the idea. So double check, is this idea even technically feasible, and then talk to prospective customers to make sure there is a market need. So we spent up to about a month doing that. And if it passes this stage, then we will go and recruit a CEO to work with us on the project. When I was starting, out I used to spend a long time working on a project myself, before bringing on a CEO. 
But after iterating, we realized that bringing on a leader at the very beginning to work with us reduces a lot of the burden of having to transfer knowledge, or of having a CEO come in and having to revalidate what we discovered. So the process we've learned is much more efficient: we just bring in the leader at the very start. And so in the case of Bearing AI, we found a fantastic CEO, Dylan Keil, a reputed entrepreneur with one successful exit before. And then we spent three months, six two-week sprints, working with them to build a prototype as well as do deep customer validation. If it survives this stage, and we have about a two-thirds, 66%, survival rate, we then write the first check, which gives the company resources to hire an executive team, build the key team, get an MVP, a minimum viable product, working, and get some real customers. And then after that, hopefully, it successfully raises additional external rounds of funding, and can keep on growing and scaling. So I'm really proud of the work that my team was able to do to support Mitsui's idea, and Dylan Keil as CEO. And today, there are hundreds of ships on the high seas right now that are steering themselves differently because of Bearing AI. And 10% fuel savings translates to around maybe $450,000 in fuel savings per ship per year. And, of course, it's also, frankly, quite a bit better for the environment. And I think this startup would not have existed if not for Dylan's fantastic work, and also Mitsui bringing this idea to me. And I like this example because this is a startup idea that, just to point out, I would never have come up with myself. Because I've been on a boat, but what do I know about maritime shipping? It is the deep subject matter expertise of Mitsui, which had this insight, together with Dylan, and then my team's expertise in AI, that made this possible.
And so as I operate in AI, one thing I've learned is my swim lane is AI, and that's it. Because I don't have time or it's very difficult for me to be expert in maritime shipping, and romantic relationships, and health care, and financial services, and on, and on, and on. And so I've learned that if I can just help get a accurate technical validation, and then use AI resources to make sure the AI tech is built quickly and well, and I think, we've always managed to help the companies build a strong technical team, then partnering with subject matter experts often results in exciting new opportunities. And I want to share with you one other weird aspect of-- one other weird thing I've learned about building startups, which is I like to engage only when there's a concrete idea. And this runs counter to a lot of the advice you hear from the design thinking methodology, which often says, don't rush to solutioning. Explore a lot of alternatives before you do a solution. Honestly, we tried that, it was very slow. But what we've learned is that at the ideation stage, if someone comes to me and says, hey, Andrew, you should apply AI to financial services. Because I'm not a subject matter expert in financial services, it's very slow for me to go and learn enough about financial services, to figure out what to do. I mean, eventually, you could get to a good outcome, but it's a very labor intensive, very slow, very expensive process, for me, to try to learn industry after industry. In contrast, one of my partners wrote this idea as a tongue in cheek, not really seriously. But, let's say, [INAUDIBLE] by GPT, let's eliminate commercials by automatically buying every product advertised in exchange for not having to see any ads, it's not a good idea, but it is a concrete idea. And it turns out, concrete ideas can be validated or falsified, efficiently. They also give a team a clear direction to execute. 
And I've learned that in today's world, especially with the excitement, the buzz, and the exposure of a lot of people to AI, there are a lot of subject matter experts who have deeply thought about a problem for months, sometimes even one or two years, but who have not yet had a build partner. And when we get together with them and they share the idea with us, it allows us to work with them to very quickly go into validation and building. And I find that this works because there are a lot of people who have already done the design thinking work of exploring a lot of ideas and winnowing down to the really good ones. I find that there are so many good ideas sitting out there that no one is working on, that finding those good ideas someone has already had, and wants to share with us, and wants a build partner for, turns out to be a much more efficient engine. So before I wrap up, we'll go to the questions in a second, just a few slides to talk about risk and social impact. AI is a very powerful technology. To say something you'd probably guess: my teams and I only work on projects that move humanity forward. And we have multiple times killed projects, on ethical grounds, that we assessed to be financially sound. I've been surprised and sometimes dismayed at the creativity of people to come up with really bad ideas that seem profitable but really should not be built. We've killed a few projects on those grounds. And then, it has to be acknowledged that AI today does have problems with bias, fairness, and accuracy. But the technology is also improving quickly. So I see that AI systems today are less biased and more fair than six months ago, which is not to dismiss the importance of these problems. They are problems and we should continue to work on them.
But I'm also gratified at the number of teams working hard on these issues to make them much better. When I think of the biggest risks of AI, one of the biggest is the disruption to jobs. This is a diagram from a paper by our friends at the University of Pennsylvania and some folks at OpenAI, analyzing the exposure of different jobs to AI automation. And it turns out that, whereas in the previous wave of automation the most exposed jobs were often the lower-wage jobs, such as when we put robots into factories, with this current wave of automation it is actually the higher-wage jobs, further to the right on this axis, that seem to have more of their tasks exposed to automation. So even as we create tremendous value using AI, I feel that as citizens, and our corporations, and our governments, and, really, our society, we have a strong obligation to make sure that people, especially people whose livelihoods are disrupted, are still well taken care of, are still treated well. And then lastly, it feels like every time there's a big wave of progress in AI, there's a big wave of hype about artificial general intelligence as well. When deep learning started working really well 10 years ago, there was a lot of hype about AGI. And now that generative AI is working really well, there's another wave of hype about AGI. But I think that artificial general intelligence, AI that can do anything a human can do, is still decades away, maybe 30 to 50 years, maybe even longer. I hope we'll see it in our lifetimes, but I don't think it's coming any time soon. One of the challenges is that the biological path to intelligence, humans, and the digital path to intelligence, AI, have taken very different paths. And the funny thing about the definition of AGI is that you're benchmarking this very different digital path to intelligence against the biological path to intelligence.
So I think large language models are smarter than any of us in certain key dimensions, but much dumber than any of us in other dimensions. And so forcing them to do everything a human can do is a funny comparison. But I hope we'll get there, hopefully within our lifetimes. And then there's also a lot of, I think, overblown hype about AI creating extinction risks for humanity. Candidly, I don't see it. I just don't see how AI creates any meaningful extinction risk for humanity. People worry that we can't control AI, that AI will be more powerful than any person. But we have lots of experience steering very powerful entities, such as corporations or nation states, that are far more powerful than any single person, and making sure they, for the most part, benefit humanity. And technology also develops gradually. The so-called hard takeoff scenario, where it's not really working today, and then suddenly, one day, overnight, it works brilliantly, we achieve superintelligence, and it takes over the world, that's just not realistic. I think AI technology will develop slowly, which gives us plenty of time to make sure that we provide oversight and can manage it to be safe. And lastly, if you look at the real extinction risks to humanity, such as, fingers crossed, the next pandemic, or climate change leading to a massive depopulation of some parts of the planet, or, at much lower odds but maybe someday, an asteroid doing to us what it did to the dinosaurs: if we look at the actual real extinction risks to humanity, having more intelligence in the world, even artificial intelligence, would be a key part of the solution. So I feel that if you want humanity to survive and thrive for the next 1,000 years, rather than slowing AI down, which some people propose, I would rather make AI go as fast as possible. So with that, just to summarize, this is my last slide.
I think that AI, as a general purpose technology, creates a lot of new opportunities for everyone. And a lot of the exciting and important work that lies ahead of us all is to go and build those concrete use cases, and hopefully, in the future, I'll have opportunities to engage with more of you on those opportunities as well. So with that, let me just say: thank you all very much. [APPLAUSE]
AI_LLM_Stanford_CS229
Run_ANY_OpenSource_Model_LOCALLY_LM_Studio_Tutorial.txt
this is the easiest way to get open-source large language models running on your local computer. It doesn't matter if you've never experimented with AI before, you can get this working. The software is called LM Studio, something that I've used in previous videos, and today I'm going to show you how to use it. Let's go. This is the LM Studio website. The LM Studio software is available on all platforms, Apple, Windows, and Linux. Today I'm going to show you how to get it running on a Mac, but I've gotten this working on Windows as well, and it is dead simple. You really just download the software and install it, there's nothing to it. And once you do that, this is the actual LM Studio. First let's explore the homepage. Here you're going to get a nice little search box where you can search for different models that you want to try out; basically anything available on Hugging Face is going to be available in LM Studio. If you scroll down a little bit you get the new and noteworthy models, so obviously here's Zephyr 7B Beta, here's Mistral 7B Instruct, CodeLlama, OpenOrca. These are the top models for various reasons, and not only that, it tells you a bunch about every single model; it pulls in all the information from the model card, so it's easily readable from here. Thank you to the sponsor of this video, UPDF. UPDF is an awesome free alternative to Adobe Acrobat, but let me just show it to you. After a few clicks I got it downloaded and installed, I loaded up an important PDF, and you can do a lot of awesome things with it. Let's start with OCR: I clicked this little button in the top right, I select searchable PDF, and then perform OCR. That allows me to search through it and do other things with it, now that it's a text document, very easy. And there we go, after a few seconds I have the OCR version right here; now I can highlight all the text easily. Switching back to the PDF, we can do a bunch of cool stuff. We can easily highlight, we can add notes, and I can easily protect it using a
password by clicking this button right here. I can easily add stamps, so I could say confidential right there, and you can easily edit PDFs, check this out. And best of all, it has a really cool AI feature where you can actually ask questions of this document, so it's basically chat with your doc. All you have to do is click this little UPDF AI in the bottom right, it loads up the document, I click get started, and it's going to give me a summary first, and then I can ask it any question I want. All right, so let's ask it something: who are the authors of this paper? So be sure to check out UPDF, and they're giving a special offer to my viewers, 61% off their premium version, which gives you a lot of other features. Link and code will be down in the description below. Thank you to UPDF. So let's try it out. If I just search for Mistral and hit enter, I go to the search page, and we have every model that has Mistral in the keywords. And just like Hugging Face, you get the author, then you get the model card information, and you get everything else involved too. So you can really think of this as a beautiful interface on top of Hugging Face. So here's TheBloke's version from 4 days ago, let's take a look at that. If I click on it, I can see the date that it was uploaded, again 4 days ago, I can see it was authored by TheBloke, and then I have the model name, dolphin 2.2.1 AshhLimaRP Mistral 7B GGUF. A lot of information in that title. On the right side we can see all the different quantized versions of the model, everything from the smallest Q2 version all the way up to the Q8 version, which is the largest. Now if you're thinking about which model to choose, and even within a model which quantized version to use, you want to fit the biggest version that can actually work on your machine, and it's usually a function of RAM or video RAM. So if you're on a Mac it's usually just RAM, but if you have a video card on a PC, you're going to look at the video RAM from your video card. So I'm on a Mac today, so
let's take a look. And one incredible thing that LM Studio does for you out of the box is that it actually looks at the specs of your computer, and right here it has this green check and "should work," which means the model that I have selected right now should work on my computer given my specs. So you no longer have to think about: how much RAM do I have, how much video RAM do I have, what's the model size, which quantization method should I use? It'll just tell you it should work. Now here's another example. I just searched for llama; this is the Samantha 1.1 version of LLaMA, and it is a 33 billion parameter version, and right here it says requires 30+ GB of RAM. Now my machine has 32 GB, so it should be enough, and it's not saying it won't work, but it's giving me a little warning that says hey, it might not work. And back on the search page for Mistral, let's look at a few other things we're going to find in here. It tells us the number of results, it tells us it's from the Hugging Face Hub, and we can sort by the most recent, by the most likes, or by the most downloads. Usually likes and downloads are pretty much in line with each other. I usually like to sort by most recent, because I like to play around with whatever the most recent models are. You can also switch this to least, so you click on that and you can find least recent, but I don't know why you would want to do that. Then we can also filter by a compatibility guess, so it won't even show me models that it doesn't think I can run, and if I click that again, now it's showing all models. So I like to leave that on filtered by compatibility best guess. Now again, within the list of quantized versions of a specific model, we can actually see the specific quant levels here, so this is Q2_K and so on, all the way up to Q8, and the largest one down here is going to be also the largest file size. If we hover over this little information icon right here, we get a little description of what each of the quantization methods
give us. So here, Q2 is the lowest fidelity, extreme loss of quality, use not recommended, and up here we can see what the recommended version is, which is the Q5_K_M or Q5_K_S, and it says recommended right there. So these have just a little bit of a loss of quality; Q5 is usually what I go with. Here it gives us some tags about the base model, the parameter size, and the format of the model. We can click here to go to the model card if we want, but then we just download, we download it right here. So I'm going to download one of the smaller ones, let's give it a try. We just click, and then you can see on the bottom this blue stripe lit up, and if we click it we can actually see the download progress, and it really is that easy. And you can see right here I've already downloaded the Phind CodeLlama 34B model, and I'm actually going to be doing a video about that, and also about another coding model called DeepSeek Coder. And what makes LM Studio so awesome is that it is just so, so easy to use, and the interface is gorgeous. It's just super clear how to use this for anybody, and it makes it really easy to manage the models, manage the different flavors of the models. It's a really nice platform to use. All right, while that's downloading, I'm going to load up another model and show it to you. So in this tab right here, this little chat bubble tab, this is essentially a full interface for chatting with a model. Up at the top here, if we click it, you find all the models that you've downloaded, and I've gone ahead and selected this Mistral model, which is relatively small, 3.82 GB. So I select that and it loads it up, and then I'm really done, it's ready to go. I'm going to talk about all the settings on the right side, though. Over here on the right side, the first thing we're going to see is the preset, which basically sets up all the different parameters, pre-done, for whatever model you're selecting. So for this Mistral model, of course, I'm going to select the Mistral Instruct preset, and that's going to set
everything. Here's the model configuration, and you can save a preset, and you can also export it. And then right here we have a bunch of different model parameters. So we have the output randomness, and again, what I really like about LM Studio is that it can be used even if you're not familiar with all of this terminology. So typically you see temp, n_predict, and repeat_penalty, but a lot of people don't know what that stuff actually means. So it just tells you: output randomness, words to generate, repeat penalty. And if you hover over it, it gives you even more information about it. So here, output randomness, also known as temp, and it says it provides a balance between randomness and determinism; at the extreme, a temperature of zero will always pick the most likely next token, leading to identical outputs each run. But again, as soon as you select the preset, it'll set all of these values for you, so you can play around with them as you want. Here's the actual prompt format: we have the system message, the user message, and the assistant message, and you can edit all of that right here. Here you can customize your system prompt, or a pre-prompt, so if you want to do role playing, this would be a great place to do it. You could say: you are Mario from Super Mario Brothers, respond as Mario. And then here we have model initialization, and this gets into more complex settings, some things like keep the entire model in RAM, and a lot of these settings you'll probably never have to touch. And here we have hardware settings too. I actually do have Apple Metal, so I'm going to turn that on, and I'll click reload and apply, and there we go. Next we have the context overflow policy, and that means: when the response is going to be too long for the context window, what does it do? The first option is just stop; the second option is keep the system prompt and the first user message and truncate the middle; and then we also have maintain a rolling window and truncate past messages. So I'll just keep it at stop
at limit for now. And then we have the chat appearance, if we want plain text or markdown; I do want markdown. And then at the bottom we have notes. Now that we got all those settings ironed out, let's give it a try. All right, I said: tell me a joke, Mario. Knock knock. Who's there? Jokes. Jokes who? Just kidding, I'm not really good at jokes, but here's one for you: why did the scarecrow win an award? Because he was outstanding in his field. And so you can export it as a screenshot, you can regenerate it, or you can just continue, and continue is good if you're getting a long response and it gets cut off. Over on the left side we have all of our chat history, so if you've used ChatGPT at all, this should feel very familiar. If you want to do a new chat, you just click right here; if you want to continue on the existing chat, you just keep typing. So for example, I can just say: tell me another one, and it should know that I'm talking about a joke, because it's continuing from the history that I previously had in here. So: why did the tomato turn red? Because it saw the salad dressing. Great. Now if I started a new chat and said tell me another one, it wouldn't know what I'm talking about. There we go, and it's just typing out random stuff now, so I'm going to click stop generating. And then if we look at the bottom, we have all the information about the previous inference that just ran: time to first token, generation time, tokens per second, the reason it stopped, GPU layers, et cetera. So it really gives you everything, but it keeps it super simple. The next thing I want to show you is for developers. If you want to build an AI application using LM Studio to power the large language model, you click this little double arrow icon right here, which is local server. So I click that, and all you have to do is click start server. You set the port that you want, you can set whether you want CORS on, and you have a bunch of other settings that you can play with. So once I click start server, now I can actually hit
the server just like I would OpenAI, and this is a drop-in replacement for OpenAI. It says right here: start a local HTTP server that behaves like OpenAI's API. So this is such an easy way to use large language models in an application that you're building, and it also gives you an example client request right here. This is curl: we curl the localhost endpoint, chat completions, and we provide everything we need, the messages, the temperature, the max tokens, stream. And then it also gives us a Python example right here, so if we wanted to use this Python example, we could do that. What's awesome is you can just import the OpenAI Python library and use that, but instead replace the base URL with your localhost, and it will operate just the same. So you get all the benefits of using the OpenAI library, but you can use an open-source model. And of course, on the right side you get all the same settings as before, so you can adjust all the different settings for the model. And then the last tab over here looks like a little folder. We click it, it's the my models tab, which allows you to manage all the different models that you have on your computer. Right now it says I have two models taking up 27 GB of space. I don't want this Phind model anymore, it's taking up too much space, so let's go ahead and delete it. I just click delete and it's gone, just like that. It is so easy to manage all of this. And I think I covered everything for LM Studio. If you want to see me cover any other topic related to LM Studio, let me know in the comments below. If you liked this video, please consider giving a like and subscribe, and I'll see you in the next one.
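The drop-in replacement described above can be exercised with nothing but the Python standard library. A minimal sketch of building such a request (the port 1234 and the model name `"local-model"` are illustrative assumptions; use whatever LM Studio's server page actually shows):

```python
import json
import urllib.request

# LM Studio's local server address; the port is whatever you chose when
# clicking "Start Server" (1234 is assumed here for illustration).
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt, temperature=0.7, max_tokens=128):
    """Build an OpenAI-style chat-completions request aimed at the local server."""
    payload = {
        "model": "local-model",  # LM Studio answers with whichever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "stream": False,
    }
    return urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Tell me a joke")
# With the server running, the next line would return the model's JSON reply:
# reply = json.loads(urllib.request.urlopen(req).read())
```

The same request shape works with the official `openai` Python package by pointing its `base_url` at the local server, which is what the Python example shown in the video does.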
Stanford_CS229M_Lecture_14_Neural_Tangent_Kernel_Implicit_regularization_of_gradient_descent.txt
OK. Hello, everyone. Let's get started. So last time, what we did was the NTK, the neural tangent kernel approach. Today we're going to continue with that to finish the last part of the neural tangent kernel approach, and then we'll talk about the so-called implicit regularization effect. So briefly, recall what we did last time. We claimed that there are three steps in this analysis using the NTK approach. One step is that you say that f_theta(x) is close to g_theta(x) in some neighborhood. [INAUDIBLE] Oh, wait. Oh, sorry. Yeah, there are too many steps in the setup, so I always forget some step. The worst step to forget is forgetting to record; that one, you can remind me about, but if I forget to record, then nobody will remind me. So that's the thing I check every time. OK, cool. And I think this is recording. This is recording. And you can hear me on Zoom, right? Maybe-- [INAUDIBLE] it's recording [INAUDIBLE]. That would be great. Nobody seems to say anything, but it sounds like it's receiving audio. OK. So last time, we said that in some neighborhood B(theta_0) around theta_0, you have an accurate approximation. And recall that B(theta_0) was something like a neighborhood whose size depends on sigma: the set of theta close to theta_0 within distance something like sqrt(n) / sigma. And we showed that in this neighborhood you indeed have a good approximation; the approximation error is something like beta * n / sigma^2. All right, that's what we had. And also, point two: in this neighborhood, there exists a global minimum with error 0. Nobody seems to respond to my request about confirming that the audio is working. But I don't know.
There are only four people here. Oh, OK. Well-- OK, so you can hear me. Thank you so much. Great, great. Thanks. It's just sometimes I get paranoid by this. OK, thank you so much. OK, cool. So all right. So this is what we proved last time. And we discussed here that this quantity is the key thing, right? Beta over sigma squared is the very key thing. And if it goes to 0, then that's great. Because your error becomes smaller and smaller. Oh, I see. I cannot hear you. That's the problem. I see. Probably now I can hear it. I think my speaker is very-- the volume is very low. OK, cool. Thanks so much. And now, the third step as we discussed last time, is that-- and we have discussed the various cases, where this Beta sigma squared is close to zero, we discussed two cases. And today we are going to talk about the third step, where we show that optimizing with f theta within a neural network is similar to optimizing with g theta. And in some sense, the only thing you care about is an analysis of the optimization for f theta. But you want to do this kind of like relationship, so that you can make the optimization easier. And we also briefly discussed what we do with this optimization. I think there are two ways or one way you-- there are two ways to deal with the g theta. So I think there are two ways. One is something like using strong convexity. And the other is using only the smoothness. And today, we're going to focus on this case, which doesn't require too much of the background about optimization. All right. So now, let's go into the detail. By the way, I think a small remark before I go into the detail-- so why you care about the step three in some sense? Right? So a priori there's no reason. OK, so there's one reason, which is that you want to understand what happens when you optimize over neural networks, right? 
But suppose we are at a moment where we want to prove this number 3 but haven't succeeded. You would probably question yourself: why do I care about such a result? Even if I prove it, why is it interesting? And the answer is, it's not that interesting, because if you prove that optimizing a neural network is the same as optimizing some linear model like a kernel method, then why not just use a kernel method, right? And it turns out that's indeed true. If you use a kernel method, it's not going to work well; and if you optimize a neural network in this particular way, with this particular learning rate and so forth, it wouldn't work well either. So in some sense, the value of this theorem is only in showing that, in a certain regime, optimizing a neural network is the same as optimizing a linear model. There is no bigger impact, just because you are optimizing the neural network in a weird regime which is the same as optimizing kernels, and in this regime nothing works very well. But still, for the technical reason, I will go through this. It's not super complicated, and I think the techniques are also useful, partly because the result is somewhat surprising: at first, you probably wouldn't believe it. Why would you believe that optimizing a neural network would in any case be similar to optimizing kernels? And this shows it's possible in some cases, even though that case is not that informative or useful. OK. So let's analyze. We start with the first step: analyzing the optimization with g_theta. And this is really just linear regression; it's really understanding the optimization of linear regression.
So the problem is really to minimize, over Delta theta, the objective || y_vec - Phi * Delta theta ||_2^2 with GD. And just to briefly recall the notation: Phi is the feature matrix, of dimension n by p, and each row is the gradient feature at a data point, something like (grad f_{theta_0}(x_i))^T; you stack all of these as rows. What exactly this Phi matrix is doesn't matter so much for the rest of the discussion, because Phi is just a matrix. And Delta theta is the difference between theta and theta_0, and we are just optimizing over Delta theta. And what we do is just gradient descent: the gradient of the objective is proportional to -Phi^T (y_vec - Phi * Delta theta), so the update is Delta theta_{t+1} = Delta theta_t + eta * Phi^T (y_vec - Phi * Delta theta_t). One of the features of this analysis is that you look at the convergence in the output space. In some sense, this is the spirit of the kernel method: you look not at the parameter space but at the output space, which is kind of like looking at the function space, to some extent. But what does that mean? It means you look at the output at time t+1, defined to be y_hat_{t+1} = Phi * Delta theta_{t+1} (and similarly for time t), and you look at how the residual changes over time. So you compare your output at time t with the target output y_vec, and ask how this changes over time. This is just the definition, and you plug in the definition of Delta theta_{t+1}: y_hat_{t+1} - y_vec = Phi (Delta theta_t + eta * Phi^T (y_vec - Phi * Delta theta_t)) - y_vec. And now this requires some rearrangement to make it look cleaner. How do I rearrange this? I guess I'm going to first group everything about Delta theta_t.
So you group the terms. Expanding, you get Phi delta theta_t minus eta Phi Phi transposed Phi delta theta_t, plus eta Phi Phi transposed y vec, minus y vec. The multiplier in front of Phi delta theta_t is (I minus eta Phi Phi transposed), and the multiplier in front of y vec is minus (I minus eta Phi Phi transposed). So you get y-hat_{t+1} minus y vec equals (I minus eta Phi Phi transposed) times (Phi delta theta_t minus y vec), which is (I minus eta Phi Phi transposed)(y-hat_t minus y vec). All of this is basically standard calculation — if you have taken some version of a linear regression course, you have probably seen it. So that's the recursion for the residual of the output. And you can see what happens: the residual from the previous round gets multiplied by this matrix. And what is this matrix? It's a matrix smaller than the identity, because you have I minus something, and the something is a PSD matrix. So you are shrinking your residual in some way every time, and you can quantify how fast. Define tau squared to be sigma_max(Phi Phi transposed), and take eta at most 1 over 2 tau squared; also write sigma for the minimum singular value of Phi. Then you can show that I minus eta Phi Phi transposed has operator norm at most 1 minus eta sigma squared.
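As a quick numerical sanity check of this recursion, here is a minimal numpy sketch (the matrix sizes, seed, and step size are my own choices, not from the lecture): it runs gradient descent on the linearized objective and verifies at every step that the new residual equals (I minus eta Phi Phi transposed) times the old one, and that the residual actually shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 20                        # n data points, p parameters (overparameterized)
Phi = rng.standard_normal((n, p))   # feature matrix, rows = gradients at theta_0
y = rng.standard_normal(n)          # target outputs y_vec

tau2 = np.linalg.norm(Phi @ Phi.T, 2)   # tau^2 = sigma_max(Phi Phi^T)
eta = 1.0 / (2 * tau2)                  # step size eta <= 1/(2 tau^2)

delta = np.zeros(p)                 # delta theta_0 = 0
M = np.eye(n) - eta * Phi @ Phi.T   # the shrinking matrix I - eta Phi Phi^T

residuals = []
for t in range(50):
    r = Phi @ delta - y             # current residual y_hat_t - y_vec
    residuals.append(np.linalg.norm(r))
    # one gradient descent step on ||Phi delta - y||^2 (factor 2 absorbed in eta)
    delta_next = delta - eta * Phi.T @ (Phi @ delta - y)
    # the recursion: y_hat_{t+1} - y_vec = (I - eta Phi Phi^T)(y_hat_t - y_vec)
    assert np.allclose(Phi @ delta_next - y, M @ r)
    delta = delta_next
```

With n much smaller than p, Phi Phi transposed is full rank almost surely, so the residual norm decreases monotonically, exactly as the operator-norm bound predicts.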
But suppose you have this. Then ||y-hat_{t+1} minus y vec|| in 2-norm is at most the operator norm of this matrix times ||y-hat_t minus y vec||, which is at most (1 minus eta sigma squared) ||y-hat_t minus y vec||. And if you unroll the recursion, you get (1 minus eta sigma squared) to the power t+1 times ||y-hat_0 minus y vec||. So you have an exponential decay of error. [INAUDIBLE] Yes, that's a good point. So sigma is the minimum singular value of Phi. And let's prove this claim right now — let's call it number 1. But suppose you have number 1, then you have all of this exponential decay. So now let's prove number 1. How do we do that? Intuitively, it's really just that this is a PSD matrix, and that's why, when you subtract eta times it from the identity, you get operator norm less than 1. But we need to know exactly how small it is — it has to be strictly less than 1, and that's why we need inequality 1. To see this, there are multiple ways, but the way I tend to think about it is this. First of all, sigma is sigma_min(Phi), which is also the square root of sigma_min(Phi Phi transposed) — this is just by the standard property of singular values. So look at the eigenvalues of Phi Phi transposed — because it's a PSD matrix, eigenvalues and singular values coincide. Say the eigenvalues are tau_1 squared down to tau_n squared, where tau_1 squared equals tau squared (that's the definition of sigma_max) and tau_n squared equals sigma squared (that's the definition of sigma_min). Then the matrix we care about, I minus eta Phi Phi transposed, has eigenvalues 1 minus eta tau_1 squared up to 1 minus eta tau_n squared. Right?
Do you think this requires a proof? There are many ways to see it; the best way for me is always to take the eigendecomposition. You write Phi = U Sigma V transposed, where Sigma is the diagonal matrix with tau_1 up to tau_n. Then I minus eta Phi Phi transposed is just I minus eta U Sigma squared U transposed. And since I equals U U transposed, you get U (I minus eta Sigma squared) U transposed. And this is the eigendecomposition (or SVD) of I minus eta Phi Phi transposed — that's why what's inside gives the eigenvalues of the resulting matrix. All right. And now you bound this. For the operator norm, you care about the largest absolute value of the eigenvalues: ||I minus eta Phi Phi transposed||_op equals the max over j of |1 minus eta tau_j squared|. And the choice of eta is exactly to guarantee you never get to the negative side: you make eta at most 1 over 2 tau squared, where tau squared corresponds to the largest singular value, so even the largest tau_j will not make 1 minus eta tau_j squared negative. Everything stays positive. So this maximum is just 1 minus eta tau_n squared, which equals 1 minus eta sigma squared. OK? Yeah, in some sense these are the basics of optimization. But this course doesn't require an optimization background — that's why I'm providing some basic tools here. All right. OK, so basically we are done with the analysis of this linear regression. Once you have this, you know your error is decaying exponentially fast, and after a sufficient number of iterations your error is small. All right.
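The eigendecomposition argument above can be checked numerically in a few lines (a minimal numpy sketch with my own choice of sizes and seed): the operator norm of I minus eta Phi Phi transposed equals the largest |1 minus eta tau_j squared|, and with eta at most 1 over 2 tau squared, that is exactly 1 minus eta sigma squared.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 4, 12
Phi = rng.standard_normal((n, p))

svals = np.linalg.svd(Phi, compute_uv=False)  # tau_1 >= ... >= tau_n
tau2 = svals[0] ** 2                          # tau^2 = sigma_max(Phi Phi^T)
sigma = svals[-1]                             # sigma = sigma_min(Phi)
eta = 1.0 / (2 * tau2)                        # the step-size condition

M = np.eye(n) - eta * Phi @ Phi.T
op_norm = np.linalg.norm(M, 2)                # spectral (operator) norm

# eigenvalues of I - eta Phi Phi^T are exactly 1 - eta tau_j^2
eigs = 1 - eta * svals ** 2
```

With this eta, every eigenvalue 1 minus eta tau_j squared is at least 1/2, so nothing is negative and the maximum absolute eigenvalue is attained at the smallest singular value.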
So basically, maybe let's call this number 2. From 2, for T on the order of log(1 over epsilon) over eta sigma squared iterations, your error ||y-hat_T minus y vec|| is at most epsilon times the initial error. And you can take epsilon to be very small, because T depends only logarithmically on 1 over epsilon. OK. So this is the analysis for g. And now let's talk about the analysis for f. You will see that the analysis for f is very similar to this, but with some tweaks. Maybe let me state the theorem, just so that we have a formal statement somewhere. There exists a constant c0 in (0, 1) such that when this key quantity beta over sigma squared is less than c0, then for sufficiently small eta — how small eta has to be could depend on beta, sigma, maybe the dimension p, and so forth; you can work out a concrete bound, but I'll omit these details so that the explanation is not too complicated — in T = O(log(1 over epsilon) over eta sigma squared) steps, the empirical loss of f_{theta_T} is also less than epsilon. So the empirical loss of the neural network also gets down to epsilon. So how do we do this? I guess we have kind of discussed the intuition already: you always try to relate this to g. And by relating to g, basically, you just try to follow the proof you had before — imitate it as much as possible. And of course there will be some differences, and then you will deal with the differences.
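Before moving to f, the iteration count for the linear case can also be verified directly (a minimal numpy sketch, with sizes and seed of my own choosing): run gradient descent for T = ceil(log(1/epsilon) / (eta sigma^2)) steps and check the residual really has dropped by the factor epsilon, since (1 - eta sigma^2)^T <= exp(-eta sigma^2 T) = epsilon.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 5, 30
Phi = rng.standard_normal((n, p))
y = rng.standard_normal(n)

svals = np.linalg.svd(Phi, compute_uv=False)
tau2, sigma2 = svals[0] ** 2, svals[-1] ** 2
eta = 1.0 / (2 * tau2)

eps = 1e-6
# T = O(log(1/eps) / (eta sigma^2)) iterations suffice
T = int(np.ceil(np.log(1 / eps) / (eta * sigma2)))

delta = np.zeros(p)
r0 = np.linalg.norm(y)                     # initial residual (delta_0 = 0)
for _ in range(T):
    delta -= eta * Phi.T @ (Phi @ delta - y)
rT = np.linalg.norm(Phi @ delta - y)       # residual after T steps
```

The logarithmic dependence on 1/epsilon is the point: halving the target error only adds a constant number of extra iterations.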
By the way, this is a proof sketch, because I'm going to omit some small technical details which are not super important. So the important difference is that you have a changing Phi, in some sense. This is what's different when you have neural networks, compared to the linear regression case. So define Phi superscript t to be the feature matrix at time t — the NTK features when you Taylor expand at time t. And if you Taylor expand at time t, then — I think we have discussed this before, but now you can see it explicitly — the gradient with respect to the neural network is the same as the gradient with respect to the linearized function. I stated this as a remark at the very end before: this is just because the two functions agree at that point up to first order, so even after you compose with the loss function, the gradients still agree up to first order. And here you can see it explicitly. Write down the gradient of the loss function at time t. By the chain rule you get, up to the sign and the factor of 2, 1/n times the sum over i of (y^(i) minus f_{theta_t}(x^(i))) times the gradient of f_{theta_t} at x^(i). You can verify this without even using the remark — it's just the chain rule. And the gradient of f_{theta_t} at x^(i) is exactly the i-th row of Phi^t, and the first factor is the i-th entry of the residual. Maybe let me write this more explicitly: it's (y^(i) minus y-hat^(i)_t) times the gradient.
And then if you write this in matrix form, the gradient of the loss is (1/n) Phi^t transposed times (y-hat_t minus y vec) — Phi^t corresponds to the stacked gradients, and (y-hat_t minus y vec) is the residual vector; there's a 1/n in front. Right, so that's the gradient. And that means the update rule for theta_t — I'm going to use theta_t instead of delta theta_t; they are the same, only different up to a translation — is theta_{t+1} = theta_t minus eta times this gradient. So let's give the gradient a name: call b_t = (1/n) Phi^t transposed (y-hat_t minus y vec), so that theta_{t+1} = theta_t minus eta b_t. OK. So what's our goal? Our goal is to derive a recursion for the y's — that's what we did before, right? A recursion for the y's. And to get that, I have to look at how y-hat changes. y-hat^(i)_{t+1} is one entry of the output at time t+1, and I want to write it in terms of the function output at time t. But this is a nonlinear function. Before, we just did a linear multiplication: if this f were g, then this would just equal Phi times delta theta_{t+1}. Because f is nonlinear, we have to do something else. And what we do is Taylor expand at time t, so that you get a relationship between theta_t and theta_{t+1}. If you Taylor expand, you write the gradient of f_{theta_t} at x^(i) times the difference between the two iterates, plus something higher order. And what is the difference? The difference is a function of eta — the difference is exactly minus eta b_t.
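The chain-rule identity — gradient of the empirical loss equals (1/n) Phi_t transposed times the residual — can be checked against finite differences. This is a minimal sketch on a tiny two-layer tanh network of my own construction (the lecture's model may differ; I also use a 1/2 in the loss so the constant comes out clean):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, n = 3, 8, 5                     # input dim, width, number of examples
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
theta = rng.standard_normal(m * d + m)  # theta = (W flattened, a)

def unpack(theta):
    return theta[: m * d].reshape(m, d), theta[m * d :]

def f(theta, x):                      # f(x) = a^T tanh(W x) / sqrt(m)
    W, a = unpack(theta)
    return a @ np.tanh(W @ x) / np.sqrt(m)

def loss(theta):                      # (1/2n) sum_i (f(x_i) - y_i)^2
    preds = np.array([f(theta, x) for x in X])
    return np.mean((preds - y) ** 2) / 2

def feature_row(theta, x):            # gradient of f w.r.t. theta: one row of Phi_t
    W, a = unpack(theta)
    h = np.tanh(W @ x)
    gW = np.outer(a * (1 - h ** 2), x) / np.sqrt(m)
    return np.concatenate([gW.ravel(), h / np.sqrt(m)])

Phi_t = np.stack([feature_row(theta, x) for x in X])   # n x p
resid = np.array([f(theta, x) for x in X]) - y         # y_hat_t - y_vec
grad_formula = Phi_t.T @ resid / n                     # (1/n) Phi_t^T (y_hat_t - y)

# numerical gradient by central finite differences
h = 1e-6
grad_fd = np.zeros_like(theta)
for j in range(len(theta)):
    e = np.zeros_like(theta); e[j] = h
    grad_fd[j] = (loss(theta + e) - loss(theta - e)) / (2 * h)
```

The two gradients agree to finite-difference accuracy, which is exactly the b_t defined in the lecture.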
So we can write y-hat^(i)_{t+1} = y-hat^(i)_t plus the gradient of f_{theta_t} at x^(i) times (minus eta b_t), plus a second-order term, which is quadratic in the step. Writing it somewhat informally, the second-order term is eta squared times some quantity M_t, because the difference between iterates has an eta in it — so when you square it, you get eta squared. And this is a term I want to ignore. I'm trying very hard here just because I want to ignore this term. And the reason we can ignore it is that it's eta squared: M_t is not a function of eta, so if you fix everything else and take eta to be very small — if you take eta to go to 0 — then the eta squared M_t term is negligible. We can do this more formally, but I don't want to go into so much detail. There is a way to bound M_t by something, and whatever you bound it by, you just say that if eta is small enough, eta squared M_t becomes negligible. That is basically how you do it formally. So if you ignore the second-order term, everything becomes simple. Let's call this equation 3, and put it in vector form. Then you get y-hat_{t+1} = y-hat_t minus eta Phi^t b_t, plus something like eta squared times some constant — I'm going to keep the eta squared term for a little bit, but essentially I want to ignore it. And then you can rewrite this using what b_t is.
b_t is (1/n) Phi^t transposed (y-hat_t minus y vec), so y-hat_{t+1} = y-hat_t minus (eta/n) Phi^t Phi^t transposed (y-hat_t minus y vec). There's a small mismatch with my notes here: for the linear regression case I didn't have the 1/n in the loss function, and now I have the 1/n — but it's not a fundamental difference. So let me just ignore the 1/n forever; you can redefine the loss function however you want. So let's say we don't have 1/n in the loss function. Then, if you subtract y vec from both sides and reorganize, you get y-hat_{t+1} minus y vec = (I minus eta Phi^t Phi^t transposed)(y-hat_t minus y vec). Technically, you still have some eta squared term, which we don't care about. But the point is: if you compare this equation with the recursion before, the only difference is that this matrix is different. Before, you were multiplying by a fixed matrix Phi Phi transposed, and now you are using Phi^t Phi^t transposed. Everything would be the same if this Phi^t equaled Phi — but you don't actually need them to be the same, right? In the original proof, you only need the matrix I minus eta Phi^t Phi^t transposed to be sufficiently smaller than the identity. Right?
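Here is a minimal sketch, again on a hypothetical two-layer tanh network of my own construction, checking that one gradient step really changes the outputs by minus (eta/n) Phi_t Phi_t transposed times the residual, up to an error that scales like eta squared — i.e., the M_t term is genuinely second order:

```python
import numpy as np

rng = np.random.default_rng(4)
d, m, n = 3, 20, 5
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
theta0 = rng.standard_normal(m * d + m)   # theta = (W flattened, a)

def unpack(theta):
    return theta[: m * d].reshape(m, d), theta[m * d :]

def outputs(theta):                        # vector of f_theta(x^(i))
    W, a = unpack(theta)
    return np.tanh(X @ W.T) @ a / np.sqrt(m)

def Phi(theta):                            # rows = gradient of f at each x^(i)
    W, a = unpack(theta)
    H = np.tanh(X @ W.T)                   # n x m hidden activations
    rows = []
    for i in range(n):
        gW = np.outer(a * (1 - H[i] ** 2), X[i]) / np.sqrt(m)
        rows.append(np.concatenate([gW.ravel(), H[i] / np.sqrt(m)]))
    return np.stack(rows)

def taylor_error(eta):
    P = Phi(theta0)
    r = outputs(theta0) - y                # y_hat_t - y_vec
    b = P.T @ r / n                        # b_t = (1/n) Phi_t^T (y_hat_t - y)
    theta1 = theta0 - eta * b              # one gradient descent step
    actual = outputs(theta1) - outputs(theta0)
    predicted = -eta * P @ b               # first-order (linearized) prediction
    return np.linalg.norm(actual - predicted)

e_big, e_small = taylor_error(1e-2), taylor_error(1e-3)
```

Shrinking eta by 10x shrinks the leftover error by roughly 100x, which is the eta squared scaling the argument relies on.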
So suppose we ignore the eta squared M_t term, because it's second order. And suppose at time t, theta_t minus theta_0 is within sigma over 4 beta — so suppose you are not very far away from theta_0. Then you know that ||Phi^t minus Phi|| in spectral norm is at most sigma over 4. This is by the beta-Lipschitzness of Phi — that's our assumption. And that means sigma_min(Phi^t) is not very different from sigma_min(Phi): it's at least sigma minus sigma over 4, which is 3/4 times sigma. So sigma_min(Phi^t) is also good — you still have a lower bound on the smallest singular value, just a little bit weaker, up to a constant factor. And that means ||I minus eta Phi^t Phi^t transposed|| in operator norm is at most 1 minus eta times (3 sigma over 4) squared. All right, very similar to before. But there is an assumption here. This sounds great, right? But there is an assumption, which is that theta_t is not very far away from theta_0. This is something you cannot take for granted — you have to prove it is the case. So that's why we have to argue inductively. Basically, the only thing left is that we need to inductively prove that ||theta_t minus theta_0|| is never too big. And in some sense this is expected, because recall that ||delta theta hat|| in 2-norm — theta hat was the global minimizer we constructed in the last lecture — we said there is a global min within distance of order square root n over sigma, right? And if this is much, much less than sigma over 4 beta — which holds when beta over sigma squared is sufficiently small — then there exists a global min within the region of radius sigma over 4 beta. And if there exists a global min within this region, why should you ever leave this region?
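The key perturbation step — a spectral-norm change of at most sigma/4 can only move the smallest singular value by sigma/4 (Weyl's inequality) — is easy to verify numerically. A minimal numpy sketch, with sizes and seed of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 6, 40
Phi = rng.standard_normal((n, p))
sigma = np.linalg.svd(Phi, compute_uv=False)[-1]   # sigma_min(Phi)

# a perturbation with spectral norm exactly sigma/4, mimicking
# ||Phi_t - Phi|| <= beta * ||theta_t - theta_0|| <= beta * sigma/(4 beta)
E = rng.standard_normal((n, p))
E *= (sigma / 4) / np.linalg.norm(E, 2)
Phi_t = Phi + E

# Weyl's inequality: sigma_min(Phi_t) >= sigma_min(Phi) - ||E|| = (3/4) sigma
sigma_t = np.linalg.svd(Phi_t, compute_uv=False)[-1]
```

So as long as the iterate stays within sigma/(4 beta) of theta_0, the time-varying matrix keeps a singular-value lower bound of (3/4) sigma, which is all the contraction argument needs.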
Right? That's why you should somewhat expect that the iterate always stays within this region. And how do we formally do this? Formally, you just do an induction. We know that (1 over square root n) times ||y-hat_0 minus y vec|| is O(1), because every entry is of order O(1) and you have n entries. And the recursion then implies you actually have an exponential decay of error — but even without caring about that, for every time t you have (1 over square root n) ||y-hat_t minus y vec|| at most O(1). And the residual is, up to first order, Phi applied to (theta_t minus theta hat), because theta hat fits the data: Phi of theta hat equals y vec — theta hat is the one we constructed last lecture. So (1 over square root n) ||Phi (theta_t minus theta hat)|| is at most O(1), and dividing by the smallest singular value, ||theta_t minus theta hat|| in 2-norm is at most O(square root n over sigma) — which is exactly right: you are saying your iterate is not very far away from the target theta hat. And you also know that the target theta hat is not very far away from theta_0: that distance is also at most O(square root n over sigma), because that's what we showed last time. Then by the triangle inequality, ||theta_t minus theta_0|| is at most ||theta_t minus theta hat|| plus ||theta hat minus theta_0||, which is at most O(square root n over sigma). And this is at most sigma over 4 beta if beta over sigma squared is much, much less than 1 over square root n. So this is how you inductively show that theta_t is not very far away from theta_0. Yeah. The steps sound a little bit complicated, but actually the intuition is very simple.
There are probably many ways to prove this; I just presented one. There's already a global minimum there, so there shouldn't be any way for you to leave. What you do is basically say: you have theta hat here, you have theta_0 there, and you know the distance between these two is of order square root n over sigma. And you are optimizing, and in some sense theta hat is your target, because theta hat has the best fit. So you are somewhat moving even closer to theta hat — why should the distance ever get much bigger afterwards, right? That's why this works. If you look at the iterates, you are somewhat moving toward theta hat. So, OK, enough. Out of all of this, we got the inequality: ||y-hat_{t+1} minus y vec|| in 2-norm is at most (1 minus eta times (3 sigma over 4) squared) times ||y-hat_t minus y vec||. And then you can do the recursion to get exponential decay of error. OK. Any questions? I think I made a small typo somewhere in the assumption of the theorem — I need to fix that. I think my assumption should be that beta over sigma squared is less than c0 over square root n. But it doesn't really matter, because, as we saw last time, you can make the width or the scaling bigger, and so you can make beta over sigma squared arbitrarily small. So it doesn't matter very much. Any questions? [INAUDIBLE] Sure. [INAUDIBLE] Yup. [INAUDIBLE] I guess there's one version too. Let me rephrase your question, and let me know if it's not what you asked. So one question you could ask is whether you really rely on the exponential decay in the kernel case to get this relationship between the neural network and the kernel. I think the answer to that is no.
So the second type of approach that I outlined last time, but didn't really go into detail on, doesn't require exponential decay of error. In that case, for both the kernel and the neural network, you can only show some polynomial speed of decay — the error is polynomial in t — but you can still make this relationship. So exponential decay is not that important. But I think this is actually something people realized after the first few papers. The very first papers used this exponential decay, and people thought that because the error decays so fast, that's why you don't leave the neighborhood. But you can do something so that even without exponential decay you still don't leave the neighborhood. Because whether you leave the neighborhood mostly depends on whether there is a global minimum in the neighborhood, right? If there is a global minimum in the neighborhood, but somehow you cannot converge to it exponentially fast, that's still probably fine, as long as you converge to it eventually. All right. I'm not sure whether that's what you asked. [INAUDIBLE] OK. Right, right. You do want to say — and also you want to characterize the neural networks, right? If they didn't have the same property, and you could somehow analyze the optimization of the neural network directly, that would be fine. But the relationship is something to help us bridge the gap between what we knew and what we don't know. The neural network is something we didn't know how to analyze, but the kernel is something we knew. And if they are similar, then you can hope to analyze the neural network. Yeah. So I think that's why we show they are doing something similar. OK. All right. I think I have a little bit more to add about the neural tangent kernel. I guess we've discussed this many times.
The limitation of the neural tangent kernel is that you only, at most, do as well as the kernel method. Right? So basically the question is: how well can a kernel method work? Are we really characterizing the power of deep learning? If deep learning is only doing as well as kernels, is that good or bad? And the answer, I think — at least most people believe this — is that neural networks can do much better things than kernels, and this characterization of neural networks as kernels is not characterizing the true power of neural networks. And you can try to show this in various ways. There are a lot of papers that try to do this — going beyond the NTK approach. If you search for "beyond NTK" or "beyond lazy training," you'll see a bunch of papers, including some of my papers, that try to analyze deep learning in different regimes. But there is a simple separation if you don't care about the optimization performance — if you only care about the statistical aspect, the power of regularization, you can easily show that neural networks can do better things than kernels. And this is an example — an example where NTK, or any kernel method, is statistically limited. And in some sense, the intuition is that the limitation comes from the fact that the kernel, the features, are fixed in the NTK approach. You don't have any adaptivity to the data: your data probably wants to use some particular features, but you are using a fixed set of features for the data. And here is a simple, concrete case. Suppose x is in R^d and y is in {+1, -1}. And let's say each coordinate x_i is i.i.d. uniform over {+1, -1}. (The superscript is for the examples, and the subscript is for the dimension.)
And let's say y equals x1 times x2. So we have a very simple target function: just the product of the first two dimensions of the data. If you draw this — suppose this axis is x1 and this is x2 — you have four different combinations: two positive examples and two negative examples, positioned diagonally. So this is not linearly separable, because the four points are positioned like an XOR. You have to use a nonlinear model, or a linear model on some feature space. All right. So suppose you use neural networks, and suppose you regularize the l2 norm of the parameters. This is equivalent to regularizing the norm we discussed — the complexity measure C(theta), which is something like the sum over i of |a_i| times the norm of w_i. I'm not sure whether you still remember this: when we have the neural network y = sum of a_i sigma(w_i transposed x), you can define this complexity measure, which is kind of the path norm, right? And we have shown that regularizing the l2 norm of the parameters is the same as regularizing this complexity measure, which gives the generalization guarantees. So we have discussed this. And suppose you use neural networks: what you'll find is that the best solution — by the best solution, I mean the minimum-norm solution — is a sparse one. It uses a sparse combination of neurons. In this case you can exactly compute what the best solution is. I'm not going to prove it, but I think it's relatively believable. So the best solution, first of all, doesn't use any of the other dimensions. That seems believable, right? Why would you use any other dimensions if your target is only a function of the first two dimensions? You only need something about the first two dimensions.
You only need the following four neurons. One neuron computes ReLU(x1 + x2), another computes ReLU(-x1 - x2), another computes ReLU(x1 - x2), and another computes ReLU(x2 - x1). And I claim that 1/2 times (the first two minus the last two) equals the target function. To verify that: ReLU(t) plus ReLU(-t) equals |t|, so the combination is 1/2 times (|x1 + x2| minus |x1 - x2|). And now I claim this equals x1 times x2, where x1 and x2 are both binary. How do you see this? The only way I can see it is to try all four combinations. If x1 and x2 have the same sign — both 1 or both -1 — then |x1 - x2| is 0 and |x1 + x2| is 2, so you get 1/2 times 2, which is 1, and indeed the product is 1 in that case. And if x1 and x2 have different signs, the first term is 0 and the second term is 2, so you get 1/2 times minus 2, which is -1. Right. So good. So basically, if you use a neural network and you regularize, you can show that this is the solution it finds, which is a very sparse combination of a small number of features. In some sense, when you use regularization, you find these four features and do a linear combination of them — and these four features are the right features for this task. However, suppose you use NTK. Then you just don't learn any features: you do a dense combination of your existing, fixed features.
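The four-neuron construction can be checked exhaustively — it's only four input combinations. A minimal sketch (the extra input coordinates and output weights are my own concrete instantiation of the construction described above):

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def net(x):
    # four neurons: ReLU(x1+x2), ReLU(-x1-x2), ReLU(x1-x2), ReLU(x2-x1),
    # combined with output weights (1/2, 1/2, -1/2, -1/2); all remaining
    # coordinates of x get zero input weight, so they are ignored
    x1, x2 = x[0], x[1]
    return 0.5 * (relu(x1 + x2) + relu(-x1 - x2)
                  - relu(x1 - x2) - relu(x2 - x1))

# the network equals x1 * x2 on all four corners of {+1,-1}^2
for x1 in (+1.0, -1.0):
    for x2 in (+1.0, -1.0):
        x = np.array([x1, x2, 0.3, -2.0])   # extra coordinates are ignored
        print(x1, x2, net(x))
```

Since ReLU(t) + ReLU(-t) = |t|, the network computes (|x1+x2| - |x1-x2|) / 2, which is exactly x1*x2 on binary inputs.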
But roughly the intuition is that your prediction will be something like a sum of a_i phi_i(x): there is a bunch of fixed features phi_i, and each feature uses all the dimensions. Exactly what the features are depends on what kernel you use — the NTK kernel gives one feature vector, a random-features kernel gives some other features. But whatever the features are, each one is a function of all the coordinates: you cannot specialize to a particular subset of features. And also, because you are finding the minimum l2 norm solution for the coefficients in front of the features, you don't prefer any sparse solution. So if you look at NTK, what you do is: minimize the l2 norm of this vector a such that sum of a_i phi_i(x^(j)) equals y^(j) for all j. And for the neural network, I think we have claimed that the neural network is the same as an l1 SVM in a feature space — so the corresponding thing would be: minimize the l1 norm of a such that sum of a_i phi_i(x^(j)) equals y^(j). So in some sense, when you do neural networks and you have a lot of features, you are choosing a sparse subset of features. And when you do NTK, you are minimizing the l2 norm, and that never gives you sparse combinations — it actually prefers dense combinations. It's the reverse direction: you want as smooth a combination of the existing features as possible. So that's why you have to pay more samples if you use NTK, because you are using kind of suboptimal features. And this can be proved in this case.
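The sparse-versus-dense contrast can be illustrated numerically. A minimal sketch of my own design, not from the lecture: the labels are realizable by a single feature, the minimum l2 norm interpolant (what kernel regression finds) spreads weight over all features, while an approximate minimum l1 norm interpolant — computed here by iteratively reweighted least squares, a standard heuristic for basis pursuit — concentrates on a few.

```python
import numpy as np

rng = np.random.default_rng(6)
n_feat, n_data = 8, 3
Phi = rng.standard_normal((n_data, n_feat))   # column i = feature phi_i on the data
y = Phi[:, 0].copy()                          # labels realizable by ONE feature (a = e_0)

# minimum l2 norm interpolant (kernel-style solution): dense
a_l2 = np.linalg.pinv(Phi) @ y

# approximate minimum l1 norm interpolant via iteratively reweighted least
# squares: each iterate solves a weighted least-norm problem and stays feasible
a_l1 = np.ones(n_feat)
for _ in range(60):
    W = np.diag(np.abs(a_l1) + 1e-8)          # small floor keeps the solve stable
    a_l1 = W @ Phi.T @ np.linalg.solve(Phi @ W @ Phi.T, y)

nnz = lambda a: int(np.sum(np.abs(a) > 1e-3))
print("l2 nonzeros:", nnz(a_l2), " l1 nonzeros:", nnz(a_l1))
```

Both solutions fit the data exactly, but the l1-style solution uses far fewer features — the point being made about neural networks versus NTK.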
You can prove this as a theorem: the kernel method with the NTK kernel requires n = Omega(d squared) samples to learn this problem to error less than 1, while in contrast the regularized neural network only needs n = O(d) samples. Any questions about this? I think this part is a little bit hand-wavy, because I didn't want to go into all the details, and it also depends a little bit on what we discussed in the past — what I used is the connection between neural networks and the l1 SVM. Any questions? So maybe just to wrap this up once again: if you do neural networks with regularization, then we have shown this is equivalent to an l1 SVM in a feature space — we are trying to find the sparsest combination of features that fits our data. Right? And in this particular example, it's pretty intuitive that finding a sparse combination is useful, because not all the features are equally useful: the features we designed are much better features than a random feature. And that's why the neural network with regularization can have good sample complexity. On the other hand, when you do the NTK kernel, or most other kernels, you are not trying to find a sparse combination of the features — you are trying to find a dense combination, because you are finding the minimum l2 norm solution. And each of the features is a function of all the coordinates of the data point, so the features are not that useful in some sense: there is a lot of noise in the features, and you have to rely on averaging out the noise over many features to learn something. You can still learn something, but it's going to be less efficient. Right. I think that's the summary. OK. So if there are no other questions, I'm going to move on to the next topic, which is about the implicit regularization effect.
I'm not sure whether you still remember what we discussed in the mystery of the deep learning theory section. So I'm going to briefly repeat kind of the high-level goal here. So the observation we had about the empirical deep learning is that we found that there are multiple global minimum of training loss exist and the optimizers has some preference-- have some implicit preferences. And we have claimed that almost every aspect of the optimizers have some preferences. For example, if you use the particular initialization that enables NTK, then you have the NTK preferences. You are learning the NTK solution. And if you use some other initialization you have some other preferences, and we have kind of concluded that if the NTK solution is the wrong preference-- so like you don't do much beyond the kernel method, you actually do exactly the same as kernel method-- so basically that means you are finding the wrong global minimum that doesn't necessarily generalize as well as other global minimum. So from now on, we're going to try to look at other global minimum of this objective and see what other optimizers prefer. So if you use different optimizers, you may prefer a solution that is different from the NTK solution. [INAUDIBLE] NT-- oh yeah. What does it mean-- [INAUDIBLE] Right. So why I call it NTK initialization, right? So the NTK initialization basically mean the initialization under which you can prove the NTK result. So maybe specifically, I think last time we have two examples, right? So one example is-- maybe this example is-- [INAUDIBLE] Right, right, right. [INAUDIBLE] Right, right, right. So for example, I think, just when have to weight things. So like where there is-- something like here, I think we have this overparameterized model. We have this width. And we initialize with ai to be plus 1 minus 1. And wi to be this spherical Gaussian. And you can, for example, initialize something with something much smaller. Right? 
And actually, if you really do the experiments for this exact parameterization, you should initialize either the ai's or the wi's to be some square-root-of-n factor smaller. And then you're going to see much different empirical results. And actually, we have done this-- it's in the paper. Many people have done this. It's a relatively simple experiment. So here, you can already say that initialization is the culprit, right? And for the other case, I think when you change the parameterization to get the NTK regime, you can say that the parameterization is the culprit. And also, even in this case-- even if you initialize the same as the NTK-- suppose you do stochastic gradient descent, and you have sufficiently large stochasticity-- it doesn't have to be super large, but a little bit larger than zero-- then you will leave that initialization, and you're going to converge to some other places. So that's another way to leave the NTK regime. All right. So we are going to discuss these other kinds of ways. Basically, what we will discuss next is either using [INAUDIBLE] to leave NTK or using stochasticity. And what else? You can also use the learning rate. The learning rate is almost the same as stochasticity, because if you have a larger learning rate and you have SGD, then in some sense your stochasticity is bigger. Right. So the first thing I'm going to do is the implicit regularization effect from initialization. So first is this effect of initialization. And you will see that in certain cases-- we don't necessarily really care about leaving NTK, we really care about having better generalization. So that means you have to leave NTK, but you probably have to do more than that to get better generalization. So this is what we're going to do in the next 15 minutes of this lecture, and the next lecture, the effect of regularization.
And I'm going to start with a simple case: the overparameterized linear regression case. You need overparameterization because, especially if you consider linear models, one of the important things is that you have to have multiple global minima-- otherwise there's no so-called implicit regularization effect, right? Because optimizers have to converge to a global minimum, you have to have multiple global minima so that the optimizers have a choice between them. So that's why we need overparameterized regression, so that you have multiple global minima. Actually, there is an infinite number of global minima when you have overparameterization. And we will see that in this case, small initialization prefers a low-norm solution. And this is also the case when, in the next lecture, we go beyond linear models. The high-level conclusion is the same: if you use small initialization, then you prefer a lower-norm solution. And today, in the next 15 minutes, we're only going to do the linear models. And this is actually not that hard. So let's set up first. So this is a standard linear regression case. For this lecture, I'm using the lower subscript for the index of examples, just because if you look up any linear regression book, they use subscripts for examples; elsewhere we don't do this. So each of these xi's are examples-- example i-- and you put them into a matrix. And let's assume x is full rank, so it means rank n. And let's also assume n is much smaller than d, OK? And we have a parameter beta. So you get a loss function: the 2 norm squared of y vec minus x beta. You have a 1/2 here just for convenience. This is my empirical loss, OK? So this is standard linear regression. And indeed, L hat beta has an infinite number of global mins. And you can actually characterize exactly what those global mins are. And all the global mins have loss 0. So what are the global mins?
So suppose you take beta to be x-pseudoinverse times y vec plus some vector zeta, where zeta is any vector that is orthogonal to x1 up to xn. Then beta is a global min. So as long as your beta has this form, it's a global min. And these are actually all the global mins. And actually here, I think last time someone asked about the pseudoinverse. Maybe let me quickly go over some basic properties of the pseudoinverse. I guess my way of thinking about it is probably slightly different from Wikipedia. So the way I always think about the pseudoinverse is the following. I always think about it via the SVD, because with the SVD I can verify everything, so that I don't have to remember it. So suppose you have a matrix x of dimension n by d. And suppose x is of rank r-- of course, r has to be at most both n and d. So the way I remember every property of the pseudoinverse is the following. I consider the SVD of x, which is u sigma v-transpose. And sigma is of dimension r by r-- let's say you drop all the zero singular values. And u is of dimension n by r. And v is of dimension d by r. So then you know that the column-span of u is the same as the column-span of x. And the column-span of v-- because there's a transpose here, right?-- so the column-span of v is the row-span of x. And you also know that the pseudoinverse, in this notation, you can think of as defined to be v sigma inverse u-transpose. Here sigma is a diagonal matrix with entries sigma 1 up to sigma r, and the sigma i's are all positive, so this inverse is well defined. And x-pseudoinverse is just this: v sigma inverse u-transpose. And now if you want to understand the properties of the pseudoinverse, you can verify them yourself. So x times x-pseudoinverse is going to be what? It's going to be u sigma v-transpose v sigma inverse u-transpose. v-transpose v is the identity. So you get u u-transpose, right? Because v-transpose v is the identity, and sigma times sigma inverse is the identity.
So what is this? This is the projection onto the column-span of x. It's the projection onto the column-span of u, and the column-span of u is the same as the column-span of x. And x-pseudoinverse times x-- if you do the same calculation, this is going to be equal to v sigma inverse u-transpose times u sigma v-transpose, which is v v-transpose, which is the projection onto the row-span of x. And you can also see the dimensions match, because this is a matrix of dimension d by d. And the rows of x are in dimension d; the columns of v are in dimension d. So in this case, where x is n by d and the rank is n, you know that x x-pseudoinverse is the projection onto the column-span of x. And the column-span of x now is everything-- the columns span all of R^n. So that's why this is just the identity. And x-pseudoinverse x, this is the projection onto the row-span of x. So how many rows are there? There are n rows of x. And they don't span everything, because the dimension d is bigger, so you cannot span everything. So this is why this is not the identity. This is really just the projection onto the row-span of x. You cannot simplify it more, right? So it's a little bit long as a building block, but OK. I hope this helps. This is how I understand the pseudoinverse. I never remember what x x-pseudoinverse is equal to; this is how I remember. So now we have so many global minima, right? I think with this, it's easy to verify that these are global minima. You can verify this beta is a global minimum because you can take x beta, which is equal to x x-pseudoinverse y vec plus x zeta. The zeta is orthogonal to the rows of x, so x zeta is 0. So you get x x-pseudoinverse y vec, and x x-pseudoinverse in this case is the identity, so you get y vec, right? I just claimed that x x-pseudoinverse is the identity. OK. So that's why x beta is equal to y vec.
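The pseudoinverse properties above are easy to check numerically. A quick NumPy sketch with made-up toy dimensions, following the SVD definition from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 10
X = rng.standard_normal((n, d))    # full rank n with probability 1, and n << d

Xp = np.linalg.pinv(X)

# x x^+ is the projection onto the column-span of x; with rank n,
# the columns span all of R^n, so it is the identity.
print(np.allclose(X @ Xp, np.eye(n)))

# x^+ x is the projection onto the row-span of x: symmetric and idempotent,
# but NOT the identity, since n rows cannot span R^d.
P = Xp @ X
print(np.allclose(P, P.T), np.allclose(P @ P, P), np.allclose(P, np.eye(d)))

# Consistency with the definition x^+ = v sigma^{-1} u^T from the thin SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
print(np.allclose(Xp, Vt.T @ np.diag(1.0 / s) @ U.T))
```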
That's why it's a global minimum. And the question is which global minimum you're going to converge to. So the theorem is that if you run gradient descent on L hat beta with initialization beta 0 equal to 0 and a sufficiently small learning rate-- and actually you can know exactly how small it has to be, I just don't want to give you too much jargon-- so if the learning rate is small enough and the initialization is 0, then this converges to the minimum-norm solution. So the minimum-norm solution beta hat is defined to be the solution with minimum 2 norm among all global minima of the loss function. So basically, you get this 2 norm for free, right? You are minimizing this; you didn't say that I want to have the minimum-norm solution. You just say I want to do gradient descent. But you get the minimum-norm solution for free. And the reason why you get it is because you express your implicit preferences through the initialization. OK, cool. So yeah, I think I have 5 minutes, which is perfect for the proof sketch. I guess this is actually really a proof, but I think I ignored some small details-- that's why I call it a sketch. So the first step is that, by standard convex optimization, you know that this goes to 0 as t goes to infinity. You know that if you run for a long time, then your loss will become 0. I'm not going to show how to do this, but you can invoke any off-the-shelf optimization result. And the second thing is that you know that beta hat is actually equal to x-pseudoinverse times y vec, right? So we know that all of these are global minima. But if you take zeta to be 0, then that's the-- Yes, if you take zeta to be 0, then that's the minimum-norm solution. Note there is no x in front here. And this can also be simply verified. So for any zeta orthogonal to x1 up to xn, you look at the 2 norm of x-pseudoinverse y vec plus zeta.
This is equal to the x-pseudoinverse y vec 2 norm squared, plus the zeta 2 norm squared, plus 2 times the inner product of x-pseudoinverse y vec and zeta. And this is at least the x-pseudoinverse y vec 2 norm squared plus 0, because the zeta norm squared is at least 0 and the cross term is equal to 0. Why is the cross term equal to 0? So what is the cross term? It's 2 times zeta transpose x-pseudoinverse y vec, right? And I claim that this is equal to 0. And why is this 0? I guess this is actually a good way to practice what I said. So x-pseudoinverse is v sigma inverse u-transpose. So the column-span of x-pseudoinverse is the same as the row-span of x. And zeta is orthogonal to the rows of x, which means that zeta is orthogonal to the columns of x-pseudoinverse, right? So zeta transpose times x-pseudoinverse y vec is zeta against the columns of the pseudoinverse, and that's why everything is 0, right? So this is 0. So zeta is orthogonal to the column-span of x-pseudoinverse, which is equal to the row-span of x. So basically, you see that the norm only increases if you make zeta nonzero. That's why when zeta is 0, that's the minimum-norm solution. Right, so 3-- I guess 1 and 2 are basic facts about this linear regression thing. 3 is what's really about the optimization: you can prove that beta t is in the span of x1 up to xn. You can prove this inductively. And why is this the case? This is a super simple induction, because beta t plus 1 is equal to beta t minus eta times the gradient at beta t. And what's the gradient? The gradient is x-transpose times y vec minus x beta, up to a sign. So this is in the column-span of x-transpose, which is the row-span of x. So basically, your update is always in the row-span of x. That's why you never leave this span, right?
Maybe I should start with: beta 0 is in this span-- beta 0 is in the span of x1 up to xn. And each time you update, the update is in the span of x1 up to xn. So by induction, you get that you're always in this span. So basically then, because you're always in this span, the only solution with L hat beta equal to 0 in this span is this one, right? Because what are the solutions with error 0? The solutions with error 0 are these ones. And among these, which are in the row-span of x? Only the first term is in the row-span of x, because all the others are not in the row-span of x-- they are orthogonal to the row-span of x. So the only solution that is in the row-span of x is just the first term. You just take the first term. And that happens to be the minimum-norm solution. And that's why you get the minimum-norm solution. And basically, all the magic comes from this, right? So basically, this is a regularization, in some sense. This is a constraint imposed by the algorithm. The algorithm says that you cannot go everywhere. The algorithm says you can only go to those places that are in the span of the data. So that's why you have to stay in the span of the data. And it happens that in the span of the data there is only one solution, and that solution is the minimum-norm solution. So in some sense, if I draw a picture-- OK, I guess I'm running late, but real quick. So if I draw a picture-- I think this is a very difficult picture to draw, but you can still try it. So maybe say this blue direction is the span of the data. Let's suppose there's only one data point, so the span of the data is only one-dimensional. And then you have a subspace of solutions which is orthogonal. So this is orthogonal here to the span. This is the set of solutions, all right? So it's orthogonal to the span of the data, and the intersection point is the target solution.
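The theorem and the span argument can be replayed numerically: run gradient descent from beta 0 = 0 on a toy overparameterized least-squares problem and compare with the pseudoinverse solution. The dimensions and learning rate below are made-up choices that happen to be stable for this random instance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 30
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Gradient descent on L(beta) = 0.5 * ||y - X beta||^2 from beta_0 = 0.
beta = np.zeros(d)
eta = 0.01                               # small enough: eta < 2 / lambda_max(X^T X)
for _ in range(20000):
    grad = X.T @ (X @ beta - y)          # always lies in the row-span of X
    beta -= eta * grad

beta_min_norm = np.linalg.pinv(X) @ y    # x-pseudoinverse times y vec

print(np.allclose(X @ beta, y, atol=1e-6))          # loss driven to ~0
print(np.allclose(beta, beta_min_norm, atol=1e-6))  # converged to the min-norm solution
```

Starting from a nonzero beta 0 with a component outside the row span would break the second check, matching the Q&A below about needing to start in the span.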
So the intersection point is really x-pseudoinverse y vec. So you start with this. You try to reach this purple plane, because that's what the optimization wants to do. The optimization wants to reach the purple plane. But the optimization also says you can only go in the blue direction. And so that's why you meet at the intersection. And the intersection is the closest point to the origin. OK, I guess that's, yeah. [INAUDIBLE] Yeah. So do you need the condition that you start in the span, right? So, yes, you do. Because if you don't start in the span-- suppose, for example, you start here. So what happens is that you can only move in this direction. That's what the algorithm says-- the algorithm says the update is in the span. So all your changes are in the span. So basically you go this way, and you hit here. So then this place is not the minimum-norm solution anymore. This place is going to have some higher norm than the ideal point. [INAUDIBLE] Yes. So you can say the implicit regularization effect always happens. But the effect is the minimum-norm solution only if your initialization is 0. You always have your preferences. So whatever you do with initialization, you have some preferences about which global minimum you want to converge to, right? And if you want the preference to be the minimum-norm solution, then you really have to choose 0 as the initialization. Any other questions? [INAUDIBLE] So the question is whether there's any hope that this can transfer to nonlinear cases. I think here we are using a lot of things about linear algebra. So we know what the minimum-norm solution is, and we have the orthogonality, everything, right? When you have nonlinearity, you don't have most of this. So those very heavily linear-algebraic parts that we discussed probably don't transfer at all. But at least we can find one other situation where, when you have nonlinear models, you still prefer the minimum-norm solution.
And that's next lecture. But the mechanism is not exactly the same. So the only connection between next lecture and this lecture is that the final message is the same. But the techniques are quite different. We still don't know how to unify them in the right way. [INAUDIBLE] Right, right. [INAUDIBLE] Yeah, yeah. So you are absolutely right. So the difficult case comes from the very, very small learning rate case-- the infinitesimally small learning rate case. So even for an infinitesimally small learning rate, you basically have a differential equation, right? So you just have a trajectory, and you want to know where the trajectory goes, right? I don't know too much about differential equations, but I think the problem is how to solve that equation. You know a solution exists. You know there's a trajectory. But where this trajectory really goes-- that's the hard part. I'm not aware of any papers that use tools from differential equations heavily. So this is a useful language, right? From the formulation perspective, the differential equations language is very useful. But typically the hard part is, how do you solve it? [INAUDIBLE] In some cases, you can. I know one paper where you can solve it, using the structure of the problem. You have to literally solve it using some new math. It's not like you can invoke a theorem from the differential equations literature saying these kinds of questions can all be solved. I don't think so. OK, sounds great. OK, cool. See you next week.
[Stanford CS229: AI/LLM lecture series]
[Stanford CS330: Variational Inference and Generative Models (2022), Lecture 11]
So for this week, we are going to be talking about a Bayesian perspective on meta-learning. And today's lecture is going to be a little bit different than a lot of the lectures we've had so far because we're really not going to be talking that much about meta-learning in this lecture, we're going to be talking about doing approximate Bayesian inference via variational inference. And the reason why we're going to talk about that today is that it will be a lot of really useful background knowledge for understanding Bayesian meta-learning algorithms. And it's also pretty useful outside of meta-learning algorithms as well. And so we'll see a little bit of motivation for why these things matter in the context of learning, and we'll talk a lot about Bayesian meta-learning on Wednesday. But today we're going to go over some kind of really fundamentals on how to do Bayesian inference with more complex distribution classes. Awesome. So the plan for today is to talk about a class of probabilistic models called latent variable models. Then we're going to talk about how to actually train these latent variable models with variational inference. Then we'll talk about something called amortized variational inference, which will address a key shortcoming of trying to do non-amortized variational inference. And then we'll cover a couple example latent variable models. So the goals of today will be to understand how these latent variable models work, including in deep learning settings, and also understand how to use variational inference in order to train these models. Also, some of the stuff will be useful for doing the optional homework 4. It will be really themed about-- one of the questions will be themed on Bayesian meta-learning. But one of the derivations in that homework will draw upon some of the ideas that we talk about today. Cool. So let's get started by talking about probabilistic models. So in probabilistic modeling, we want to model some distribution. 
So for example, we may have some distribution p of x where we have samples from that distribution. We have some example data points. And we want to formulate a probabilistic model that can generate those data points or evaluate the likelihood of those data points. So for example, we might try to fit a Gaussian model to these data points, and that would allow us to generate other data points like that or generally model the distribution over those data points. Similarly, we might have a conditional probability distribution that we want to model where this won't be-- over the examples, we'll actually be conditioning on one variable in order to try to formulate a probabilistic model of another random variable. And we've already seen examples of conditional probabilistic models in this course. For example, trying to predict a distribution over the labels given the input. And most commonly, you'll view-- you'll be looking at things like probability values over a discrete categorical distribution or outputting the mean and variance of a Gaussian distribution. And so instead of actually literally outputting the label y, we often actually output the values of a probability distribution. And so that's what kind of the logits are. But of course, it doesn't have to necessarily be a categorical distribution or a Gaussian distribution. Those are extremely common in the literature. But there are all sorts of other distributions that we might want to formulate, and there's things that are much more expressive than categorical or Gaussian distributions. Now when we want to go about training a probabilistic model, we will formulate our model, so it could be p theta of x, it could be p theta of y given x. And we have data from that we're assuming to be from that distribution that we want to be able to model. And typically, we will formulate a maximum likelihood objective where our goal is to find the model for which it maximizes the likelihood of the data. 
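To make the maximum likelihood objective concrete: for a Gaussian output with fixed variance, the negative log-likelihood is the squared error up to constants, and for a categorical output it is the cross-entropy. A quick numeric check with made-up predictions and targets:

```python
import numpy as np

# Gaussian case: p(y | x) = N(f(x), 1).  NLL = 0.5 * (y - f)^2 + const.
y = np.array([0.5, -1.2, 2.0])     # made-up targets
f = np.array([0.3, -1.0, 1.5])     # made-up model predictions
nll_gauss = np.sum(0.5 * (y - f) ** 2 + 0.5 * np.log(2 * np.pi))
sse = 0.5 * np.sum((y - f) ** 2)
print(np.isclose(nll_gauss - sse, 1.5 * np.log(2 * np.pi)))  # differs by a constant only

# Categorical case: NLL of the observed label = cross-entropy loss.
p = np.array([0.7, 0.2, 0.1])      # made-up predicted class probabilities
label = 0
nll_cat = -np.log(p[label])
print(np.isclose(nll_cat, -np.log(0.7)))
```

So minimizing the NLL and minimizing mean squared error (or cross-entropy) give the same optimum; they differ only by an additive constant.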
And this will be kind of arguably the best fit for the data that we have observed. So this sort of objective is very easy to evaluate and differentiate for categorical distributions and Gaussian distributions, which is one of the reasons why we often use categorical and Gaussian distributions. And this maximum likelihood objective corresponds to the cross-entropy loss and the mean squared error loss that we all use and like. So really the goal of this lecture is to try to go beyond categorical or Gaussian distributions and try to model and train more complex distributions. Cool. So why might we want to train something more complex? So maybe we want to instead of training a normal distribution, we want to train a paranormal distribution, or instead of generating a label maybe, we want to generate an image or we want to generate text or video or something like that. And here's an example of a video generative model. So we might want to generate video of an HD video of riding a horse in the park at sunrise. And we don't just want it to give us one video, we want to give us kind of a range of videos that capture that description. Another example of a more complex scenario, we might want a more complex distribution is we want to represent uncertainty over labels. Maybe there's some ambiguity arising from limited data or some partial observability. And it may be that the true distribution over the labels may not be a unimodal distribution. It may be a multimodal distribution that's difficult to capture with a Gaussian distribution or a categorical distribution. And second, we may also want to represent uncertainty over functions rather than just uncertainty over examples or over labels. Now what do we mean by that? Well, in meta-learning, we have been representing essentially point estimates of distributions over functions. So we've often been trying to estimate what are our task specific parameters given a small training data set and our meta parameters. 
And typically, we just output a single set of task-specific parameters phi i. But there are cases where we might actually want to fully represent the distribution over task-specific parameters. So for example, you might have a few shot learning problem where there's some ambiguity. So say for example, this is like a small training data set or a small support set for a classifier. We have some positive examples on the left, some negative examples on the right. And then say I gave you this example right here. If you look at the attributes of the positive examples and the negative examples, you might notice that everyone in the positive examples is smiling, and everyone in the positive examples is also wearing a hat. And this person is not smiling, and they're wearing a hat. And so it's inherently ambiguous what the correct label should be. Is the classifier supposed to pay attention to the facial expression? Is it supposed to pay attention to whether they're wearing a hat? And there might be another example like this one where it's also somewhat ambiguous with respect to age. Maybe the classifier is supposed to also be looking at the age of the person. So this sort of ambiguity can come up and we may want to not just output a single classifier, but actually output multiple classifiers that might kind of generate hypotheses underlying the data that we see. So yeah. Can we learn to generate hypotheses about the underlying function so that we can sample from this distribution over task-specific parameters? And this ability to reason about ambiguity might be important in a few different settings. It might be important in really safety critical settings. So if you want to do something like medical imaging, then it'll be useful to know if your class-- if you're pretty confident about the function that you have or not. It can also be important for active learning settings. 
So if you want to figure out-- if you want to generate-- like should you generate more examples or more data points? Basically, can the algorithm tell you, yes, I need more labels or no, I'm actually pretty confident about this example? And there's actually a number of works that have looked at active learning with meta-learning algorithms. And it can also be useful in meta reinforcement learning as well. If you want to figure out how to explore a new environment with a small amount of data, reasoning about the uncertainty that you have can help you figure out what parts of the environment to explore and what parts of the environment are already-- kind of you're already sufficiently confident. OK. So those past few slides are a couple of motivation for why we might want to train more complex distributions. And then the rest of this lecture will really be on actually trying to do that. Cool. So let's start by looking at a couple examples of latent variable models. And some of them might start with examples that you've seen before. So say you have examples of some data points that look like this, and you want to fit a model to these data points. If you just fit a Gaussian to this, you probably wouldn't get a very good fit to this data. And so what you might use instead is something called a mixture model where you try to fit-- you have a few different components. And each component is one component of your mixture. And this would be an example of something like a Gaussian mixture model where the underlying data points, you don't know the underlying structure in which data points correspond to which mixture component, but through the modeling process you want to model both the mixture components and the parameters of each mixture component. And so mathematically, the way that you would write this down is something like this where your mixture component is z and your parameters of each mixture corresponds to the p of x given z. 
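A concrete sketch of this p(z), p(x given z) structure, with made-up mixture weights and component parameters: sample the latent mixture component z from a categorical p(z), then sample x from the conditional Gaussian p(x | z); evaluating the density marginalizes z out.

```python
import numpy as np

rng = np.random.default_rng(0)

pi = np.array([0.3, 0.5, 0.2])     # p(z): categorical mixture weights (made up)
mu = np.array([-2.0, 0.0, 3.0])    # p(x|z): component means (made up)
sigma = np.array([0.5, 1.0, 0.7])  # p(x|z): component standard deviations (made up)

def sample(m):
    z = rng.choice(3, size=m, p=pi)                   # sample the latent z first
    return mu[z] + sigma[z] * rng.standard_normal(m)  # then x given z

def density(x):
    # p(x) = sum_z p(z) p(x | z): the latent variable is marginalized out.
    comp = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
           / (sigma * np.sqrt(2 * np.pi))
    return comp @ pi

xs = sample(100_000)
print(xs.mean())   # close to the mixture mean, pi . mu = 0.0
```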
And in this particular case of a Gaussian mixture model, p of z would be a categorical distribution. And then p of x given z would be a conditional Gaussian distribution. Cool. So you may also have a conditional distribution that you want to model also with something like a mixture. And to do that, you can basically just take the same Gaussian mixture model and condition everything on the variable that you're conditioning. So this would now be instead of p of z you could condition your mixture element on x. So you'd have p of z given x. And instead of having your conditional Gaussian distribution only be conditioned on z, you would also condition it on the variable that you're conditioning on, which in this case is x. And this is referred to something-- It's really basically, just a conditional Gaussian mixture model, but it has this fancy name of a mixture density network. And so as an example of what this might look like is if you want to classify or if you want to regress to the length of a paper given the title. Then instead of just outputting the mean length that you think or the mean and variance of the length of the paper, you would actually output multiple mean and variances and the weight of the corresponding Gaussian. And so typically for a standard regression problem, we may even only output the mean just a single value corresponding to the length of the paper. And this sort of model will actually represent kind of a much more rich notion of what you think the label might be. And instead of outputting just one number, you would output n times 3 numbers where n is the number of mixture components in your mixture model. So these are a couple of examples of latent variable models. Now we can look at the more general case. So in general, our goal with latent variable models is to model a fairly complicated distribution, such as the distribution here. And the way that we do that is first we formulate a latent variable z. 
And it's called latent because we don't observe z. We only observe x. We don't observe z And then we sample from that latent variable. This is going to be a relatively simple distribution, such as a Gaussian. And then we'll pass a sample from that variable into a neural network that will also be a pretty simple distribution as well. So this will also be a Gaussian distribution. But the parameters of this conditional Gaussian distribution will be fairly complex. And by essentially kind of taking this easy distribution, and this easy distribution, we can kind of compose them to formulate a more complicated distribution. Basically, we're going to be kind of transforming samples from a Gaussian distribution with a neural network, to then give us samples from this more complicated distribution. And so the kind of general form of this is this pretty simple equation right here, where we have our distribution over our latent variable and our conditional distribution of our example given our latent variable. And so in both of these cases, p of z is going to be an easy or simple distribution and p of x given z is also going to be a simple distribution class. And so really the key idea is that we can represent this more complex distribution by composing to more simple distributions. Now it's worth mentioning that the p of x given z, I'm calling this a simple distribution. But the function itself can be very complex. The function itself can be represented by a neural network, which is a pretty complex function. But the distribution class is really going to be limited by the output distribution, which is a Gaussian distribution, and your neural network is going to be outputting the mean and variance of that Gaussian distribution. Yeah? So your x given z is always going to be like-- just that's always a Gaussian distribution. 
So in order to get that complicated distribution, you kind of just map the different z's to the corresponding pieces of the more complex distribution, in terms of composing the distributions? Yeah. So you can basically think of it like, the neural network will take one slice of this and then map it onto a particular slice of the top distribution. And it's not just going to give you a single value. It'll still give you a distribution over that. But it's going to be a fairly simple distribution. And so you can imagine mapping to a small Gaussian right here, for example. And so yeah, you can think of it as taking samples from this and mapping them to samples from that function. Yeah? With this setup, can you get any arbitrary distribution over p of x? So the question is, with this setup, can you get any arbitrary distribution over p of x? In general, yes. I can't think of any distribution where you wouldn't be able to represent it. It does mean that there are some distributions that will be harder to represent than others. If you have an extremely discontinuous distribution, it's hard-- it would be hard to even just optimize for those distributions in the first place. But this does give you really considerable expressive power over distributions. The other thing that I'll note here is that there's only one neural network in this picture, which is the p of x given z. Typically, p of z is going to be a Gaussian distribution. And you don't even actually have to learn the parameters of this Gaussian distribution. You can just set it to be a standard normal distribution with 0 mean and unit variance. And you don't lose any generality by doing that, because the neural network can very easily take that and transform it into whatever it wants. And so you can think of the first layer as picking a different mean and variance for that.
And so that means that you don't actually have to learn any aspect of p of z; you only have to learn p of x given z. OK. So now I have two questions for you. And in honor of Halloween, I have a little bit of candy for people who ask questions or who answer questions. So the first question is, once you train this neural network, how do you generate a sample from p of x? Yeah? Just sample from p of z, forward it through the network, and then your-- it should produce p of x or something, probably. Yeah. So basically, you can sample from p of z, then pass that through your neural network, and then your neural network will give you a mean and variance. And then you just sample from the Gaussian with that mean and variance. And that will then give you a sample from your estimated p of x. So I'm not very good at throwing. It's only one seat off. Cool. And then the second question, which is a little bit harder than the first question is, now we have this model that we've trained, how do we evaluate the likelihood of a given sample? So we just talked about how do we sample something. Now instead of trying to sample something, we're actually just given an x. And how do we evaluate p of x for that given x? I mean, for p of x you'd-- probably you'd have to integrate over several z's. Yeah. Do you want to-- so you're saying that we could-- if x is our example, we can then integrate? For the p of x that we created, or we draw a sample? So you say we can draw a sample from z, and then integrate? Yeah. Normally, we'd have an integral of p of x given z over z. But you just sample a bunch of z's and you just integrate across those. Yeah. So what you could do is-- we have our expression over here. And so we could sample a bunch of z's and use that to estimate this integral on the left. And that would give us an estimate for what p of x is. Of course, I will-- You're good. OK. I don't want to disincentivize incorrect answers. Cool.
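The two answers just given (how to sample an x, and how to estimate p of x by sampling z's) can be sketched on a toy 1-D model. The linear "decoder" below is a hypothetical stand-in for the trained network, not the lecture's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder(z):
    # Stand-in network: z -> (mean, log-variance) of p(x | z).
    return 2.0 * z + 1.0, -1.0  # mean depends on z; fixed log-variance

def sample_x():
    z = rng.normal()                   # 1. sample z ~ p(z) = N(0, 1)
    mean, log_var = decoder(z)         # 2. network gives the parameters of p(x | z)
    return rng.normal(mean, np.exp(0.5 * log_var))  # 3. sample x ~ N(mean, var)

def estimate_px(x, n=10000):
    # Monte Carlo estimate of p(x) = E_{z ~ p(z)}[p(x | z)]:
    # sample many z's and average p(x | z) over them.
    zs = rng.normal(size=n)
    means, log_var = decoder(zs)
    var = np.exp(log_var)
    px_given_z = np.exp(-0.5 * (x - means) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return float(np.mean(px_given_z))
```

For this toy model the exact answer is available (x is Gaussian with variance 4 + e^-1), which makes it easy to check that the Monte Carlo average converges to the right density.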
So you can estimate p of x. And one thing that's worth noting here is that actually evaluating p of x analytically is extremely difficult because of this integral right here. And that's going to make training these models also pretty difficult. Question? So we know that mean squared error is equivalent to likelihood, right? So can we sample that? So can we sample-- let's say, you try sampling from z and getting out an x and then doing mean squared error over x, is that possible? So you're asking-- we know that mean squared error is equivalent to log likelihood. It's equivalent to log likelihood for, I think, unit variance, if you don't basically learn the variance. And you're saying, if we sample something from our model and then measure mean squared error, would that give us likelihood? That would not. In general, if you only sample one thing, you're not going to get a very good estimate of it, so you might need to sample much more than that. Yeah. So generally, if you're not learning the variance, you might be able to do something like that. But really the correct thing to do is to try to evaluate or estimate that integral right there. And if you don't learn the variance of certain things, you might get a form that looks like mean squared error. Yeah? After multiple axis of the radius [INAUDIBLE] everything like that? Yeah. So once you actually-- so log likelihood corresponds to mean squared error if you have unit variance. And if you learn the variance, then it ends up being a weighted mean squared error. And so there is basically that sort of equivalence. Once you actually start to write out that equation for Gaussian distributions, you will get something that looks like a weighted mean squared error. Cool. So this gets into how we can train latent variable models. And we know that we have this likelihood function.
And so we know that-- sorry, we know that p of x is the integral of p of x given z times p of z. And so if we wanted to go about training these models, we could basically plug in this p of x into our equation right there. And so our objective would be summing over our data points, or averaging over our data points, of log of the integral of p of x given z times p of z dz. And so we could try to go about optimizing this equation right here. But unfortunately, generally, you need to sample a lot of z's in order to actually accurately estimate this integral. And so in general, this integral isn't going to be particularly tractable. If, every single time we take a gradient step, we need to sample a ton of different z's, then we wouldn't be able to optimize this particularly easily. And so that's where all these different techniques for training latent variable models come in. They basically try to estimate the gradient of this objective without having to evaluate the integral. So there's a number of different flavors of latent variable models in the deep learning literature. And they basically all just come down to different ways to train these kinds of latent variable models. And so you may have heard of things like generative adversarial networks, variational autoencoders, normalizing flow models, and diffusion models. And all of these models are just different examples of different latent variable models. They all have some sort of latent variable-- I guess actually in all of these cases, I think it's typically a Gaussian latent variable-- that is then transformed with a neural network into the space of the examples. There's one class of generative models that does not use latent variables, and this is autoregressive models. You saw these in the generative pre-training lecture. But basically, everything else that I know of uses a form of latent variable.
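To see why that integral is awkward inside a training loop, here is a rough numpy sketch on the same kind of toy 1-D model (the conditional Gaussian is a hypothetical stand-in): a naive Monte Carlo estimate of log p of x needs many z samples before it becomes trustworthy, which is too expensive to repeat at every gradient step.

```python
import numpy as np

# Naive Monte Carlo estimate of log p(x) = log E_{z ~ N(0,1)}[p(x | z)],
# with a toy stand-in decoder p(x | z) = N(x; 2z + 1, 1).
rng = np.random.default_rng(0)

def log_px_estimate(x, n):
    zs = rng.normal(size=n)
    px_given_z = np.exp(-0.5 * (x - (2 * zs + 1)) ** 2) / np.sqrt(2 * np.pi)
    return float(np.log(np.mean(px_given_z)))

# For this toy model, p(x) = N(1, 5) exactly, so log p(7) is about -5.32.
rough = log_px_estimate(7.0, 10)        # noisy, and typically biased low
better = log_px_estimate(7.0, 200000)   # close to the true value
```

The few-sample estimate is biased low (by Jensen's inequality, the log of a sample mean underestimates the log of the expectation on average), which is exactly the kind of problem the training techniques below are designed to sidestep.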
And really the way that these different latent variable models differ is how they are trained. And they have different pros and cons in terms of the ability to optimize them and the ability to evaluate likelihoods of samples and so forth. In this lecture and also on Wednesday, we're really going to focus on methods that use variational inference to train them, primarily because it has a number of benefits, and it also is what's been used most for latent variable models over parameters in meta-learning. However, there's also lots of hybrids of these models as well where variational inference becomes quite useful. Like there are papers that combine VAEs and GANs. There are variational diffusion models. And there's also connections between variational inference and normalizing flow models as well. Yeah? What is like a good example of latent variables? What is a good example of latent variables? That's a great question. So in general, oftentimes when you train the latent variable model, the latent variables don't necessarily have any particular interpretable meaning to them. In the case of the mixture models that we saw, the latent variable corresponded to the different mixture components. And in some cases, that might have a natural kind of interpretable meaning. If your data naturally has those different clusters, then basically, the latent variable will correspond to the identity of that cluster. And in this example, the face attribute example that we saw different latent variables. If you were to try to generate images like that, different latent variables might correspond to different attributes. And if you're trying to train a classifier and basically generate these different classifiers, then different latent variables would correspond to different combinations of attributes that that classifier is paying attention to. So ideally, that's the kind of thing that we want these latent variables to correspond to. 
There's a question of whether the model will actually learn those things in practice. And it can use the latent variables to really correspond to whatever it wants. The other thing that I'll mention on this topic is that, especially in the context of conditional latent variable models-- for example, if you're training something to predict y given x where z is your latent variable, then the latent variable is basically the only source of randomness or the only source of information other than your input x. And so z or the latent variable needs to capture things in y that can't be found in x. Cool. So now let's try to talk about how we can actually train these models, like how do we train these models to generate pretty pictures or parameters or whatever. And so an outline of what we're going to talk about is, first, we're going to formulate a bound on the log likelihood objective that will allow us to get away from computing that annoying integral. Then we're going to check how tight the bound is. And then we're going to move towards amortized variational inference, which is kind of an important practical step that's important in deep latent variable models. And lastly, we'll talk about how to actually optimize the bound that we had. Cool. So we want to go about optimizing this objective, and we're going to formulate a slightly different objective called the expected log likelihood. And on this slide, I'm not going to justify this objective. This is mostly for just trying to build some intuition for what we're going to be doing on the next slide. And in particular, with this kind of expected log likelihood, it looks a lot like this. But we've now replaced the integral with an expectation over the latent variable given the input x. And the intuition here is that essentially, we don't-- when we have this integral, we're kind of considering all possible values of z. 
And really the ones that are relevant for optimizing this function are the z's that are most likely for a given xi. We don't want to have to consider all possible z's. We want to consider the ones that are actually going to be most likely corresponding to an xi. Of course, there isn't just a single z for a given xi. There may actually be multiple possible values of your latent variable. And so we're going to be using this distribution of z given xi, which is often referred to as the posterior distribution. And once we can kind of consider these possible values of z and then maximize for this kind of joint likelihood for that z and our sample x, this objective becomes a lot easier to optimize because we can estimate that expectation using samples. And so once we have this objective, it's pretty nice, although there's still this question of how do we actually estimate z given xi. That's not something that we're really given a priori. And so if you think about first what z given xi looks like maybe for this particular value of x, the corresponding z might have this distribution right here. What we're going to try to do is we're going to try to estimate this distribution of z given xi with something fairly simple. So we'll estimate it with what I'll call qi. And basically for every single data point, we're going to try to estimate what we think the latent variable is, and that will be represented by this q variable here. So instead of actually having the true z of xi, we're going to estimate it with something that looks like a Gaussian distribution. Now, this estimate isn't going to be perfect. And as you saw in the slide, these two distributions weren't exactly the same. But once we have this estimate, it will help us with optimizing the objective. 
So the thing that's nice about actually trying to approximate z given xi with something like qi is that once we have this estimate of the latent variable for a particular data point, it means that we can formulate a bound on our objective. And the bound is not too difficult to formulate. So we want to formulate a bound on p of xi, or log p of xi. So this is the log likelihood for a single data point. In reality, we would actually be summing over all of our data points, but I'm just going to remove the sums for simplicity of notation. And like we talked about before, this is equal to-- sorry, this is equal to the log of the integral of p of xi given z times p of z dz. Now there's a question of where this qi will come into play. And one thing that you can do whenever you want to introduce new distributions or new variables into your equation is basically take your expression and multiply it by-- sorry, qi of z divided by qi of z. And this is OK to do because this is always going to be equal to 1. And so we're just multiplying the inside by 1. And once we have this, we can then formulate this as an expectation. So this is equal to the log of the expectation, with z sampled from qi, of everything on the inside. So we get p of xi given z times p of z divided by qi of z. And this is a much nicer form than what we had before because we know how to sample from qi. qi is the Gaussian distribution that we're estimating right there. And so we now basically have something that we can sample from instead of trying to evaluate the integral. So all I've done here is-- this is just algebra up until this point. We actually haven't made any approximations yet. And it's worth noting that this holds true basically for any qi. We aren't making any assumptions on how good qi is. Cool.
And then from here, what we can do is-- it's annoying to have an expectation inside of a log, because we want the expectation to be on the outside so that we can sample mini batches rather than sampling a ton of different z's. And this is where Jensen's inequality comes in, which we saw previously. And in this case, Jensen's inequality will actually work in our favor rather than hurting us. So Jensen's inequality says that the log of the expectation of some variable y is greater than or equal to the expectation of the log of y. I should also mention that Jensen's inequality is actually a lot more general than this. It holds for any concave function. So log is one example of a concave function, but you can plug in any other concave function and show the same bound. And if you have a convex function, it's the same; you just need to flip the sign of the inequality. And so with Jensen's inequality, we can now formulate a lower bound on our objective by basically just swapping the order of the expectation and the log. And because these are products, we can then write these out as a sum of logs rather than a log of the product. So log of p of xi given z plus log of p of z minus log of qi of z. And this is basically our objective. So this is something that we can now optimize. So we'll sample a z from the distribution qi for that data point and maximize the likelihood of the data point xi conditioned on that value of z. We'll also optimize for log of p of z for this value. And then lastly, we'll also get this last term. And actually, this last term with the expectation corresponds to the entropy of qi of z. Now this whole objective is called the evidence lower bound, or the ELBO. And let's try to talk a little bit through some of the intuition behind this objective.
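A minimal numpy sketch of the ELBO just derived, estimated by sampling z from q, for a hypothetical toy model p(z) = N(0, 1), p(x | z) = N(2z + 1, 1):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mean, var):
    # log density of N(x; mean, var)
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (x - mean) ** 2 / var

def elbo_estimate(x, q_mean, q_var, n=1000):
    # Monte Carlo estimate of E_{z ~ q}[ log p(x|z) + log p(z) - log q(z) ]
    # for the toy model p(z) = N(0, 1), p(x|z) = N(2z + 1, 1).
    zs = rng.normal(q_mean, np.sqrt(q_var), size=n)  # z ~ q
    log_p_x_given_z = log_normal(x, 2 * zs + 1, 1.0)
    log_p_z = log_normal(zs, 0.0, 1.0)
    log_q_z = log_normal(zs, q_mean, q_var)
    return float(np.mean(log_p_x_given_z + log_p_z - log_q_z))
```

A nice property of this toy model: when q is set to the exact posterior (for x = 1 that is N(0, 0.2)), the quantity inside the expectation is constant and the estimate equals log p(x) exactly, foreshadowing the tightness discussion below; for any other q the estimate comes out lower.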
So this is basically just writing out what I wrote out on the board, and also writing it out in terms of Jensen's inequality. So this is the final bound. And once you maximize this objective, that means that you're also maximizing our original objective, which is the log likelihood. So to talk a little bit about the intuition, I think it's helpful to first talk about some quantities like entropy and KL divergence. So entropy is something that we've talked about before, but we haven't formally gone over. And it's defined as basically the negative expected log probability, which is equal to the negative integral of p of x log p of x. And there's a couple of different intuitions behind what entropy corresponds to. The first is basically how random a variable is. And so for example, if you think about a coin flip, like a Bernoulli variable, if the weight of that coin is 0.5, that means that it's more random, and the entropy measures that. Whereas if the probability of heads is 1, then it has zero entropy because it doesn't have any randomness. So you can essentially think of entropy as measuring how much randomness this variable has. And the other way to look at it is thinking about how large the log probability of the variable is in expectation under itself. And so if you have two different distributions-- in expectation, this one has high log probabilities, and this one has a lot of low log probabilities. Sorry-- this one has high probabilities; this one has low probabilities. And something with low probabilities will have logs that are large negative numbers, whereas the log of something with a higher probability will be a small negative number. And so the one that has the large negative numbers will have a large entropy. And the one that has the high probabilities, which correspond to small negative log probabilities, will have a low entropy.
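The coin-flip examples above, in a few lines of numpy (discrete entropy, with the usual convention that 0 log 0 = 0):

```python
import numpy as np

def entropy(p):
    # H(p) = -sum_x p(x) log p(x); terms with p(x) = 0 contribute 0.
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)))

entropy([0.5, 0.5])  # log 2, about 0.693: the maximally random coin
entropy([1.0, 0.0])  # 0.0: heads with probability 1, no randomness
```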
And so if we think about the objective that we just talked about right here, which is the evidence lower bound, it has really two main terms in it. And the first term-- so if we try to plot p of xi comma z as a function of z, and we draw a sample from q, you can think of the first part, or basically the first two terms here, as saying: OK, for a given z, my probability of x comma z should be high. And so that's going to basically pick something like that. It tries to pick a z that has high p of x comma z. And then what the second term is going to try to do is it's going to try to maximize the entropy of q of z. And so it's going to try to basically make the distribution as wide as possible. And so overall, this objective has really these two parts: one that's maximizing probability and one that's trying to make that distribution as wide as possible, such that you're matching the underlying distribution. Any questions on entropy or this intuition? Cool. So the second thing that we can look at is what's called the KL divergence. And this is also something that I think we've seen briefly. And the KL divergence between two distributions is defined as the expectation under one of those distributions of the log of the ratio of the first distribution divided by the second distribution. And you can equivalently write this out as the negative expected log probability of one distribution under the other, minus the entropy of that first distribution. And again, there's two different ways to intuitively look at the KL divergence. The first is thinking about how different these two distributions are, how different q is from p. And you can see that, for example, when q equals p, the KL divergence will be 0. Because when q equals p, that ratio on the left will be 1, and the log of 1 is 0. And so this will be 0. And as they become more and more different, the KL divergence will grow.
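A matching numpy sketch of the discrete KL divergence and the property just mentioned: zero when the two distributions are equal, positive and growing as they differ.

```python
import numpy as np

def kl_divergence(q, p):
    # KL(q || p) = sum_x q(x) log(q(x) / p(x))
    #            = -sum_x q(x) log p(x)  -  H(q)
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    mask = q > 0  # terms with q(x) = 0 contribute 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

kl_divergence([0.5, 0.5], [0.5, 0.5])  # 0.0: identical distributions
kl_divergence([0.9, 0.1], [0.5, 0.5])  # > 0, and it grows as they differ more
```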
And then the second way that we can look at it, especially looking at this equation right here, is basically how small the expected log probability of one distribution under the other is, minus the entropy. And so this ends up giving us a similar picture to before: if we think about the log probability of one distribution under the other, maximizing that will minimize this first part. And then the entropy term will also try to be maximized. And so if you want to try to minimize the KL divergence, this is going to try to make the distributions as similar as possible. So up until this point, we've derived an objective for optimizing these latent variable models. And the way that we did that is: we still have our p of z term, which can just be a Gaussian distribution, like a standard normal distribution. And we have our neural network that's mapping from a z to an xi. And then we additionally introduced this third thing, which is this distribution that's trying to approximate the posterior, that is, trying to approximate p of z given xi. And so then the next question is: we bounded this objective, but is this bound actually tight? And in which cases is this bound actually tight? And part of this relates to the question of the choice of qi and what makes a good qi. And really the intuition is that we want qi to be as close as possible to p of z given xi. We want it to basically be able to tell us the distribution over the latent variable for a given example. Yeah? Could you explain more about the [INAUDIBLE], like where the equality would come from? You're asking, can I tell you when qi would not be equal to this? The equality-- like when you go from the equality and use Jensen's inequality to get to the third line. Why would it be equal, I guess? Because it'd be [INAUDIBLE]. Yeah.
So you're a little bit ahead of me. So basically, if qi is equal to p of z given xi, it turns out the bound will be tight. And in particular, if the KL divergence is zero, then the bound will be tight. And there is-- I guess there's a question of, do I want to go over this on the whiteboard or on the slides? I feel like it'd be a little bit faster to go over it on the slides. So to answer your question, we can basically look at the KL divergence between qi and p of z given xi and derive what this actually corresponds to. And we can do this by first writing out the equation for KL divergence, which is the expectation under q of log q over p. And with Bayes' rule-- or if we multiply the top and bottom by p of xi-- we get an expression that looks like this. And then if we expand out the log of the product into a sum of logs, we get the negative log of the bottom, which is p of xi given z and p of z, plus the log of the top, which is the log of qi of z and the log of p of xi. And this starts to look pretty familiar. So the qi term is equal to the negative entropy, and everything before this last part is exactly equal to the negative of the evidence lower bound. And so what this means is that if we set qi of z equal to p of z given xi, so that the KL divergence is zero, then Li is equal to the log probability. And so that means that the bound is tight-- the evidence lower bound is exactly equal to the log likelihood-- when the KL divergence between qi of z and p of z given xi is zero. Cool. And also, because the KL divergence is always non-negative, this is another way to derive the bound on p of xi. And so this bound, instead of using Jensen's inequality, is just relying on the fact that the KL divergence is non-negative. Cool.
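Written out as one chain in the lecture's notation, the derivation just described is:

```latex
\begin{aligned}
D_{\mathrm{KL}}\!\left(q_i(z)\,\|\,p(z \mid x_i)\right)
  &= \mathbb{E}_{z \sim q_i}\!\left[\log \frac{q_i(z)}{p(z \mid x_i)}\right]
   = \mathbb{E}_{z \sim q_i}\!\left[\log \frac{q_i(z)\,p(x_i)}{p(x_i \mid z)\,p(z)}\right] \\
  &= -\,\mathbb{E}_{z \sim q_i}\!\left[\log p(x_i \mid z) + \log p(z)\right]
     - \mathcal{H}(q_i) + \log p(x_i) \\
  &= -\mathcal{L}_i + \log p(x_i).
\end{aligned}
```

Since the KL divergence is non-negative, this gives log p(x_i) >= L_i, with equality exactly when q_i(z) = p(z | x_i).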
So if the KL divergence is zero, then that means our evidence lower bound is tight, and that means that if we exactly represent p of z given xi, then we're going to be exactly maximizing the likelihood. The second thing to note here is that so far we've talked about how we're going to be optimizing this neural network. The other question is, how do we optimize qi? And if we optimize qi with respect to the same lower bound, then what that's going to do is minimize the KL divergence. I guess I can write this out right here. So we saw that the KL divergence between qi of z and p of z given xi is equal to our loss function-- actually, sorry, the negative of our loss function-- plus log p of xi. And so basically, if we decide to maximize Li with respect to qi, the log p of xi term doesn't depend on qi at all. And so that means that if we maximize Li, this corresponds to minimizing the left-hand side, which means that we're minimizing the KL divergence. And that's a really good thing, because if we're minimizing the divergence with respect to q, that means that we're making the bound tighter. And so this really justifies optimizing this evidence lower bound both with respect to theta as well as with respect to q, because it means that we're going to be optimizing a bound and we're going to be making that bound tighter as well. Cool. And so what our optimization objective will look like is basically: we have this single objective Li, and we're going to maximize it both with respect to our model parameters theta and with respect to what's called the variational distribution qi. All right. So then there's a question of walking through what the algorithm looks like. So once we have this objective, the first step will be to sample an example xi, or a mini batch of examples. Then we want to compute the gradient with respect to theta of Li.
And the way that we'll do that-- actually, does anyone want to say how we compute the gradient of this with respect to theta, or what the first step is? So you can see theta appears just right here, and so that's the only place where theta appears. Yeah? I think we take the gradient inside. So if we just take the gradient of p of x given z, which we already know from the formula, it's like the normal thing. So you're saying that we could take the gradient inside. But we sampled an xi, and to evaluate the gradient of this, we need both an xi and a z. So where do we get the z from? I guess we can make a list of probabilities for z based on the values that are most probable-- the most likely z given x, because you sampled x [INAUDIBLE] the z's most likely, or they could solve probably for [INAUDIBLE]. Yeah. So what we'll do is-- I'm not fully sure if this is what you're suggesting, but what we can do is-- we have qi. And so we can sample a z or multiple z's from qi. And then once we have those z's, we can pass them into here and compute the gradient of log p theta using those z's. And so yeah, we'll basically first sample z from qi. And then once we have the z, we can then compute the gradient of log p theta of xi given that z. In practice-- well, I'm only writing a single sample here. Typically, if you want to get an extremely accurate gradient, then you would sample multiple z's here. But in practice, actually, just sampling one often works just fine. Cool. And I feel like I should-- No. Don't worry about it. It's also pretty far. So I'm not sure how well I can throw. Yeah? Using just this one z, doesn't that sort of implicitly assume that our data is well behaved? Does it implicitly assume that our data is what, sorry? Well behaved. Otherwise, it would be very noisy.
So the question is, by only sampling one z, does that implicitly assume that our data is well behaved? I'm not sure what well behaved means. That the expectation would eventually correct itself. Right. So I think that one of the reasons why it works well is that we're modeling qi as a Gaussian distribution. And Gaussian distributions are fairly simple and unimodal. And so I think that for that reason-- because Gaussian distributions are well behaved, perhaps-- using just one sample is OK. Yeah. I'm not sure if I have a better answer than that. Let's say we have a beta distribution instead of a Gaussian, would it still work? If we had a beta distribution rather than a Gaussian-- well, I guess we haven't yet gotten to the part about training qi, but it'd be harder to train a beta distribution. You can compute the entropy of a Gaussian distribution in closed form with respect to the mean and variance, so that becomes a lot easier. Specifically with respect to this step, I'm not sure. I've never tried it, in part because of the complexity there. But I could see it possibly needing more samples in that case. Yeah? Do we have one qi per xi, or do we have multiple for each? Yeah, good question. So far, we have one qi for every xi. Then why not have qi [INAUDIBLE] z and x, or something? We will do that soon. It's a great idea. Was there a question? You said sampling just once really works well when training. For prediction, do we still sample multiple z's? Right. So the question is, for prediction, do we still sample multiple z's? So it depends on how you're using it. If you're just generating from the model, you're actually not going to be sampling from q at all. You'll be sampling from p of z, and then passing that through your neural network. If you do want to get a better estimate, one thing you can do is sample multiple times and then take the max, or something like that.
So there are things that you can do to sample more than once at inference time. It kind of depends on how you're using it, though. In the case of variational autoencoders, you actually very rarely use q at test time. You often just throw it away at test time, and only just use it to generate samples. But there are other cases where you may actually be using q at test time. Cool. So we got through step two, which is how to compute the gradient. And then the last thing that we want to do is to update qi with respect to the loss function. And the way that we'll do this is: if qi is a Gaussian distribution, then we can set qi to be a Gaussian distribution with a mean of mu i and a variance of sigma i, and then compute the gradient of your loss function with respect to mu i, and also the gradient with respect to sigma i, and then use that to update mu i and sigma i. And so this is basically just what's written on the slide. So we estimate the gradient-- it's an estimate if you only take one sample; although even if you take a few samples, it's still an estimate unless you take an infinite number of samples-- and then we'll update qi. And the way that we do that is we'll evaluate the gradient of the mean and the gradient of the variance. Cool. So this already sort of came up. But there's a little bit of a problem here, which is that typically in deep learning, we have a large number of samples. And that means that it may be somewhat impractical to fit a qi for every single example. And so one question is, when we have this kind of model, what is the total number of parameters? Yeah? So first you have all the thetas, and then you also have a pretty big set of the means and variances, mu i and sigma i? Yeah, exactly. So we will have all of our theta, and we'll have one of those.
And then for the number of examples that we have, we'll have n times the dimensionality of mu and n times the dimensionality of sigma. And so this is a pretty large number of parameters. So to solve this problem, what we can do--which was actually already suggested--is that instead of having a single qi for every single data point, we can train a neural network to predict z given xi. And so instead of only having a neural network that predicts x given z, we're going to have a second network, which is often called an inference network, to give you a distribution over z given x. This neural network will take as input x, and it'll output a mean and a variance. And so q will again be a Gaussian distribution--but it'll be a conditional Gaussian distribution, where this neural network outputs the mean and the variance, and the normal distribution is parameterized by that mean and that variance. Cool. So this is where we get to amortized variational inference. And the reason why it's called amortized variational inference is that we're essentially amortizing the inference process using this neural network, rather than doing inference for every single data point individually. And so the way this works is we again formulate the same exact objective, except instead of qi of z, we'll just replace that with q phi of z given xi. And this works out with the bound because, as we mentioned, the bound holds for any value of q. So q can be conditioned on xi--it can really be conditioned on whatever you want it to be conditioned on. And it's natural for it to be conditioned on xi because we want it to be very similar to p of z given xi. Cool. And then the algorithm ends up looking the same as before, except now instead of sampling from qi over z, we're going to be sampling from this conditional distribution that's conditioned on xi.
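As a concrete sketch of what an inference network like this could look like: below is a tiny NumPy stand-in with made-up layer sizes and random, untrained weights. Everything here (dimensions, weight initialization, the tanh hidden layer) is an illustrative assumption, not the course's actual implementation; the key point is the two heads, one for the mean and one for the log-variance of q(z | x).

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, h_dim, z_dim = 4, 8, 2   # made-up sizes for illustration

# Hypothetical inference-network weights; in practice these are the learned
# parameters phi, trained jointly with the decoder.
W1 = rng.normal(0.0, 0.1, (h_dim, x_dim))
b1 = np.zeros(h_dim)
W_mu = rng.normal(0.0, 0.1, (z_dim, h_dim))
b_mu = np.zeros(z_dim)
W_lv = rng.normal(0.0, 0.1, (z_dim, h_dim))
b_lv = np.zeros(z_dim)

def encode(x):
    """q_phi(z | x): a shared hidden layer with two linear heads,
    one for the mean and one for the log-variance of a Gaussian over z."""
    h = np.tanh(W1 @ x + b1)
    mu = W_mu @ h + b_mu
    log_var = W_lv @ h + b_lv   # predicting log(sigma^2) keeps sigma positive
    return mu, log_var

mu_z, log_var_z = encode(rng.normal(size=x_dim))
sigma_z = np.exp(0.5 * log_var_z)
```

Predicting the log-variance rather than the variance directly is a common design choice, since exponentiating guarantees a positive standard deviation without any constraint on the network output.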
And then instead of updating each of the individual qi's, we're going to be updating the parameters of our inference network, which I'm using phi to denote. Cool. So that's all written right here. And we can take a gradient descent step on phi. Now there's one more problem that we have to fix, which is, how do we actually end up calculating this gradient right here? And the reason why this is a problem is that if we look at our objective, there are a few different places where phi comes up. So phi will come up right here, and phi will also come up right here. And this part isn't too hard to calculate--there's a closed-form equation for the entropy of a Gaussian that you can look up. But this one ends up being a little bit more difficult, because here you actually have to differentiate through the sampling process into phi. So the first part is easy; the second part is a little bit harder. And in terms of going into this, we'll refer to this inside part--it doesn't really matter what this inside part is--as r of xi comma z. And then our goal is to be able to compute the gradient with respect to phi of the expectation under q phi of r of xi comma z. So this is our goal. This is what we want to be able to compute. And q is going to be outputting a Gaussian distribution. Now this is where what's called the reparameterization trick comes in. In particular, q phi of z given x is defined to be a Gaussian distribution whose mean is a neural network that takes as input x and whose variance is a neural network that takes as input x. And in this expectation we're sampling a z from our q distribution, and we want to be able to differentiate through the sampling process.
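The closed-form entropy of a Gaussian mentioned here is, per dimension, H(q) = 0.5 * log(2 * pi * e * sigma^2), and it's easy to verify numerically. A minimal NumPy check (the values of mu and sigma are arbitrary) compares the formula against a Monte Carlo estimate of E_q[-log q(z)]:

```python
import numpy as np

def gaussian_entropy(sigma):
    """Closed-form differential entropy of N(mu, sigma^2): 0.5 * log(2*pi*e*sigma^2).
    Note it depends only on sigma, not on the mean."""
    return 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.7

# Monte Carlo estimate: entropy = E_{z~q}[ -log q(z) ]
z = rng.normal(mu, sigma, size=200_000)
log_q = -0.5 * np.log(2.0 * np.pi * sigma**2) - (z - mu) ** 2 / (2.0 * sigma**2)
mc_entropy = -log_q.mean()

analytic_entropy = gaussian_entropy(sigma)   # should closely match mc_entropy
```

This is why the entropy term of the objective is the easy part to differentiate: it's an ordinary function of mu and sigma, with no sampling involved.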
And the really cool thing about what's called the reparameterization trick is that we can rewrite the sampling process as the equation mu of x plus epsilon times sigma of x, where epsilon is a value that is sampled from a standard normal distribution. And so this equation should be fairly straightforward--we're just saying that a sample from a Gaussian distribution equals the mean plus some noise times the standard deviation. And the really cool thing about this equation is that epsilon is completely independent of phi, whereas the mu and sigma are parameterized by phi. And that means that we can generate a sample this way, where we generate a sample of epsilon and then plug it into this equation, and that gives us a sample. And we can differentiate through this equation into phi in order to update phi through the sampling process. And so more concretely, this is the equation that I wrote up on the board: z equals mu plus epsilon times sigma. We can first sample epsilon in a way that's completely independent of the inference network's parameters, multiply that by sigma, and add in the mean. And then in order to compute the gradient of this expression, we no longer actually have to sample from q. We can instead sample from the standard normal distribution. And this equation is much easier to differentiate through in comparison to the first equation. And so basically, to estimate the gradient with respect to phi, we first sample a bunch of samples from the standard normal distribution--a single sample actually also can work well in this case--and then the gradient will be approximately equal to the gradient with respect to phi of r, where we plug in this equation for z. Cool. Any questions on how that works? I should mention, it's really important that this is independent of phi. Yeah? Is the variance of epsilon always one, like an identity covariance?
Right. So if this is vector-valued, then this will be the identity. Yeah? With the amortization that we're doing--usually when you get something this convenient you lose something--what are we giving up? Yeah. So the question is, are we losing anything by doing amortization? It's a good question. I think that the only thing we may really be losing--in practice, I don't think this is much of an issue--is expressive power. When you separately represent each qi, you can represent any Gaussian for each of them, and you have a lot of expressive power. Whereas when you have a single neural network that has to output a different z for a given x, it has to be able to look at x and then tell you what z is, and it may be somewhat difficult to do that with a single function. It may also be that there are two xi's that look very similar but actually have very different latent variables, and in cases like that it may be especially difficult for the inference network. So yeah--you have to be able to represent q of z given x, but I think that's really the only thing that you're losing. I could also mention that there are some works that amortize and then run some additional gradient steps on z, and there are more hybrid methods as well that you can consider. Cool. So the reparameterization trick is really super easy to implement, and it also gives you a pretty low-variance estimate of your gradient. It only works for continuous latent variables, though--specifically in this case, Gaussian random variables. If you are really excited about having discrete latent variables, then there are techniques for optimizing those.
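As a quick numerical sanity check of the reparameterization trick (a NumPy sketch with arbitrary values of mu and sigma): z = mu + eps * sigma with eps ~ N(0, 1) has exactly the distribution N(mu, sigma^2), but all the randomness lives in eps, so mu and sigma (and hence phi) enter only through ordinary arithmetic that is easy to differentiate.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5

# Reparameterized sampling: eps is independent of the parameters, so the
# gradients are simply dz/dmu = 1 and dz/dsigma = eps, with no need to
# differentiate through a sampler.
eps = rng.standard_normal(100_000)
z = mu + eps * sigma

# Empirically, z matches a direct draw from N(mu, sigma^2).
empirical_mean, empirical_std = z.mean(), z.std()
```

With 100,000 samples, the empirical mean and standard deviation land very close to the chosen mu and sigma, confirming the two sampling procedures are equivalent in distribution.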
So there's something called the straight-through estimator, and vector quantization, which have been used in conjunction with variational autoencoders. We won't go into them here, but they've been pretty successful with discrete latent variables. And you can also use something called the REINFORCE trick--also known as policy gradients, which are often used in reinforcement learning--to estimate gradients here. But for continuous latent variables, the reparameterization trick is far better than these options. Cool. In the last five minutes, I'd like to also go over another interpretation of the objective. So we've talked about the objective as basically an entropy term and a likelihood term. Alternatively, there's a different way to look at it. If you combine this term and this term, you end up with the expectation under q of log p minus the expectation under q of log q. And these two terms combined actually correspond to the KL divergence--specifically, the KL divergence between q of z given x and p of z. And then you're just left with the first term, which is the expectation over z sampled from q of log p theta of xi given z. And this is basically a separate way to look at the equation. And actually, sorry, this should be a minus sign. Put this way, you can think of it from the standpoint of an autoencoder, where your q network is encoding your example into this latent space, and your model p of x given z is decoding from your latent variable back into your data space. So if you think of the autoencoders that we saw before, then you can think of the encoder as q of z given x and the decoder as p of x given z. And from this standpoint, for this first term you first encode, and then you evaluate the likelihood under your decoder.
And this basically corresponds to reconstruction error--how good you are at reconstructing the example after you've encoded it and decoded it. And the second term is telling you that once you encode your example into your z space, it should look like a Gaussian distribution. So your z's should basically be Gaussian distributed. And this is going to add noise into this variable right here, and that will force the model to not be able to represent the identity function through your encoder and decoder. The KL divergence also has a convenient analytical form for Gaussian distributions. Yeah. So you can basically think of this as taking an x, outputting a mean and variance over z, then adding noise--or sampling from q--in order to generate a z sample, and then decoding that z sample with your p theta to generate an x. Cool. And this basically corresponds to the variational autoencoder. And you can sample a z from a low-dimensional Gaussian space, and then have your decoder--your model p of x given z--generate some cool-looking faces for you. Yeah? You're saying the KL term is sort of a regularizer in there? Yeah, exactly. So you can think of the KL term as acting as a regularizer on the autoencoder, so that you're not only optimizing for reconstruction--you're also encouraging it to regularize the information content in z. Cool. Yeah? I'm just trying to see--what does variational mean? Yeah. What does variational mean? So I don't know if it has any meaning in English. But the distribution q of z is often referred to as the variational distribution, maybe in the sense that it's not the true distribution--it's not the distribution given by your model, for example. Yeah? Could it come from the calculus of variations? Yes, that's possible. Cool.
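The convenient analytical form of the KL term mentioned above, for a diagonal Gaussian q = N(mu, sigma^2) against the standard normal prior p(z) = N(0, I), is KL(q || p) = 0.5 * sum_d (sigma_d^2 + mu_d^2 - 1 - log sigma_d^2). A small NumPy check against a Monte Carlo estimate (the parameter values here are arbitrary):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, sigma = 0.8, 0.6
analytic = kl_to_standard_normal(np.array([mu]), np.array([np.log(sigma**2)]))

# Monte Carlo check: KL(q || p) = E_{z~q}[ log q(z) - log p(z) ]
z = mu + sigma * rng.standard_normal(200_000)
log_q = -0.5 * np.log(2 * np.pi * sigma**2) - (z - mu) ** 2 / (2 * sigma**2)
log_p = -0.5 * np.log(2 * np.pi) - z**2 / 2
monte_carlo = (log_q - log_p).mean()
```

Because this closed form exists, the VAE's regularizer term never needs to be estimated by sampling; only the reconstruction term uses the reparameterized sample.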
And then when you want to sample from a variational autoencoder, you can first sample z and then sample x given z. Lastly, if you want to do conditional models, you can also condition on something--we talked about unconditional models to start, but you could also condition your inference network and your model on something like a class variable to generate images of that class, or on text, and so forth. And so really everything is the same; we're just conditioning on some input value. Cool. So that's it. To summarize, we talked about latent variable models, how to train them with variational inference and amortized variational inference, and we learned about things like the reparameterization trick to optimize these kinds of models. Yeah, and as a reminder, homework 3 is due on Friday. And I'll see you folks on Wednesday.
AI_LLM_Stanford_CS229
The_Future_of_AI_is_Here_FeiFei_Li_Unveils_the_Next_Frontier_of_AI.txt
visual spatial intelligence is so fundamental, it's as fundamental as language. we've got these ingredients--compute, a deeper understanding of data--and we've got some advancement of algorithms. we are in the right moment to really make a bet and to focus and just unlock that [Music] over the last two years we've seen this kind of massive rush of consumer AI companies and technology, and it's been quite wild. but you've been doing this now for decades, and so maybe walk through a little bit about how we got here--kind of like your key contributions and insights along the way. so it is a very exciting moment, right? just zooming back, AI is in a very exciting moment. I personally have been doing this for two decades plus, and you know, we have come out of the last AI winter. we have seen the birth of modern AI, then we have seen deep learning taking off, showing us possibilities like playing chess, but then we're starting to see the deepening of the technology and the industry adoption of some of the earlier possibilities like language models. and now I think we're in the middle of a Cambrian explosion in almost a literal sense, because now in addition to text, you're seeing pixels, videos, audio all coming out with possible AI applications and models. so it's a very exciting moment. I know you both so well, and many people know you both so well because you're so prominent in the field, but not everybody like grew up in AI, so maybe it's worth just going through your quick backgrounds, just to level set the audience. yeah, sure. so I first got into AI at the end of my undergrad. I did math and computer science for undergrad at Caltech--that was awesome--but then towards the end of that, there was this paper that came out that was at the time a very famous paper, the cat paper, from Quoc Le and Andrew Ng and others that were at Google Brain at the time. and that was the first time that I came across this concept of deep learning. and to me it just felt like
this amazing technology. and that was the first time that I came across this recipe that would come to define the next, like, more-than-decade of my life: that you can take these amazingly powerful learning algorithms that are very generic, couple them with very large amounts of compute, couple them with very large amounts of data, and magic things start to happen when you combine those ingredients. so I first came across that idea around 2011, 2012-ish, and I just thought, like, oh my God, this is going to be what I want to do. so it was obvious--you've got to go to grad school to do this stuff--and then I saw that Fei-Fei was at Stanford, one of the few people in the world at the time who was kind of on that train. and that was just an amazing time to be in deep learning and computer vision specifically, because that was really the era when this went from these first nascent bits of technology that were just starting to work, and really got developed and spread across a ton of different applications. so then over that time we saw the beginning of language modeling, we saw the beginnings of discriminative computer vision--you could take pictures and understand what's in them in a lot of different ways--and we also saw some of the early bits of what we would now call generative modeling: generating images, generating text. a lot of those core algorithmic pieces actually got figured out by the academic community during my PhD years. like, there was a time I would just wake up every morning and check the new papers on arXiv and just be ready--it was like unwrapping presents on Christmas, that every day, you know, there's going to be some amazing new discovery, some amazing new application or algorithm somewhere in the world. what happened is, in the last two years, everyone else in the world kind of came to the same realization--using AI to get new Christmas presents every day. but I think for those of us that have been in the field for a decade or
more, we've sort of had that experience for a very long time. obviously I'm much older than Justin. I came to AI through a different angle, which is from physics, because my undergraduate background was physics. but physics is the kind of discipline that teaches you to think audacious questions and think about what are the remaining mysteries of the world. of course in physics it's the atomic world, you know, the universe and all that, but somehow that kind of training and thinking got me into the audacious question that really captured my own imagination, which is intelligence. so I did my PhD in AI and computational neuroscience at Caltech. so Justin and I actually didn't overlap, but we share the same alma mater at Caltech. oh, and the same adviser at Caltech. yes--your undergraduate adviser and my PhD adviser, Pietro Perona. and my PhD time, which is similar to your PhD time, was when AI was still in the winter in the public eye. but it was not in the winter in my eye, because it's that pre-spring hibernation--there's so much life. machine learning, statistical modeling was really gaining power, and I think I was one of the native generation of machine learning and AI, whereas I look at Justin's generation as the native deep learning generation. so machine learning was the precursor of deep learning, and we were experimenting with all kinds of models. but one thing came out at the end of my PhD and the beginning of my assistant professorship: there was an overlooked element of AI that is mathematically important to drive generalization, but the whole field was not thinking that way. and it was data. because we were thinking about, you know, the intricacy of Bayesian models or whatever, you know, kernel methods and all that. but what was fundamental, which my students and my lab realized probably earlier than most people, is that if you let data drive models, you can unleash the kind of power that we haven't seen before. and that was really the reason
we went on a pretty crazy bet on ImageNet, which is, you know--just forget about any scale we're seeing now. datasets then were thousands of data points. at that point the NLP community had their own data sets--I remember the UC Irvine data set or some data set in NLP, it was small. the computer vision community had their data sets, but all on the order of thousands or tens of thousands. we were like, we need to drive it to internet scale. and luckily it was also the coming of age of the internet, so we were riding that wave. and that's when I came to Stanford. so these epochs are what we often talk about--like ImageNet is clearly the epoch that created, or at least maybe made popular and viable, computer vision. and for the gen AI wave we talk about two kind of core unlocks: one is the Transformers paper, which is attention, and we talk about stable diffusion. is that a fair way to think about this--that there are these two algorithmic unlocks that came from academia or Google, and that's where everything comes from? or has it been more deliberate, or have there been other kind of big unlocks that brought us here that we don't talk as much about? yeah, I think the big unlock is compute. like, I know the story of AI is often the story of compute, but no matter how much people talk about it, I think people underestimate it, right? and the amount of growth that we've seen in computational power over the last decade is astounding. the first paper that's really credited with the breakthrough moment in computer vision for deep learning was AlexNet, which was a 2012 paper where a deep neural network did really well on the ImageNet challenge and just blew away all the other algorithms--the types of algorithms that Fei-Fei had been working on, more in grad school. that AlexNet was a 60-million-parameter deep neural network, and it was trained for six days on two GTX 580s, which was the top consumer card at the time, which came
out in 2010. so I was looking at some numbers last night just to, you know, put these in perspective. the newest, the latest and greatest from Nvidia is the GB200. do either of you want to guess how much raw compute factor we have between the GTX 580 and the GB200? shoot. no, what--go for it. it's in the thousands. so I ran the numbers last night: that training run of six days on two GTX 580s, if you scale it, comes out to just under five minutes on a single GB200. Justin is making a really good point. the 2012 AlexNet paper on the ImageNet challenge is literally a very classic model--it's the convolutional neural network model, and that was published in the 1980s. it's the first paper I remember as a graduate student, learning that. and it more or less also has six, seven layers. practically the only difference between AlexNet and that ConvNet--what's the difference? it's the GPUs, the two GPUs, and the deluge of data. yeah, well, so that's where I was going to go, which is, like, I think most people now are familiar with, quote, the bitter lesson. and what the bitter lesson says is, if you make an algorithm, don't be cute--just make sure you can take advantage of available compute, because the available compute will show up, right? on the other hand, there's another narrative, which seems to me to be just as credible, which is that it's actually new data sources that unlock deep learning, right? like ImageNet is a great example. a lot of people say self-attention is great from Transformers, but they'll also say this is a way you can exploit human labeling of data, because it's the humans that put the structure in the sentences. and if you look at CLIP, they'll say, well, we're using the internet to actually have humans use the alt tag to label images, right? and so that's a story of data, that's not a story of compute. and so is the answer just both, or is
one more than the other? I think it's both, but you're hitting another really good point. so I think there are actually two eras that to me feel quite distinct algorithmically here. the ImageNet era is actually the era of supervised learning. in the era of supervised learning, you have a lot of data, but you don't know how to use data on its own. like, the expectation of ImageNet and other data sets of that time period was that we're going to get a lot of images, but we need people to label every one. and all of the training data that we're going to train on--a person, a human labeler, has looked at every one and said something about that image. and the big algorithmic unlocks since are that we know how to train on things that don't require human-labeled data. as the naive person in the room that doesn't have an AI background, it seems to me if you're training on human data, the humans have labeled it--it's just not explicit. I knew you were going to say that, Martin, I knew that. yes, philosophically that's a really important question, but that actually is more true of language than pixels. fair enough. yeah, 100%. but I do think it's an important point--it's learned, it's just more implicit than explicit. yeah, it's still human labeled. the distinction is that for this supervised-learning era, our learning tasks were much more constrained. like, you would have to come up with this ontology of concepts that we want to discover, right? if you're doing ImageNet, like Fei-Fei and your students at the time spent a lot of time thinking about, you know, which thousand categories should be in the ImageNet challenge. other data sets of that time, like the COCO data set for object detection--they thought really hard about which 80 categories to put in there. so let's walk to gen AI. so when I was doing my PhD, before that, you came--so I took machine learning from Andrew Ng, and then I took, like, Bayesian networks, something very complicated, from Daphne Koller, and
it was very complicated for me--a lot of that was just predictive modeling. and then, like, I remember the whole kind of vision stuff that you unlocked. but then the generative stuff showed up, I would say in the last four years, which is to me very different. like, you're not identifying objects, you're not, you know, predicting something--you're generating something. and so maybe walk through the key unlocks that got us there, and then why it's different, and if we should think about it differently--is it part of a continuum, is it not? it is so interesting. even during my graduate time, generative models were there. we wanted to do generation. nobody remembers--even with the letters and numbers, we were trying to do some, you know--Geoff Hinton had papers on generation, we were thinking about how to generate. and in fact, if you think from a probability distribution point of view, you can mathematically generate. it's just that nothing we generated would ever impress anybody, right? so this concept of generation, mathematically, theoretically, is there, but nothing worked. so then I do want to call out Justin's PhD. and Justin was saying that he got enamored by deep learning, so he came to my lab. Justin's entire PhD is almost a mini story of the trajectory of the field. he started his first project in data--I forced him to, he didn't like it. so in retrospect, I learned a lot of really useful things. I'm glad you say that now. so we moved Justin to deep learning, and the core problem there was taking images and generating words. well, actually, there were, I think, three discrete phases here on this trajectory. so the first one was actually matching images and words, right? like, we have an image, we have words, and can we say how well they align? so actually my first paper, both of my PhD and, like, ever--my first academic publication ever--was the image retrieval with scene graphs work. and then we went into
the generation--taking pixels, generating words--and Justin and Andrej really worked on that. but that was still a very, very lossy way of generating and getting information out of the pixel world. and then in the middle, Justin went off and did a very famous piece of work, and it was the first time that someone made it real time, right? yeah. so the story there is, there was this paper that came out in 2015, A Neural Algorithm of Artistic Style, led by Leon Gatys. the paper came out and they showed these real-world photographs that they had converted into Van Gogh style. and like, we are kind of used to seeing things like this in 2024, but this was in 2015. so this paper just popped up on arXiv one day and it blew my mind. I just got this, like, gen AI brainworm in my brain in 2015, and it did something to me, and I thought, oh my God, I need to understand this algorithm, I need to play with it, I need to make my own images into Van Gogh. so then I read the paper, and over a long weekend I reimplemented the thing and got it to work. it was actually a very simple algorithm. my implementation was like 300 lines of Lua, because at the time--this was pre-PyTorch, so we were using Lua Torch. but it was a very simple algorithm, and it was slow, right? it was an optimization-based thing: for every image you want to generate, you need to run this optimization loop, run this gradient descent loop. the images were beautiful, but I just wanted it to be faster. and Justin just did it, and it was actually, I think, your first taste of an academic work having an industry impact. a bunch of people had seen this artistic style transfer stuff at the time, and me and a couple of others at the same time came up with different ways to speed it up, but mine was the one that got a lot of traction. right, so I was very proud of Justin. but there's one
more thing I was very proud of Justin for. to connect to gen AI: before the world understood gen AI, Justin's last piece of work in his PhD--which I knew about because I was forcing you to do it; that one was fun--was actually inputting language and getting a whole picture out. it's one of the first gen AI works. it used GANs, which were so hard to use. but the problem is that we were not ready to use a natural piece of language. so Justin, you heard, he worked on scene graphs--so we had to input a scene-graph language structure: you know, the sheep, the grass, the sky, in a graph way. it literally was one of our photos, right? and then he and another very good master's student, Agrim, got that GAN to work. so you can see--from data, to matching, to style transfer, to generative images. you asked if this is an abrupt change: for people like us it was already happening, a continuum, but for the world the results are more abrupt. so I read your book, and for those that are listening, it's a phenomenal book--I really recommend you read it. and it seems for a long time, and I'm talking to you, Fei-Fei, like a lot of your research and your direction has been towards kind of spatial stuff and pixel stuff and intelligence. and now you're doing World Labs, and it's around spatial intelligence. so maybe talk through, you know--has this been part of a long journey for you? why did you decide to do it now? is it a technical unlock, is it a personal unlock? just kind of move us from that kind of milieu of AI research to World Labs. sure. for me it is both personal and intellectual, right? my entire--you talk about my book--my entire intellectual journey is really this passion to seek North Stars, but also believing that those North Stars are critically important for the advancement of our field. so at the beginning, I remember, after graduate school, I thought my North Star was telling
stories of images, because for me that's such an important piece of visual intelligence--that's part of what you call AI or AGI. but when Justin and Andrej did that, I was like, oh my God, that was my life's dream--what do I do next? so it came a lot faster--I thought it would take a hundred years to do that. but visual intelligence is my passion, because I do believe for every intelligent being, like people or robots or some other form, knowing how to see the world, reason about it, interact in it--whether you're navigating or manipulating or making things, you can even build civilization upon it--visual spatial intelligence is so fundamental, it's as fundamental as language, possibly more ancient and more fundamental in certain ways. so it's very natural for me that World Labs' North Star is to unlock spatial intelligence. the moment to me is right to do it. like Justin was saying, compute--we've got these ingredients. we've got compute. we've got a much deeper understanding of data, way deeper than the ImageNet days--compared to those days we're so much more sophisticated. and we've got some advancement of algorithms, including from co-founders at World Labs like Ben Mildenhall and Christoph Lassner--they were at the cutting edge of NeRF. we are in the right moment to really make a bet and to focus and just unlock that. so I just want to clarify for folks that are listening to this: you're starting this company, World Labs, and spatial intelligence is kind of how you're generally describing the problem you're solving. can you maybe try to crisply describe what that means? yeah, so spatial intelligence is about machines' ability to perceive, reason, and act in 3D space and time--to understand how objects and events are positioned in 3D space and time, how interactions in the world can affect those 3D, 4D positions over space-time, and both sort of perceive, reason about, generate, and interact with--really
take the machine out of the mainframe, or out of the data center, and put it out into the world, understanding the 3D, 4D world with all of its richness. So to be very clear, are we talking about the physical world, or are we just talking about an abstract notion of world? I think it can be both, and that encompasses our vision long term. Even if you're generating worlds, even if you're generating content, doing that positioned in 3D has a lot of benefits; or if you're recognizing the real world, being able to put 3D understanding into the real world as well is part of it. Great. So, just for everybody listening, the two other co-founders, Ben Mildenhall and Christoph Lassner, are absolute legends in the field, at the same level. These four decided to come out and do this company now, and so I'm trying to dig into why now is the right time. Yeah, I mean, this is again part of a longer evolution for me, but really after my PhD, when I was wanting to develop into my own independent researcher for my later career, I was just thinking, what are the big problems in AI and computer vision? And the conclusion that I came to about that time was that the previous decade had mostly been about understanding data that already exists, but the next decade was going to be about understanding new data. If we think about that, the data that already exists was all of the images and videos that maybe existed on the web already, and the next decade was going to be about understanding new data, right. People have smartphones, smartphones have cameras, those cameras have new sensors, those cameras are positioned in the 3D world. It's not just that you're going to get a bag of pixels from the internet, know nothing about it, and try to say if it's a cat or a dog. We want to treat images as universal sensors to the physical world, and how can we use that to understand the
3D and 4D structure of the world, either in physical spaces or generative spaces. So I made a pretty big pivot post-PhD into 3D computer vision, predicting 3D shapes of objects with some of my colleagues at FAIR at the time. Then later I got really enamored by this idea of learning 3D structure through 2D, right, because we talk about data a lot, and 3D data is hard to get on its own. But there's a very strong mathematical connection here: our 2D images are projections of a 3D world, and there's a lot of mathematical structure here we can take advantage of. So even if you have a lot of 2D data, a lot of people have done amazing work to figure out how you can back out the 3D structure of the world from large quantities of 2D observations. And then in 2020, you asked about breakthrough moments, there was a really big breakthrough moment, one from our co-founder Ben Mildenhall at the time, with his paper NeRF, Neural Radiance Fields. That was a very simple, very clear way of backing out 3D structure from 2D observations that just lit a fire under this whole space of 3D computer vision. I think there's another aspect here that maybe people outside the field don't quite understand, as that was also a time when large language models were starting to take off. A lot of the stuff with language modeling actually had gotten developed in academia. Even during my PhD I did some early work with Andrej Karpathy on language modeling in 2014. I still remember LSTMs, RNNs, GRUs; this was pre-Transformer. But then at some point, around the GPT-2 time, you couldn't really do those kinds of models anymore in academia, because they took way more resourcing. But there was one really interesting thing about the NeRF approach that Ben came up with: you could train these in an hour, a couple of hours, on a single GPU. So I think at that time there was a dynamic here that
happened, which is that a lot of academic researchers ended up focusing on these problems, because there was core algorithmic stuff to figure out, and because you could actually do a lot without a ton of compute; you could get state-of-the-art results on a single GPU. Because of those dynamics, a lot of researchers in academia were moving to think about what are the core algorithmic ways that we can advance this area as well. Then I ended up chatting with Fei-Fei more, and I realized that we were actually... she's very convincing. She's very convincing. Well, there's that, but, you know, we talk about trying to figure out your own independent research trajectory apart from your adviser; well, it turns out we ended up kind of converging on similar things. Okay, well, from my end, I want to talk to the smartest person, so I call Justin, there's no question about it. I do want to talk about a very interesting technical issue, or technical story, of pixels that most people who work in language don't realize, which is that in the pre-generative era in the field of computer vision, those of us who work on pixels actually have a long history in an area of research called reconstruction, 3D reconstruction, which dates back to the 70s. You can take photos, because humans have two eyes, right, so it generally starts with stereo photos, and then you try to triangulate the geometry and make a 3D shape out of it. It is a really, really hard problem; to this day it's not fundamentally solved, because there's correspondence and all that. So this whole field, which is an older way of thinking about 3D, has been going along, and it has been making really good progress. But when NeRF happened, in the context of generative methods, in the context of diffusion models, suddenly reconstruction and generation started to really merge, and now, within really a short period of time in the field of computer vision,
it's hard to talk about reconstruction versus generation anymore. We suddenly have a moment where, if we see something or if we imagine something, both can converge towards generating it. Right, right, and that's just, to me, a really important moment for computer vision, but most people missed it because we're not talking about it as much as LLMs. Right, so in pixel space there's reconstruction, where you reconstruct a scene that's real, and then if you don't see the scene, you use generative techniques; so these things are kind of very similar. Throughout this entire conversation you're talking about language and you're talking about pixels, so maybe it's a good time to talk about how spatial intelligence and what you're working on contrasts with language approaches, which of course are very popular now. Like, is it complementary, is it orthogonal? Yeah, I think they're complementary. I don't mean to be too leading here; maybe just contrast them. Everybody says, listen, I know OpenAI and I know GPT and I know multimodal models, and a lot of what you're talking about, they've got pixels and they've got language, so doesn't this kind of do what we want to do with spatial reasoning? Yeah, so I think to do that you need to open up the black box a little bit of how these systems work under the hood. With language models, and the multimodal language models that we're seeing nowadays, their underlying representation under the hood is a one-dimensional representation. We talk about context lengths, we talk about Transformers, sequences, attention; fundamentally their representation of the world is one-dimensional. These things fundamentally operate on a one-dimensional sequence of tokens. This is a very natural representation when you're talking about language, because written text is a one-dimensional sequence of discrete letters. So that kind of underlying representation is the thing that
led to LLMs, and now with the multimodal LLMs that we're seeing, you kind of end up shoehorning the other modalities into this underlying representation of a 1D sequence of tokens. Now, when we move to spatial intelligence, it's kind of going the other way, where we're saying that the three-dimensional nature of the world should be front and center in the representation. From an algorithmic perspective, that opens up the door for us to process data in different ways, to get different kinds of outputs out of it, and to tackle slightly different problems. So even at a coarse level, you kind of look outside and you say, oh, multimodal LLMs can look at images too. Well, they can, but I think they don't have that fundamental 3D representation at the heart of their approaches. I totally agree with Justin. I think talking about the 1D versus fundamental 3D representation is one of the most core differentiations. The other thing, it's slightly philosophical, but it's really important for me at least, is that language is fundamentally a purely generated signal. There's no language out there; you don't go out into nature and find words written in the sky for you. Whatever data you feed in, you pretty much can just somehow regurgitate, with enough generalizability, the same data out, and that's language to language. But the 3D world is not that. There is a 3D world out there that follows laws of physics, that has its own structures due to materials and many other things, and to fundamentally back that information out, and be able to represent it, and be able to generate it, is just fundamentally quite a different problem. We will be borrowing similar ideas, or useful ideas, from language and LLMs, but this is fundamentally, philosophically, to me a different problem. Right, so language is 1D, and probably a bad representation of the physical world, because it's been generated by humans and it's probably lossy. There's a whole other modality of generative AI models, which are pixels,
and these are 2D images and 2D video. One could say that if you look at a video, you can see 3D stuff, because you can pan a camera or whatever it is, so how would spatial intelligence be different than, say, 2D video here? When I think about this, it's useful to disentangle two things: one is the underlying representation, and two is kind of the user-facing affordances that you have. Here's where you can sometimes get confused, because fundamentally we see 2D, right; our retinas are 2D structures in our bodies, and we've got two of them, so fundamentally our visual system perceives 2D images. But the problem is that, depending on what representation you use, there can be different affordances that are more natural or less natural. Even if, at the end of the day, you might be seeing a 2D image or a 2D video, your brain is perceiving that as a projection of a 3D world. So there are things you might want to do, like move objects around, move the camera around. In principle you might be able to do these with a purely 2D representation and model, but it's just not a fit to the problems that you're asking the model to do. Modeling the 2D projections of a dynamic 3D world is a function that probably can be modeled, but by putting a 3D representation into the heart of a model, there's just going to be a better fit between the kind of representation that the model is working on and the kind of tasks that you want that model to do. So our bet is that by threading a little bit more 3D representation under the hood, that'll enable better affordances for users. And this also goes back to the North Star for me. You know, why is it spatial intelligence, why is it not flat pixel intelligence? It's because I think the arc of intelligence has to go to what Justin calls affordances, and the arc of intelligence, if you look at evolution, right, the arc of intelligence eventually enables animals and
humans, especially humans as intelligent animals, to move around the world, interact with it, create civilization, create life, make a sandwich, whatever you do in this 3D world. And translating that into a piece of technology, that native 3D-ness, is fundamentally important for the floodgate of possible applications, even if the serving of some of them looks 2D; it's innately 3D. To me, I think this is actually a very subtle, yeah, and incredibly critical point, and so I think it's worth digging into, and a good way to do this is talking about use cases. So just to level-set this: we're talking about generating a technology, let's call it a model, that can do spatial intelligence. So maybe in the abstract, what might that look like? A little bit more concretely, what would be the potential use cases that you could apply this to? So I think there are a couple of different kinds of things we imagine these spatially intelligent models being able to do over time, and one that I'm really excited about is world generation. We're all used to something like a text-to-image generator, or starting to see text-to-video generators, where you put in a prompt and out pops an amazing image or an amazing two-second clip. But I think you could imagine leveling this up and getting 3D worlds out. So one thing that we could imagine spatial intelligence helping us with in the future is upleveling these experiences into 3D, where we're not getting just an image out or just a clip out, but you're getting out a fully simulated but vibrant and interactive 3D world. For gaming? Maybe for gaming, right, maybe for gaming, maybe for virtual photography, you name it. I think even if you just got this to work, there'd be a million applications. For education? Yeah, for education. I mean, I guess one of my things is that in some sense this enables a new form of media, right, because we already have the ability to
create virtual interactive worlds, but it costs hundreds of millions of dollars and a ton of development time. As a result, the place that people drive this technological ability is video games, right? Because while we do have the ability as a society to create amazingly detailed virtual interactive worlds that give you amazing experiences, it takes so much labor to do so that the only economically viable use of that technology in its form today is games that can be sold for $70 apiece to millions and millions of people to recoup the investment. If we had the ability to create these same virtual, interactive, vibrant 3D worlds cheaply, you could see a lot of other applications of this, right, because if you bring down that cost of producing that kind of content, then people are going to use it for other things. What if you could have sort of a personalized 3D experience that's as good and as rich and as detailed as one of these AAA video games that cost hundreds of millions of dollars to produce, but it could be catered to this very niche thing that only maybe a couple of people would want? That's not a particular product or a particular roadmap, but I think that's a vision of a new kind of media that would be enabled by spatial intelligence in the generative realms. If I think about a world, I actually think about things that are not just scene generation. I think about stuff like movement and physics, so in the limit, is that included? And then the second one is, absolutely, if I'm interacting with it, are there semantics? And I mean by that, if I open a book, are there pages, and are there words in it, and do they mean something? Are we talking about a full-depth experience, or are we talking about kind of a static scene? I think you'll see a progression of this technology over time. This is really hard stuff to build, so I think the static problem is a little
bit easier, but in the limit I think we want this to be fully dynamic, fully interactable, all the things that you just said. I mean, that's the definition of spatial intelligence. Yeah, so there is going to be a progression. We'll start with more static, but everything you've said is in the roadmap of spatial intelligence. I mean, this is kind of in the name of the company itself, World Labs; the World part is about building and understanding worlds. And this is actually a little bit inside baseball; I realized after we told the name to people that they don't always get it, because in computer vision, and reconstruction and generation, we often make a distinction, or a delineation, about the kinds of things you can do. Kind of the first level is objects, right, like a microphone, a cup, a chair; these are discrete things in the world, and a lot of the ImageNet-style stuff that Fei-Fei worked on was about recognizing objects in the world. Then leveling up, the next level beyond objects I think of as scenes. Scenes are compositions of objects; now we've got this recording studio with a table and microphones and people in chairs, some composition of objects. But then we envision worlds as a step beyond scenes, right; scenes are kind of maybe individual things, but we want to break the boundaries: go outside the door, step up from the table, walk out the door, walk down the street and see the cars buzzing past, see the leaves on the trees moving, and be able to interact with those things. Another thing that's really exciting, just to mention the words new media: with this technology, the boundary between the real world and the virtual, imagined world, or augmented world, or predicted world, is all blurry. The real world is 3D, right, so in the digital world you have to have a 3D representation to even blend with the real world. You cannot have a 2D, you cannot have a 1D, to be able to interface with the real
3D world in an effective way, and with this it unlocks it, so the use cases can be quite limitless because of this. Right, so the first use case that Justin was talking about would be the generation of a virtual world for any number of use cases; one that you're just alluding to would be more of an augmented reality. Right, yes. Just around the time World Labs was being formed, the Vision Pro was released by Apple, and they used the words spatial computing. They almost stole our term, but we're spatial intelligence, so spatial computing needs spatial intelligence. That's exactly right. So we don't know what hardware form it will take, whether it will be goggles, glasses, contact lenses, but that interface between the true real world and what you can do on top of it, whether it's to help augment your capability to work on a piece of machinery and fix your car even if you are not a trained mechanic, or to just play a Pokémon Go Plus-Plus for entertainment, suddenly this piece of technology is going to be the operating system, basically, for AR, VR, and mixed reality. In the limit, what does an AR device need to do? It's this thing that's always on, it's with you, it's looking out into the world, so it needs to understand the stuff that you're seeing and maybe help you out with tasks in your daily life. But I'm also really excited about this blend between virtual and physical; that becomes really critical. If you have the ability to understand what's around you in real time in perfect 3D, then it actually starts to deprecate large parts of the real world as well. Like right now, how many differently sized screens do we all own for different use cases? Too many, right? You've got your phone, you've got your iPad, you've got your computer monitor, you've got your TV, you've got your watch. These are all basically different-sized screens, because they need to present information to you in different contexts and in
different positions. But if you've got the ability to seamlessly blend virtual content with the physical world, it kind of deprecates the need for all of those; ideally it just seamlessly blends the information that you need to know in the moment with the right mechanism for giving you that information. Another huge case of being able to blend the digital virtual world with the 3D physical world is enabling agents to do things in the physical world. If humans use these mixed-reality devices to do things, like I said, I don't know how to fix a car, but if I have to, I put on these goggles or glasses and suddenly I'm guided to do that. But there are other types of agents, namely robots, any kind of robots, not just humanoids, and their interface by definition is the 3D world, but their compute, their brain, by definition is the digital world. So what connects that, from learning to behaving, between a robot brain and the real world? It has to be spatial intelligence. So you've talked about virtual worlds, you've talked about kind of more of an augmented reality, and now you've just talked about the purely physical world, basically, which would be used for robotics. For any company that would be a very large charter, especially if you're going to get into each one of these different areas, so how do you think about the idea of deep tech versus any of these specific application areas? We see ourselves as a deep tech company, as the platform company that provides models that can serve different use cases. Of these three, is there any one that you think is kind of more natural early on, that people can kind of expect the company to lean into? I think it suffices to say the devices are not totally ready. Actually, I got my first VR headset in grad school, and that's one of these transformative technology experiences: you put it on and you're like, oh my God, this is crazy. I think a lot of people have that
experience the first time they use VR. So I've been excited about this space for a long time, and I love the Vision Pro; I stayed up late to order one of the first ones, the first day it came out. But I think the reality is it's just not there yet as a platform for mass-market appeal, so very likely as a company we will move into a market that's more ready. Then, I think there can sometimes be simplicity in generality, right. We have this notion of being a deep tech company; we believe that there are some underlying fundamental problems that need to be solved really well and, if solved really well, can apply to a lot of different domains. We really view this long arc of the company as building and realizing the dreams of spatial intelligence writ large. So this is a lot of technology to build, it seems to me. Yeah, I think it's a really hard problem. I think sometimes people who are not directly in the AI space just see AI as one undifferentiated mass of talent, and for those of us who have been here for longer, you realize that there are a lot of different kinds of talent that need to come together to build anything in AI, in particular this one. We've talked a little bit about the data problem, we've talked a little bit about some of the algorithms that I worked on during my PhD, but there's a lot of other stuff we need to do this too. You need really high-quality, large-scale engineering, you need really deep understanding of the 3D world, and there are actually a lot of connections with computer graphics, because they've been kind of attacking a lot of the same problems from the opposite direction. So when we think about team construction, we think about how we find absolute top-of-the-world experts, the best experts in the world at each of these different subdomains that are necessary to build this really hard thing. When I
thought about how we form the best founding team for World Labs, it had to start with a group of phenomenal multidisciplinary founders, and of course Justin is natural for me; Justin, over your years one of my best students and one of the smartest technologists. But there are two other people I have known by reputation, and one of them Justin even worked with, that I was drooling for, right. One is Ben Mildenhall; we talked about his seminal work on NeRF. But another person is Christoph Lassner, who has a strong reputation in the community of computer graphics, and especially, he had the foresight of working on a precursor of the Gaussian splat representation for 3D modeling five years before Gaussian splatting took off. And when we talked about the potential possibility of working with Christoph Lassner, Justin just jumped off his chair. Ben and Christoph are legends. And maybe just quickly talk about how you thought about the build-out of the rest of the team, because again, there's a lot to build here and a lot to work on, not just in kind of AI or graphics but systems and so forth. Yeah, this is what so far I'm personally most proud of: the formidable team. I've had the privilege of working with the smartest young people in my entire career, right, from the top universities, being a professor at Stanford, but the kind of talent that we put together here at World Labs is just phenomenal. I've never seen the concentration, and I think the biggest differentiating element here is that we're believers in spatial intelligence. All of the multidisciplinary talents, whether it's systems engineering, machine learning infra, generative modeling, data, graphics, all of us, whether it's our personal research journey or technology journey or even personal hobby, we believe that spatial intelligence has to happen at this moment, with this group of people, and
that's how we really found our founding team, and that focus of energy and talent is really just humbling to me. I just love it. So I know you've been guided by a North Star. Something about North Stars is that you can't actually reach them, because they're in the sky, but it's a great way to have guidance. So how will you know when you've accomplished what you've set out to accomplish, or is this a lifelong thing that's going to continue kind of infinitely? First of all, there are real North Stars and virtual North Stars; sometimes you can reach virtual North Stars. Fair enough. Good enough in the world model, exactly. Like I said, I thought one of my North Stars, which would take a hundred years, was storytelling of images, and Justin and Andrej, in my opinion, solved it for me. So we could get to our North Star, but I think for me it's when so many people and so many businesses are using our models to unlock their needs for spatial intelligence; that's the moment I know we have reached a major milestone. Actual deployment, actual impact. Actually, yeah, I don't think we're going to get there. I think that this is such a fundamental thing; the universe is a giant evolving four-dimensional structure, and spatial intelligence writ large is just understanding that in all of its depths and figuring out all the applications of that. So I think that we have a particular set of ideas in mind today, but I think this journey is going to take us places that we can't even imagine right now. The magic of good technology is that technology opens up more possibilities and unknowns, so we will be pushing, and then the possibilities will be expanding. Brilliant. Thank you, Justin. Thank you, Fei-Fei. This was fantastic. Thank you, Martin. Thank you, Martin. Thank you so much for listening to the a16z podcast. If you've made it this far, don't forget to subscribe so that you are the first to get our exclusive video content, or you can check out
this video that we've hand selected for you